I collected a lot of interesting links about science in general.
First, two very good blog posts showing the limits and difficulties of doing research and arriving at conclusions.
The first takes as its starting point the notion that black dogs are less likely to be adopted from animal shelters:
"This is the problem with experimental science. Studies are done under two assumptions – that a sample represents the whole, and that during the experiment that sample is doing what it always does. When we look at Black Dog Syndrome, it seems like a pretty simple idea to test. Animal shelters keep records. How hard can it be to go through them and see how often and how fast black dogs are adopted? And yet studies stack up proving and disproving the same idea. A sample taken from the Midwest shows a trend exactly opposite to one in Los Angeles. A sample taken one year looks different from a sample taken another year. The sample doesn’t represent the whole, and there’s wide variation in what a sample of any given group of people do in one situation or another."
The second shows, among other things, the problems inherent in the growth of the scientific community itself:
"Then things started getting more complicated. People started investigating more subtle effects, or effects that shifted with the observer. The scientific community became bigger, everyone didn’t know everyone anymore, you needed more journals to find out what other people had done. Statistics became more complicated, allowing the study of noisier data but also bringing more peril. And a lot of science done by smart and honest people ended up being wrong, and we needed to figure out exactly which science that was.
And the result is a lot of essays like this one, where people who think they’re smart take one side of a scientific “controversy” and say which studies you should believe. And then other people take the other side and tell you why you should believe different studies than the first person thought you should believe. And there is much argument and many insults and citing of authorities and interminable debate for, if not centuries, at least a pretty long time."
But these reminders that science is a human enterprise, and that we should always assess any claims critically, do not mean that scientific studies have nothing to say, as Harry Collins (on whose work I am increasingly relying) once again states:
"In other words, Climategate demonstrated something that sociologists of science have known for some time—that scientists are mortals, just like all the rest of us. "What was being exposed was not something special and local but ‘business as usual’ across the whole scientific world," writes Cardiff University scholar Harry Collins, one of the original founders of the field of "science studies," in his masterful new book, Are We All Scientific Experts Now? But that means that Climategate didn’t undermine the case for human-caused global warming at all, says Collins. Rather, it demonstrated why it is so hard for ordinary citizens to understand what is going on inside the scientific community—much less to snipe and criticize it from the outside. They simply don’t grasp how researchers work on a day-to-day basis, or what kind of shared knowledge exists within the group."
The issue, of course, is that even if science can be problematic, that doesn’t mean we can go all relativistic (not in the physical sense) and conclude that any and every form of knowledge is equal to every other, particularly when some kinds of belief can be demonstrated to be immune to correction, as shown in another post:
"Last month, Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? (…) The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet—focussed on a lack of evidence connecting vaccines and autism—seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the backfire effect. The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines. “It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause."
And finally, an important demonstration that science does make a difference:
"Since 2000, prevention and control measures have reduced global malaria mortality rates by 42%."
It is hard to either understate or solipsize around that…