The Good, the Bad, and the Ugly (science).

Today, for your Weekly Overload, I decided it would be a good idea to discuss what to look out for when reading scientific papers. There are many pitfalls to watch for, but here we will break down some of the main problems that arise when reading and interpreting “science”.

The three problems I'm going to talk about are bias, sample sizing, and, lastly, “proving too much”. I'll illustrate all three with a hypothetical scenario, but first, a word on why this matters: what the experts say drives the culture and behavior of the people around you. I ask that you keep that in mind as I discuss these flaws, which often operate behind the scenes of the articles and scientific literature you see.

Let’s say that I’m researching the durability of certain cars under normal driving conditions. I really like Ford cars because I have a lot of memories with them, and I notice that they just work better than their competitors. I decide that I am going to take one Ford Focus, one Nissan Altima, and one Mitsubishi Lancer out on the road to see which one can drive the furthest on Interstate 5 in California before breaking down. This sounds like a decent experiment, right? My hypothesis is that the Ford will do better because I like Ford cars. I made an observation, formed a hypothesis, found a way to test it, and have a way to measure what I am testing; when my data is collected, I will publish my results.

The thing is though, this is a terrible way to design an experiment.

First off, my experiment is dripping with bias. It's not even that I believe, based on the merits of the car, that the Ford will outlast the other two; it's simply that I prefer Fords over any other car. This isn’t good, because it will tempt me to make adjustments to either the experiment itself or the data afterward that would make it more likely that the Ford “wins”. The result would not be an accurate assessment of the three companies or their workmanship on a California road, and that accuracy is the whole point if I am pursuing the truth. Enrico Fermi, the Nobel Prize-winning physicist, supposedly said, “If your data support your hypothesis, you’ve made a measurement; if it refutes the hypothesis, you’ve made a discovery.” Science is about making discoveries. There have been many studies whose results aligned completely with the biases of the researchers, and some have even caused harm.

Here is an example: https://www.nytimes.com/interactive/2023/04/08/us/court-decision-invalidating-approval-of-mifepristone.html

The federal judge alleges that a bias toward promoting the abortion drug mifepristone led the FDA to clear it without due attention and research. The plaintiffs allege that the lack of accurate information adversely affected how they treat women and girls, failed to provide doctors and patients with accurate side-effect information, opened the doctors up to legal action, and even prevented them from practicing “evidence-based medicine”.

Next is sample sizing. A sample is a representative portion of what is being studied that allows us to extrapolate data to the whole population. In my study, I tested one Ford, one Nissan, and one Mitsubishi. But there aren’t just three cars on the road; there are millions. Therefore, for this study to accurately represent the world at large, I would need hundreds if not thousands of cars to test on the road. A famous (or rather, infamous) example of poor sample sizing is the 1998 MMR (measles, mumps, and rubella) vaccine study conducted by the now-discredited Dr. Andrew Wakefield. This study has examples of bias as well, but a crucial flaw is that the study, which claimed the MMR vaccine was causing autism in children, used only 12 participants. Those participants did not represent an accurate model of the world outside the clinic, where there are millions of children. Twelve children in one study is not enough to decide that all vaccines cause autism, which is what certain groups in the public then went on to claim, refusing to vaccinate their children against any infectious diseases.
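You can see why a sample of one car per make proves nothing with a quick simulation. Here is a minimal sketch: the breakdown mileages below are invented for illustration, and all three makes are deliberately given the *same* reliability, yet with one car each, some make always “wins” anyway.

```python
import random

random.seed(0)

# Hypothetical breakdown mileage (mean, standard deviation) per make.
# These numbers are made up for illustration, not real reliability data.
# All three makes are identical by construction.
FLEET = {
    "Ford": (150_000, 40_000),
    "Nissan": (150_000, 40_000),
    "Mitsubishi": (150_000, 40_000),
}

def winner(n):
    """Drive n cars of each make until breakdown; return the make
    with the highest average mileage in this simulated trial."""
    averages = {
        make: sum(random.gauss(mean, sd) for _ in range(n)) / n
        for make, (mean, sd) in FLEET.items()
    }
    return max(averages, key=averages.get)

# Repeat the one-car-per-make experiment 1,000 times and count wins.
one_car = [winner(1) for _ in range(1000)]
counts = {make: one_car.count(make) for make in FLEET}
print(counts)
```

Each make wins roughly a third of the trials even though they are identical, so a single-car "result" is just noise; only much larger samples let real differences show through it.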

And this is what is called an argument that proves too much. The Wakefield study falsely asserted that the MMR vaccine caused autism in children; the public leap from one vaccine to all vaccines went far beyond even that flawed claim. What about all the other vaccines? What about all the autistic children who never received the MMR vaccine? None of these questions were asked, yet some people refuse vaccines to this day.

Here is the paper debunking Wakefield's claims:

Rao TS, Andrade C. The MMR vaccine and autism: Sensation, refutation, retraction, and fraud. Indian J Psychiatry. 2011 Apr;53(2):95-6. doi: 10.4103/0019-5545.82529. PMID: 21772639; PMCID: PMC3136032.

I hope this has given you an increased understanding that science is messy: there are many things that need to be checked and controlled before we can know whether two things are actually related to each other, or just appear to be.
