Welcome to Science with Shrike! Today we’re going to discuss ways to think about problems and explain our natural world. Another way of saying this is that we’re going to compare and contrast conspiracy theories with hypotheses, and the utility of both. The goal is to move from conspiracy theories to hypotheses, though this is a challenge with past events.
Conspiracy Theories
While the media has tried to redefine “conspiracy theory” to mean “every idea that contravenes the current government/corporate narrative”, that’s not the definition we’re going to use.
Instead, we’ll go back to an older definition of what a conspiracy theory meant. One version attributed to the Oxford English Dictionary is “the theory that an event or phenomenon occurs as a result of a conspiracy between interested parties; spec. a belief that some covert but influential agency (typically political in motivation and oppressive in intent) is responsible for an unexplained event.” In common usage, the requirement for a conspiracy of parties is not absolute.
Thus, we have the various conspiracy theories about who killed JFK, whether the moon landing was staged, how the pyramids were constructed, the existence of the Bavarian Illuminati, secret causes of cancer, etc.
If we examine the claims in conspiracy theories, they often range from challenging to impossible to prove false. The logic may vary from strong to circular to wishful thinking. Shrike would consider conspiracy theories to be rationale: the reasoning why one might think something is true. Rationale is NOT proof that something works; it is a reason to explore the subject matter and/or test hypotheses about it.
This is the main challenge people face, both with conspiracy theories, and beliefs about the natural world. We are tempted to treat rationale as proof, instead of a reason to test. Humans do a poor job of separating beliefs about testable phenomena (COVID vaccines prevent transmission) from beliefs about untestable phenomena (God and metaphysics), which leads us to a tendency to believe. And thanks to the fun of our psychology, once we believe something, it is challenging to change our mind.
This is the main difference between conspiracy theories and hypotheses. We try to falsify hypotheses, whereas we fortify conspiracy theories.
Hypotheses
A hypothesis is a falsifiable and reproducible statement about reality. The two hardest parts are developing the hypothesis so that it separates your idea from most others, and being resilient enough that you can murder your own ego long enough to try to prove yourself wrong. Scientists have the most trouble with the second of these, especially when fame and money ride on the hypothesis being right instead of wrong.
When constructing a hypothesis, a specific one is more helpful than a broad one. “There will be an increase in excess deaths in 2023” is less likely to be falsified than “There will be an increase of 10 excess deaths in 2023 because these people will die from being tangled in bed sheets”. With the former, failing to falsify your hypothesis doesn’t tell you much about the world. Is there an increase in deaths because there’s a war in Ukraine, COVID-associated mortality, vaccine-associated mortality, mass starvation, etc.? With the latter, if you fail to falsify your hypothesis, you know 10 more people than expected got lethally tangled in their bedsheets.
In some cases, you may be able to mass-falsify hypotheses. Running with the last example, you might measure 1000 excess deaths instead of 10. Now you’ve ruled out all numbers outside of the error on your measurements. If you compared the number of deaths attributed to various causes of death between years, you could further rule out many other alternative explanations. Perhaps there was a decrease in the number of people dying from being tangled in their bedsheets and a decrease in the number of homicides using steam and other hot vapors. So you can falsify a number of hypotheses in one large batch. Now you can focus on falsifying more refined hypotheses.
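As a rough sketch of this batch falsification idea (hypothetical numbers throughout, and assuming excess deaths behave roughly like a Poisson count, so the measurement error scales with the square root of the expected count), you could check which predicted excess-death counts survive a single observation:

```python
import math

def poisson_interval(expected, z=1.96):
    """Approximate 95% interval around an expected Poisson count."""
    half_width = z * math.sqrt(expected)
    return (expected - half_width, expected + half_width)

def surviving_hypotheses(predictions, observed_excess):
    """Keep only hypotheses whose predicted excess-death count is
    consistent with the observed count."""
    survivors = []
    for name, predicted in predictions.items():
        low, high = poisson_interval(predicted)
        if low <= observed_excess <= high:
            survivors.append(name)
    return survivors

# Hypothetical predictions for excess deaths in 2023 (illustrative numbers only).
predictions = {
    "bedsheet entanglement": 10,   # ruled out: 1000 is far outside 10 +/- ~6
    "broad increase": 1000,
}
print(surviving_hypotheses(predictions, observed_excess=1000))  # prints ['broad increase']
```

Everything outside the interval around the observation is falsified in one batch; what survives is what you refine next.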
Which hypothesis should you test? This is the value of hypothesis-generating work and models. This is also where the ideas and rationale come into play. Focus on your problem first, and refine it down to one specific challenge. Your hypothesis is your best solution to that challenge. It’s ok if you have multiple hypotheses about how to solve that challenge. The next question is what separates those hypotheses from each other. Your goal is to identify how you would falsify one hypothesis without falsifying the others. These will be your tests. If you are looking at excess mortality, how will you distinguish people who died suddenly due to vaccine from people who died suddenly due to COVID from people who died suddenly from other causes?
In general, your work is more robust if you test the hypothesis instead of fitting the hypothesis to the data. That said, science is an iterative process: you should learn from your data, and you need to account for existing data. However, that data is now rationale supporting the hypothesis; you still need to falsify both it and other interpretations.
Finding the line between giving up on a hypothesis and refining it to get to the truth can be challenging. Let’s suppose your hypothesis is that the Pfizer mRNA vaccine causes sudden death in 1% of vaccine recipients. Let’s further suppose you measure COVID/vaccination rates in 1000 people who died suddenly, the sudden death rates for both the vaccinated and the COVID-infected match the unvaccinated, never-had-COVID group, and your margin of error is 0.5%. Do you abandon your hypothesis that the mRNA vaccine causes sudden death? Or do you refine the hypothesis and propose a new one: the mRNA vaccine increases death only for men aged 15-45? If that one is falsified, do you start checking age groups to see if you can get an effect anywhere?
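A minimal sketch of that falsification decision, with hypothetical rates (a real analysis would use proper confidence intervals rather than a single symmetric margin of error):

```python
def is_falsified(hypothesized_rate, measured_rate, margin_of_error):
    """The hypothesis is falsified if its predicted rate lies outside
    measured_rate +/- margin_of_error."""
    return abs(hypothesized_rate - measured_rate) > margin_of_error

# Hypothetical numbers from the scenario: a predicted 1% sudden-death rate,
# a measured rate matching the baseline group (assume 0.1%), margin of error 0.5%.
print(is_falsified(0.01, 0.001, 0.005))  # prints True: the 1% hypothesis is ruled out
```

The code settles the easy part. The hard part, deciding whether to abandon the idea or slice the population thinner and test again, is yours.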
This illustrates the challenge scientists face. When do you give up on your hypothesis, vs keep looking? Give up too soon, and you miss the discovery. Spend too long, and you waste your time chasing leads that go nowhere. In practice, the dilemma often resolves itself when funding becomes the deciding factor: running out of money forces you to stop chasing leads.
On the other hand, self-awareness about your approach to testing your hypothesis helps you determine how emotionally invested you are. Are you methodically ruling out possibilities, or heroically trying to salvage your hypothesis? If you’re emotionally invested, you may need to take a step back, and recognize you might have a belief about this topic. That means you’re looking for validation, not proving yourself wrong. If you’re seeking validation, you will not be able to prove yourself wrong.
Science is not about validating yourself. Being right is gratifying, but that gratification is a temptation, not the goal.
Narratives
There is one more wrinkle to testing hypotheses. Being wrong all the time doesn’t sell. Having a cool narrative about how your work is changing the world does. This locks you into certain hypotheses and gets you emotionally invested in your science. It is still possible to pivot, but it becomes more challenging. There are papers from groups that say ‘we thought X, but now we realize it’s Y’, and move forward from there. If the paper showing X was high profile because it showed X, and X is shown to be wrong, you might think they would retract it.
Nope.
Papers do not get retracted for being wrong. First, being wrong is part of science, and second, there is often vigorous debate on ‘right’ and ‘wrong’. Retractions are reserved for errors and misconduct. The difference between an error and being wrong is that being wrong means your interpretation is incorrect, not the experimental results. If the experiments cannot be repeated even by the same lab, or the wrong figure was shown, that is an error.
Reinterpreting results is one of the more exciting parts of science. It happens as a lab learns more about the process, and gets even more fun when you can say ‘here’s this weird thing in the field that no one has explained for the last 20 years. We figured it out.’, or ‘we have a new interpretation for all these data for the last 20 years.’
For a concrete example, take the discovery of Th17 T cells. One subset of your adaptive immune system, CD4+ T cells, tells the rest of your immune response if it should be hunting pathogens hiding inside other cells, hunting small pathogens outside of cells, or hunting large pathogens outside of cells. Th17 cells are the T cells that tell the immune system to hunt small pathogens outside of cells. It turns out that the Th17 response looks very similar to the Th1 response (hunting pathogens inside of cells). In fact, prior to the discovery of Th17 cells, some of the experiments used markers present in both Th1 and Th17 cells. As a result, once people discovered Th17 cells, and realized what they thought were Th1 cells could be Th17 cells, it was open season on reinterpreting ALL of those data. As you may have guessed, that means more grants and papers showing which ones were wrong. This is one of the ways that science is self-correcting. Note that it takes a while.
What about all the old papers claiming Th1 cells were doing things that we now know were Th17 cells? None of them get retracted, because the data were fine. The interpretation is now different. This is challenging for people new to a field, because if you miss one or two key papers, you may not realize the interpretations are now different. It’s even more fun when you cite an old paper stating that it provides evidence for the current interpretation, even though the authors didn’t realize it at the time. Hapless grad students pore over the paper and get confused when they can’t find the current interpretation. Instead, they need to interpret the figures themselves, which can be hard if you’re not up on the field (or sometimes requires reading the authors’ minds).
This example shows that narratives can update and change as needed. But once you buy into a narrative, you get emotionally invested, which will make it harder for you to change. This is where some advice from the investment world, “don’t marry your bags”, becomes useful. Don’t marry your narratives in science. If you need to change the narrative, that’s ok. If you change the narrative every paper, that becomes less ok, because people also want a narrative for your career (i.e. don’t be a day trader when it comes to narratives). When you provide a research statement for a job or a biosketch, you are telling the story of your work. If the story is ‘we tried a bunch of things that all contradict each other’, that is not as compelling as ‘we discovered X, which led to discovering Y, and then to Z, the exception to X and Y.’ So be careful with being wrong too often.
In some ways, with narratives, we have come full circle back to conspiracy theories. The big difference remains testability. Narratives in science link multiple hypotheses to make predictions about the world that can be tested (i.e. falsified). Conspiracy theory narratives are often hard to test, or rely on unfalsifiable statements.
If you are interested in building knowledge, and like conspiracy theories, the goal is to convert your conspiracy theories to testable hypotheses. Figure out which testable predictions your narrative makes. Which ones of those can other narratives also predict? You’re hunting for the set of testable (falsifiable) predictions unique to your theory. Once you find them and lay out the test and expected results, all that remains is testing the hypotheses. If you are wrong, you need to revise your hypotheses and your narrative. Then repeat the process.
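One way to sketch that hunt for unique predictions (with made-up theory names and predictions; this is bookkeeping, not science) is as a simple set difference:

```python
def unique_predictions(theories, mine):
    """Testable predictions my theory makes that no rival theory also makes."""
    rival_predictions = set()
    for name, predictions in theories.items():
        if name != mine:
            rival_predictions |= predictions
    return theories[mine] - rival_predictions

# Hypothetical theories and the testable predictions each one makes.
theories = {
    "my theory": {"prediction A", "prediction B", "prediction C"},
    "rival 1": {"prediction A"},
    "rival 2": {"prediction B", "prediction D"},
}
print(sorted(unique_predictions(theories, "my theory")))  # prints ['prediction C']
```

Only the predictions left over are worth testing first: falsifying a shared prediction tells you nothing about which theory is right.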
One final note on testing hypotheses. You need to manipulate the system. ‘We expect to observe’ is helpful, but to test mechanisms, you need to perturb the system and see that it behaves as you predicted.
Now you have the tools to test conspiracy theories yourself!