RAFINO

The Retired Army Finance Organization
"Keeping the Finance Family Together"


Epidemiology

2706, 29 Jul 2020
How Not To Be Deceived
There is a lot of information (and misinformation) floating around the internet these days about, well, about everything. But especially about Covid-19. It can be overwhelming and frustrating trying to figure out what to believe and whom to trust. I offer the following suggestions for evaluating information and sources of information, based on the assumption that we are all genuinely trying to do our best in these difficult times.
1) Recognize the limits of your own expertise.
Epidemiology is a specialized, highly technical field of study. True experts receive years of advanced training in order to understand it. Unless you have received that training, you are not in a position to draw reliable conclusions from the data you are seeing. That is not because you are stupid! If someone showed you a diagram of a new jet engine design, gave you a table of output specs, and asked for your opinion, you would happily tell them that you don’t know enough about jet engines to form one. And you would feel no shame in admitting that. Well, the trajectories of infectious diseases, especially new ones, are more complicated than jet engines. There is no reason any of us should think we can out-think the experts on this. In short: if someone’s explanation of the disease makes good sense to you, that does not necessarily mean it is right. Often, especially in a complicated field like epidemiology, things that are easy to explain are not correct, and things that are correct are not easy to explain or understand.
2) Focus on the credibility of your sources.
When we fly, we are forced to rely on the expertise of airplane engineers, maintenance crews, and pilots; even if we were allowed to make a personal inspection of the engine, it would tell us nothing useful. We’re in a similar position when evaluating claims about Covid-19: we have to rely on the experts. But which experts should we trust when they make contradictory claims? Here are some general rules for evaluating the credibility of experts:
- Are they trained and credentialed?
A physician will tend to give better advice about the coronavirus than an economist or a chiropractor. But an epidemiologist or immunologist, a specialist in infectious disease, will be a safer bet even than a physician, because their training and expertise are more directly relevant.
- Are they representing an organization or speaking for themselves?
In general, someone speaking on behalf of a large organization with a diverse set of stakeholders will be a better source than someone speaking only for themselves.
- Do they have conflicts of interest?
To a certain extent, everyone has biases that influence their judgments. But some biases are stronger and more overt than others. We should be more skeptical of sources with overt political aims.
- Are they arguing against the status quo?
Sometimes experts get things wrong. Sometimes it is the lone voice arguing against the crowd that really understands what is going on. I’m sure we can all think of examples of that. But the reason those stories are compelling is that they are so rare. You don’t need to dismiss dissenting voices out of hand, but you should recognize that the standard for accepting them is much, much higher. The prevailing views are generally based on many experts evaluating a great deal of data and coming to a conclusion over time. Sometimes those views are overturned and replaced by something better; philosophers of science call that a paradigm shift. But paradigms only shift when there is a lot of compelling new data that is inconsistent with the prevailing view. Dissenting voices should not be given equal footing with long-established views; the bar for them to establish credibility is much higher.
3) Evaluate the credibility of the information.
Although we are not epidemiologists, we will continue to have studies, models, and data thrown at us in support of one point of view or another. Here are some general rules for evaluating the quality of findings. While there may be the occasional exception to each of these rules, they are the safest guides. If a piece of information violates several of the rules below, you should be skeptical.
- Formal research studies (e.g., clinical trials, lab experiments, modeling of large data sets) are better than anecdotes and testimonials.
- Peer-reviewed studies are better than non-peer-reviewed studies.
- Newer studies are better than older studies.
- Larger studies are better than smaller studies.
- Double-blind randomized trials are the gold standard in research. There is no better way to determine causality (e.g., whether a drug increases survival rates).
- No single study is ever definitive. We should always be looking for a preponderance of the evidence. If one study contradicts previous findings, that doesn’t make the new study right and everything previous wrong. The new finding needs to be replicated and evaluated in light of everything we already know.
- All findings are limited to the contexts in which the studies were conducted. This is one reason studies sometimes produce conflicting results: the contexts were different. The further we get from the setting of a study, the riskier it becomes to apply its findings (e.g., a study conducted where the incidence of disease was low may or may not hold in a setting with an active outbreak).
We’re all in this together. Our decisions affect others. Let’s make the best-informed decisions we can.

