The 2023 book Misbelief by Dan Ariely belongs to a genre I would label “debunking Covid conspiracy theories.” The book is meant to explore the thought process of people who subscribe to conspiracy theories, especially about Covid and the Covid vaccines.
Thus I was surprised to encounter in the book two stories in which the author uncovered real conspiracies to hide information about Covid from the public.
Ariely, a professor of psychology at Duke University, played a bit part in promoting Covid lockdowns around the world. By his own description, he worked
…on projects related to Covid-19 with the Israeli government and a bit with the British, Dutch, and Brazilian governments as well…I was mostly working to try to get the police to use rewards to incentivize good mask-wearing behavior and observance of social distancing instead of using fines… (p. 4)
The first genuine conspiracy he describes involved the US Food and Drug Administration (FDA) manipulating data in the Vaccine Adverse Events Reporting System (VAERS).
The second involved a newspaper editor-in-chief refusing to report on vaccine side effects observed at a hospital. The author recounts these situations matter-of-factly, and even gives the conspirators the benefit of the doubt, suggesting that perhaps they did the right thing!
Let’s look at the VAERS conspiracy (recounted on pp. 274-276). Ariely says he got this information directly from a person who works “in the information technology department of the FDA.” The agency, according to the story, determined that:
…foreign powers, mostly Russian and Iranian, had found a way to spread disinformation using VAERS. So when the FDA identified cases that had clearly come from such sources, it removed them from the system…
Not only did it delete this data, but it did so silently. Ariely only found out by accident: Parents of vaccine-injured children maintained their own copy of the VAERS data, downloaded from the FDA site. They noticed that cases appearing in their downloaded data later disappeared from the government copy of the database, and they told Ariely about this.
Supposedly the FDA tried to keep these actions secret because it “did not want to announce to the foreign powers that it was onto them,” the FDA employee told him. But to anyone reasonably well-versed in information technology, keeping such acts secret is an obvious mistake. The bad guys will figure out what is going on, while the people we are trying to protect are left in the dark about possible mischief affecting data they rely on. And that is the most charitable reading. It could be worse: setting aside nefarious intentions entirely, the FDA might have removed valid information inadvertently. How might that come about?
Since we don’t have details as to how the FDA found this bad data, we need to speculate. Here is the easiest scenario to imagine. A straightforward way to detect computer sessions originating in Russia or Iran is by IP (internet protocol) address. Did the FDA personnel identify the supposedly bogus entries by this method?
But there’s a flaw in that approach. Many computer users obfuscate their IP address for privacy reasons. Some browsers, such as Tor (and Brave in its Tor mode), do so automatically: each request is routed through relay servers in different locations. Those relays are located worldwide, including in Russia. Thus if a US-based individual using the Tor browser added an entry to VAERS, and the session exited through a Russian relay, the FDA might well have misidentified it as foreign disinformation.
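The failure mode above can be sketched in a few lines. This is a hypothetical illustration, assuming the FDA classified entries purely by the country of the submitting IP address; the IP addresses, country table, and field names are invented for the example.

```python
# Hypothetical sketch of naive origin-based filtering. The geolocation
# table and report records below are invented for illustration.

BLOCKED_COUNTRIES = {"RU", "IR"}

# Made-up IP-to-country mapping, standing in for a geolocation database.
# 203.0.113.7 plays the role of a Tor exit relay located in Russia.
GEOIP = {
    "198.51.100.4": "US",   # ordinary US home connection
    "203.0.113.7": "RU",    # Tor exit relay in Russia
}

def looks_foreign(report):
    """Classify a report as 'foreign' purely by its IP's country."""
    return GEOIP.get(report["ip"], "??") in BLOCKED_COUNTRIES

# A genuine US report submitted through Tor exits via the Russian relay,
# so it carries a Russian IP and is misclassified as foreign.
us_report_direct = {"id": 1, "ip": "198.51.100.4"}
us_report_via_tor = {"id": 2, "ip": "203.0.113.7"}

print(looks_foreign(us_report_direct))   # False
print(looks_foreign(us_report_via_tor))  # True: a false positive
```

The second report is a false positive: a legitimate US submission that would be silently deleted under the policy Ariely describes.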
Compare how the world of open-source software deals with security vulnerabilities. Publishers routinely make information about vulnerabilities public, so that user organizations can both protect themselves and evaluate what damage might have been done. A publisher may wait days or weeks while it develops and distributes a fix, but then it discloses the details.
A variety of US laws and regulations even require corporations to promptly reveal data breaches that affect them. For example, the Securities and Exchange Commission mandates that public companies report “cybersecurity incidents” within four business days of determining that an incident has a “material” effect on the company’s business.
VAERS is supposed to be a public resource. If the FDA has a policy of removing entries, it should be transparent about its criteria and make the removed data available for audit. Or it could just as easily have flagged the entries as “suspicious origin” and left them in the database. Then others could review its judgment and either confirm or dispute the classifications.
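The flag-and-retain alternative is trivial to implement. A minimal sketch, assuming nothing about how VAERS actually stores records; the field names and data are invented for illustration:

```python
# Sketch of the transparent alternative: annotate suspect entries
# rather than silently deleting them. All field names are hypothetical.

def flag_suspicious(database, suspect_ids):
    """Mark suspect entries in place instead of removing them."""
    for entry in database:
        if entry["id"] in suspect_ids:
            entry["flag"] = "suspicious origin"
    return database

db = [
    {"id": 101, "symptom": "fever"},
    {"id": 102, "symptom": "headache"},
]

flag_suspicious(db, suspect_ids={102})

# Every entry is still present, so outside reviewers can confirm or
# dispute the classification instead of noticing silent disappearances.
print([e.get("flag") for e in db])  # [None, 'suspicious origin']
```

The design point is that deletion destroys the audit trail, while a flag preserves both the agency's judgment and the underlying record.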
Let’s look at the second conspiracy Ariely recounts (pp. 277-280):
I was speaking with a doctor from a large health care organization…I couldn’t resist asking her what she thought about all the online chatter about unreported vaccine side effects. To my surprise, she agreed there was a problem. She said that she had observed a lot of side effects in her clinic that had not been reported and had been collecting such data from her patients…
Ariely at that point decided this was newsworthy. He met with the editor-in-chief of “a large newspaper,” told the editor about the situation, and suggested the editor get the doctor’s data and report about it. The reaction:
The editor told me he suspected that I was correct about the underreported side effects.
However, he had no intention of publishing anything about them…because he suspected that the misbelievers would use the published information in an unethical way and distort it…
I was disappointed that he did not publish the story, but I could see his point.
Ariely spends a few sentences philosophizing about the true responsibility of a newspaper – is it just to publish true information, or is it “to do this cost-benefit analysis for the society…?” But apparently he let the matter lie, acquiescing in real censorship of real information.
The debunker has debunked his own debunking project.