Remember March of 2020, before masks? Back then, as we became aware that the coronavirus was circulating around the country at an alarming clip, we packed up our offices and pulled our kids out of in-person school. Even so, the nation’s top experts urged us not to bother covering our noses and mouths.
Among the complex reasons for the hesitation was a simple one: distrust of the public. “I worry that if people put on masks, then they’ll think, OK, I’m protected, and they won’t wash their hands as vigorously or be careful not to touch their faces,” one expert told Slate’s What Next very early in the pandemic. The White House Coronavirus Task Force, the U.K. scientific council SAGE, and the World Health Organization cited similar concerns at the time, too. Masks would only provide a false sense of reassurance, reversing any public health gains they might offer. Of course, they were wrong—by summer 2020, we were wearing masks and also adhering to other safety measures.
Huge numbers of people put time, effort, and money into masking up—and in doing so, saved lives. But these efforts didn’t stop public health authorities from raising similar concerns about public behavior again and again. When vaccines first arrived on the scene in late 2020, public health officials and doctors urged us to get the shot as soon as we were eligible, and then, worrying about a “false sense of security,” preemptively warned us about returning to normal activities—to the point where “just because you’re vaccinated doesn’t mean you can … ” became a popular joke setup. Now, with the Biden administration pledging a billion-dollar investment in rapid at-home testing, some worry that the proliferation of the swabs, which can return false negatives or be misused, will cause an increase in cases—that people will use them as an excuse to drop all precautions.
Throughout the pandemic, each time a public safety measure arrives on the scene, some experts fret that the masses will simply use the newfound sense of security as license to behave recklessly, canceling out or even reversing any benefits of the safety measure. The concept many medical experts can’t seem to loosen their grip on is known as “risk compensation.” It’s an idea that comes from the study of road safety and posits that people adjust their behavior in response to perceived risk: the safer you feel, the more risks you’ll take. Risk compensation makes intuitive sense and can be true to an extent. If you’re driving on a precarious cliff-side road without guardrails, you’d probably drive more cautiously. But some proponents of the idea make a stronger claim: that guardrails cause so much reckless driving that any potential safety benefits of guardrails are offset or even reversed. Under this reasoning, a road with guardrails would cause more accidents than a road without guardrails. Guardrails aren’t helpful; they’re counterproductive.
This paradoxical idea has been trotted out by health experts to caution against not just pandemic safety measures such as masks, but everything from child-safety caps on medication (which, the worry goes, could lead parents to leave pill bottles lying around carelessly) to diet soda (what if people chug the stuff and it makes the obesity epidemic worse?).
But whenever risk compensation has been subjected to empirical scrutiny, the results are usually ambiguous, or the hypothesis fails spectacularly. And when risk compensation does play a part in behavior, it tends to do so in small and specific ways—hardly cause for the alarm and fervor with which it is often applied, especially during the pandemic. It might be tempting to dismiss any single deployment of risk compensation language by medical authorities as an unfortunate messaging misstep. Yet a closer look reveals there’s a reason why this zombie idea won’t die: It’s baked into the culture of institutional medicine and American political thought. And it’s going to come for us again, and again, in the future.
How individuals change their behavior in response to perceived risk has been of interest to psychologists, safety regulators, and economists for decades. In the 1940s, as experts debated safety measures to reduce the soaring number of traffic accidents, some were concerned that designing safer roads or cars would merely cause riskier driving. The hypothesis was bandied about but never rigorously tested. But in 1975, University of Chicago economist Sam Peltzman elevated what might have remained armchair speculation to a powerful argument against safety regulations. Writing in the Journal of Political Economy, Peltzman hypothesized that 1960s-era federally mandated vehicle regulations such as seat belts were actually making the roads less safe because they encouraged so much reckless and careless driving. In his thinking, any safety advantage of the new regulations was being offset. He analyzed traffic accident data before and after the regulations and found that not only did the regulations fail to decrease fatal accidents, but traffic-related fatalities increased after regulatory action. That is, the safety measures “may come at the expense of more pedestrian deaths,” he concluded. Although seat belts were here to stay, Peltzman’s findings gave serious quantitative ammunition to the anti-regulatory enthusiasm of the 1970s.
Subsequent analyses of Peltzman’s work, however, found it riddled with errors. Other researchers showed his model couldn’t predict traffic fatality rates before regulation. As one critic wrote in 1977, Peltzman failed to perform even “rudimentary checks on the validity of his model.” Decades of traffic data now leave little doubt that, overall, safety regulations have indeed reduced traffic-related fatalities. These days you would, with good reason, not even consider getting behind the wheel of a car that did not have working seat belts.
The Insidious Idea About “Safety” That Keeps Putting Us in Danger
A concept that took hold in the ’70s has haunted everything from seat belts to masks—and experts won’t let it die.
slate.com