The Misleading Neoliberal Promise of the Risk Expert

By 1962, with the publication of Rachel Carson’s Silent Spring and the rise of the anti-nuke and environmental movements, it was clear that we were living with a new kind of risk, one that was not clear and exogenous like the USSR, but murky and endogenous, the product of our own activities. The borders of individual nations could not contain these new threats any more than the threats could be contained by individual generations. 

This is what had become invisible to us today, the way we’d accommodated these new threats and come to accept that they would always be with us—not because something could always go wrong but because even when everything went right the risk still lingered. It was the dawn of Beck’s Risk Society, and it left us in a bind. Because if the threat could not be eliminated then the choice was either to admit our helplessness and vulnerability, repent of modernity and start working on some new vision of the future to replace “progress”—or else find some way to justify the threat’s existence. 

The latter path was easier, and so we took it. We had made the transition to a mass consumption society and there was no way back, just as there was no way back from synthetic color. Already, products were just as good as they needed to be and no better. An acceptable level of failure was built into the system. Which brings us to one of the greatest inventions of the 1970s, high fives and hacky sacks notwithstanding: acceptable risk.

Henceforth, risk would no longer be considered in isolation but rather in relation to what it bought. For every modern advantage there existed an acceptable number of dead people to pay for it. It became a question of costs and benefits. As Robert N. Proctor writes in his book Cancer Wars, suddenly risk “is not something to be feared but to be embraced: risk is inevitable, necessary, even desirable in so far as it is ‘potential profit.’ . . . This is the entrepreneurial ethic writ large.” It was beginning to smell a lot like Reagan. 

There was an unpleasant cynicism to the new risk calculus, but its acceptance was made easier by the release of the Rasmussen Report in 1975. The Rasmussen Report was a reactor safety study that had been ordered three years earlier by the Atomic Energy Commission in anticipation of the renewal of the Price-Anderson Act. Its main contribution was to familiarize the public with the concept of probability.

Nuclear meltdowns might sound frightening, for instance, but in reality the risk of dying from one was six orders of magnitude lower than that of dying in a car accident, four orders of magnitude lower than that of being killed by a falling object, and three orders of magnitude lower than that of being killed in a tornado.

The Rasmussen Report was later found to have serious methodological problems and was disavowed by the Nuclear Regulatory Commission several years after its release, a mere three months before the meltdown at Three Mile Island, but by then we’d already grown rather attached to this new mode of thinking. Theoretically, any risk could be lived with so long as it was marginally less than slipping on a bar of soap.

What constituted an acceptable level of risk, however, remained debatable. How many dead people were you willing to tolerate for any given modern amenity? A complicated question. And one that could only be answered by . . . experts. 

Thankfully, the risk expert business was thriving. By 1957 the insurance industry had birthed the first professional organization dedicated to actuarial risk assessment for non-life insurance (a foretaste of the probabilistic thinking that would later come to dominate evidence-based medicine). In the 1960s and 1970s an entire cadre of risk experts emerged to cope with the worst-case scenarios introduced by the nuclear industry, where a full-on, Level 3 probabilistic risk assessment (to gauge the impact of a meltdown on the surrounding populace) could require more than 100 person-years of analysis.

With the meltdown of the Three Mile Island reactor in 1979, probabilistic risk assessment (PRA) became a part of the federal plant licensing process. The following year saw the establishment of the first professional society of risk analysis, and the year after that it began publishing the burgeoning field’s first journal. At that time the society boasted 300 members. Six years later the number had quintupled.

But now a new problem began to emerge, because when it came to what constituted “acceptable risk” even the experts disagreed. As Proctor notes, as late as 1983, 80 percent of industry scientists still believed in carcinogenic thresholds—compared to 37 percent of scientists employed by regulatory agencies. This was the point where expertise ended and politics began. Like so much else, it all depended on where you were coming from.

The transformation of risk from a purely scientific to a quasi-political concept began to be formalized in 1977, when Shell, Procter & Gamble, and Monsanto, together with 130 other chemical companies, assembled an advocacy group called the American Industrial Health Council to lobby Congress about occupational health policy and the regulation of carcinogens.

The AIHC urged Congress to distinguish between scientific and political considerations when weighing the risk of chemical exposure, and in 1980 explicitly suggested that $500,000 be devoted to study, as one scholar put it, “alternative means to elaborate scientific judgments on the quantitative aspects of human risk, especially in the case of chronic diseases”—in other words, ways to put a positive spin on diseases contracted at work.

The money was soon forthcoming, and three years later, at a gala D.C. dinner attended by 24 players from the chemical industry, the National Research Council released what came to be known as the “Red Book,” which laid out a new approach for risk-related policymaking in which scientific analysis was only one part. The second part addressed other factors, like financial considerations, political considerations, and ethics.

The Red Book came as a fulfillment of Reagan’s famous 1981 executive order, which stated that, henceforth, “Regulatory action shall not be undertaken unless the potential benefits to society for the regulation outweigh the potential costs,” and throughout the 1980s and 1990s it was adopted as the new international standard. 

The Red Book provided the framework for the controversy that followed, much of which took shape around the terminology—like “hazard” and “exposure”—because it was the product of these two values that ultimately determined the “dose,” and therefore the risk. Depending on which value received greater weight, different assessments of the same risk could vary dramatically. Industry folks liked to claim that the size of the hazard didn’t matter if the exposure was zero. Sure, nuclear reactors were hazardous, but so long as they never melted down, what was there to worry about? 

Environmentalists and public safety advocates, naturally, felt otherwise. Because what exactly was meant by “exposure”? Exposure of whom? For a long time, for instance, the benchmark for the risk of chemical exposure was the Vitruvian man—a 155-pound working male. But the body of a 155-pound working male was actually rather different from that of a 110-pound woman—and depending on the epigenetics even similar body types could react to the same exposure differently.

What’s more, studies were beginning to show that not only was the fetus unusually sensitive to chemical exposure during certain periods of development, but the effects of the exposure sometimes couldn’t be seen for decades.

The same sort of debate took shape around what was meant by “cost” and “benefit.” Industry advocates tended to emphasize the short-term costs to themselves (of pollution regulations, for instance) and the short-term benefits to society (of cheap doohickeys), whereas environmentalists and public safety advocates emphasized the long-term costs to society and the short-term benefits to industry.

Not all of these values were equally easy to calculate, and since environmentalists and public safety advocates tended to focus on those that weren’t and industry advocates tended to focus on those that were, the latter always had the advantage. 

A similar asymmetry could be seen in the values aligned with each of these positions. As Proctor notes, in the view of environmentalists and public safety advocates the individual body was complex, fragile, passive, and in need of constant protection, while industry advocates viewed the individual body as active, virile, resilient, and more than capable of detoxing whatever chemicals it encountered and repairing genetic damage as it occurred.

Put more simply, one side thought society was strong and the body was weak and the other thought the opposite. The disconnect went as deep as evolution itself: in the environmental view, synthetic chemicals were a radical departure from the threats our bodies had evolved to handle, whereas in the industry view they were mere variations on threats that had been around forever.

Ultimately, industry always came out ahead, not just because it had more money and its stated costs and benefits were more calculable, but because its message was positive and paired better with what we wished were true. With the deregulation that followed, the job of managing risk became increasingly decentralized. Like never before, risk management became a matter of “personal responsibility.”

With this, the door was opened to an entirely new class of products expressly designed to help the consumer manage risk: personal fallout shelters, 9-volt smoke detectors, retractable seat belts, residential alarm systems, virus protection software, school shooter panic buttons, as well as a deluge of new insurance products and a cacophony of professional expertise on how to avoid heart attacks and raise healthy children.

The age of disaster capitalism was upon us, spelling an end to what one scholar has called “the last strongholds of vernacular risk culture: the house, yard and nursery.” Henceforth you couldn’t even be an expert in your own home. This was great for, say, parenting magazines. Maybe not so great for the mental health of parents. 

The final stroke arrived with the disgrace of the professional expertise for which we’d traded our more homely, vernacular ways of knowing, as reports began to appear claiming that large numbers of scientific studies—even in big-time journals like Science and Nature—could not be replicated. Meanwhile the reversal of previous findings about what was safe and what wasn’t had become routine.

The day after you finally threw out all your BPA water bottles, for instance, and replaced them with new, safer, more expensive BPA-free models, a study would appear claiming that BPA substitutes were just as dangerous as BPA. All of which left you wondering whether the entire thing wasn’t some kind of scam cooked up by the water bottle people.

Exhaustion set in. Defeat. “Living in a world risk society,” Beck wrote, “means living with ineradicable non-knowing.” It was as if we had circled back to a more primitive world in which the reasons for things could not be known—except this time lacking even the meager consolation of blaming our misfortunes on the disfavor of the gods. 

Would we have chosen this? If the full picture of all that we’d have to sacrifice for modernity’s delights had been made abundantly clear? Or were we just being led along by our appetites?

Probably we were just being led along by our appetites. And probably we would have chosen this path regardless. History, it seemed, was just a long, haranguing exposition of human nature’s follies. And history would carry on so long as human nature remained what it was. 

In the meantime we could still try to console ourselves with the wily promise of science and technology, in the hopes they would, per Zangger, keep us one step ahead of the risks that they themselves created. 

__________________________________

From The Sensitives: The Rise of Environmental Illness and the Search for America’s Last Pure Place. Copyright © 2020 by Oliver Broudy. Reprinted by permission of Simon & Schuster, Inc.
