Going viral

By Laura H. Kahn | January 17, 2012

We’ve been lucky. The avian influenza (H5N1) virus that first emerged in Hong Kong in 1997 — killing six of the 18 people it seriously sickened — has not acquired the ability to spread easily from person to person. Virtually all of the reported cases have involved contact with infected birds or bird products. Stopping that first outbreak thus required depopulating all of the chicken farms and poultry markets in the region — no small project.

The virus resurfaced in 2003 and, since then, has killed 339 of the 576 people known to have been infected, making it an astonishingly deadly microbe with an estimated mortality rate of around 60 percent. To put this into perspective, the 1918 influenza virus, which killed somewhere between 50 and 100 million people worldwide, had a mortality rate of between 2 and 3 percent. And typical seasonal influenza epidemics have mortality rates of less than 0.1 percent.
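For readers who want to check the arithmetic, here is a minimal sketch (not part of the original article) of how such case-fatality figures are computed from reported counts; the only inputs used are the 339 deaths and 576 confirmed cases cited above.

```python
# Case-fatality arithmetic behind the figures quoted above (illustrative only).
# Inputs: confirmed H5N1 cases and deaths as cited in the article.
h5n1_deaths = 339
h5n1_cases = 576

case_fatality_rate = h5n1_deaths / h5n1_cases  # deaths per confirmed case
print(f"H5N1 case-fatality rate: {case_fatality_rate:.1%}")  # ~58.9%, i.e. "around 60 percent"

# For comparison, the benchmarks the article cites (restated, not derived here):
# 1918 pandemic flu: roughly 2 to 3 percent; typical seasonal flu: under 0.1 percent.
```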

Through H5N1, nature could have devastated humankind — but didn’t. It has been 15 years since the virus first emerged in human populations, and, fortunately, it has not evolved into a civilization-ending scourge. That’s why we’ve been lucky.

Unfortunately, with all the hubris of the heroes in ancient Greek tragedies, scientists have decided to tempt the fates by creating the mother of all virulent influenza viruses — deadlier than smallpox (which had a mortality rate around 30 percent) and as communicable as a typical seasonal flu.

In September 2011, Ron Fouchier, of Erasmus Medical Center in the Netherlands, presented his research at a European Scientists Fighting Influenza conference in Malta. There, he described how he and his colleagues induced mutations in the H5N1 virus by forcing its transmission from ferret to ferret, repeating the process 10 times. The result: The deadly virus acquired the ability to become airborne, transmitting infection as efficiently as seasonal flu. The virus reportedly killed a startling 75 percent of the ferrets it infected.

The experiments were conducted by Fouchier in the Netherlands and by Yoshihiro Kawaoka of the University of Wisconsin, Madison, and the University of Tokyo’s Institute of Medical Sciences. Both scientists were funded by the National Institute of Allergy and Infectious Diseases (NIAID); however, Fouchier’s research was performed by subcontract under Adolfo Garcia-Sastre’s NIAID-funded influenza center at the Mount Sinai School of Medicine in New York City. In both cases, Institutional Biosafety Committees (IBCs) approved the work. As Fouchier concluded the presentation of his creation in Malta, he said, “This is very bad news indeed.” That’s putting it mildly.

The response, predictably, has been outrage from the public and appeasement from the scientific community. Last month, after much deliberation, the US National Science Advisory Board for Biosecurity, overseen by the National Institutes of Health (NIH), asked the journals Science and Nature not to publish “experimental details and mutation data that would enable replication of the experiments.” This was the first time in US history that a government advisory panel had asked journals to withhold experimental details from biomedical research. While it’s unlikely that terrorists would have the very specific knowledge or skill set needed to replicate this research, experiments of this kind are, even in the tightest of biocontainment facilities, inherently risky and intricate. Accidents can and do happen.

The controversy even prompted Secretary of State Hillary Clinton to speak at the Seventh Biological and Toxin Weapons Convention Review Conference in Geneva, Switzerland, in December. She spoke about the importance of maximizing the benefits of biomedical research while minimizing the risks and described a new transparency initiative. Meanwhile, a January 7 editorial in The New York Times argued, “In the future, it is imperative that any such experiments be rigorously analyzed for potential dangers — preferably through an international review mechanism, but also by government funding agencies — before they are undertaken.”

It’s a good suggestion, and not a new one: the recommendation to review experiments before they are funded was already made in 2004, in the National Academy of Sciences report “Biotechnology Research in an Age of Terrorism.” That report listed seven “experiments of concern” — also known as the “seven deadly sins of biomedical research” — in which microbial agents could be altered for malevolent purposes.

The seven experiments of concern are those that would:

  1. demonstrate how to make a vaccine ineffective
  2. confer resistance to antibiotics or antiviral agents
  3. enhance a pathogen’s virulence or make a non-virulent microbe virulent
  4. increase transmissibility of a pathogen
  5. alter the host range of a pathogen
  6. enable a pathogen to evade diagnostic or detection modalities
  7. enable the weaponization of a biological agent or toxin.

The NIH adopted some of the recommendations of the National Academy of Sciences report, such as creating the National Science Advisory Board for Biosecurity, but it did not implement others, such as expanding the mandates of the IBCs to include reviewing the dual-use potential of proposed experiments. In the absence of a dual-use review mandate, the IBCs approved the controversial influenza H5N1 experiments. However, it’s unclear whether expanding the IBCs’ mandates to include dual-use research oversight would even have improved the status quo: Not only are institutions not actually required to have IBCs, but previous evidence suggests that IBCs lack the expertise and time to properly evaluate the dual-use potential of research proposals.

So what should be done?

The NIH needs to implement the “Biotechnology Research in an Age of Terrorism” report’s recommendation to initiate a review process whenever research meets any of the criteria of the seven experiments of concern. Were that the case, evaluators would have noted that Kawaoka’s experiment clearly falls into categories No. 4 and No. 5 on the list. Fouchier’s research, however, was funded as a subcontract under a different NIAID grant, so it’s not clear what kind of scrutiny it would have received. That’s why all research that meets one or more of the criteria of the seven experiments of concern should undergo a dual-use evaluation before it receives NIH funding, whether that funding is provided directly or indirectly through a subcontract. Mopping up the mess by censoring publication ex post facto is not an acceptable strategy.

In a recent editorial in Nature, Peter Palese, a virologist at the Mount Sinai School of Medicine, argued that censoring the H5N1 research is counterproductive to public health. He recalled the controversy his own work generated in 2005, when he and his colleagues resurrected the deadly 1918 influenza virus from the lung tissue of American soldiers who had died during that pandemic. Palese may be right about censorship; the ability of scientists to build on each other’s work is crucial. But he is wrong to compare his experiment to the H5N1 experiments. If one carefully studies the criteria of the seven experiments of concern, it’s evident that Palese’s experiment — unlike Fouchier’s and Kawaoka’s — doesn’t fit the bill. Palese’s work may not have helped America’s image politically, but it didn’t enhance virulence, increase transmissibility, or commit any of the other sins of biomedical research. And, unlike with Fouchier’s and Kawaoka’s work, the National Science Advisory Board for Biosecurity allowed Palese’s research to be published in full.

Ultimately, the debate shouldn’t be about censorship; it should come down to whether the controversial influenza H5N1 research should have been done at all. The scientists in this case wanted to see if they could make the H5N1 virus more communicable with a few genetic tweaks. But just because an experiment can be done doesn’t mean that it should be done. Creating this doomsday virus, even in a secure laboratory, is a clear example of an experiment that should not have been done. The research was not worth the risk to life on a mass scale posed by something as simple and as common as human error in the lab. Engineering a highly communicable, deadly virus yields far more peril than scientific insight. And, if the NIH had had a better dual-use review system in place, the experiments likely would not have been funded.

Society places great trust in biomedical research, and the vast majority of the time that trust is warranted. But when the research establishment breaches this trust by funding experiments of highly questionable utility, society has a right to cry foul and demand that its safety not be put in jeopardy. The NIH should implement a mandatory review of all research proposals that meet the criteria of the seven experiments of concern. If the risks outweigh the benefits, as they did in this case, then such proposals simply shouldn’t be funded.

