Grassroots efforts to impede bioterrorism

By Stephen M. Maurer | March 5, 2009

Governments trying to prevent the misuse of biological research face considerable challenges. The technologies needed to create biological weapons are freely available in academic laboratories and biotech companies around the world, and researchers constantly trade information through a complex web of open science networks and markets. Trying to identify, much less monitor or control, these activities seems hopeless. How, then, should society keep biological information and technologies out of the hands of terrorists?

The old Cold War methods tried to control science and markets from the outside. Given the power of these institutions, though, it probably makes more sense to enlist them as partners. That’s the approach my project at the University of California, Berkeley, is taking to make synthetic biology, one of biology’s fastest-growing fields, safer.

Over the past decade, synthetic biologists have used artificial DNA to revive extinct viruses (e.g., 1918 influenza) and to create radically new and reengineered organisms. Shortly after 9/11, researchers and industry began asking themselves what could be done to keep this power away from terrorists. Strikingly, many of the best ideas didn’t require government intervention at all. Last April, synthetic biology’s leading trade association, the Industry Association Synthetic Biology (IASB), held a meeting in Munich to select the best ideas and organize grassroots initiatives to launch them. Now, nearly a year later, these projects are starting to take off.

Keeping terrorists out of the market

The simplest and most immediate threat is that terrorists could use synthetic biology to manufacture otherwise unobtainable pathogens such as 1918 influenza or smallpox. Fortunately, most companies that make synthetic DNA already screen incoming orders against the so-called select agent list of pathogens. The problem is that different companies have different screening procedures, and a few companies do no screening at all. Addressing these problems with an international treaty would take decades. Can a market-based solution do better?
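
In practice, this kind of screening is a sequence-comparison problem: each incoming order is checked against reference sequences from the select agent list, and close matches are held for human review instead of being shipped automatically. The sketch below illustrates the general idea in Python; the 40-base window, the simple shared-substring test, and the function names are illustrative assumptions, not any company’s actual pipeline, which would more likely rely on alignment tools such as BLAST and curated reference databases.

```python
# Illustrative sketch only: a toy screen that holds an order for human
# review when any 40-base window of the ordered sequence appears verbatim
# in a select-agent reference sequence. Real pipelines rely on alignment
# tools and curated databases; the window size and function names here
# are assumptions for illustration.

WINDOW = 40


def windows(seq, size=WINDOW):
    """Yield every contiguous window of `size` bases in `seq`."""
    seq = seq.upper()
    for i in range(len(seq) - size + 1):
        yield seq[i:i + size]


def screen_order(order_seq, references):
    """Return the reference agents that share a window with the order.

    `references` maps an agent name to its reference DNA sequence.
    """
    hits = set()
    for name, ref in references.items():
        ref = ref.upper()
        if any(w in ref for w in windows(order_seq)):
            hits.add(name)
    return sorted(hits)


def disposition(order_seq, references):
    """Decide whether an order can ship or must wait for expert review."""
    hits = screen_order(order_seq, references)
    return ("HOLD for human review: " + ", ".join(hits)) if hits else "CLEAR"
```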

The executives who run the new gene synthesis companies readily admit that customers want them to screen and are often indignant when they don’t. The fact that most companies already screen is encouraging: However imperfectly, consumers’ preferences must be getting through! The task now is to make these market signals even stronger. In practice, this means designing shrewd initiatives so that customers will know immediately when a supplier’s screening program is inadequate and, if necessary, take their business elsewhere.

IASB has made this strategy its top priority. The basic idea is to develop a seal of approval that customers can consult. This, in turn, will require an agreed code of conduct that defines responsible screening. IASB’s current draft of a code is ambitious and will require most companies to upgrade their programs. Members expect to finalize the document in July.

Improving screening software

Existing screening software routinely flags large numbers of harmless genes that human experts must then examine before an order can be filled. This makes it impractical to screen for threats beyond the relatively short list of select agents. The day is coming, though, when companies will also need to screen for short DNA sequences (“oligos”) and/or genetically engineered threats that look radically different from any known organism. Some observers have argued that government should set standards for this next-generation screening software, or even try to write it. But nobody really knows what such a program would look like.
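
Screening oligos is harder still, because a dangerous gene can be ordered in pieces only a few dozen bases long, each of which must be recognized on its own. One way next-generation software might approach the problem is to pre-index the reference sequences by short k-mers and check every ordered oligo, along with its reverse complement, against that index. The sketch below is a speculative illustration of that idea, not a description of any existing product; the k-mer length and data structures are assumptions.

```python
# Speculative sketch: pre-index select-agent reference sequences by short
# k-mers so that even oligo-length orders can be checked quickly. The
# k-mer length (20) and the data structures are illustrative assumptions.

K = 20
COMPLEMENT = str.maketrans("ACGT", "TGCA")


def reverse_complement(seq):
    """Return the reverse complement, since either strand could be ordered."""
    return seq.translate(COMPLEMENT)[::-1]


def build_kmer_index(references):
    """Map every k-mer in the references to the agents that contain it."""
    index = {}
    for name, seq in references.items():
        seq = seq.upper()
        for i in range(len(seq) - K + 1):
            index.setdefault(seq[i:i + K], set()).add(name)
    return index


def screen_oligo(oligo, index):
    """Return agents sharing any k-mer with the oligo or its reverse complement."""
    hits = set()
    for strand in (oligo.upper(), reverse_complement(oligo.upper())):
        for i in range(len(strand) - K + 1):
            hits |= index.get(strand[i:i + K], set())
    return sorted(hits)
```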

Fortunately, gene synthesis companies have learned a great deal from operating first-generation screening programs. The first step to improving screening software, therefore, will be to break down confidentiality so that companies can pool what they know. Several IASB members have already agreed to get the ball rolling by publishing their experiences. Thereafter, Berkeley and IASB plan to establish a password-protected site where companies can share information with each other and, in many cases, the public.

The biggest technological challenge will be to create a comprehensive database that says whether genes are harmful (“virulent”) or benign. This is a large undertaking and will almost certainly need government support in the long run. For now, though, IASB has found a way to get started. Member companies already pay employees to search the scientific literature for evidence of virulence each time their screening software identifies suspicious genes. This information is usually discarded, but preserving and sharing it would save money for everyone. (Corporate open-source software collaborations routinely follow this logic when they share the work of developing web servers and other projects.) The key is to create a state-of-the-art bioinformatics repository where this sharing can take place. IASB’s Markus Fischer is working with Berkeley to make this open source biosecurity project a reality. We expect to have a basic version of the site, known as VIREP, online by early 2010. Several IASB members have already agreed to donate virulence data.
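
Much of the value of such a repository would come from agreeing on a common record format: if every company writes up a flagged gene the same way, the literature searches they already pay for become reusable by everyone else. As a rough illustration, a deposited annotation might look something like the record below; the field names here are assumptions for the sake of the example, not VIREP’s actual schema.

```python
# Illustrative sketch of a shared virulence annotation, the kind of record
# a company could deposit after an expert reviews a flagged gene. The field
# names are assumptions for illustration, not VIREP's actual schema.

import json
from dataclasses import asdict, dataclass, field


@dataclass
class VirulenceAnnotation:
    accession: str                 # public sequence identifier, e.g. GenBank
    gene_name: str
    organism: str
    verdict: str                   # "virulent", "benign", or "uncertain"
    evidence: list = field(default_factory=list)   # literature citations
    reviewer: str = ""             # depositing company or curator
    review_date: str = ""          # ISO 8601 date of the expert review


def to_json(record):
    """Serialize a record for deposit in a shared repository."""
    return json.dumps(asdict(record), indent=2)
```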

Experiments of concern

Improved screening will provide a reasonable hedge against synthetic biology’s most immediate threats. In the long run, though, it is worth asking whether today’s state-of-the-art biology experiments could inadvertently make advanced weapons more lethal or easier to acquire. Deciding which of these “experiments of concern” should go forward will be difficult, not least because most projects offer benefits as well as dangers. The deeper problem, at least for now, is that no one really knows what an experiment of concern looks like or can write down clear principles for deciding which ones should proceed. Common law courts the world over routinely make case-by-case decisions on the theory that principles will eventually emerge. Synthetic biology will almost certainly have to follow the same strategy.

My Berkeley project is working to develop an online portal where scientists, Institutional Biosafety Committees, and other interested parties can get quick, expert, and impartial advice about whether “experiments of concern” should go forward. The project’s information technology manager, Jason Christopher, has spent much of the past year working with a commercial developer to write software for the portal and is now testing it. (The software will be freely available.) Early support from the synthetic biology and security communities has been impressive, with two dozen experts already volunteering to serve as reviewers.

The acid test will be seeing how many community members ask for advice. Here too, there is reason to be optimistic. The principle of seeking sanity checks before embarking on experiments of concern is already well-established in synthetic biology, most notably in Science’s 2005 decision to obtain informal review from CDC director Julie Gerberding before running a paper announcing that the 1918 influenza virus had been artificially synthesized. Furthermore, two anonymous scientists have already agreed to submit their experiments to the portal as part of a trial run. Following this trial, the portal’s web site will begin taking inquiries from scientists and biosafety committees all over the world. Each new query will be examined by a panel of three volunteers including at least one biologist and one biosecurity expert. Most inquiries should receive a detailed response within two weeks.

What can government do?

It is not too soon to ask how government can coordinate and build on these grassroots efforts. The simplest option would be for government to make some voluntary measures mandatory. For example, U.S. authorities are considering draft regulations that would require federal grantees to follow the existing norm against doing business with gene synthesis companies that fail to screen orders. In some cases, though, it may also make sense for government to defer some regulation until the community has had a chance to find out which new ideas (notably the portal and next-generation screening software) work and which don’t. Finally, government should consider imaginative partnerships with existing industry efforts so that projects such as VIREP can go even further.

Synthetic biologists’ efforts to improve security at the community level are just beginning. The important thing, though, is that the community has turned the corner from talk to action. More than most people, synthetic biologists know that progress is impossible without trial, error, and a willingness to fail. Government could do much worse than to follow their example.




