Defense officials have dreamt up a range of uses for neuroscience research aimed at monitoring brain function and affecting brain performance in humans and animals alike. Yet, the impact of this research is just beginning to become apparent. Georgia Tech researchers Jonathan Y. Huang and Margaret E. Kosal investigated the many implications of this research in "The Security Impact of the Neurosciences." Below, Kosal and three colleagues delve further into the implications of these trends in neuroscience research and discuss how to stem the technology's misuse.
Jonathan Huang and Margaret Kosal have done an excellent job in carefully framing this discussion. They courageously include mention of how "existing international agreements are inadequate to address the security implications of neuroscience research." The very notion of international agreements leads to an implicit belief that traditional arms control approaches could be relevant to this domain, when in fact they are irrelevant. The pace of development of the technical areas--neuropharmacology, neuroimaging, and brain-machine interactions--that Huang and Kosal chose to address will outpace the hysteresis of the ponderous and arcane processes of traditional security control and disarmament.
The contents of a soon-to-be-released National Academies study, "Military and Intelligence Methodology for Emergent Neurophysiological and Cognitive/Neural Science Research in the Next Two Decades," undertaken by a 19-member panel that I chair, will complement the scope that Huang and Kosal propose. The study does not address arms control issues directly, yet it implicitly offers ample evidence that in the next 20 years, the pace of development of neuroscience technologies related to the military and intelligence communities will swamp traditional arms control measures. Even in the domain of biomedical ethics, the one area that is most relevant to arms control, the rights of individuals are bound up in cultural variability and are likely to be out of reach for agreements.
In discussing these issues, it is possible for this roundtable's dialogue to stray toward a "liberal versus conservative" view of science, the brain, and the world. That would be unfortunate, and would likely result in a one-sided discussion, as people who work daily in the laboratories and hospitals doing emergent neuroscience clinical research would surely leave the discussion. The ethics issues at stake are real and worrisome; the chances of agreement on solutions are zero. Yet, the chances of agreement on the challenges are high. The pace of global research in the three areas under discussion (and additional areas discussed in detail in the Academies' report) is even faster than is reported today in the mainstream press, which sinfully exaggerates what is really going on. Another point of agreement will be the really worrisome work being done outside the public eye.
We need a fresh sort of discussion to address these challenges. Four noncontroversial matters make the case for this new approach:
Jonathan Huang and Margaret Kosal have prepared a helpful typology of neuroscience applications to national security functions. Although there is much to be said concerning each of the three categories they identify, I will confine this comment to neuropharmacology.
Calmative agents are in theory an attractive approach not only for national security purposes but also for domestic policing. However, the time lag between the release of a substance and its effect on targeted individuals makes currently available opiates such as fentanyl poor candidates for such uses. It is not well understood, for instance, why the group holding the Moscow Theater hostages did not react when it became apparent that something was going on in the confines of the building, but perhaps they were so overcome with exhaustion that they did not register events quickly enough. Whatever the explanation, there is no assurance that future hostage-takers bent on suicide could be managed in this way. Governments will need to develop a faster-acting agent before using calmatives in this way is an attractive option. Also, the open-air release of an agent is unlikely to be effective due to the dispersal of the aerosol, limiting the situations in which a calmative can be an effective non-lethal weapon.
Nor should one assume that even effective calmatives would truly stem violence. Reaction to their use might well stimulate aggressive reprisals by adversaries, so that ultimately in the course of a conflict there would be no reduction in violence. We should resist the illusion that there can be any technological fix to the hostilities inherent in a zone of combat.
The enhancement possibilities mentioned by Huang and Kosal should prompt us to recall that there must be a first user, and that that first use should be considered a clinical experiment. Governments have historically introduced performance-enhancing drugs to soldiers based on little evidence of their efficacy as compared to the tradeoffs, e.g., artificially extended wakefulness versus impaired judgment and reflexes (cocaine, caffeine, amphetamines). The effects of substances like modafinil, a stimulant, outside the laboratory need to be carefully examined in environments that simulate potential combat situations. The larger political and social question is how much enhancement future warfighters can legitimately be expected to accept as part of their preparation for service, especially since it will be some time before the long-term effects of neuropharmacology can be understood.
Not mentioned by Huang and Kosal are the possibilities that neuropharmacology may present for enhancing the accuracy and efficiency of interrogation procedures. In at least a few cases, the United States has used techniques that have conventionally been considered torture to obtain information from persons detained as suspected terrorists. A few days before these comments were written, documents surfaced in a U.S. Senate hearing on the treatment of detainees that tie techniques used against American POWs during the Korean War by Chinese interrogators to an interrogation manual used by the U.S. military at Guantanamo Bay. The U.S. manual was based on a 1957 analysis of alleged "brainwashing" by the Chinese that resulted in false confessions of crimes by U.S. soldiers.
Public outrage about such revelations would seem to justify interest in chemical approaches to interrogation as an alternative to overtly violent techniques, for example, the use of a version of oxytocin to stimulate a trusting response in an individual. Research suggests that such an effect could be evoked from artificially stimulated neurochemical production or from drugs. It is a good bet that there will be attempts to develop such substances. In an odd way we may be about to circle back to the 1950s notion that LSD or other hallucinogens could be "truth serums." We need to tread carefully here. Although less overtly violent than techniques like water boarding, directed chemical changes in human consciousness under duress are subtly powerful invasions of the personality. As we well know from the experience with atomic weapons, once these genies are released under the acute duress and justification of conflict they are hard to put back.
Thanks to my colleagues for furthering this conversation. I look forward to the results of the National Academy of Sciences study that Christopher Green is chairing.
The intersection between the cognitive sciences (of which neuroscience may be seen as a subset) and national security offers many puzzles--scientific, ethical, policy, and practical. Jonathan Moreno's comments highlight the critical need for more research into the underlying physiological mechanisms of proposed neuropharmacological chemicals and their pharmacokinetics (how and how fast they are distributed, absorbed, processed, and excreted by the body). It is a challenge to understand the effect of a pharmacological chemical across a wide population of individuals, which would complicate using such an agent to deal with a notional hostage or insurgent situation. An aerobically fit 25-year-old male is likely to be affected very differently than a 75-year-old man, a 32-year-old pregnant woman, or an 8-year-old girl. The challenge and cost increase if each neurochemical has to be tested individually, and can be further confounded by the condition of subjects, such as increases in adrenaline or exhaustion, to which Moreno alluded. These challenges highlight the need for more basic research into the chemistry and biology of the brain and for the development of predictive models to understand and predict the underlying phenomena. To realize such research, robust funding for research in the basic sciences is critical.
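A rough illustration of why pharmacokinetics complicates population-wide dosing: in a minimal one-compartment model with first-order absorption and elimination, plasma concentration at a given time depends strongly on a subject's volume of distribution and elimination rate. The sketch below is purely hypothetical--the parameter values are invented for illustration and correspond to no real calmative agent or dosing regimen.

```python
import math

def concentration(t, dose_mg, v_dist_L, ka, ke, bioavail=1.0):
    """Plasma concentration (mg/L) at time t (hours) for a one-compartment
    pharmacokinetic model with first-order absorption rate ka and
    first-order elimination rate ke (both per hour)."""
    if abs(ka - ke) < 1e-9:
        # Limiting case where absorption and elimination rates coincide.
        return (bioavail * dose_mg * ka * t / v_dist_L) * math.exp(-ka * t)
    return (bioavail * dose_mg * ka) / (v_dist_L * (ka - ke)) * (
        math.exp(-ke * t) - math.exp(-ka * t)
    )

# Hypothetical subjects receiving the same absorbed dose: a smaller volume
# of distribution and slower elimination both raise exposure substantially.
subjects = {
    "fit 25-year-old male": {"v_dist_L": 45.0, "ke": 0.35},
    "75-year-old man":      {"v_dist_L": 35.0, "ke": 0.15},
    "8-year-old girl":      {"v_dist_L": 15.0, "ke": 0.40},
}
for name, params in subjects.items():
    c = concentration(t=0.5, dose_mg=2.0, ka=4.0, **params)
    print(f"{name}: ~{c:.2f} mg/L at 30 minutes")
```

Even this toy model yields a concentration for the child several times that of the fit adult male at the same exposure, before any differences in receptor sensitivity or metabolism are considered.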
The Defense Department has invested substantially in the scientific research community, including basic research support to academia. According to data publicly available through the 2008 Presidential Budget Request, the Defense Advanced Research Projects Agency (DARPA) invested at least $372 million in the cognitive sciences in 2007, including areas such as cognitive computing and bio-revolution. Other basic research investments in cognitive sciences across Defense included: $13.3 million for the Army, $13.9 million for the Air Force, and $10.4 million for the Navy in 2007.
Outside of Defense, the two significant funders of basic research in the cognitive sciences are the National Science Foundation (NSF) and the National Institutes of Health (NIH). The NSF, with a nearly $6 billion budget in 2007, maintains active grant programs in perception, action, and cognition; cognitive neuroscience; neural systems; and collaborative research in computational neuroscience. The NIH reported $1.8 billion in appropriations in 2006 for its two institutes most relevant to brain research: the National Institute of Biomedical Imaging and Bioengineering and the National Institute of Neurological Disorders and Stroke. It's less clear how other non-defense federal agencies appropriate basic research funding in the cognitive sciences.
Methods for inter- and intra-agency research coordination, as well as robust public oversight, are needed across cognitive sciences research investment. Public oversight is particularly notable--and challenging--as it applies to potential applications of ethical concern, whether they are security related or not. When talking about the potential implications of emerging sciences such as the cognitive and neurosciences, partisan or political overtones can sometimes infect a discussion or policy. To counter this tendency, there is a critical need to ground possible scenarios in technical viability. Christopher distinguishes between "achievable" and "unachievable" scientific goals, and this is one area that greatly concerns me.
Compared to biotechnology and even nanotechnology, civilian and military applications of cognitive sciences are nascent. Underlying some discussions of the applications of nanotechnology have been notional scenarios that fall into the "unachievable" category, e.g., concerns about self-replicating abiotic molecular assemblers and calls for international treaties limiting their production and use. These scenarios speak to the critical need for scientists and engineers to be involved in policy formation and public dialogue--although stating that to readers of the Bulletin is preaching to the metaphorical choir.
I would like to reflect on the approaches taken by my colleagues, Jonathan Moreno and Margaret Kosal, in the beginning of this discussion. Specifically, they have addressed what they contend to be two important core "rate-limiters" in the future progress of neuroscience research that could have military application: funding and ethics. While I do not disagree with anything that Jonathan and Margaret say in principle, my view is that these two core issues make it clear that this discussion must become global. If not, we will miss the opportunity to address the fundamental issue: what are the military applications of neurosciences? The research conditions in U.S. labs are well known, and much if not most of the important militarily relevant neuroscience research will take place overseas. If this discussion becomes global and we engage with non-U.S. researchers, then we can really begin to address this question.
While Jonathan, Margaret, and I are all in violent agreement about the basics, we need to discover the subtle differences in our opinions to help us learn. To do this, I will purposely overstate in tone (not in facts) two key issues: First, I stand by my initial belief that the pace of discovery in the neurosciences is unlikely to accelerate faster than it is today and will not happen preferentially in the West. Second, we should not assume that the military applications of this research can be influenced by incremental additional financial support.
The pace of discovery in neurosciences in the West is widely believed to be driven by stable and moderately well-funded Centers of Excellence. The research with applicability to military use is well known and well publicized. Those of us with access to information about classified programs are underwhelmed by the lack of significant novel research. In fact, as part of the National Academy of Sciences commission I head, the 16 members of the committee, 6 staff, and 12 peer reviewers humbly received briefings and work product from 25 military and intelligence agencies, national laboratories, universities, nongovernmental organizations, and other private institutions doing neurosciences research directly or indirectly involved in work of potential military value, and reviewed hundreds of peer-reviewed publications. The committee decided that it had no scientific or substantive reason to write a classified annex. (The peer review of the commission's final report is complete and the report will be released as "Emerging Cognitive Neuroscience and Related Technologies.")
From the information that is accessible, we know the ongoing work is relatively well supported, as Margaret described. Should we expect any significant increase in a few years for "basic" research? The bigger problem is that the military and intelligence communities cannot understand the implications of any such research. As shocking as it may be, fewer than a dozen persons in intelligence and military constituencies understand the neurosciences involved, and they are happy to say so--hence, the plethora of outside reviews on the subject in the past two years. And government scientists are not well placed to accelerate application development from the basic neurosciences. Margaret has it just right: Only some form of increased exchange and communication between academia and the government in this area will work.
Additional funding alone won't help, in any event, for a separate reason. The goal of the basic neurosciences funded today is for the most part hypothesis testing, not hypothesis generating. Grants and awards are also appropriately constrained by the Health Insurance Portability and Accountability Act (HIPAA) and ethics. As Jonathan wisely pointed out, the "first experiments" in the neuroscience arenas most talked about--psychopharmacology and aids to interrogation--have to wait for approvals and reviews. The drugs and compounds being considered today are older, and the ones we might expect to be useful to modify emotional state will not be tested on humans in any context of reasonable funding or ethics. It is not in the realm of possibilities that next-generation drugs with safe and efficacious properties for military use will be developed absent a huge increase in funding. I am happy to go on record as stating that for both scientific and ethical reasons I oppose the development of such drugs.
Funding priorities require a solid roadmap that includes research that is certain to meet prescribed goals as contained in the rules of grant applications. From the landscape we have seen, it is unlikely that a disruptive technology could escape from a basic science laboratory. (Clayton Christensen, thank you for teaching us that disruptors by definition are today's technology turned inward.) With careful planning and human-use approvals, it is just as unlikely that a new discovery with unintended consequences will pop out to become a new military application, unless it is looked for systematically--almost (but not quite) an oxymoron. Screening for the unintended applications of a drug is not finding a surprise; it is reading carefully the data from good epidemiology.
It would be productive to turn this discussion. Basic science funding is likely to stay stable, and little serious research will delve into the psychopharmacology of interrogation. Thus, a vector that we may wish to explore is the new data from several global laboratories indicating that culture matters in the decision to use any military application of asymmetric force. Battlefield commanders of all nations hold sacrosanct the right to determine the applications that may cause harm to those outside the bandwidth of a lethal dose 99+ weapon and generally don't intend to develop for use materials that could cause collateral harm to civilians and non-combatants. If governments or scientists were to try to develop a system to pre-screen neuroscientific cognitive manipulators, one that would be HIPAA approved and tested, and robust in its core science, success would be as likely as it was with mines and cluster bombs--meaning not likely. And if we did have such success, our enemies of the future would not care.
I'd like to change the topic for a moment. What strikes me about the reaction to the National Research Council report ("Emerging Cognitive Neuroscience and Related Technologies") in the past few days is the attention that is being paid to the intersection of neuroscience, conflict, and culture. Both traditional philosophers and pioneers in neuroscience have long suspected that there may be differences both in the way people from different cultures process information and in the way they understand what it is to process information. The former point is a bit more straightforward and could be attributed to behavioral psychology (a tradition that runs from St. Augustine to B. F. Skinner) or evolutionary biology (including E. O. Wilson's sociobiology). There is no reason to rule out the possibility that powerful new techniques for reading many individual genomes could develop the field of "comparative cultural genomics" so that a physical substrate of these differences could be identified. Whatever degree of accuracy this data could provide would be of enormous interest to both national security officials and diplomats.
The second point calls to mind an M. C. Escher work. Perhaps the way different cultural groups typically understand what they are doing when they are, fill in the blank--experiencing, learning, reflecting, deliberating, praying, or otherwise engaged in some form of reverie--itself has an influence on the way information is processed and stored. What I have called comparative cultural genomics could tell us whether that is the case or not. But if it did, the task would seem to be incomparably more complicated if there were serious doubts about whether our science-based system for understanding neural activity is commensurable with other modes of understanding.
In other words, we might end up right back where we started. The most efficient option might be to "go native" rather than attempt some translation via neuroscience. And the only way to confirm our success would be to get along with the natives when we expect to, and not when we don't expect to. This point was brilliantly made by W. V. O. Quine in his classic, "Word and Object." In the end, our ability to know how others know is a theory, however sophisticated its expression, and what stands behind the theory remains a question of philosophical wonder.
In his latest commentary on culture and the brain, Jonathan Moreno addresses a crucial issue. So far, this discussion has mainly focused on technological advances brought forward by the rapid development in the neurosciences. These developments are impressive and raise all sorts of hopes and concerns. But as pointed out in the recent National Research Council report and in this roundtable, the debate takes place very much before the fact. It is plausible that research into fields such as neuropharmacology, neuroimaging, and brain-machine interactions may be on the verge of major conceptual breakthroughs, but transitioning this research from the lab to the field, be that civil or military, may not be easy or even feasible.
The ultimate lie detector is not right around the corner, drugs that make a person think, work, or fight forever focused and determined are yet to be seen, and the best ways to interact with the world still appear to be mediated via an extended body with control of that action developing out of recurrent embedded practice. Technology makes new things possible, but does it, by itself, change the name of the game? Does the levelling factor really consist of whether a particular neurotechnological interface is situated on the body or in the nervous system? We suspect that the great divide lies somewhere else. Although we do not yet live in a neurotechnological world, we may already be seeing the contours of a neurocosmological world, that is, a world where the notion that "you are your brain" is so commonplace that it appears immediately evident. This, we find, is a relatively new development.
In his 1994 book, The Astonishing Hypothesis, British scientist Francis Crick claimed: "The Astonishing Hypothesis is that 'you,' your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll's Alice might have phrased it: 'You're nothing but a pack of neurons.' This hypothesis is so alien to the ideas of most people today that it can truly be called astonishing." In the decade and a half since, this hypothesis has neither been proven nor disproven, but it has been silently transformed into a truism, making it a logical next step to initiate research into all sorts of new fields such as neuroeconomics, neuromarketing, neuroaesthetics, neurophilosophy, neuropragmatics, neuroenhancing and, indeed, neurosecurity. This move is not primarily about neurotechnology; rather, it is about what we propose to call neurocosmology. It suggests that the brain is becoming a battlefield, not only for drugs, but also for ideas.
To exaggerate just a bit--those who define what the brain is also define who "we" are, who "you" are, and who "they" are. In this logic, it is not surprising that the notion of cultural neuroscience is opening up as a whole new field that is surely about brains, but also about forming and challenging particular understandings of personal, social, and cultural identities. For instance, the discussion of where in the brain the self is located has recently become an arena for challenging the Western notion of the "right" concept of personal and social identity.
Equally, how a soldier or the enemy are viewed and treated is likely to differ depending on whether they are thought of as moral persons with autonomy and responsibility or as emergent properties of behaviour in a vast assembly of neurons. These discussions, which are not primarily about technology, will have a major impact on how we and they discuss, conceptualize, and act out conflicts in the future. We agree with Moreno that framing these discussions in terms of "brains" may, at a basic level, be a problematic detour. However, to the extent that discussions of the fundamental nature of "the human" increasingly take place with reference to brains, it appears, unfortunately, rather unavoidable.
My colleagues have provided much for me to respond to. I'll take up a few ideas: global cognitive science research and development, differentiating offensive from defensive research, and expanding the concept of technical security studies. The three overlap, and each deserves more in-depth consideration than I will present here.
In the United States and the West, cognitive science research funding has been steadily increasing during the last 30 years. Like the expanse of research that falls under the very general category of nanotechnology, the cognitive sciences are not confined to one or two single, narrow disciplines. During the past decade, the concept of "Nano-Bio-Cogno-Info (NBIC) convergence" has ebbed and swelled in popular and social science interest. While strong proponents, such as the National Science Foundation's Mike Roco, have championed the concept, the scientific and technical communities have not formally or informally adopted NBIC convergence. Futurists and some social scientists, both academic and popular, are running with the idea as a way to envision revolutionary technologies. More technically robust consideration of this concept is needed from a science and engineering perspective. Organizational capacity and structure need to be considered as well.
In thinking about the potential security impacts of the cognitive sciences, there is also a need to better understand the funding strategies, the institutions, the normative and ethical conceptions of certain research, and the role of public and private entities and sectors, and to disentangle rhetoric from reality. As Jonathan Moreno noted, this effort must reach beyond our own Western ways of thinking, including disciplinary stovepipes--both inside and outside of science. How these factors affect U.S. and Western pursuits of cognitive science research and application development may not be applicable to non-Western systems. Such efforts are more than purely academic pursuits of knowledge; this can be seen in the Defense Department's Minerva Project, which recognizes the strategic importance of cross-cultural studies for security. Robust models of the impact of cognitive science on international security require analysts to consider complicated and cutting-edge scientific and technical concepts, as those of us in the technical security studies community appreciate. For those of us working with traditional international relations theories and theorists, these questions of emerging security impact are also prime opportunities to test previous models and illustrate the importance and potential of technical security studies.
Directly relevant to the potential strategic role of the cognitive sciences in international security is the question of how one delineates offensive research from defensive research. And what metric is used to make this determination? Within one branch of international relations theory, standard metrics for nuclear and other Cold War technology have been proposed, such as stealth and mobility. Yet, in thinking about the neurosciences, experimental psychology, and human performance technology, it is less than evident what the metrics for differentiating benign research and development from malevolent applications will be. Biotechnology's dual-use conundrum may hint at the difficulty of "binning" advanced cognitive science research and development into offensive or defensive categories. This inherent ambiguity may challenge traditional international security models. It should not, however, be viewed as an impediment or a reason to curtail basic scientific research in the multidisciplinary fields of the cognitive sciences.
The cognitive sciences offer opportunities for new international security initiatives and policies that expand on basic research via social science methods and in the experimental laboratory. Like nanotechnology, the cognitive sciences may present an opportunity for proactive, Track II diplomacy among scientists around the world. This may also represent a means to further expand, in an evolutionary manner, aspects of Cooperative Threat Reduction, in which international parties start and pursue coordinated beneficial work, thereby establishing an international research foundation and a community of research practitioners to facilitate transparency in basic science and engineering. Creation, implementation, and effective execution of such initiatives should incorporate aspects of all three concepts I noted at the start of this piece.
Margaret Kosal made several interesting propositions in her last commentary. One sentence struck me as particularly important and may present an opportunity to integrate three orthogonal threads of this discussion that otherwise indicate a divergence in opinion. She wrote: "For those of us working with traditional international relations theories and theorists, these questions of emerging security impact are also prime opportunities to test previous models and illustrate the importance and potential of technical security studies."
I have argued that it may not be fruitful to investigate traditional arms control models, due to differences in value systems and research methods across cultures. Yet, it is important to at least seek pathways to mitigate improper ethics in neural research. Let me explain why. Jonathan Moreno has emphasized how human understanding might only be provable by existential discernment, not scientific reductionism. Andreas Roepstorff and Sita Kotnis add to this notion by emphasizing the emergent condition of neural research and how transferring laboratory results to military field applications may not be feasible, ethical, or wise. Indeed, as I suggested earlier, breakthrough discoveries will, at least in part, be made outside current Western arms control models. And, it will not be possible to modify these models by engaging foreign governments.
With these thoughts in mind, two parts of Kosal’s idea, the notion of "traditional international relations theories" and of "opportunities to test previous models," will present challenges, since the countries of greatest importance have neither embraced traditional Western theories of power relationships, nor are necessarily using Western research approaches in the neural sciences. Finally, to the extent that humanness is inextricably linked not only to neurons but also to a holographic and quantum consciousness that results in behavior--an idea that we have all agreed on, albeit in different ways--is it even appropriate to attempt to direct global research? Why should scientists attempt to regulate the conduct of research before we understand or agree on each other’s individual values and cultural interpretations of neural behavior, and on differing government rights of control?
To find a way out of the box we have created for ourselves, we need to first decide who wants to talk. To the extent that scientists negotiate agreements on the ethical boundaries of neural science as a way to shape oversight of laboratories, we need to know which states and labs are more likely to change. Outside of the Western world, the goals and the controls on neurotechnology research and dual-use applications are heterogeneous, as they are in other areas of research that have military applications. Thus, this dialogue probably shouldn’t start with governments that are perceived to disregard the importance of individual liberty over a social value.
Can we identify a common organizational structure that exists above or separate from a single government policy arm that could engage in this discussion? Examples might include national academies of sciences. In both the United States and China, for example, the academies of sciences provide valuable, non-binding advice to government policy arms. Recent research reports about torture (completed by the U.S. National Research Council) and the need to address greenhouse emissions (completed by the Chinese Academy of Sciences and Tsinghua University) demonstrate that quasi-governmental organizations will at least engage controversy in their respective nations. What about the other countries--such as Brazil, Russia, Australia, Iran, and South Korea--where seminal work in neurotechnology is underway? Which of these countries should we engage first? If we can get there, we can begin the more fruitful discussion of how to engage them.
The recent New York Times report that Indian courts have begun accepting a version of electroencephalograms (otherwise known as EEGs) as evidence of deception or the absence of deception speaks to the urgency of these discussions and to the importance of this roundtable. A few poorly informed American judges could also naively find this technology informative; certainly the issues concerning its utility are esoteric enough, and few people have been educated about them. Intelligence officials or operatives, operating under the cover of national security, might also decide that the potential gain in information outweighs the uncertainty of these technologies.
Considering the Defense Department's 1950s experiments with LSD and other hallucinogens, such a decision would not be surprising. Neuroscientists need to speak with one voice about the injustices that may be done given the current state of the science, whatever their views about the ultimate prospects for such technologies.
Much of our discussion during the last three months has revolved around the challenges of developing norms, ethical frameworks, and new security models for the neurosciences. As my colleague Jonathan Huang and I noted in our initial article, the challenges of emerging sciences, such as neuroscience, are unlikely to be addressed adequately or wholly by traditional arms control treaties. The breadth of technologies anticipated to arise from the cognitive sciences extends beyond the scope and the structural capacity of the Biological and Toxin Weapons Convention and the Chemical Weapons Convention. At the same time, the normative value of these international legal instruments remains powerful; the ineffectiveness of the institutions and constituencies that support these regimes, however, and their failure to deal with the challenges of “emerged” biotechnologies and changing chemical production methods do not portend success for a similarly structured cognitive sciences regime.
Policy makers need to support new ideas for dealing with these challenges, consider innovative non-traditional approaches, leverage the marketplace, and increase the involvement of the private sector. Arms control advocates should not interpret pursuing these routes as the abandonment of traditional arms control treaties. We need all of the metaphorical tools in the toolbox.
Whatever approaches governments, funding sources, private markets, and scholars pursue, the questions raised by the ethical and security implications of advanced neuroscience applications necessitate intrinsically inter- and multi-disciplinary efforts. Scientists, engineers, and legal scholars will need to collaborate meaningfully with social scientists, anthropologists, and economists. As with many advanced and emerging technologies, differentiating reality from rhetoric is often the first step; the discussion and policy options also require a technically robust foundation to have merit and meaning. I commend the other participants in this discussion for differentiating between the technically possible, scientifically plausible, and speculative.
In their initial essay, "The Security Impact of the Neurosciences," Margaret Kosal and Jonathan Huang raised four questions:
When this discussion began, I thought I understood my thinking on these subjects, but throughout the past several months I've changed my mind in small, but important ways. Interacting with my fellow discussants has sensitized me to the many ways that we agree on these subjects. But I've also learned that all of us underestimate the pace of global change in neuroscience and the way that applications are emerging before the underlying science is understood.
What to do about this? I was initially convinced that the pace of neuroscience research would outstrip the ability of any government-centric international regime to regulate it. Arms control models are ponderous and largely disconnected from the international bodies that can modulate or influence rates of research. Yet, I now realize that nongovernmental bodies, such as international professional forensic regimes, can accomplish these goals. This makes sense considering how neural imaging systems have become accepted methods of detecting deception independent of Western legal ethics and largely without the technology having undergone traditional scientific peer review. People have been convicted and sent to jail in India, China, Australia, Singapore, and Thailand on the basis of evidence garnered using neuroscience technologies that have few scientific and ethical underpinnings. In addition, mainstream hospitals around the world now prescribe a range of unproven treatments--such as applying neural stem cells to inflamed, traumatized spinal cord and neural tissues--and drugs for neurological conditions such as ALS and Alzheimer's. Even Olympic athletes have used off-label neuropharmaceuticals and neuro-nutraceuticals, according to regular news reports.
The international regimes that are most likely to recognize and adapt to these unexpected challenges and to "control" these applications of neuroscience--deception detection, neuroimmunology, neural stem cell application, and the development of neural performance enhancing drugs--will be in the private and commercial sectors. Case law will regulate the application of cognitive imaging; patients are likely to continue heading overseas to treat spinal injuries; and clinical results, not laboratory research, will determine whether treatments gain traction. The emergence of the fringe pharmaceutical industry, which creates off-label and non-label performance enhancing drugs, will lead to new nongovernmental regimes that focus on testing and evaluating athletes.
So how do these trends affect our understanding of Margaret and Jonathan's questions? The initial implications of neuroscience research have been non-military, related instead to forensics and private-sector activity. In general, the impact has been felt most acutely in legal and medical circles, not on the battlefield, and has thus engaged nongovernmental organizations, such as international sports authorities, and pseudo-experts as neuro-witnesses. Beyond these lay forensic medical regimes, the systems most likely to govern the emerging medical uses of neuroscience are clinical referral and medical practice databases, a form of regulation that eliminates unsuccessful treatments over time.
Each area of concern I've raised in this discussion is complicated and rapidly changing, and science has not yet organized and analyzed research data to address each appropriately. We hear about them most often in news reports and personal anecdotes. Despite this, they are real enough to worry about. And the issue of greatest ethical interest and concern is that many neuroscience applications are being driven by nonscientists. An important next step would be for scientists who are doing neuroscience research to engage these parties more directly.