Boston Globe Online: Print it!




Defending DARPA

The government's strangest research might be its best

By Gareth Cook, 8/3/2003

EVEN BY WASHINGTON scandal standards, the ''terrorism futures'' scandal was strange and dramatic.

It started when two senators discovered an obscure military program designed to gauge the chances of various geopolitical developments, including terrorist attacks, by asking people to bet money on them. Within 48 hours - or, more precisely, two news cycles - the program was canceled and the man behind it, John Poindexter of Iran-contra fame, had tendered his resignation.

What most people don't know is that the Department of Defense is already funding a research program with far creepier implications. The $24 million enterprise called Brain Machine Interfaces is developing technology that promises to directly read thoughts from a living brain - and even instill thoughts as well.

The research, some of which is being done at the Massachusetts Institute of Technology, is already surprisingly advanced. Monkeys in a laboratory can control the movement of a robotic arm using only their thoughts. And last year scientists in New York announced they could control the skittering motions of a rat by implanting electrodes in its brain, steering it around the lab floor as if it were a radio-controlled toy car.

It does not take much imagination to see in this the makings of a ''Matrix''-like cyberpunk dystopia: chips that impose false memories, machines that scan for wayward thoughts, cognitively augmented government security forces that impose a ruthless order on a recalcitrant population. It is one thing to propose a tasteless market for gambling on terrorism. It is quite another to set some of the nation's top neuroscientists to work on mind control.

But though they differ in degree, the Brain Machine Interface program and the terrorism futures market share many features. They are shocking. They are bizarre. And they are far more worthy of taxpayer money than they at first seem. The terrorism futures idea, the subject of near-hysterical media coverage, is rooted in well-established economic principles. The Brain Machine Interface program, which may well be next in the spotlight, could offer help to the paralyzed and is no more likely to bring about a virtual police state than technologies that are already available.

With Congress clamoring for much stricter oversight of the Defense Advanced Research Projects Agency (DARPA), which funds both programs, the episode is less a drama of Poindexter and a band of mad bureaucrats than a reminder of how important it is for the government to spend some of its resources on the outlandish. DARPA is the scientific equivalent of a patron of avant-garde artists whom the free market doesn't reward. Money from DARPA, and from other small government agencies such as the Office of Naval Research, has produced profound scientific advances, Nobel Prizes, and technologies - such as the Internet - that have changed the world.

''It is important to have horizons longer than three years and the chance to try out bold ideas,'' said Tomaso Poggio, one of the MIT scientists involved in Brain Machine Interfaces. More traditional funding agencies can be so conservative, Poggio said, that ''people sometimes joke that you have to have done the experiment before you can write the proposal.''

The terrorism futures market, or the ''Policy Analysis Market'' as it was officially known at DARPA, was based on the idea that market systems can be remarkably accurate at distilling knowledge. A long-running experiment known as the Iowa Electronic Markets allows people to buy and sell ''shares'' in presidential candidates, each of which will ultimately be worth, in cents, the percentage of the vote the candidate takes on Election Day. The final share prices, measured at midnight before Election Day, consistently outperform large national polls in predicting the election. These kinds of artificial markets have also predicted the box office takes of upcoming movies with an accuracy that beats individual experts.

There are many theories about why this approach works, but one part of it is the simple fact that those who are best at predicting trends are rewarded, encouraging them to remain in the market, whereas the losers tend to drop out.
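The Iowa-style payoff rule described above can be sketched in a few lines of Python (a hypothetical illustration of the mechanism, not the Iowa Electronic Markets' actual software; the prices and percentages are invented): a trader who buys a candidate's share for less than the candidate's eventual vote percentage profits, and one who overpays loses, which is exactly what rewards accurate forecasters and pushes inaccurate ones out.

```python
# Illustrative sketch of an Iowa-style vote-share market payoff
# (hypothetical example, not the Iowa Electronic Markets' actual code).
# Each share ultimately pays out, in cents, the candidate's final
# percentage of the vote.

def payoff_cents(vote_pct: float) -> float:
    """A share is worth, in cents, the candidate's vote percentage."""
    return vote_pct

def trader_profit(buy_price_cents: float, vote_pct: float) -> float:
    """Profit per share: final payoff minus what the trader paid."""
    return payoff_cents(vote_pct) - buy_price_cents

# A trader who bought at 48 cents a share of a candidate who takes
# 52 percent of the vote earns 4 cents per share; one who overpaid
# at 55 cents loses 3 cents per share.
print(trader_profit(48.0, 52.0))   # 4.0
print(trader_profit(55.0, 52.0))   # -3.0
```

Because the market price is just what traders are currently willing to pay, it can be read at any moment as the crowd's running estimate of the candidate's vote share.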

Like the futures market, the Brain Machine Interface program grew out of DARPA's long involvement in information processing. DARPA is the successor to ARPA, an office that was created in 1958, in the wake of Sputnik, to push forward scientific research with potential military applications. ARPA laid the foundation for what is today the Internet, and also contributed to a wide variety of computer applications currently in use.

More recently, DARPA officials have focused on a subject that captivates science fiction writers and leading neuroscientists alike: Can human knowledge - that is, the information contained in our neurons - be transferred into the kind of information used by computers? If machines could read human thoughts directly, for example, the military could then hook a pilot's brain directly into the controls of a jet, allowing him to maneuver far more nimbly than today.

At the same time, if computer-coded information could be downloaded into the brain, then commanders, indeed everyone on the battlefield, could keep a stream of the latest intelligence present in their minds. Of course, these are outlandishly ambitious ideas, especially when scientists don't even know how people remember what they ate for breakfast. The most ambitious potential applications, which tend to be emphasized when a research program is under fire, lie for now in the realm of only slightly plausible fiction.

''When you push basic research, you try to speak about what it might do in the long term,'' said Poggio, who is a neuroscientist at MIT's McGovern Institute for Brain Research. ''But there is always the danger that people will take this too seriously.''

More probable than the military applications is the possibility that the research will yield ways for the severely injured, even those who are locked in a totally paralyzed body, to move around and communicate.

The recent demonstration that a monkey can control a cursor on a screen, or a robotic arm, using only its brain counts as dramatic progress. Before that, ''people really doubted whether anything like this could work,'' said Michael S. Gazzaniga, director of the Center for Cognitive Neuroscience at Dartmouth College.

DARPA's brain-machine work, which is unclassified and will eventually be published in scientific journals, attracts scientists because it explores some of the central questions in neuroscience, such as the nature of consciousness and memory, and the neural code the brain uses to store and process information.

Poggio, who is working with Christof Koch of the California Institute of Technology and James DiCarlo of MIT's McGovern Institute, is examining how the brains of monkeys respond to different images. If the scientists can recognize the pattern of neuron-firing that occurs when a monkey recognizes a spoon as opposed to an orange, they will have read its mind, in a sense.
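The kind of pattern-reading described above can be illustrated with a toy nearest-centroid decoder (a hypothetical sketch with invented firing rates, not the MIT team's actual analysis): average the firing patterns recorded while the monkey views each object, then "read its mind" on a new trial by finding which stored average the new pattern is closest to.

```python
# Toy sketch of decoding what a monkey sees from neural firing rates
# (hypothetical illustration with invented numbers, not the actual
# method used by the MIT/Caltech researchers). Each "pattern" is a
# list of firing rates, one entry per recorded neuron.

import math

def centroid(patterns):
    """Average firing rate per neuron across recorded trials."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def distance(a, b):
    """Euclidean distance between two firing-rate patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decode(pattern, templates):
    """Classify a new pattern by its nearest stored template."""
    return min(templates, key=lambda label: distance(pattern, templates[label]))

# Invented training data: firing rates recorded while the monkey
# viewed each object (3 neurons, 2 trials per object).
templates = {
    "spoon": centroid([[10.0, 2.0, 5.0], [12.0, 3.0, 4.0]]),
    "orange": centroid([[2.0, 11.0, 7.0], [3.0, 9.0, 8.0]]),
}

# A new trial's firing pattern is "read" by finding the closest template.
print(decode([11.0, 2.5, 4.5], templates))   # spoon
```

If the decoder reliably says "spoon" only when the monkey is looking at a spoon, the firing pattern has, in the limited sense the article describes, been read.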

Poggio said the team further plans to try stimulating the neurons of monkeys to see if it can create an illusion. If a monkey is shown an orange while its neurons are being electronically stimulated to fire in the pattern associated with seeing a spoon, what will it see?

Researchers are also interested in the fact that people are not conscious of everything they experience, said Koch. For example, when talking on the phone, a person tunes out most other noises. Finding how the neurons fire differently when a person is conscious of a sound or image could yield the sort of insights into the human mind and the nature of thought that have escaped philosophers for millennia - such as the difference between perception and awareness.

If this type of research continues to advance, it will obviously pose ethical challenges. Any new technology brings with it a large number of subtle trade-offs.

''We already have a technology that cheaply, effortlessly controls people,'' said Koch. ''It is called television.''

But when scandal sets in, as we saw last week, there seems to be no room - and no time - for a broader discussion. The question of the week became: When will Poindexter go? It should have been: Is there a way to use the latest scientific insights, as offensive as they may seem, to make the world a safer, better place?

Gareth Cook can be reached at cook@globe.com.

This story ran on page E1 of the Boston Globe on 8/3/2003.
© Copyright 2003 Globe Newspaper Company.