Mind Reading at the Airport
The TSA is experimenting with airport machines designed to determine whether travelers harbor hostile intent toward their flights.
At airport security checkpoints in Knoxville, Tenn., this summer, scores of departing passengers were chosen to step behind a curtain, sit in a metallic oval booth and don headphones.
With one hand inserted into a sensor that monitors physical responses, the travelers used the other hand to answer questions on a touch screen about their plans. A machine measured biometric responses — blood pressure, pulse and sweat levels — that then were analyzed by software. The idea was to ferret out U.S. officials who were carrying out carefully constructed but make-believe terrorist missions.
The trial of the Israeli-developed system represents an effort by the U.S. Transportation Security Administration to determine whether technology can spot passengers who have “hostile intent.” In effect, the screening system attempts to mechanize Israel’s vaunted airport-security process by using algorithms, artificial-intelligence software and polygraph principles.
Anyone familiar with the literature on polygraph testing should already have their ears perked up at this program–polygraphs are notoriously bad at detecting lies, and there remains no scientific basis for their accuracy. So a system based on “polygraph principles” and used to determine somebody’s “intent” to cause harm on an airline flight should be cause for some concern. Call me crazy, but I think that anybody who’s pulled aside “for a few questions” is going to have some trouble keeping their pulse, blood pressure, and sweat at normal levels.
Even worse, in my mind, is the system’s planned operation, which relies not only on biometric measures but also on algorithms applied to the answers to 15-20 questions–questions selected based on the nationality of the passenger.
Here is the Cogito concept: A passenger enters the booth, swipes his passport and responds in his choice of language to 15 to 20 questions generated by factors such as the location, and personal attributes like nationality, gender and age. The process takes as much as five minutes, after which the passenger is either cleared or interviewed further by a security officer.
At the heart of the system is proprietary software that draws on Israel’s extensive field experience with suicide bombers and security-related interrogations. The system aims to test the responses to words, in many languages, that trigger psycho-physiological responses among people with terrorist intent.
The technology isn’t geared toward detecting general nervousness: Mr. Shoval says terrorists often are trained to be cool and to conceal stress. Unlike a standard lie detector, the technology analyzes a person’s answers not only in relation to his other responses but also those of a broader peer group determined by a range of security considerations. “We can recognize patterns for people with hostile agendas based on research with Palestinians, Israelis, Americans and other nationalities in Israel,” Mr. Shoval says. “We haven’t tried it with Chinese or Iraqis yet.” In theory, the Cogito machine could be customized for specific cultures, and questions could be tailored to intelligence about a specific threat. [Emphasis added.]
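The “broader peer group” comparison the article describes can be thought of, very loosely, as a statistical outlier test: score each of a passenger’s physiological readings against the distribution of readings from a comparison population, and flag large deviations. Cogito’s actual algorithms are proprietary, so everything below–the names, the baseline numbers, the use of a z-score, the threshold–is an assumption made purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical peer-group baseline: per-question skin-conductance readings
# from a comparison population. All numbers are fabricated for illustration;
# they do not come from Cogito or the TSA.
PEER_BASELINE = {
    "q1": [0.41, 0.38, 0.44, 0.40, 0.39],
    "q2": [0.52, 0.49, 0.55, 0.50, 0.53],
}

def z_score(value, sample):
    """How many standard deviations `value` sits from the sample mean."""
    return (value - mean(sample)) / stdev(sample)

def flag_passenger(responses, threshold=3.0):
    """Return the questions whose readings deviate sharply from the peer group.

    `threshold` is an arbitrary cutoff chosen for this sketch, not a value
    used by any real screening system.
    """
    return [q for q, v in responses.items()
            if q in PEER_BASELINE
            and abs(z_score(v, PEER_BASELINE[q])) > threshold]

# A passenger whose reading on q2 is far outside the peer distribution:
print(flag_passenger({"q1": 0.42, "q2": 0.90}))  # → ['q2']
```

Even this toy version makes the core difficulty visible: the result depends entirely on how well the baseline population actually matches the individual being screened, which is precisely the assumption questioned below.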
I don’t know about you, but I can come up with, oh, probably about 100 reasons off the top of my head why responses geared toward a “broader peer group”–especially with only 15-20 questions–probably aren’t going to work. Even within a “broader peer group”, there will be marked differences between individuals based on personal experience, smaller cultural groups, and so on. Something tells me that an Egyptian Muslim and an Egyptian Christian might have markedly different answers to the same set of questions–far too divergent, I would argue, for a small set of questions to sort out. And a large set of questions might be useless as well, since modeling that many answers would probably require algorithms far too complex to produce reliable results.
While I’m certainly sympathetic to the idea that we should focus on identifying people who are potential threats, rather than banning water bottles and toy guns from flights, this mechanized system strikes me as a very, very bad idea. There doesn’t seem to be any way to reliably test its performance: the company has used terrorist “role-players” to vet the system, but role-players aren’t terrorists, no matter how good their acting is. Without any reliable way to measure the system’s accuracy, you’re left with nothing but an inconvenience for travelers and a waste of the TSA’s time.