Shahzad on Terror Watch List Since 1999?!
On last night’s edition of OTB Radio, “Times Square Terror, Gulf Oil Spill, and British Elections,” Dave Schuler wondered what it is that the Department of Homeland Security is doing with its time. After all, Faisal Shahzad fits the terrorist profile perfectly and yet they were apparently oblivious to him.
Now, CBS reports, “would-be Times Square bomber Faisal Shahzad appeared on a Department of Homeland Security travel lookout list — Traveler Enforcement Compliance System (TECS) — between 1999 and 2008 because he brought approximately $80,000 cash or cash instruments into the United States.”
It’s worth being a bit skeptical of the report. After all, DHS didn’t exist until November of 2002. And, according to a December 2002 TSA PowerPoint titled “TSA Watch Lists,” we only had 16 people on the “no transport” list as of September 11, 2001. (And, of course, none of the 9/11 hijackers were on the list.) So, presumably, if Shahzad was on our government’s radar screen, it was something other than that list and by some predecessor agency that later folded into DHS.
The interesting question to me is: Which would be worse?
That is, would it be better if our ridiculously expensive and cumbersome Homeland Security apparatus wasn’t monitoring Shahzad, despite his being a Pakistani national of the right age who constantly traveled back to his homeland? Or that they’d been following him for eleven years but didn’t catch him until well after he set his plot in motion, with disaster averted only by his incompetence?
And I don’t know whether to laugh or cry at this:
The system has been recently called inefficient by members of Congress. In late March, Senator Joe Lieberman of Connecticut and Susan Collins of Maine criticized the system in a letter to DHS, writing that, “Current functionality does not allow interoperability among databases, fast searching of information, modern interfaces for users of the system, or sufficient security to protect critical terrorist travel data.”
A modernization of the system began in 2008 and is expected to be completed by 2015.
So, it takes a multi-billion-dollar agency seven years to modernize a computerized list? Something any number of people I know personally could accomplish in, oh, two weeks?
And, by definition, any advanced technical project begun in 2008 will be obsolete long before 2015. Indeed, it’s likely that a commercial, off-the-shelf product will emerge well before 2015 better than anything conceived in 2008.
We’re naturalizing foreign nationals who appear on terrorist watch lists? Talk about not letting the left hand know what the right hand is doing!
I pointed that out in the aftermath of the failure of the FBI’s project to consolidate its databases. IMO technology, politics, the budgeting process, and bureaucracy all conspire to prevent government agencies from modernizing on a timely basis.
I have dealt with the Federal Reserve and the U.S. Customs Service in some attempts at improving their processes. I had the same experiences in both cases.
Yeah, you’d think something like that would come up in the background investigation supposedly done before even permitting permanent residency.
Modernize the lists? You can get the SDN list (the so-called no-buy list) as a delimited or XML file, in either a 32-bit Windows or 16-bit DOS archive, as well as in PDF and ASCII formats. What more is needed?
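For what it’s worth, consuming that delimited file is genuinely trivial. Here’s a minimal sketch of loading and screening against it; the field layout and sample rows below are hypothetical stand-ins, not the real SDN schema, and real-world screening uses fuzzy matching rather than exact string comparison:

```python
import csv
import io

# Hypothetical rows in the spirit of a delimited SDN export:
# entry number, name, type, sanctions program.
SAMPLE_SDN = """\
101,"AERO SUPPLY CO.","Entity","SDGT"
102,"DOE, John","Individual","SDNTK"
103,"EXAMPLE TRADING LLC","Entity","SDGT"
"""

def load_sdn(text):
    """Parse the delimited list into dicts keyed by field name."""
    fields = ["ent_num", "name", "sdn_type", "program"]
    return [dict(zip(fields, row)) for row in csv.reader(io.StringIO(text))]

def screen(name, entries):
    """Naive exact-match screen against the list."""
    return [e for e in entries if e["name"].lower() == name.lower()]

entries = load_sdn(SAMPLE_SDN)
print(screen("doe, john", entries))
```

The point being: the file format was never the hard part.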
We’re naturalizing foreign nationals who appear on terrorist watch lists?
Criminy. I thought he had become a citizen in the early 1990s.
However, even if DHS can’t keep track of people like Shahzad, it is good to know that the TSA can harass small children.
Laugh or cry, indeed.
A lot of these databases could be pumped into flat files in two weeks and indexed with a Google engine (at one point they were selling boxes for other people’s internal use). It would be very workable for agents, too, as a “wide net” search.
Of course we both know that the government (a) couldn’t decide to do that in two weeks, and (b) that simple approach would be expanded by every department looking to get their fingers in the specification.
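To make the two-week claim concrete, here’s a toy sketch of that approach: dump a table to flat, line-oriented records, then build a crude inverted index standing in for a search appliance. The table name, columns, and data are all hypothetical:

```python
import sqlite3
from collections import defaultdict

# Hypothetical watch-list table, dumped to flat records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE watch (id INTEGER, name TEXT, note TEXT)")
conn.executemany("INSERT INTO watch VALUES (?, ?, ?)", [
    (1, "Faisal Shahzad", "large cash transport 1999"),
    (2, "John Doe", "routine secondary screening"),
])

# Step 1: pump the table into flat, tab-delimited lines.
flat = ["%d\t%s\t%s" % row for row in conn.execute("SELECT * FROM watch")]

# Step 2: crude "wide net" index: token -> line numbers.
index = defaultdict(set)
for lineno, line in enumerate(flat):
    for token in line.lower().split():
        index[token].add(lineno)

def search(term):
    """Return every flat record containing the (whole-word) term."""
    return [flat[i] for i in sorted(index.get(term.lower(), []))]

print(search("shahzad"))
```

Crude, but it illustrates why the technical core of “searchable flat files” is small; everything else is the politics around it.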
The problem is that they likely have to make it inter-operate with closed-source programs that weren’t designed for interoperability. So everyone is trying to build scrapers and loaders for their various different systems.
Compounding that is the fact that every one of these systems will contain different data elements, or the same elements in different formats, so they will all have to conform to some huge, bloated, and mostly useless single data format (probably XML) that will have to be capable of representing all of it.
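A small sketch of what that conformance problem looks like in miniature: two hypothetical agency exports carry the same facts in different shapes, and each needs its own normalizer before anything can land in a common (here XML) format. The field names and records are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Two hypothetical agency exports with the same facts in different shapes.
agency_a = {"fullName": "DOE, JOHN", "dob": "1979-06-30"}
agency_b = {"surname": "Doe", "given": "John", "birth_date": "06/30/1979"}

def normalize_a(rec):
    """Agency A stores 'LAST, FIRST' in one field, ISO dates."""
    last, first = [p.strip() for p in rec["fullName"].split(",")]
    return {"last": last.title(), "first": first.title(), "dob": rec["dob"]}

def normalize_b(rec):
    """Agency B splits the name but uses US-style dates."""
    m, d, y = rec["birth_date"].split("/")
    return {"last": rec["surname"], "first": rec["given"],
            "dob": f"{y}-{m}-{d}"}

def to_common_xml(rec):
    """Emit the normalized record in a shared XML envelope."""
    person = ET.Element("person")
    for k, v in rec.items():
        ET.SubElement(person, k).text = v
    return ET.tostring(person, encoding="unicode")

assert normalize_a(agency_a) == normalize_b(agency_b)
print(to_common_xml(normalize_a(agency_a)))
```

Now multiply that pair of normalizers by dozens of systems and hundreds of fields, and the “huge, bloated” common schema starts to look inevitable.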
As if that weren’t enough, you then have to negotiate transfer and access: NFS, FTP, or HTTP? Who’s the authority for access control? How often does everybody push and pull? And worst of all, how do you handle merging data that was changed on two separate systems between syncs?