Cameras and Scanning: A Case Study

Detroit shows how modern technology can lead to a virtual police state.


Yesterday, I questioned whether we should be alarmed about “FBI and ICE Using Drivers License Photos for Facial Recognition Scans.” A follow-up report from the NYT, looking at how a similar program is being used in Detroit, provides strong push-back.

Twenty-four hours a day, video from thousands of cameras stationed around Detroit, at gas stations, restaurants, mini-marts, apartment buildings, churches and schools, streams into the Police Department’s downtown headquarters.

The surveillance program, which began in 2016, is the opposite of covert. A flashing green light marks each participating location, and the point of the popular initiative, known as Project Green Light, has been for the cameras to be noticed and help deter crime. Detroit’s mayor, Mike Duggan, received applause when he promised at his State of the City address earlier this year that expanding the network to include several hundred traffic light cameras would allow the police to “track any shooter or carjacker across the city.”

—“As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias”

As I noted yesterday, I’m more concerned about the rise of the surveillance state, per se, than the pairing of the surveillance with ID photos. Detroit’s program strikes me as ominous: almost a literal police state.

But the report focuses on an issue that yesterday’s report in the Washington Post only touched upon: the disparate impact of the practice on racial minorities.

In Detroit, whose share of black residents is larger than in any other sizable American city, it is a racial disparity in the performance of facial recognition technology that is a primary source of consternation.

“Facial recognition software proves to be less accurate at identifying people with darker pigmentation,” George Byers II, a black software engineer, told the police board last month. “We live in a major black city. That’s a problem.”

Researchers at the Massachusetts Institute of Technology reported in January that facial recognition software marketed by Amazon misidentified darker-skinned women as men 31 percent of the time. Others have shown that algorithms used in facial recognition return false matches at a higher rate for African-Americans than white people unless explicitly recalibrated for a black population — in which case their failure rate at finding positive matches for white people climbs. That study, posted in May by computer scientists at the Florida Institute of Technology and the University of Notre Dame, suggests that a single algorithm cannot be applied to both groups with equal accuracy.
Mr. Byers and other critics spoke at a public hearing called by the Detroit Board of Police Commissioners after what the board called unprecedented public interest in two facial recognition items on its agenda. One item, specific to the new traffic light cameras, was approved last week. The other, a comprehensive “acceptable use” policy for facial recognition, has yet to be put to a vote.

That’s disturbing.

Not everyone who spoke was against the use of facial recognition.

“I’m the pastor getting the call from mothers whose son was shot or their baby got snatched up,” said Maurice Hardwick, a black pastor at a nondenominational ministry who founded a group that works with high school gang members. “People want to know two things: What happened to my child, my loved one? And who did this?”

Another Detroit resident, a white woman who walked with a cane, added: “If you’re afraid of the cameras, either you’re paranoid or you’ve got something to hide.”

I’m sympathetic to Hardwick’s view and afraid of the second speaker’s. The notion that we should be afraid of state surveillance only if we have “something to hide” is fascist.

Still, the debate seems to turn more on cost-benefit analysis than on liberty:

Others were more concerned with a provision that would allow the police to go beyond identifying violent crime suspects with facial recognition and allow officers to try to identify anyone for whom a “reasonable suspicion” exists that they could provide information relevant to an active criminal investigation. There was also concern that the photograph of anyone who gets a Michigan state ID or driver’s license is searchable by state and local law enforcement agencies, and the F.B.I., likely without their knowledge.

Facial recognition, the Detroit police stress, has indeed helped lead to arrests. In late May, for instance, officers ran a video image through facial recognition after survivors of a shooting directed police officers to a gas station equipped with Green Light cameras where they had met with a man now charged with three counts of first-degree murder and two counts of assault. The lead generated by the software matched the description provided by the witnesses.

Back to the race issue:

When James White, an assistant police chief in charge of the Detroit Police Department’s technology, rose to respond to critics at the public hearing, he provided unexpected backup to the charge that the software comes with baked-in bias. He himself, the assistant chief said, had been misidentified as other African-American men by the facial recognition algorithm that Facebook uses to tag photos.

“On the question of false positives — that is absolutely factual, and it’s well-documented,” he said. “So that concerns me as an African-American male.”

The solution, Chief White said, is to exercise extra care. The department’s policy specifies that facial recognition will be used only to investigate violent crimes. Although the department has the ability to implement real-time screening of anyone who passes by a camera — as detailed in a recent report by the Georgetown Law Center on Privacy and Technology — there is no plan to use it, he said, except in extraordinary circumstances.

No one in Detroit, Chief White emphasized, would be arrested solely on the basis of a facial recognition match.

“Facial recognition technology isn’t where the work stops,” he said. “It’s where the work starts.”

While that seems reasonable enough, it’s obviously a standard that won’t always be followed. And false positives are still false positives.

Civil liberties advocates say that protection isn’t enough, especially because defendants are not typically informed that facial recognition has been used in their identification. In one of the few cases to have argued that such information should be disclosed because it is potentially exonerating, a Florida appeals court ruled that a black man, Willie Allen Lynch, had no legal right to see the other matches returned by the facial recognition program that helped lead to his drug-offense conviction. Mr. Lynch had argued that he was misidentified.

A January 2018 study by two M.I.T. researchers first focused public attention on the higher misidentification rates for dark-skinned women by three leading purveyors of facial recognition algorithms. One of the co-authors, Joy Buolamwini, posted YouTube videos showing the technology misclassifying famous African-American women, like Michelle Obama, as men. The phenomenon, Ms. Buolamwini wrote in a New York Times Op-Ed, is “a reminder that artificial intelligence, often heralded for its potential to change the world, can actually reinforce bias and exclusion, even when it’s used in the most well-intended ways.”

The companies examined in the paper subsequently improved their algorithms for that particular test. But a second paper this year found that Amazon’s software had more trouble identifying the gender of female and darker-skinned faces, prompting prominent artificial-intelligence researchers to call on the company to stop selling its software to law enforcement agencies. Amazon executives have disputed the study.

It is not clear why facial recognition algorithms perform differently on different racial groups, researchers say. One reason may be that the algorithms, which learn to recognize patterns in faces by looking at large numbers of them, are not being trained on a diverse enough array of photographs.

But Kevin Bowyer, a Notre Dame computer scientist, said that was not the case for a study he recently published. Nor is it certain that skin tone is the culprit: Facial structure, hairstyles and other factors may contribute.

In Dr. Bowyer’s experiments, the recognition algorithms could achieve the same degree of accuracy for white and black Americans, but only when the algorithm was tuned to a cutoff, say, of no more than one in 10,000 false matches for the two separate groups. Given that the norm is to use the same threshold for everybody, “those programs are seeing a higher false match rate for the population of African-Americans,” Dr. Bowyer said.

A dual-threshold system would not necessarily solve the problem, he added. That would require law enforcement authorities to make a judgment about each individual’s race and apply the appropriately tweaked facial recognition software — which would in turn introduce human bias.

“Technically, it’s a very reasonable thing to say to do,” Dr. Bowyer said. “But how do you defend it, and once you put that knob out there for police to use, how do you make sure it’s not misused?”
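
To make that threshold trade-off concrete, here is a minimal sketch of the arithmetic Dr. Bowyer is describing. Everything in it is invented for illustration (the score distributions, the 0.65 cutoff, the size of the skew between groups); it is not how Detroit’s system, or any vendor’s software, actually works.

```python
# Hypothetical illustration only: how one shared similarity threshold can yield
# different false-match rates for two groups, and how per-group thresholds
# (Dr. Bowyer's "knob") would equalize them.
import numpy as np

rng = np.random.default_rng(0)

# Simulated similarity scores for pairs of *different* people ("impostor" pairs).
# Assume, per the studies cited above, that the matcher hands out slightly
# higher impostor scores for group B than for group A. All numbers are made up.
impostor_a = rng.normal(loc=0.30, scale=0.10, size=100_000)
impostor_b = rng.normal(loc=0.36, scale=0.10, size=100_000)

def false_match_rate(scores, threshold):
    """Fraction of impostor pairs whose score clears the match threshold."""
    return float(np.mean(scores >= threshold))

shared = 0.65  # one cutoff for everybody
print("Shared threshold 0.65:")
print("  group A false-match rate:", false_match_rate(impostor_a, shared))
print("  group B false-match rate:", false_match_rate(impostor_b, shared))

# Per-group thresholds tuned so each group lands near 1-in-10,000 false matches,
# the kind of target mentioned in the article. This "fixes" the disparity but
# requires deciding which threshold to apply to which face.
target_fmr = 1e-4
thr_a = float(np.quantile(impostor_a, 1 - target_fmr))
thr_b = float(np.quantile(impostor_b, 1 - target_fmr))
print("Per-group thresholds for ~1/10,000 false matches:",
      round(thr_a, 3), round(thr_b, 3))
```

The point is simply that a single shared cutoff produces unequal false-match rates whenever the underlying score distributions differ, and that equalizing those rates requires deciding which threshold applies to which face, which is exactly the knob Dr. Bowyer worries about.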

One presumes that the technical problems can be alleviated, if not solved, with further tweaking. Still, the phenomenon is troubling.

Overall, I’m still not persuaded that the tool is inherently dangerous or a threat to our civil liberties. I remain more concerned about the ubiquity of monitoring cameras than about their pairing with photo databases. Alas, I suspect the demands to install more cameras (for safety, dontchaknow) will outstrip concerns over privacy.

FILED UNDER: Crime, Law and the Courts, Policing, Science & Technology
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. EddieinCA says:

    As someone who has lived in London multiple times, I find none of this either scary or ominous. In London, and many UK major cities, cameras cover almost every square inch of the city. Many crimes have been solved over the years by just “rolling the tape backwards” from the incident to the home of the perpetrator because he/she was on camera from the moment they left their homes.

    Given today’s technology, I just assume that I’m under surveillance every moment of every day. Sad for whoever is having to watch my life on CCTV. I’m boring. I don’t even cheat at golf, and a LOT of people do that.

  2. Dave Schuler says:

    Interesting use of the word “virtual”.

  3. James Joyner says:

    @EddieinCA: I don’t love it but concede that it has upsides. I just wonder if the loss of privacy is worth solving a few crimes.

    @Dave Schuler: It was originally an unintended pun but I realized what I stumbled into and liked it.

  4. Bill says:

    @EddieinCA:

    I’m boring. I don’t even cheat at golf, and a LOT of people do that.

    But do you cheat on your income taxes?

    It was said George Washington never lied. Then again GW didn’t play golf or have to file an income tax return.

  5. grumpy realist says:

    Isn’t this just saying that the accuracy of AI in photo-matching happens to have much bigger error bars when it comes to women and people with darker skin?

    Is this something that is solvable by using a much larger training set?

    And I would hope that at some point you’re going to run the video tape backwards, look at the picture, and compare it to the person you’ve tentatively picked up, with the comparison being done by an actual human.

  6. mattbernius says:

    @EddieinCA:

    In London, and many UK major cities, cameras cover almost every square inch of the city. Many crimes have been solved over the years by just “rolling the tape backwards” from the incident to the home of the perpetrator because he/she was on camera from the moment they left their homes.

    Again, the data do not back this up — or rather, they only back it up in cases where the search already started from a pretty limited set of data (i.e., working a specific crime versus looking for random hits against a database).

    In the eight deployments between 2016 and 2018, 96% of the Met’s facial recognition matches misidentified innocent members of the public as being on the watch list.

    https://www.eetimes.com/document.asp?doc_id=1334831

    [Addendum #1 – Here is the Guardian’s write up on that study:
    https://www.theguardian.com/technology/2019/jul/03/police-face-calls-to-end-use-of-facial-recognition-software ]
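
    To see why the raw hit rate collapses when you scan crowds against a small watch list, here’s a toy back-of-the-envelope calculation (every number in it is invented for illustration; none of it comes from the Met’s actual figures):

    ```python
    # Toy base-rate arithmetic, all numbers hypothetical: even a matcher that only
    # wrongly flags 1 in 1,000 innocent faces produces mostly false alarms when it
    # scans big crowds against a tiny watch list, because nearly everyone scanned
    # is innocent.
    people_scanned = 100_000     # faces passing the cameras
    on_watch_list = 20           # of those, actually on the watch list
    false_match_rate = 0.001     # 1 in 1,000 innocent faces wrongly flagged
    true_match_rate = 0.90       # 90% of watch-listed faces correctly flagged

    false_alarms = (people_scanned - on_watch_list) * false_match_rate
    true_hits = on_watch_list * true_match_rate

    print(f"False alarms: {false_alarms:.0f}, true hits: {true_hits:.0f}")
    print(f"Share of matches that are wrong: "
          f"{false_alarms / (false_alarms + true_hits):.0%}")  # ~85% in this toy case
    ```

    That’s the same shape of problem the Met numbers show: per-face accuracy can sound impressive while the stream of matches handed to officers is still overwhelmingly wrong.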

    Which gets to:

    One presumes that the technical problems can be alleviated, if not solved, with further tweaking.

    I realize that this isn’t your area, James, but the idea that this is simply a “tweaking” problem with the technology would be not unlike me saying “Trump’s current Iran/NK denuclearization policies will start working once they get some further tweaking.”

    The problem is that many of the underlying technologies (the imaging tech, the AI, and the AI training data) are simply not ready for prime time. That’s before we get to the issue of the way those technologies interact with laws that are simply not set up for the number of false positives these systems return.

    Put a different way, as Radley Balko has done amazing work pointing out, we still have tons of people out there convicted on junk forensic science like bite mark analysis. This has the potential to be far worse (especially given the overall veneer of objectivity we assign to things like AI).

    [Addendum #2 – while it wasn’t the same program, it’s worth noting that the LAPD just stopped a pilot of its AI crime prediction tool because of fundamental flaws in the system:

    https://www.latimes.com/local/lanow/la-me-lapd-predictive-policing-big-data-20190405-story.html

    This is why a lot of people involved in the tech and Criminal Justice space are worried about these sorts of solutions.]

  7. mattbernius says:

    BTW, just to be clear, my concerns are not limited to policing — the increased use of facial recognition in all walks of life should be concerning to all of us. For example, the planned piloting (pun intended) of facial recognition at US airports is really concerning:

    https://www.forbes.com/sites/kateoflahertyuk/2019/03/11/facial-recognition-to-be-deployed-at-top-20-us-airports-should-you-be-concerned/#2523e17c7d48

  8. James Joyner says:

    @mattbernius:

    The problem is that many of the underlying technologies (the imaging tech, the AI, and the AI training data) are simply not ready for prime time. That’s before we get to the issue of the way those technologies interact with laws that are simply not set up for the number of false positives these systems return.

    That makes sense. I fully concede to not understanding how the tech works here.

  9. OzarkHillbilly says:

    It’s a brave new world.

  10. EddieInCA says:

    @Bill:

    But do you cheat on your income taxes?

    Absolutely. What’s the point of having a Corporation, if you can’t cheat on your taxes?

  11. mattbernius says:

    @James Joyner:

    That makes sense. I fully concede to not understanding how the tech works here.

    First, from the start you’ve been acknowledging that, James — which is great! And it’s clear you’ve been reading more about this.

    This also gets to a key underlying problem — the people who are crafting and voting on this stuff often haven’t even taken those two steps. Again, we make jokes about judges or congresspeople not understanding email — and the reality is many don’t. And then we hit technology that is far more complex, with major promises being made by its well-financed advocates. Plus it’s tech that’s being applied to “scary” problems — crime and counter-terrorism.

    That is a combination that can lead to really bad decisions with a LOT of unintended consequences. Which is especially problematic when the people who are most likely to be falsely ID’d are coming from vulnerable communities.

  12. Gustopher says:

    Researchers at the Massachusetts Institute of Technology reported in January that facial recognition software marketed by Amazon misidentified darker-skinned women as men 31 percent of the time.

    I’m so bad at recognizing people that my friends claim I’m faceblind, and even I am better than that.

    Unless… how would I know?

    That study, posted in May by computer scientists at the Florida Institute of Technology and the University of Notre Dame, suggests that a single algorithm cannot be applied to both groups with equal accuracy.

    Pet Peeve Time: “if person-is-brown do this, else do that” is a perfectly valid algorithm…

  13. Gustopher says:

    @mattbernius:

    That is a combination that can lead to really bad decisions with a LOT of unintended consequences. Which is especially problematic when the people who are most likely to be falsely ID’d are coming from vulnerable communities.

    I would like any attempt to use facial recognition at a large scale to be reviewed periodically, with an emphasis on how it is impacting various communities and what the false positive and false negative experiences are, rather than just “did we catch more baddies?”

    I don’t expect that will happen.

    Instead, I expect that at some point we are going to all receive letters from the government at the end of each year, detailing all of our crimes, with the fines added up, and all the accuracy of credit reports.

    “I’m sure I wasn’t selling drugs on that corner in March, but they are willing to drop all major charges if I plead to loitering and pay $250, which is a lot less than a lawyer… and how did they know how much was in my emergency fund?”

  14. mattbernius says:

    @Gustopher:

    “I’m sure I wasn’t selling drugs on that corner in March, but they are willing to drop all major charges if I plead to loitering and pay $250, which is a lot less than a lawyer… and how did they know how much was in my emergency fund?”

    With the exception of that final thing about emergency funds, this is pretty much where we are already.

  15. Tyrell says:

    @Gustopher: Technology is very close to the use of the unique individual human patterns of pulse, nervous system, and other body processes that make up a sort of frequency. The technology will be able to receive these much like a radio or cellular signal and form an accurate image, and a behavioral profile of characteristics of that person.

  16. OzarkHillbilly says:

    @mattbernius: In our unending search for the one true technological magic bullet that will solve all of society’s ills, we’re all gonna end up with a bullet hole in our foreheads.

    @Tyrell: Something tells me that if they aren’t anywhere near to getting facial recognition right, that particular fever dream is way beyond the horizon too.

  17. Mister Bluster says:

    She didn’t have any privacy when she was just walking down the street 55 years ago.
    Look what happened to her!

  18. DrDaveT says:

    @James Joyner:

    I just wonder if the loss of privacy is worth solving a few crimes.

    I continue to boggle at the idea that anyone ever thought they had privacy rights in public. ‘Public’ is the opposite of ‘private’. When you are in public, your actions are… public.

    When the government starts surveillance inside your home or business, I’ll be right there with you. In the meantime, I have no sympathy for people who think they have a right to get away with stuff because most of the time nobody is watching. You people who turn right from the left turn lane, I’m looking at you.

  19. Tyrell says:

    @DrDaveT: “People will have to whisper in their own homes”

  20. Kit says:

    In one of the few cases to have argued that such information should be disclosed because it is potentially exonerating, a Florida appeals court ruled that a black man, Willie Allen Lynch, had no legal right to see the other matches returned by the facial recognition program that helped lead to his drug-offense conviction. Mr. Lynch had argued that he was misidentified.

    And this is why I’m opposed in practice, if not in theory. Such technology might make life safer. And I mean really safer, statistically safer, not just BS safer because of potential terrorist threats, or to soothe people’s nerves due to last week’s whatever. But Willie Allen Lynch’s life was not made safer, and one cannot help but suspect Florida’s true motive in deploying such systems: to ease the job of the police in shaking down the populace for money and harassing minorities. As we were recently reminded, the police have no duty to actually protect and serve.

  21. MarkedMan says:

    As Eddie pointed out, this already exists and is well publicized in London. I’ll add major Chinese cities to that category, and the government actively promotes it as a “Look at this amazing thing we are doing to keep everyone safe”.

  22. MarkedMan says:

    I was a big fan of the show Person of Interest, which involved an AI tapped into a huge surveillance network in NYC. The show was hard sci-fi in the purest way: assume a very limited number of perhaps impossible technological advances and then explore what they mean to our own reality. The basic premise of the show was one of the advances – that the AI could use its data to predict events with unbelievable granularity. But for the most part the technology actually existed at the time. For instance, our gang of heroes frequently used legitimate hacks to gain access to phones in real time, and the phones they showed were always ones susceptible to those hacks. (Admittedly they sped the whole thing up, compressing a hack that would take minutes into a few seconds.) One of the things that I always took as a “you just have to accept this because of real world logistical problems” was that the surveillance network only existed in NYC. Although they kinda sorta dealt with this in the last season, it turns out that such a surveillance network was actually put together after 9/11, on a trial basis and only in NYC. It attempted to link together every camera available, from ATM machines to bodega security cams. But we didn’t learn about this until near the end of the show, so it always made me wonder if their plot consultants were more tapped in than we realized.

    Interestingly, the real world problems of such a network limited its effectiveness. It turns out that a significant percentage of those cameras are broken or turned off or actually just plastic shells that look like security cameras. And the logistics of constantly keeping cameras online was hellish.

    POI side note: Deus Ex Machina is always a disappointment. The AI in the show was very constrained by its original coding parameters and was built very deliberately so it couldn’t alter them. Those limitations eventually explained the very limited and awkward method it used to communicate with the characters. But the most powerful limitation (mild spoiler) was that it had to do a clean reboot of itself every midnight and start over from scratch. The method the writers used to overcome this limitation is one of the best examples of NOT using Deus Ex Machina I can think of. It was clever, plausible, and a delight to nerds everywhere.