It seems that the cops were basically uploading images of suspects (like a mug shot or a still of somebody stealing something) so that the cameras in the city were constantly scanning for people who were wanted. If a camera picked up a match, it would send police the suspect’s location on a map.
Apparently Palantir had been working with NOPD to secretly test predictive policing since 2012.
The program began in 2012 as a partnership between New Orleans Police and Palantir Technologies, a data-mining firm founded with seed money from the CIA’s venture capital firm. According to interviews and documents obtained by The Verge, the initiative was essentially a predictive policing program, similar to the “heat list” in Chicago that purports to predict which people are likely drivers or victims of violence.
The partnership has been extended three times, with the third extension scheduled to expire on February 21st, 2018. The city of New Orleans and Palantir have not responded to questions about the program’s current status.
The key scandal to me — I live in NOLA — is that the city council had tons of debates and put in place a process and limitations on facial recognition to limit false positives. But the new cameras aren’t city owned. A private company sells the cameras to businesses. Then, if a crime happens, the police call the company and ask if they “witnessed” anything. Then, the company basically texts officers a location if they think their facial recognition software spots the suspect.
And since we’re apparently the demonstration city (again) for a company, it’s no cost to taxpayers. Maybe that makes it no different from typical police work to you. But even if the product worked perfectly, and it likely doesn’t, I don’t like the idea of the NOPD secretly working overtime to find loopholes around laws and regulations.
And that’s before you get to collecting evidence for trial. Defense attorneys probably won’t have a hard time getting these cases dismissed unless there’s tons of other evidence.
That’s what they’re saying now, but apparently an app was developed that allowed police to create a watch list of suspects, upload their pictures, and have the cameras constantly scan for those images. When they got a hit, police received a direct notification via the app.
Apparently much of this wasn’t documented, but for whatever reason the police captain decided in April to end it for the time being. So now it’s back to the company notifying police, but they want the city council to pass an ordinance so they can go back to officers being notified directly.
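To make the workflow described above a bit more concrete, here’s a rough Python sketch of what a watchlist-alert loop like that could look like. Everything in it is hypothetical (the names encode_faces and notify_officer, the frame fields, the similarity threshold); it’s an illustration of the general concept, not the vendor’s or NOPD’s actual software.

```python
# Hypothetical illustration of a watchlist-alert loop like the one described
# in the thread: upload suspect photos, have cameras scan continuously, and
# push a notification with the camera's location when a face matches.
# Not real vendor code -- every name and threshold here is made up.
from dataclasses import dataclass

import numpy as np


@dataclass
class WatchlistEntry:
    label: str             # e.g. a case number or "suspect in X incident"
    embedding: np.ndarray  # face embedding computed from the uploaded photo


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def scan_stream(frames, watchlist, encode_faces, notify_officer, threshold=0.6):
    """Compare every face a camera sees against the watchlist and, on a
    match above the threshold, send the camera's location to officers."""
    for frame in frames:                           # frame = image + camera metadata
        for face in encode_faces(frame["image"]):  # one embedding per detected face
            for entry in watchlist:
                if cosine_similarity(face, entry.embedding) >= threshold:
                    # In the system described in the thread, this step was a
                    # push notification to an app, with the location on a map.
                    notify_officer(entry.label, frame["camera_id"], frame["location"])
```

Note the threshold: that is exactly the kind of knob false positives come from, which is what the city council’s original limits on facial recognition were trying to address.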
After the Washington Post began investigating this time around, city officials acknowledged the program and said they had “paused” it and that they “are in discussions with the city council” to change the city’s facial recognition technology law to permit this pervasive monitoring.
The ACLU is now urging the New Orleans City Council to launch a full investigation and reimpose a moratorium on facial recognition use until robust privacy protections, due process safeguards, and accountability measures are in place.
“Until now, no American police department has been willing to risk the massive public blowback from using such a brazen face recognition surveillance system,” said Nathan Freed Wessler, deputy director of ACLU’s Speech, Privacy, and Technology Project. “By adopting this system–in secret, without safeguards, and at tremendous threat to our privacy and security–the City of New Orleans has crossed a thick red line. This is the stuff of authoritarian surveillance states, and has no place in American policing.”
Can’t they still identify him as “the guy with the skeleton mask”? Maybe not “identify” exactly, but build a profile.
The WaPo article goes into a lot more detail: https://archive.ph/2fmW1
https://archive.ph/NxPbY
Not sure that it ever actually expired.
https://wp.api.aclu.org/press-releases/208236