Police Facial Recognition Can’t Recognize Black People (2023)

Imagine being handcuffed and accused of stealing watches. After hours in jail, you learn that state police ran facial recognition software on the store’s surveillance footage to identify you as the thief, and that the program misidentified you.

Unfortunately, this scenario is real. Three years ago, Robert Williams, a Black father from suburban Detroit, lived through it. And Williams’ story is not unique: facial recognition technology also led to the wrongful arrest of a Black man from Georgia for a handbag theft in Louisiana.

Our research found that facial recognition technology (FRT) can exacerbate racial disparities in policing. Police departments that use automated facial recognition disproportionately arrest Black people. Likely explanations include the scarcity of Black faces in the algorithms’ training data sets, a belief that these systems are infallible, and officers’ own biases, which magnify these problems.

We recognize the benefits of automating the time-consuming, manual face-matching process, and we understand the technology’s value for public safety. But its potential dangers demand enforceable safeguards to prevent unlawful overreach.

FRT uses artificial intelligence to identify people from images. Companies such as Amazon, Clearview AI and Microsoft build these algorithms for law enforcement to use in a range of situations. Yet despite advances in deep learning, federal testing shows that most facial recognition algorithms perform worse at identifying people who are not white.
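To see why a single overall accuracy number can hide this problem, here is a minimal sketch, in Python with hypothetical data, of the kind of disaggregated evaluation federal testers rely on: measuring the false-match rate separately for each demographic group rather than reporting one aggregate figure. The group labels and records are invented for illustration.

```python
from collections import defaultdict

# Hypothetical match records: (demographic_group, predicted_match, true_match).
# In real audits, such as NIST's federal vendor tests, these come from
# millions of labeled image pairs; this tiny list only illustrates the idea.
results = [
    ("group_a", True, True),
    ("group_a", True, False),   # a false match: the pair is not the same person
    ("group_b", True, True),
    ("group_b", False, False),
]

def false_match_rate_by_group(records):
    """Compute the false-match rate separately for each demographic group."""
    counts = defaultdict(lambda: {"false": 0, "nonmatch": 0})
    for group, predicted, actual in records:
        if not actual:                      # pairs that should NOT match
            counts[group]["nonmatch"] += 1
            if predicted:                   # ...but the system said they do
                counts[group]["false"] += 1
    return {
        group: c["false"] / c["nonmatch"]
        for group, c in counts.items()
        if c["nonmatch"] > 0
    }

# A single overall accuracy figure would hide the gap these per-group rates expose.
print(false_match_rate_by_group(results))
```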

Civil rights advocates warn that the technology’s difficulty distinguishing darker-skinned faces can lead to more racial profiling and more wrongful arrests. Inaccurate identification also means more missed arrests, letting actual perpetrators go free.

Yet New Orleans Mayor LaToya Cantrell claims this technology can help solve crimes, and some see FRT as a vital tool for extending police coverage amid nationwide staffing shortages. Despite its flaws, more than a quarter of local and state police forces and over half of federal law enforcement agencies use facial recognition technology.

Meanwhile, the technology threatens our constitutional protection against unreasonable searches and seizures.

San Francisco and Boston have banned or restricted government use of this technology to protect civil rights. In 2022, President Biden announced the “Blueprint for an AI Bill of Rights,” whose nonbinding principles aim to protect civil rights in the design and use of AI technologies.

This year, congressional Democrats reintroduced the Facial Recognition and Biometric Technology Moratorium Act. The bill would halt law enforcement use of FRT until policymakers can balance constitutional concerns with public safety.

The proposed AI bill of rights and the moratorium are a starting point for protecting people from AI and FRT, but both fall short. The blueprint is nonbinding, and the moratorium would restrict only federal authorities’ use of automated facial recognition, not use by municipal or state governments.

Our research and that of others shows that even with error-free software, facial recognition is likely to contribute to inequitable law enforcement practices unless safeguards also cover nonfederal use.

First, many Black neighborhoods already experience disproportionate police contact. The demands and time pressures of police work, along with an almost blind faith in AI that minimizes officers’ discretion, make algorithm-aided decisions less trustworthy, and they leave communities served by FRT-assisted police more vulnerable to enforcement disparities.

Police use this technology in several ways: in-field queries to identify people who have been stopped or detained, searches of video footage, and real-time scans of surveillance camera feeds. Officers input a single probe image, and the program compares it against large photo databases to generate a suspect lineup within seconds.
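In essence, the search ranks a gallery of known faces by how similar each is to the probe image. The following is a minimal sketch of that ranking step, assuming each face has already been converted into an embedding vector by a recognition model; the function names, vectors and identifiers are illustrative, not any vendor’s actual API.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (closer to 1.0 means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_candidate_lineup(probe_embedding, gallery, top_k=5):
    """Rank gallery entries by similarity to the probe image and return the top matches.

    `gallery` is a list of (person_id, embedding) pairs, e.g. drawn from a photo database.
    """
    scored = [
        (person_id, cosine_similarity(probe_embedding, embedding))
        for person_id, embedding in gallery
    ]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Illustrative 4-dimensional embeddings; real systems use hundreds of dimensions
# produced by a deep network from the face image itself.
probe = np.array([0.9, 0.1, 0.3, 0.2])
gallery = [
    ("person_001", np.array([0.8, 0.2, 0.3, 0.1])),
    ("person_002", np.array([0.1, 0.9, 0.2, 0.4])),
    ("person_003", np.array([0.7, 0.1, 0.4, 0.3])),
]

# The officer sees only this ranked shortlist, not the uncertainty behind it.
print(build_candidate_lineup(probe, gallery, top_k=2))
```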

Officers still make the enforcement decisions, but people tend to trust AI results, and relying on an automated tool is easier than making careful visual comparisons themselves.

AI-powered aids also create mental distance between officers and residents. Removed from the decision-making process, officers can become detached from their own actions. And users sometimes adopt computer-generated suggestions selectively, in ways that fit existing stereotypes, including those associating Black people with crime.

There is no proof that FRT reduces crime. Yet as cities fight crime, officials seem willing to accept these racial biases, putting people’s rights at risk.

Software companies and law enforcement must act now to reduce this technology’s dangers.

Companies need diverse design teams to build reliable facial recognition software. Most U.S. software engineers are white men, and the software is better at identifying people who look like its developers. Researchers attribute such findings to engineers’ unintentional “own-race bias” making its way into the algorithms.

Designers subconsciously favor the facial features of their own race and tend to test their algorithms on people of their own race. Many U.S.-made algorithms “learn” by looking at mostly white faces, which does not help them distinguish people of other races.

Diverse training sets can lessen FRT bias. Algorithms learn to compare images by training on large sets of photographs, and white men dominate those training photos, skewing the algorithms. Meanwhile, Black people are overrepresented in mugshot databases and other image repositories that police search. The result is that innocent Black people are more likely to be flagged by the AI, targeted and arrested.
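As a simple illustration of the imbalance problem, here is a hypothetical check of a training set’s demographic composition; the labels and counts are invented purely for the example, and a real audit would use far richer metadata.

```python
from collections import Counter

# Hypothetical training-set manifest: one demographic label per training image.
training_labels = ["white"] * 700 + ["black"] * 120 + ["asian"] * 100 + ["other"] * 80

def demographic_shares(labels):
    """Report each group's share of the training set so imbalances are visible."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

for group, share in sorted(demographic_shares(training_labels).items(),
                           key=lambda item: -item[1]):
    print(f"{group:>6}: {share:.1%}")
```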

Manufacturers of these products should attend to both employee diversity and image diversity. But law enforcement remains responsible as well. Police must critically scrutinize their own procedures to keep new technology from deepening racial inequities and rights abuses.

Police chiefs should require that matches meet a standard minimum similarity score. After a search, the facial recognition program ranks candidate suspects by visual similarity to the probe image. Today, departments set their own similarity score standards, which some experts say increases the risk of both wrongful and missed arrests.
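To show what is at stake in that choice, here is a minimal sketch of applying a department-level minimum similarity score before any candidate reaches an investigator; the threshold values and candidate list are hypothetical.

```python
def filter_by_minimum_score(candidates, minimum_score):
    """Keep only candidate matches at or above the department's minimum similarity score.

    `candidates` is a list of (person_id, similarity_score) pairs, as produced
    by the ranking step of a facial recognition search.
    """
    return [(pid, score) for pid, score in candidates if score >= minimum_score]

# Hypothetical ranked candidates from a single search.
ranked = [("person_003", 0.97), ("person_010", 0.82), ("person_044", 0.61)]

# With no shared standard, one department might review everything above 0.60
# while another requires 0.95, so the same search yields different suspect lists.
print(filter_by_minimum_score(ranked, minimum_score=0.60))
print(filter_by_minimum_score(ranked, minimum_score=0.95))
```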

Law enforcement will keep adopting FRT, and we appreciate its appeal. But without regulation and transparency, this technology may worsen racial gaps in enforcement outcomes such as traffic stops and arrests.

Police also need more training on FRT’s flaws, on human biases, and on historical discrimination in policing. And police and prosecutors should disclose when automated facial recognition was used to obtain a warrant.

Rules like these would help prevent needless arrests driven by FRT.
