Clearview AI this week revealed that it has offered the Ukrainian government free access to the company’s facial recognition AI (artificial intelligence) technology, potentially to uncover Russian assailants, identify refugees, combat misinformation, and identify the dead.
The news comes weeks into the Russian invasion of Ukraine and was shared in an exclusive to the Reuters news organization. These potential use cases for the Ukrainian defense ministry again put the spotlight on facial recognition technology, which has come under fire for its potential for misuse and privacy violations. These are all important considerations for CIOs and IT leaders to weigh as they consider the use of facial recognition, AI software, or any personal data that requires solid governance and compliance practices.
In many jurisdictions, an image cannot be used without the permission of its owner. Clearview AI built its database of images by scraping social media platforms such as Facebook, Twitter, and LinkedIn, never giving individuals the opportunity to opt out.
The Positive Use Cases
There are already plenty of potentially positive use cases for facial recognition technology, from the face unlock features on your smartphone, to finding missing children, to future conveniences such as checking in at an airport with your face alone (no need for your passport or driver’s license).
“Why do you need passports, for example?” asks Sagar Shah, an AI ethics specialist and client partner at Fractal.ai. “You just enter the airport and the system automatically knows who each person is. All the security protection and X-rays are automated.”
But any system that contains the personal information of millions of people also has the potential to be abused.
The Trouble With Clearview
Facial recognition AI has been fraught in recent years as critics have accused governments and other organizations of misusing the technology. Critics have cited multiple concerns, ranging from flawed performance in recognizing people with darker skin tones due to biased training data and algorithms, to the privacy issues that surface when cameras everywhere can recognize your face. These concerns have led tech giants such as IBM, Amazon, and Microsoft to ban sales of their facial recognition software to law enforcement.
In November Facebook parent Meta went a step further, shutting down its facial recognition system and deleting more than a billion people’s individual facial recognition templates. But it may have been a case of closing the barn door after the horse had already escaped.
Among the issues in Clearview AI’s case is how it built its database of images — by scraping the ones posted on social media platforms including Facebook, Twitter, LinkedIn and YouTube. These social media companies have taken measures to end the practice by Clearview, but the company still has all the images it has scraped from these sites. The UK’s Information Commissioner’s Office fined Clearview AI £17 million for breaching UK data protection laws, alleging that the company failed to inform citizens that it was collecting their photos.
Clearview AI still sells its facial recognition software to law enforcement and celebrates law enforcement use cases on its website.
Clearview AI’s founder told Reuters that his company’s database also includes more than 2 billion images from Russian social media service VKontakte, which could be useful in applications by the Ukrainian government. He told Reuters that he had not offered the technology to Russia.
Omdia Research Director for AI and Intelligent Automation Natalia Modjeska says that the move to provide this software to Ukraine may be Clearview AI’s attempt to rehabilitate its reputation by capitalizing on the crisis with positive public relations.
It’s unclear whether Ukraine will use Clearview, according to the Reuters report, which also noted that the Ukraine Ministry of Digital Transformation had previously said it was considering offers of technology from US-based AI companies like Clearview.
Even where positive use cases exist, facial recognition software can be used in violation of human rights. Fractal.ai’s Shah points to the example of Hong Kong a few years ago, when China used facial recognition software to identify protesters.
“They used it to figure out, oh, this guy’s protesting, let’s send the police to their home,” Shah says.
What to Read Next:
Tech Giants Back Off Selling Facial Recognition AI to Police
Facebook Shuts Down Facial Recognition
The Problem with AI Facial Recognition