Blog: Facial Recognition Is Not Yet a Threat

By Lou Covey

Facial-recognition technology has been available to governments and law enforcement for many years, and the new entrants in the field are far from comprehensive or accurate.

Facial-recognition technology has caused a great deal of handwringing in the news over the past few weeks, and a lot of it is over the controversial ClearviewAI app. The concern shows how little the general media, the general public, and law enforcement actually understand about the state of facial recognition and the capabilities of AI.

ClearviewAI scraped 3 billion images from social media sites like Facebook, Twitter, LinkedIn, and Venmo and connected them with names, addresses, and other crucial data available on the web. The app has been sold to 600 police departments around the world, whose officers can snap a picture on the street or pull a frame from surveillance video, upload it to the database through the app, and identify potential criminals.

It was built by a computer-savvy Australian developer of Vietnamese descent, Hoan Ton-That, whose previous claim to fame was an app that let users put Donald Trump’s hair on anything they wanted. Ton-That claims that the app has helped police arrest hundreds of criminals, including child abusers and rapists, but the police departments aren’t backing up that claim. They are not talking about it at all.

This search function has already been available to law enforcement for several years through image-search apps. You can snap a picture of yourself and have Google find other places where your photo shows up. Apple Photos also lets you search for specific people in your photo library, and I’m absolutely sure that Apple isn’t keeping a similar database (wink wink, nudge nudge). So ClearviewAI is nothing new.

There are a lot of problems with this story for me. First, social media platforms prohibit other companies and individuals from scraping their sites for user data. Ton-That dismisses that problem by saying, “Lots of people do it and they know it.” That may be true, but once you admit you are doing it, you may not be allowed to keep doing it. Then again, Ton-That may have a point: Facebook may give ClearviewAI a pass, as it did Cambridge Analytica. On top of that, ClearviewAI has raised some serious money for this tech, including a few hundred thousand dollars from ultra-conservative Peter Thiel, who is also a Facebook board member.

Second, you can very easily thwart this tech. Go into your social media accounts and hide your address, email, phone number, and birthday. Voila! You are out of the database that police can access. Pretty simple. I’m still surprised by how much data people share publicly. Heck, I don’t even put my phone number on my business cards anymore; there are plenty of ways to contact me without it.

Third is the issue of facial-recognition technology’s effectiveness. I’ve seen demonstrations of a handful of Western technologies that claimed better than 90% accuracy in determining emotion, age, gender, and race. None of them performed as well as the companies claimed. One didn’t even hit 65% profiling me (wrong age, wrong height, and apparently, I have resting bitch face, because it constantly said I was angry… and I wasn’t). Ton-That claims only 75% accuracy, so one out of four times, it gets it wrong. It also shares a weakness with all other facial-recognition apps: it often returns false positives for people with dark skin. In those cases, it’s even odds that it will get the ID wrong.

The other side of the discussion is what China is doing with facial recognition, which is a much larger effort than ClearviewAI’s but still no closer to being comprehensive. China has created a database of criminals and people it considers political subversives but has not started collecting every face in China. It also has the biometrics of anyone legally entering the country on a passport, but that is a smaller sample than its criminal database. That brings up another issue.

If you have a government-issued identification document, like a driver’s license or a passport, the government already has your face and contact information. While states don’t share that information with federal agencies, the feds do have all the passport information. But because fewer than 10% of citizens have passports, there is no comprehensive database of Americans, either.

Finally, the numbers just don’t work for me. As stated, ClearviewAI has scraped 3 billion photos from social media. According to Statista, there are currently 2.65 billion social media users in the world, so the database obviously contains some duplicates. Only 85% of millennial users have ever posted a selfie, and over the age of 35 that number drops to 65%, so a significant number of people are not in the database at all. Still, we are talking about a billion people at this point. But wait! There’s less.

The average number of selfies from a social media user who posts selfies at all is about 4,000, ranging from a high of 20,000 down to a low of four. Divide the 3 billion scraped photos by per-person counts like those, and the number of distinct people in the ClearviewAI database could be as low as 250,000 worldwide. Spread that over the populations covered by the 600 police departments claiming to use it (including New York City), and you have a very small number of people who can be identified at all.
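For anyone who wants to check the arithmetic, here is a minimal back-of-envelope sketch of that estimate. The photo count, user count, and per-person selfie figures are the ones cited above; the assumption that the scrape skews toward heavier posters is mine, added purely for illustration, and it is what pushes the final figure down into the low hundreds of thousands.

```python
# Rough back-of-envelope sketch of the database-size argument above.
# All inputs are the article's cited estimates; the "heavy poster" case is an
# illustrative assumption, not a figure from ClearviewAI.

SCRAPED_PHOTOS = 3_000_000_000          # photos ClearviewAI is said to have scraped
SOCIAL_MEDIA_USERS = 2_650_000_000      # Statista's worldwide social media user count

# Per-person selfie counts cited in the article.
AVG_SELFIES = 4_000                     # average for users who post selfies at all
HEAVY_POSTER_SELFIES = 20_000           # high end of the cited range

# If every scraped photo came from an "average" selfie taker:
people_if_average = SCRAPED_PHOTOS / AVG_SELFIES          # ~750,000 people
# If the scrape skews toward the heaviest posters (assumption):
people_if_heavy = SCRAPED_PHOTOS / HEAVY_POSTER_SELFIES   # ~150,000 people

print(f"Distinct people, average posters:  {people_if_average:,.0f}")
print(f"Distinct people, heaviest posters: {people_if_heavy:,.0f}")
print(f"Share of all social media users:   "
      f"{people_if_average / SOCIAL_MEDIA_USERS:.4%}")
```

Under those assumptions the distinct-person count lands somewhere between roughly 150,000 and 750,000, a sliver of the world's social media users, which is the point: the headline "3 billion photos" says very little about how many individual faces the database can actually match.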

An AI is only as effective as its database is comprehensive. ClearviewAI’s product is rudimentary at best, and its success is due more to its marketing, and to what it hasn’t told its customers, than to the technology itself. The potential to keep watch on people who have already been placed in police databases is there, but the ClearviewAI product is, at best, hit-and-miss, and any result could probably be thrown out of court by a mediocre lawyer.
