Can we trust organizations with our facial images?

The benefits of facial recognition technology (FRT) are everywhere. In medical settings, therapists use facial scans to help diagnose autism early. In the public safety sector, officers use FRT to apprehend dangerous criminals. The technology offers undeniable advantages, but its privacy risks and technical limitations must be addressed.

When your iPhone scans your face instead of asking for a password, when Facebook tags you online, or when you pass through airport security, an AI compares your face to millions of others and acts on what it finds. In seconds, the AI detects your face, captures an image, and analyzes your features. It converts those features into a mathematical representation, a unique faceprint, and compares it to a database.
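The matching step described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual system: the `best_match` function, the toy four-dimensional faceprints, and the `0.8` threshold are all invented for the example. Real systems extract embeddings with hundreds of dimensions from a neural network; here the comparison is plain cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two faceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.8):
    """Compare a probe faceprint to every enrolled one; return the closest
    (name, score), or (None, score) if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, faceprint in database.items():
        score = cosine_similarity(probe, faceprint)
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:
        return None, best_score
    return best_name, best_score

# Toy enrolled faceprints (real embeddings have 128+ dimensions).
db = {
    "alice": [0.9, 0.1, 0.3, 0.5],
    "bob":   [0.1, 0.8, 0.7, 0.2],
}
probe = [0.85, 0.15, 0.35, 0.45]  # a new camera capture, already encoded
name, score = best_match(probe, db)
```

Note that the threshold is a policy choice, not a fact of nature: set it low and the system "recognizes" strangers; set it high and it misses genuine matches. Much of the accuracy debate below is really a debate about where that line is drawn and on whose faces it was tuned.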

People are already uneasy about being tracked while carrying cell phones and find FRT even more invasive. Cell phones show where you are but do not reveal who you are with, what you are wearing, the vehicle you are driving, your age, or your gender. You can deactivate your cell phone, but it is impractical to conceal your face from video surveillance.

We are already seeing instances of cybercriminals holding sensitive data hostage. As our movements are tracked, we become vulnerable to blackmail and easy prey for stalkers.

Another issue with FRT is accuracy. Facial recognition systems are not trained on diverse images across ages, genders, and ethnicities and, as a result, are prone to bias. When the software assists in security and threat assessment, it flags people of colour as risks more often than other groups because it generalizes facial traits from the images in its database and applies them to new faces. Men of colour are pulled aside and interrogated more often than anyone else.

Other accuracy issues stem from these systems being trained on images that do not represent real-world orientations, perspectives, or lighting. For instance, systems are usually trained on pictures of people directly facing the camera, yet in real-world deployments, cameras are mounted high and angled downward. Moreover, much of the footage comprises thermal images captured at night, which AI systems are not trained to interpret.

While FRT holds incredible potential, the data can lead to biased outcomes and be misused. Facial recognition systems are marketed as 99 per cent accurate, but that figure reflects the company's own limited training and test conditions, not real-world deployment. When people trust these systems blindly, they subject individuals to unwarranted interrogation and suspicion.

The need for regulation is clear. Currently, every jurisdiction has different laws and rules in place. Many organizations post a "surveillance area" sign to inform people they are being watched, but they do not tell people whether the data is being saved or shared.

Today, whoever owns the camera controls the footage.

Laws must balance the benefits and risks of FRT. Law enforcement needs access to this technology to identify perpetrators and rescue victims, but steps must be taken to prohibit the unauthorized dissemination of the data. Mandatory access logs and policies penalizing unauthorized access are essential if we are to trust organizations with our facial images.

Dr. Karen Panetta is a Fellow of the National Academy of Inventors, Dean of Graduate Education for the School of Engineering at Tufts University, and Nerd Girls founder.
