
Bias in AI: How to Reduce Bias in Facial Recognition Systems

By Gizbot Bureau

Introduction

Facial recognition technology has swiftly embedded itself into our daily lives, offering convenience and heightened security. While hailed as transformative in sectors such as security, retail, and healthcare, it is not without significant drawbacks, and racial bias is the most prominent concern.

Facial recognition technology (FRT) has rapidly advanced due to deep learning and extensive datasets. Key applications of FRT include:

Security and Surveillance:

Security cameras equipped with FRT can scan faces in real time and match them against a database of known individuals, such as criminals or missing persons. The technology is used widely by law enforcement agencies worldwide; China, for instance, operates a vast surveillance network that leverages FRT to identify and track individuals in public spaces.

Examples: The FBI's Next Generation Identification system in the U.S., which includes a facial recognition component to help identify suspects.

Smartphone Authentication:

Modern smartphones use FRT as a biometric method to unlock devices. The phone captures the user's face and matches it against a stored facial signature. Millions of devices, particularly premium smartphones like Apple's iPhone with Face ID, use this technology.

Examples: Apple's Face ID, Android's Face Unlock.

Airport and Border Control:

Facial recognition kiosks capture travelers' faces and match them against passport photos or other official documents to verify identities. Many international airports, especially in developed countries, are adopting FRT to expedite passenger processing and enhance security.

Examples: U.S. Customs and Border Protection's Biometric Exit program, Singapore's Changi Airport facial recognition systems.

Payment Systems:

Some payment platforms allow users to authenticate transactions using their face. Cameras at checkout counters capture the customer's face and match it against a pre-registered facial signature linked to their account. Facial payment is emerging in markets like China, where tech giants such as Alibaba have introduced it in retail settings.

Examples: Alipay's "Smile to Pay" service in KFC outlets in China.

Healthcare:

FRT can be used to identify patients, access medical records, or even detect certain genetic diseases characterized by distinct facial features. This use is still in its early stages but has potential for widespread adoption, especially in hospitals and clinics looking to digitize and streamline patient management.

Examples: DeepGestalt, an AI tool that can identify genetic disorders by analyzing facial photographs.

The underlying technology for most of these applications involves capturing facial features, converting them into a numerical code (or facial signature), and then comparing this code to a database. The accuracy and efficiency of FRT have increased with the evolution of deep learning models, especially convolutional neural networks (CNNs), which can process and learn from vast amounts of image data. However, as with all technologies, the ethical implications and potential misuse of FRT have led to discussions about regulations and guidelines to ensure its responsible use.

"As someone working in the domain of generative AI, I acknowledge the necessity for regulatory frameworks pertaining to FRT. Generative AI is critical for generating training data for FRT models, and the misuse of synthetic facial data is a dual concern: it both threatens personal privacy and could be used for nefarious purposes. Ethical rules are a key part of reducing bias in training data, which helps ensure that everyone is represented fairly and reduces the amount of harmful deepfake content," explains Surabhi Sinha, a Machine Learning Engineer at Adobe with over six years of experience in machine learning and AI. In addition to her expertise in generative AI, she focuses on making generative AI efficient. A USC computer science postgraduate, she has used generative AI not only to impact businesses but has also contributed to the broader field through her work in neuroimaging. She has also judged various industry awards and peer-reviewed national and international conferences.
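To make the matching step described above concrete, here is a minimal sketch of how a "facial signature" comparison can work: each face is assumed to have already been reduced to a fixed-length embedding vector, and identification reduces to finding the most similar enrolled vector. The embedding dimension, the similarity threshold, and the `identify` helper are illustrative placeholders, not any vendor's actual API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two facial signatures (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the enrolled identity whose signature best matches the probe,
    or None if no candidate clears the (illustrative) decision threshold."""
    best_name, best_score = None, -1.0
    for name, signature in gallery.items():
        score = cosine_similarity(probe, signature)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Hypothetical 128-dimensional signatures produced by some embedding model.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + 0.05 * rng.normal(size=128)  # noisy re-capture
print(identify(probe, gallery))  # -> "alice"
```

In a deployed system the embedding model and the threshold are where most of the accuracy, and most of the bias, lives: if the embeddings cluster poorly for some demographic groups, genuine matches for those groups fall below the threshold more often.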

The Problem of Bias

The root of the bias often lies in the training data used to develop these systems. When the data skews towards a particular racial or ethnic group, the system is less likely to accurately identify individuals from underrepresented groups.
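As an illustration of how such skew can be spotted before training even begins, the sketch below simply tabulates how a dataset is distributed across demographic groups; the group labels and counts are made up for the example. A heavily unbalanced distribution is an early warning that the resulting model may underperform on underrepresented groups.

```python
from collections import Counter

# Hypothetical demographic annotations for a training set; in practice these
# would come from the dataset's metadata or a dedicated labeling effort.
labels = ["group_a"] * 7000 + ["group_b"] * 2000 + ["group_c"] * 1000

counts = Counter(labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({100 * n / total:.1f}% of training data)")
# A split like 70/20/10 is a red flag: the model sees far fewer examples
# of group_c and will typically make more errors on faces from that group.
```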

Case Study: False Imprisonment Due to Biased Algorithms

In January 2020, a man named Robert Williams was wrongly arrested in Detroit due to a flawed facial recognition match. Williams, an African American, was misidentified by a facial recognition system as a shoplifting suspect. He was detained for over 30 hours before the police realized the mistake. This incident highlighted critical flaws in existing facial recognition systems, especially concerning their performance on people of color, and Williams' case serves as a sobering reminder of the real-world implications of biased algorithms.

Existing Solutions

Algorithmic Adjustments

One approach to mitigate bias is by fine-tuning the algorithm to be more sensitive to variations in skin tone and facial features. This often involves retraining the model on a more diverse dataset.
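One common way to make such retraining more sensitive to underrepresented groups is to weight training samples inversely to the frequency of their group, so that errors on rare groups contribute more to the loss. The sketch below only computes those weights; the group labels are hypothetical, and passing the result as `sample_weight` to a downstream classifier (as many scikit-learn estimators allow) is one possible use, not a prescribed recipe.

```python
import numpy as np

def inverse_frequency_weights(groups: np.ndarray) -> np.ndarray:
    """Give each sample a weight inversely proportional to the size of its
    demographic group, so minority-group errors count more during training."""
    unique, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(unique, counts / len(groups)))
    weights = np.array([1.0 / freq[g] for g in groups])
    return weights / weights.mean()  # normalize so the average weight is 1

groups = np.array(["a"] * 700 + ["b"] * 200 + ["c"] * 100)
w = inverse_frequency_weights(groups)
for g in ["a", "b", "c"]:
    print(g, round(float(w[groups == g][0]), 2))
# -> a 0.48, b 1.67, c 3.33: samples from the smallest group carry roughly
# 7x the weight of those from the largest. These values can typically be
# supplied via the `sample_weight` argument of a classifier's fit() method.
```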

Inclusive Data Collection

Another method focuses on the data itself. By intentionally including a diverse range of faces in the training data, the resultant system becomes more equitable.
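Where additional data can be collected or selected, a simple complementary tactic is to assemble the training set with the same number of images per group, rather than taking whatever distribution the raw pool happens to have. The group names and pool sizes below are invented for illustration.

```python
import random

# Hypothetical pool of candidate images, keyed by demographic group.
pool = {
    "group_a": [f"a_{i}.jpg" for i in range(7000)],
    "group_b": [f"b_{i}.jpg" for i in range(2000)],
    "group_c": [f"c_{i}.jpg" for i in range(1000)],
}

per_group = min(len(images) for images in pool.values())  # limited by the rarest group
random.seed(0)
balanced = {g: random.sample(images, per_group) for g, images in pool.items()}
print({g: len(imgs) for g, imgs in balanced.items()})
# -> {'group_a': 1000, 'group_b': 1000, 'group_c': 1000}
```

Capping every group at the size of the rarest one trades raw volume for balance; in practice teams also invest in collecting more data for the underrepresented groups rather than only discarding data from the overrepresented ones.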

Expert Insights

Dr. Chinmay Sahu, a seasoned expert in biometrics and one of the co-authors of the pioneering research paper titled "SREDS: A Measure of Skin Color Based on Dichromatic Separation," expresses his viewpoint:

"Facial recognition technology has the potential to revolutionize various sectors, from security to healthcare. However, it's crucial that we address the inherent biases in current systems. There are a number of ways to address the discriminatory potential of skin tone-based deep fake detection. One approach is to use a combination of features, such as skin tone, facial structure, and eye movement, to make identifications. The SREDS approach is one step towards ensuring that these technologies are equitable across all demographics. This can help to reduce the reliance on any single feature and make the identification process more robust. Another approach is to use machine learning algorithms that are trained on a diverse dataset of images, including images of people with all skin tones. This can help to ensure that the algorithm is not biased against any particular group of people."

He further adds:

"Skin tone reflectance is a critical factor that has been largely overlooked in conventional facial recognition algorithms. By incorporating this measure, we can significantly improve the accuracy of these systems for people of color, thereby making technology more inclusive. Ultimately, the use of skin tone in deep fake detection is a trade-off between accuracy and fairness. There is no perfect solution, but by carefully considering the risks and benefits, it is possible to develop methods that are both effective and equitable."

The SREDS paper presents a novel methodology that focuses on skin tone reflectance to improve facial recognition. This technique can serve as a building block for developing more equitable facial recognition systems.
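The paper itself defines how the skin-tone measure is computed; as a rough illustration of how any such scalar measure could be used in an audit, the sketch below bins evaluation trials by a hypothetical skin-tone score and reports the false non-match rate in each bin. The scores and match outcomes are synthetic placeholders, not SREDS output or results from any real system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit data: one skin-tone score per genuine-match trial (a
# stand-in for a measure such as SREDS) and whether the trial succeeded.
skin_tone = rng.uniform(0.0, 1.0, size=5000)
# Synthetic outcome: this toy "system" fails more often at higher scores.
matched = rng.uniform(size=5000) > 0.02 + 0.10 * skin_tone

bins = np.linspace(0.0, 1.0, 5)          # four skin-tone bins
bin_ids = np.digitize(skin_tone, bins[1:-1])
for b in range(4):
    mask = bin_ids == b
    fnmr = 1.0 - matched[mask].mean()    # false non-match rate in this bin
    print(f"bin {b}: FNMR = {fnmr:.3f} over {mask.sum()} genuine trials")
# A system that is fair across skin tones should show roughly flat FNMR
# across bins; a clear trend from one end to the other indicates bias.
```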

Adding to this, Surabhi Sinha comments:

"Ensuring the fairness and justice of FRT models is not only an ethical need, but it is also necessary for their real-world applicability and impact.

"To begin with, minimizing bias in training data is critical to avoiding perpetuating or exacerbating existing socioeconomic inequities. Biased data can lead to AI systems that discriminate against specific groups, perpetuating inequalities in areas such as healthcare, finance, and employment. To overcome this, when collecting and compiling training data, we should keep in mind that diversity is a key principle. Efforts should be made to collect data that reflects a diverse range of demographics, backgrounds, and opinions. To further ensure the robustness of bias-free models, transparency is a critical aspect. Recording of data collection and labeling methods should be done to identify and correct potential sources of bias.

"Furthermore, models should be tested for bias and fairness on a regular basis, modifying the training data and algorithms as needed. Collaboration with domain experts and varied teams can also help provide us with vital insights and views that can help make AI systems as bias-free as possible."
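As a concrete form of the regular testing Sinha describes, a team might track per-group accuracy at every evaluation run and flag the model whenever the gap between the best- and worst-served groups exceeds a tolerance. Everything below (the group names, the accuracy figures, and the 5-percentage-point tolerance) is a made-up example of such a check, not an industry standard.

```python
def flag_accuracy_disparity(per_group_accuracy: dict[str, float],
                            max_gap: float = 0.05) -> bool:
    """Return True if the spread between the best- and worst-performing
    demographic groups exceeds the allowed gap (here, 5 percentage points)."""
    gap = max(per_group_accuracy.values()) - min(per_group_accuracy.values())
    return gap > max_gap

# Hypothetical results from the latest evaluation run.
results = {"group_a": 0.985, "group_b": 0.978, "group_c": 0.930}
if flag_accuracy_disparity(results):
    print("Fairness check failed: rebalance data or retrain before deployment.")
```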

Conclusion

Reducing racial bias in facial recognition is not just a technical challenge but a moral imperative. As researchers in academia and industry work towards more inclusive technologies, it is vital that governments around the world follow suit.
