ETD-HUB

9: What are the Facial Recognition Legal & Ethical Risks?

Asked: 3 months, 1 week ago By: Catalink Views: 108 Catalink Case Study: IRIS

Given that the IRIS application estimates user drowsiness using facial images, what are the primary ethical and legal risks that must be addressed?

17 Answers

Answered: 1 month, 2 weeks ago By: Chiamakaokorie
-
Answered: 1 month, 2 weeks ago By: Tundefasina
Key risks include privacy intrusion, biometric surveillance, and bias or discrimination due to uneven model performance across demographics. Legally, facial images may qualify as biometric data, triggering GDPR Article 9 protections, strict consent requirements, and heightened obligations around security, transparency, and purpose limitation.
Answered: 1 month, 2 weeks ago By: Zainabodogwu2
• Bias & discrimination across demographics
• Biometric data → GDPR Article 9 → explicit consent required
• Data security, privacy, transparency
• Automated decisions → Article 22 implications
Answered: 1 month, 2 weeks ago By: Oliverharrow
Yes, deepfakes can be harmful.
Answered: 1 month, 2 weeks ago By: Ngozioshoba
Using facial images raises privacy and consent concerns because biometric data is sensitive. There is also a risk of misuse or biased performance across different demographic groups. Legally, the system must ensure secure storage, transparency, and clear limits on how images are used.
Answered: 1 month, 2 weeks ago By: Efeadelaja
Privacy risk: collection, storage, and processing of facial images can violate data protection laws (e.g., GDPR, CCPA) if not handled properly. Consent issues: users must give informed, explicit consent for biometric data use. Data security: facial images must be stored and transmitted securely, since a breach of biometric data cannot be remedied the way a leaked password can.
Answered: 1 month, 2 weeks ago By: Meilincai
Biometric data processing risks, transparency, and user autonomy.
Answered: 1 month, 2 weeks ago By: Kelechinwosu
Constant camera monitoring can lead to a "chilling effect" where drivers feel micromanaged, causing stress and reducing job satisfaction. There is also the risk of "function creep", where data collected for safety is later used to judge performance or determine insurance premiums.
Answered: 1 month, 2 weeks ago By: Beatricelorne
People's facial images cannot be shared publicly, and drivers must know that their faces are being assessed for drowsiness.
Answered: 1 month, 2 weeks ago By: Zainabodogwu32
The use of facial imagery for drowsiness detection raises significant ethical and legal risks, primarily due to the intrusive nature of facial data and its potential misuse. From an ethical perspective, facial images are deeply personal and closely tied to identity. Continuous monitoring may create feelings of surveillance, loss of autonomy, and reduced trust, particularly if drivers are unclear about how long data is stored or how it may be reused. Bias in facial landmark recognition models further exacerbates ethical concerns, as inaccurate detection for certain racial or physical characteristics may disproportionately affect specific groups, reinforcing inequality.

Legally, facial imagery constitutes biometric data when processed to uniquely identify or analyse individuals. This creates heightened obligations under GDPR, including strict conditions for lawful processing, transparency, and security. Any failure to clearly define purpose, limit retention, or protect the data could expose IRIS operators to regulatory enforcement and liability.
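The "uneven performance across groups" concern raised above can be made concrete with a fairness audit. The sketch below is hypothetical and uses illustrative data, not IRIS outputs; it simply compares the false-negative rate (drowsy drivers the model misses) across demographic groups, which is one common way such a disparity would be measured.

```python
# Hypothetical fairness audit: compare drowsiness-detection miss rates
# across demographic groups. All labels/predictions below are illustrative.

def false_negative_rate(labels, predictions):
    """Fraction of truly drowsy cases (label 1) the model predicted as alert (0)."""
    drowsy = [(y, p) for y, p in zip(labels, predictions) if y == 1]
    if not drowsy:
        return 0.0
    return sum(1 for y, p in drowsy if p == 0) / len(drowsy)

# Illustrative per-group ground truth and model predictions.
groups = {
    "group_a": ([1, 1, 0, 1, 0], [1, 1, 0, 1, 0]),  # no missed drowsy cases
    "group_b": ([1, 1, 1, 0, 0], [0, 1, 0, 0, 0]),  # two of three drowsy cases missed
}

rates = {name: false_negative_rate(y, p) for name, (y, p) in groups.items()}

# A large gap between groups is the kind of disparity that would raise
# both the ethical concerns and the GDPR fairness obligations discussed above.
disparity = max(rates.values()) - min(rates.values())
```

A real audit would of course use held-out evaluation data per group and a much larger sample; the point is only that "uneven model performance" is measurable and should be monitored.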
Answered: 1 month, 2 weeks ago By: Miles_Hatcher
Privacy, ethical concerns, false negatives, and general inaccuracy.
Answered: 1 month, 2 weeks ago By: Aminaolorun
Privacy and consent, misinterpretation of facial data
Answered: 1 month, 2 weeks ago By: Clarawhitby
The application collects users' biometric data.
Answered: 1 month, 2 weeks ago By: Ifeanyiakare
Privacy & consent: Facial images are biometric data, requiring explicit GDPR consent. Surveillance concerns: Continuous monitoring may be seen as intrusive. Liability: Misclassification causing accidents may expose providers to legal claims.
Answered: 1 month, 2 weeks ago By: Kunleekwueme
Respect for persons. AI facial detection still has several issues because of the datasets most models have been trained on.
Answered: 1 month, 2 weeks ago By: Sadeogunlana
Some faces may naturally appear drowsy, and there is also the implication of unlawful emotion detection.
Answered: 1 month, 2 weeks ago By: Tomashbrook
Risks include the security of collected user data and the potential infringement of users' privacy.
