Uber Eats courier Pa Edrissa Manjang, a Black man, reportedly received a payout from Uber on Tuesday after being denied access to the app because of “racially discriminatory” facial recognition checks. Manjang had been using Uber Eats since November 2019 to pick up jobs delivering food on the company’s platform.
The case calls into question the suitability of UK legislation to address the growing use of artificial intelligence (AI). In particular, the lack of transparency around hastily deployed automated systems that claim to improve user safety and/or service efficiency risks harm proliferating at scale, even as redress for anyone affected by AI-driven bias can take years to obtain.
The lawsuit followed numerous complaints about failed facial recognition checks after Uber launched its Real-Time ID Check system in the UK in April 2020. Built on Microsoft’s facial recognition technology, the system requires the account holder to submit a live selfie, which is compared against a photo of them held on file in order to verify their identity.
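Uber has not published how the comparison itself is performed, but face verification systems of this kind typically reduce each image to an embedding vector and accept the match if the vectors are similar enough. The sketch below is purely illustrative, not Uber’s or Microsoft’s actual implementation: the function names, the cosine-similarity metric, and the threshold value are all assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(selfie_embedding: np.ndarray,
           reference_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Return True if the live selfie is judged to match the photo on file.

    The threshold is a tunable operating point (0.6 here is arbitrary).
    If the underlying embedding model performs worse on some skin tones,
    false rejections at any fixed threshold fall unevenly across users.
    """
    return cosine_similarity(selfie_embedding, reference_embedding) >= threshold
```

The key point for this case is the last comment: a single global threshold can produce systematically different failure rates for different demographic groups, which is the kind of disparity Manjang’s claim alleged.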
FAILED ID CHECKS
In response to Manjang’s complaint, Uber reportedly found “continued mismatches” in the face photos he had taken to gain access to the platform, and as a result suspended and eventually deleted his account. In October 2021, Manjang brought legal action against Uber with the backing of the App Drivers & Couriers Union (ADCU) and the Equality and Human Rights Commission (EHRC).
Over several years of litigation, Uber failed to have Manjang’s claim struck out or to have a deposit ordered as a condition of pursuing it. The tactic appears to have helped drag out the proceedings: the EHRC noted in autumn 2023 that the case illustrated “the complexity of a claim dealing with AI technology” and was still at the “preliminary stages.” A 17-day final hearing had been scheduled for November 2024.
Instead, Uber offered Manjang a financial settlement, which he accepted. As a result, the hearing will not take place, and the details of exactly what went wrong, and why, will remain confidential. The terms of the settlement are also unknown. When we contacted Uber, it declined to respond or provide specifics about what went wrong.
We also approached Microsoft, which declined to comment on the case’s conclusion.
Although Uber has settled with Manjang, it has not publicly acknowledged that any of its systems or processes were flawed. Its statement on the settlement maintains that facial recognition checks are backed by “robust human review,” implying that courier accounts are never terminated on the basis of AI assessments alone.
DATA PROTECTION AND EQUALITY LAWS
The case raises concerns about the suitability of UK legislation to regulate the use of artificial intelligence.
Manjang ultimately secured a settlement from Uber through a legal process grounded in equality law, namely a discrimination claim under the U.K.’s Equality Act 2010, which lists race as a protected characteristic.
Baroness Kishwer Falkner, who chairs the EHRC, criticized the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work,” as she put it in a statement.
The selfie data relevant to Manjang’s claim was obtained using data access rights under the U.K. GDPR. Had he been unable to gather such compelling evidence that Uber’s identity checks had failed, the company might never have chosen to settle. Having to prove a proprietary system’s flaws without access to the relevant personal data would further stack the odds in favor of the far better-resourced platforms.
Loopholes in Enforcement
Beyond data access rights, the GDPR is intended to provide people with other protections, including protection from automated decisions that have a legal or similarly significant effect. The law also requires a lawful basis for processing personal data and pushes deployers of such systems to carry out data protection impact assessments so that risks are anticipated in advance. These requirements should compel stronger safeguards against harmful AI systems.