Governance & Risk Management, Privacy, Standards, Regulations & Compliance
5-Year Ban Comes After Retailer Failed to Mitigate Security Surveillance Risks
Marianne Kolbasuk McGee (HealthInfoSec) • December 20, 2023
U.S. drugstore chain Rite Aid will be banned from deploying facial recognition technology for security surveillance for the next five years under a settlement with the Federal Trade Commission.
The FTC on Tuesday said the retailer had failed to implement safeguards, such as measures to mitigate the risk of inaccurate outputs based on race and gender.
Between December 2019 and July 2020, the chain, which consists of approximately 2,300 stores nationwide, deployed facial recognition technology to identify customers who may have shoplifted, recording thousands of false positives, the agency said. The retail chain first began implementing facial recognition systems in 2012, it said.
The FTC said in a federal court complaint that Rite Aid had failed to take reasonable measures to prevent harm to consumers, such as shoppers erroneously accused of wrongdoing by employees after the facial recognition technology falsely flagged them as matching someone previously identified as a shoplifter or “other troublemaker.”
Rite Aid in a statement said, “We fundamentally disagree with the facial recognition allegations in the agency’s complaint,” adding that the allegations about facial recognition misfires had stemmed from a “pilot program the company deployed in a limited number of stores.”
The settlement, which requires approval from a judge, is a modification of a $1 million 2010 consent order reached between Rite Aid and the FTC over allegations that the pharmacy had failed to protect financial and medical information of customers and employees (see: Rite Aid to Pay $1 Million in HIPAA Case).
Among the agency’s newer grievances with the pharmacy chain is that the facial recognition technology was more likely to generate false positives in stores located in predominantly Black and Asian communities than in plurality-white communities. Bias in the technology and its applications has been an ongoing concern, not least since a government-published study in 2019 found disparities in accuracy across race and gender. An October 2022 academic study found that law enforcement’s use of facial recognition technology has contributed to greater racial disparity in arrests.
The FTC said Rite Aid had contracted with two companies to help build an image database of individuals considered to be persons of interest. Rite Aid believed the individuals had “engaged in or attempted to engage in criminal activity at one of its retail locations” and acquired their “names and other information such as any criminal background data,” the FTC said.
Rite Aid’s collection of tens of thousands of images of individuals included many low-quality images from the retailer’s security cameras, employee phone cameras, and media reports.
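As the complaint describes it, shoppers could be compared against this enrollment database as they moved through stores, so even a small per-comparison error rate compounds across thousands of watchlist entries and daily scans. The sketch below is a back-of-the-envelope illustration of that arithmetic; the watchlist size, per-comparison false-match rate and daily scan volume are assumed values chosen for illustration, not figures from the FTC complaint.

```python
# Back-of-the-envelope illustration of how per-comparison false-match rates
# compound when every shopper is screened against a large watchlist.
# All numbers are assumptions chosen for illustration, not FTC figures.
WATCHLIST_SIZE = 10_000       # enrolled "persons of interest" (assumed)
FMR_PER_COMPARISON = 1e-5     # false-match rate of a single 1:1 comparison (assumed)
SCANS_PER_DAY = 50_000        # shoppers scanned chain-wide per day (assumed)

# Probability that a single scan falsely matches at least one watchlist entry.
p_false_alert = 1 - (1 - FMR_PER_COMPARISON) ** WATCHLIST_SIZE

# Expected number of shoppers falsely flagged per day across the chain.
expected_false_alerts = p_false_alert * SCANS_PER_DAY

print(f"P(false alert per scan) = {p_false_alert:.2%}")          # roughly 9.5%
print(f"Expected false alerts per day = {expected_false_alerts:,.0f}")
```

Under these assumed numbers, even a seemingly accurate matcher would flag thousands of innocent shoppers per day, and lower-quality enrollment images push the per-comparison error rate, and with it the totals, higher still.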
“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement.
As part of the settlement, Rite Aid and its third-party vendors must delete or destroy all photos and videos of consumers used or collected in connection with the operation of its facial recognition system, as well as any related data, models or algorithms.
Rite Aid must also notify consumers when their biometric information is used in a biometric security or surveillance system and when the retailer takes an action against them based on an output generated by the system.
The company must also investigate and respond in writing to complaints about actions taken against consumers related to an automated biometric security or surveillance system.
Rite Aid did not immediately respond to Information Security Media Group’s request for further comment.
AI Concerns
The FTC’s proposed settlement with Rite Aid in the facial recognition case comes on the heels of the agency’s warning that it would take action against companies developing AI products using algorithms that produce discriminatory outcomes.
In November, the commission voted to make it easier for the FTC to issue civil investigative demands in AI-related investigations and compel information and cooperation from organizations developing AI products, services and tools (see: FTC Votes to Enhance, Expand AI Investigation Processes).
The agency has urged organizations to embrace transparency frameworks and independent standards. In 2021, the commission published guidance on AI, reminding companies that the FTC Act requires statements to consumers to be “truthful, non-deceptive and backed up by evidence.”
Original Post url: https://www.databreachtoday.com/ftc-bans-rite-aid-from-using-facial-recognition-tech-a-23938