WASHINGTON, D.C., United States — Meta Platforms is facing a lawsuit in the United States over alleged privacy violations linked to its AI-powered smart glasses, after reports that contractors in Kenya reviewed user-generated images and videos as part of the company’s Artificial Intelligence (AI) training process.
The legal complaint was filed by two US consumers who claim the technology company failed to adequately inform users that footage captured by its smart glasses could be accessed by human reviewers.
The case follows reporting by technology outlet TechCrunch, which cited Swedish media investigations alleging that employees working for a subcontractor in Kenya were tasked with reviewing content recorded by users of Meta’s smart glasses. Some of the material reportedly contained sensitive personal footage.
The review process, according to the reports, was intended to help train and improve the glasses’ artificial intelligence systems.
However, the revelations have triggered renewed debate over data protection and transparency in emerging wearable technologies.
Lawsuit alleges privacy violations
The lawsuit was filed in the United States by Gina Bartone of New Jersey and Mateo Canu of California, who argue that Meta misled consumers about how their data would be handled.
The plaintiffs are represented by the public-interest litigation firm Clarkson Law Firm, which has previously brought cases against major technology companies over data privacy practices.
In their complaint, the plaintiffs accuse Meta of violating consumer protection and privacy laws by marketing the glasses as secure while allegedly allowing human reviewers to access recordings.
The lawsuit specifically challenges marketing claims that described the device as “designed for privacy, controlled by you” and “built for your privacy.”
According to the filing, such statements created the impression that recordings would remain private to the user.
The plaintiffs argue they were not informed that contractors outside the United States could potentially review their content as part of AI training workflows.
Meta’s eyewear manufacturing partner Luxottica is also named in the lawsuit.
Global product, growing privacy concerns
Meta’s smart glasses, developed in partnership with Ray‑Ban, allow users to capture photos and videos, stream content and interact with AI assistants through voice commands.
The device forms part of Meta’s broader push into artificial intelligence and wearable computing.
According to reports cited by TechCrunch, more than seven million units of the smart glasses were sold in 2025, highlighting the scale at which personal data could potentially be collected.
The lawsuit alleges that recordings captured by the glasses may enter a data pipeline where they can be reviewed by human contractors to train or refine AI systems, without offering users a clear option to opt out.
Meta defends its data practices
Meta has rejected claims that it secretly reviews private recordings.
The company told Vivid Voice News that human review occurs only when users voluntarily submit content to its AI systems.
A company spokesperson said contractors may analyse such data in order to improve AI performance and the overall user experience.
Meta also cited its privacy policy and supplementary terms of service, which state that interactions with AI tools may be reviewed through automated systems or by human reviewers.
One version of the policy referenced by TechCrunch notes that Meta may examine interactions with its AI services, including messages or conversations, either automatically or manually.
“Ray-Ban Meta glasses help you use AI hands-free to answer questions about the world around you,” Meta spokesperson Christopher Sgro said.
He added that the company employs contractors to review a limited portion of shared data and uses safeguards designed to filter sensitive information and protect user identities.
Meta also emphasised that media captured by the glasses typically remains stored on the user’s device unless the owner chooses to share it with Meta’s AI services or social platforms.
Growing scrutiny of AI wearables
The case reflects wider global concerns about privacy and surveillance risks associated with next-generation wearable technology.
Devices such as smart glasses, AI assistants and body-worn cameras can continuously capture images, conversations and environmental data, raising questions about consent, transparency and the role of human reviewers in machine-learning development.
Privacy advocates have increasingly warned that as companies race to develop AI-driven products, safeguards around data collection and human oversight must evolve at the same pace.
The lawsuit against Meta is expected to add to the mounting regulatory and legal scrutiny facing major technology firms as governments worldwide move to tighten rules governing artificial intelligence and consumer data protection.