Meta's smart glasses may have a privacy problem
In some cases, recorded videos can end up being viewed by the people who train the AI systems

The latest models of Meta smart glasses, developed in collaboration with Ray-Ban and Oakley, incorporate an artificial intelligence system designed to support and enhance the device's functionalities, including video recording and phone calls. The glasses also allow users to interact with the AI assistant to send messages or get an audio description of what the camera is capturing.

Content captured by users generally remains stored on the device it was recorded with, unless the user chooses to share it with Meta to help improve the service. In some cases, however, shared data and content may undergo manual review by human reviewers, whose work refines and trains the artificial intelligence systems built into the product. According to the company, reviewed materials are filtered or partially obscured to protect the privacy of the people involved.

An investigation by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten found that these filters are not always effective: in some cases, reviewers were reportedly able to clearly see the faces of recorded individuals, a serious privacy concern for users.

The case involving Meta's smart glasses

A widely shared TikTok post by @cathypedrayes summarized the issue: "A Swedish investigation found that Meta's human AI workers (who work in Kenya to help train the AI) have seen sensitive videos of people changing, in bedrooms, financial information, etc., and it's probably because people don't realize this can happen. Meta does try to blur faces and details, but the system isn't perfect, so just assume that if your glasses can record it, a person might see it. Don't record anything private: credit cards, information, changing clothes, etc."

Meta has long relied, in certain circumstances, on workers employed by contractor companies to improve its technological products, including by reviewing videos recorded with its smart glasses. Workers interviewed by the Swedish journalists, employed by Sama, a company based in Nairobi, Kenya, said they were tasked with teaching the AI to interpret the information contained in videos by manually labeling the objects appearing in them.

According to their testimonies, in the course of this work they accessed content of all kinds, including intimate scenes. They also reported having access to transcriptions of some conversations between users and the AI, in order to verify that the system responded correctly to questions. Meta responded to the investigation by stating that the possibility of videos recorded with its glasses being viewed by external human reviewers is indicated in its privacy policy.

The work of those who train AI systems

A TikTok video by @evhandd put it more bluntly: "A new report just dropped claiming that the footage from users' Meta Glasses is being sent to an annotation center in Kenya... where employees WATCH THE VIDEOS TO HELP TRAIN META'S AI! And these employees say they have seen EVERYTHING."

In recent years, several journalistic investigations have explored a topic that tech companies tend to say little about and that, when it surfaces, periodically reignites public debate: so-called data labeling, the practice of annotating different types of data, such as audio, video, images, or text, to train artificial intelligence systems. These tasks are often low-paid and repetitive, and sometimes expose workers to violent or disturbing content, but they are essential to making AI systems more capable, even though tech companies tend to downplay their role.

In recent years, this activity has also been accompanied by so-called Reinforcement Learning from Human Feedback (RLHF), in which human evaluations of the outputs produced by AI are used to steer systems toward more accurate or useful responses, progressively improving their performance. These tasks are commissioned on a large scale by major tech companies such as Google, Microsoft, and OpenAI, and are often outsourced to third-party companies in developing countries, such as Kenya or Nepal, where labor costs are much lower. Reconstructing these activities is also difficult, because contractor companies often demand strict confidentiality to prevent details about the functioning or development of the technologies they work on from leaking.