Meta is putting a lot of effort into its AI products as it focuses on the next phase of computing, one in which smart AR glasses might replace your iPhone or Android phone. That won't happen anytime soon, and AR glasses will likely complement iPhones and Android phones at first. The Orion demo shows Meta's very early, very expensive, and not-yet-commercially-viable tech in action.
While Orion is being developed, Meta already has Ray-Ban smart glasses for you. They’re not iPhone replacements, but they can be great AI devices. Put them on, and you can interact with the AI using your voice, including asking it questions about your surroundings.
Ray-Ban smart glasses have cameras that let you take photos on demand, but they also capture images automatically when you ask the AI questions about your surroundings. That's all great so far and in line with what other products can do when paired with genAI. But Meta isn't willing to say whether it trains the AI with the images you capture. That's a big problem and something to keep in mind if you value your privacy more than chatting with AI.
That’s not to say Meta is doing something it shouldn’t or that rival companies aren’t also doing the same thing. But the lack of clarity on the matter is concerning.
Meta could always ask Ray-Ban users for explicit consent before using their photos to train AI. It could go further by only using photos tied to AI prompts. Meta could also develop tech that anonymizes the data so that it improves the AI without exposing anything personal.
Meta isn't doing any of that. TechCrunch asked whether the company plans to train AI models on Ray-Ban images, and Meta execs would neither confirm nor deny such an interest:
“We’re not publicly discussing that,” said Anuj Kumar, a senior director working on AI wearables at Meta, in a video interview with TechCrunch on Monday.
“That’s not something we typically share externally,” said Meta spokesperson Mimi Huggins, who was also on the video call. When TechCrunch asked for clarification on whether Meta is training on these images, Huggins responded, “we’re not saying either way.”
As the report explains, the issue is serious because the Meta AI smart glasses will take many passive photos when answering questions about the user's surroundings. The smart glasses will practically stream live video to the AI so it can analyze the images and provide answers.
You might check your fridge and ask the Ray-Ban Meta glasses to suggest recipes based on the ingredients you have. There's no real issue with Meta getting a sequence of photos or a live video feed of your fridge.
But the more you use the feature, the higher the likelihood of sending Meta images that could be more sensitive. They could include other people, documents, and settings you don’t want to become part of an AI model’s training.
As TechCrunch points out, Meta has already said it will train its AI on every public Instagram and Facebook post from American users. But those posts are meant to be public. Anyone sharing content on Meta's platforms knows they're releasing it into the wild, where anything can happen.
It’s not the same thing when giving the AI a look at your surroundings. That’s not necessarily data that Meta should train the AI with.
Obviously, AI models need data to get better. That's the only way the sophisticated AI assistants of the future will emerge. But Meta could at least define a clear policy and ask users for consent.
Then again, it’s not like others are always ready to be more direct about their intentions when it comes to collecting data for AI. Remember that OpenAI’s former CTO, Mira Murati, refrained from saying what data OpenAI used to train its text-to-video generation service, Sora.
Speaking of others, Google Lens now lets you combine photos and voice questions in your visual searches. Apple's iPhone 16 will offer a camera-based Visual Intelligence feature that feeds visuals to Apple Intelligence so you can get information about your surroundings. Of the three, I'd certainly expect Apple to make a big deal about AI and user data privacy when the time comes to release Visual Intelligence.