In reply to the discussion: Apple is reportedly planning to launch AI-powered glasses, a pendant, and AirPods
There's a lot there, so I'm going to respond point-by-point rather than brushing it off.
1. Prices come down. Privacy concerns don't.
True, privacy concerns don't disappear. But they also aren't static. They get addressed through design changes, policy, norms, and law. That's exactly what happened with smartphones, dashcams, Ring doorbells, and body cams. None of those eliminated privacy concerns; they forced clearer rules and expectations.
The Ray-Ban Meta smart glasses operate inside an already-established legal framework for recording in public. They didn't create that framework.
2. You can disable the LED.
Yes, I'm aware that people online experiment with hardware modifications. But that's not the same as normal use.
If someone:
Drills into a $300-$400 device
Risks breaking it
Voids the warranty
Potentially damages internal components
That's intentional tampering.
You can also:
Jailbreak phones
Disable shutter sounds
Install hidden camera apps
Modify drones
The existence of modding communities doesn't mean the default product is designed for secrecy. It means determined people can modify hardware, which is true of almost any device with a camera.
If someone is willing to physically alter hardware to secretly record others, they were already willing to violate norms. The glasses didn't create that intent.
3. People don't notice the LED.
In bright sunlight, yes, the visibility of any small light is reduced. The same is true of:
A phone screen angled downward
A smartwatch recording
A GoPro clipped to clothing
No indicator system is perfect in every lighting condition. The relevant question is: Did the manufacturer attempt visible disclosure? In this case, yes.
And again, a smartphone can record far more discreetly than someone turning their head directly at you with glasses that visibly light up.
4. AI companies gather data and share it with authorities.
This is where the argument shifts from device ethics to broader distrust of tech companies and government. That's a separate and legitimate policy debate.
But it applies equally to:
iPhones
Android phones
Social media uploads
Cloud backups
Email providers
If someone records a protest on a smartphone and uploads it to Instagram, that footage is also on corporate servers and accessible via lawful process. That risk isn't unique to smart glasses.
And importantly: users control whether media is uploaded or kept local. Not everyone is live-streaming everything to AI systems.
If the concern is mass surveillance or government overreach, that's about data governance laws, not about whether a camera is mounted on your face or in your hand.
5. Someone recording at a protest is a threat.
Anyone recording at a protest with any device creates that same dynamic. Phones already capture high-resolution, zoomed, stabilized video with far greater detail than smart glasses.
In fact, someone openly holding a phone above a crowd often captures more faces than someone wearing glasses casually looking around.
Again, the risk you're describing is tied to recording in general, not uniquely to this product category.
6. Creeps will use it.
Creeps already:
Use phones
Hide cameras
Install spy devices
Misuse AirTags
Abuse drones
We don't ban all smartphones because some people take upskirt photos. We criminalize the behavior.
Technology doesn't eliminate bad actors. It sets default guardrails and relies on laws for enforcement.
7. Using AI counts against your trustworthiness.
That's a broad generalization.
AI is used for:
Accessibility tools
Navigation assistance
Language translation
Image recognition for the visually impaired
Productivity support
Saying "using AI makes you less trustworthy" is like saying using a calculator makes you dishonest because some students cheat.
Intent matters. Context matters.
8. Wearing glasses will make everyone suspicious.
We already went through this phase with:
Bluetooth headsets
AirPods
Early smartwatches
Body cameras
At first, people reacted strongly. Over time, norms adjusted. Most people now assume someone wearing AirPods is listening to music, not secretly recording.
If smart glasses ever become widespread, visible indicators and cultural familiarity will normalize their presence the same way smartphones did.
The core disagreement
You're arguing from a worst-case lens:
What if someone disables safeguards?
What if data is misused?
What if the government abuses it?
What if a creep exploits it?
Those are valid concerns, but they apply to nearly all modern recording technology.
I'm arguing from a proportionality lens:
The legal environment hasn't changed.
The default hardware includes visible disclosure.
The vast majority of use cases are mundane.
Bad actors already have more powerful tools in their pockets.
If the issue is broader AI data practices or government overreach, that's a serious civic discussion. But that's not unique to these glasses.
The device itself doesn't automatically convert someone into "mobile surveillance for AI bros." It's a camera in a different form factor, operating under the same laws, norms, and risks that already exist.
We can debate regulation and corporate data policy. But treating the hardware category itself as inherently sinister assumes malicious intent by default, and that's a much bigger claim than "this technology has tradeoffs."