A broad coalition of more than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations has issued a formal demand to Meta Platforms Inc., urging the technology giant to scrap plans to integrate facial recognition technology into its smart glasses. The coalition warns that the feature, reportedly developed under the internal code name “Name Tag,” poses an unprecedented threat to public safety by empowering stalkers, abusers, and government agencies to identify strangers in real time and without consent.

The coalition includes high-profile organizations such as the American Civil Liberties Union (ACLU), the Electronic Privacy Information Center (EPIC), Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights. In a strongly worded letter addressed to Meta CEO Mark Zuckerberg, the groups argue that the deployment of biometric identification in inconspicuous consumer eyewear represents a point of no return for personal privacy in public spaces. The demand follows the revelation of internal documents suggesting Meta intended to leverage political instability to minimize public backlash during the feature’s rollout.

The Name Tag Project and the Strategy of Distraction

The controversy centers on “Name Tag,” a feature designed for Meta’s Ray-Ban and Oakley smart glasses. According to internal documents first reported by The New York Times, the feature would use the artificial intelligence assistant embedded in the wearables to identify individuals within the wearer’s field of vision. Engineers at Meta’s Reality Labs have reportedly explored two primary iterations of the tool: a restricted version that identifies only those individuals with whom the wearer is already connected on Meta’s platforms, and a more expansive version capable of identifying any person with a public account on services like Instagram or Facebook.
The most contentious aspect of the project is the timing of its proposed launch. A May 2025 internal memo from Reality Labs indicated that Meta leadership aimed to debut the technology during a “dynamic political environment.” The memo explicitly noted that the company expected civil society groups—those most likely to oppose the rollout—to have their “resources focused on other concerns,” effectively betting that political turmoil would provide cover for a controversial product launch.

Advocacy groups have characterized this strategy as “vile behavior.” The coalition’s letter accuses Meta of attempting to exploit rising authoritarianism and a perceived disregard for the rule of law to bypass ethical scrutiny. By timing the release of a surveillance tool to coincide with periods of high political engagement or crisis, the coalition argues, Meta is actively working to undermine the democratic process of public oversight.

Privacy Risks and the Erasure of Public Anonymity

The primary concern cited by the coalition is the total erosion of public anonymity. Unlike traditional surveillance cameras, which are often visible and regulated by local ordinances, smart glasses allow for “silent and invisible” identification. The coalition argues that there is no meaningful way for a bystander to opt out of being scanned and identified by a stranger wearing high-tech eyewear.

EPIC, which sent separate letters to the Federal Trade Commission (FTC) and various state attorneys general, highlighted that real-time facial recognition would compound the existing privacy risks of Meta’s current hardware. The Ray-Ban Meta glasses already feature cameras capable of recording video; while a small LED illuminates to indicate recording, critics note this light is easily obscured or ignored. The addition of “Name Tag” would transform a recording device into a proactive identification tool.

The implications for vulnerable populations are particularly severe.
The coalition warns that:

- Stalkers and Abusers: Domestic violence survivors could be identified and located in public by abusers using smart glasses to scan crowds.
- Law Enforcement Overreach: Federal agents, including those from Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), could use the data or the hardware to conduct warrantless surveillance of immigrant communities or protesters.
- Sensitive Locations: Individuals visiting reproductive health clinics, places of worship, or support group meetings could be identified and harassed, creating a chilling effect on the exercise of constitutional rights.

A History of Legal Challenges and Broken Promises

This is not Meta’s first foray into the controversial field of biometrics. The company has a long and litigious history with facial recognition, which the coalition cites as evidence that the company cannot be trusted to self-regulate.

In November 2021, Meta announced it would shut down the face recognition system on Facebook, which automatically “tagged” users in photos, and deleted the face recognition templates of more than one billion users. Jerome Pesenti, then-VP of Artificial Intelligence at Meta, framed the move as a “company-wide move away from this kind of broad identification,” citing the need to weigh the technology’s benefits against growing societal concerns and the lack of clear regulatory frameworks.

However, this retreat followed years of legal battles that cost the company billions of dollars:

- Illinois BIPA Settlement: Meta paid $650 million to settle a class-action lawsuit alleging it violated the Illinois Biometric Information Privacy Act (BIPA) by capturing and storing users’ biometric data without proper consent.
- Texas Biometric Lawsuit: In a more recent blow, Meta reached a $1.4 billion settlement with the state of Texas over similar allegations regarding the unauthorized capture of biometric identifiers.
- FTC Privacy Penalty: In 2019, the FTC imposed a record-breaking $5 billion penalty on Facebook for various privacy violations, including allegations that it deceived users about its use of facial recognition software.

The coalition argues that the development of “Name Tag” contradicts Meta’s 2021 commitment to move away from broad identification systems. By pivoting from software-based photo tagging to hardware-based real-time identification, the groups claim, Meta is attempting to reintroduce the same invasive capabilities under a different guise.

Demands for Disclosure and Consultation

The letter sent to Mark Zuckerberg on Monday outlines several specific demands. Beyond scrapping the “Name Tag” feature entirely, the coalition is calling for increased transparency regarding the impact of Meta’s existing wearable technology. The organizations are demanding that Meta:

- Disclose Harassment Incidents: Reveal any known cases in which its smart glasses or other wearables have been used in stalking, harassment, or domestic violence.
- Disclose Law Enforcement Ties: Provide a full accounting of any past or ongoing discussions with federal agencies—specifically ICE and CBP—regarding the use of Meta’s wearables or the biometric data harvested from them.
- Commit to Independent Oversight: Commit to a formal consultation process with civil society groups and independent privacy experts before integrating any form of biometric identification into future consumer devices.

The groups emphasize that the risks inherent in “Name Tag” cannot be mitigated through “incremental safeguards” or “product design changes.” They argue that the very nature of the technology—allowing one person to unilaterally de-anonymize another in a public space—is fundamentally incompatible with a free society.

The Expanding Legal and Regulatory Front

The pushback against Meta’s smart glasses comes at a time when the company is facing intensified legal pressure over its design choices across its entire product ecosystem.
Recent court rulings have signaled a shift in how judges view the liability of tech giants. In March, a landmark “bellwether” trial in Los Angeles concluded with a jury finding Meta and Google’s YouTube negligent in the design of their platforms. The jury awarded $6 million in damages, concluding that the companies knew their platforms were addictive and dangerous to young users but failed to provide adequate warnings. The verdict has opened the door for thousands of similar lawsuits involving social media addiction.

Furthermore, a significant ruling by the Massachusetts Supreme Judicial Court last week determined that Section 230 of the Communications Decency Act—a long-standing legal shield for tech companies—does not protect Meta from consumer protection lawsuits alleging deliberate “addictive design.” The court ruled that features like infinite scroll and push notifications are design choices, not third-party content, and thus subject to state law. These precedents suggest that if Meta proceeds with “Name Tag” despite the known risks of stalking and privacy violations, it could face a new wave of “negligent design” litigation.

Broader Implications for the AR and AI Industry

The standoff between Meta and the 70-plus advocacy groups serves as a critical test case for the future of augmented reality (AR) and artificial intelligence (AI). As companies like Apple, Google, and Meta race to move computing from screens to the face, the ethical boundaries of these devices remain largely undefined.

Industry analysts suggest that the integration of AI and biometrics is the “killer app” tech companies believe will drive mass adoption of smart glasses. However, the coalition’s resistance highlights a fundamental tension: the features that make these devices most useful to the wearer—such as the ability to instantly recall the name and background of a stranger—are the same features that make them most threatening to the public.
As of publication, Meta has not officially responded to the coalition’s letter or to requests for comment regarding the “Name Tag” project. EssilorLuxottica, the partner responsible for the Ray-Ban and Oakley frames, has also remained silent.

The outcome of this dispute will likely set a precedent for the entire wearables industry. If a coalition of this scale can successfully halt the deployment of facial recognition by the world’s largest social media company, it may signal a shift toward a “privacy-first” approach to AR. Conversely, if Meta moves forward, it may trigger a rapid expansion of public surveillance, forever altering the nature of social interaction in the physical world.