The FIDO Alliance, an industry association dedicated to reducing the world’s reliance on passwords, announced on Tuesday the formation of two new working groups aimed at establishing global security standards for "agentic AI." This initiative, bolstered by foundational contributions from Google and Mastercard, seeks to create a secure, interoperable framework for transactions conducted by autonomous AI agents on behalf of human users. As artificial intelligence evolves from simple chatbots into proactive agents capable of executing complex financial and logistical tasks, the industry faces a critical juncture: the need to prevent a new era of digital fraud, account takeovers, and unauthorized autonomous spending.

The Shift Toward Agentic AI and the Security Vacuum

The digital landscape is currently besieged by a trifecta of security threats: sophisticated malware, pervasive online impersonation, and high-frequency account takeovers. However, the emergence of agentic AI—AI systems designed to act as proxies for humans—introduces a novel category of risk. Unlike traditional software, these agents are delegated the authority to make decisions, such as booking travel, managing subscriptions, or purchasing goods, often without real-time human intervention for every micro-step.

Andrew Shikiar, CEO of the FIDO Alliance, observed that while these agents are rapidly entering mainstream use, the existing security infrastructure of the internet is fundamentally ill-equipped for this paradigm. Most authentication models were built on the assumption that a human is directly interacting with a device. When an agent acts on a user’s behalf, traditional "challenge-response" mechanisms, such as biometrics or physical security keys, become difficult to implement without creating significant friction or security loopholes.

The FIDO Alliance’s new working groups aim to address this "security vacuum" by developing a protective baseline that can be adopted across various industries, including retail, banking, and telecommunications. The goal is to ensure that agent-initiated actions are resistant to phishing and cannot be hijacked by malicious actors to provide rogue instructions.

A Chronology of Digital Authentication and the Path to Agentic Standards

To understand the urgency of this initiative, one must look at the historical trajectory of digital security. For decades, the primary gatekeeper of digital identity was the password—a mechanism that dates back to the early days of computing in the 1960s. As the connected economy grew, the inherent flaws of passwords led to a multibillion-dollar fraud industry. In 2012, the FIDO Alliance was formed to move the industry toward more robust authentication, eventually leading to the development of "passkeys," which use cryptography to replace passwords. However, the rise of generative AI in 2022 and the subsequent shift toward agentic AI in 2024 have moved the goalposts once again.

1960s–2010s: The Era of Passwords. Security relied on shared secrets, which were easily phished or breached.
2012: Formation of the FIDO Alliance. The industry begins coordinating on hardware-backed and cryptographic authentication.
2022: The Generative AI Boom. Large Language Models (LLMs) demonstrate the ability to process complex instructions.
2024: The Rise of Agentic AI. AI begins moving from "answering" to "doing," necessitating a shift in how "intent" is verified.
2025 and Beyond: The Standardization Era. FIDO, Google, and Mastercard aim to finalize protocols that allow agents to transact securely.
The current effort is seen as a preemptive strike to avoid the "password trap" of previous decades. By establishing foundational principles now, the industry hopes to build a "connected economy" where agentic interactions are secure by design rather than as an afterthought.

Technical Foundations: AP2 and Verifiable Intent

The FIDO Alliance’s work will be anchored by two major technical contributions: Google’s Agent Payments Protocol (AP2) and Mastercard’s Verifiable Intent framework. Both companies are contributing these as open-source tools to accelerate the standardization process, acknowledging that the speed of AI development requires a faster-than-usual consensus.

Google’s Agent Payments Protocol (AP2)

AP2 is designed to provide a cryptographic mechanism for verifying that a human user truly intended for a specific agent-initiated transaction to occur. It functions as a digital "letter of intent" that is cryptographically signed and verifiable by third parties. This prevents "agent hijacking," where a bad actor might intercept an AI agent’s workflow and redirect funds or change the terms of a purchase.

Mastercard’s Verifiable Intent Framework

Developed in collaboration with Google to work seamlessly with AP2, Mastercard’s framework focuses on the authorization and control of agent actions. It allows users to set granular permissions—essentially "guardrails"—for their AI agents. For example, a user could authorize an agent to spend up to $100 on a specific item but require a manual biometric check for any amount exceeding that threshold. The sketch below illustrates how a signed mandate of this kind might be built and checked.
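Neither protocol's final wire format is described in the announcement, but the core mechanism attributed to AP2 (a user-signed, tamper-evident record of intent that third parties can verify) can be approximated in a few lines. The following is a minimal sketch under stated assumptions, not the actual AP2 or Verifiable Intent specification: the mandate fields, the JSON encoding, and the use of Ed25519 signatures via the third-party cryptography package are all illustrative choices.

```python
# Minimal sketch of a signed "intent mandate"; field names and encoding are
# illustrative assumptions, not the actual AP2 or Verifiable Intent formats.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_intent_mandate(private_key: Ed25519PrivateKey, mandate: dict) -> bytes:
    """The user's device signs a canonical encoding of the mandate."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return private_key.sign(payload)


def verify_intent_mandate(public_key, mandate: dict, signature: bytes) -> bool:
    """A merchant or payment network checks the signature against the user's
    registered public key; no other identity data is needed for this check."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# In practice the key would be a passkey-style credential held on the user's device.
user_key = Ed25519PrivateKey.generate()

mandate = {  # hypothetical "letter of intent" fields
    "agent_id": "shopping-agent-01",
    "item": "running sneakers, size 10",
    "max_price_usd": 100.00,  # spending guardrail chosen by the user
    "expires": "2026-01-31T00:00:00Z",
}
signature = sign_intent_mandate(user_key, mandate)

# The untouched mandate verifies; a tampered one (e.g. a raised cap) does not.
assert verify_intent_mandate(user_key.public_key(), mandate, signature)
tampered = dict(mandate, max_price_usd=500.00)
assert not verify_intent_mandate(user_key.public_key(), tampered, signature)
```

The failed check on the tampered mandate is the property the article describes as resistance to agent hijacking: any change to the terms of the purchase invalidates the signature, giving a merchant or payment network grounds to refuse the transaction.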
Stavan Parikh, Google’s Vice President and General Manager of Payments, emphasized that these protocols utilize "selective disclosure." This means that while a transaction is cryptographically proven to be authorized, the privacy of the user is maintained. A merchant might receive proof of authorization and payment capability without needing access to the user’s full identity or the specific logic the AI agent used to make the decision.

Data and Market Implications of Autonomous Commerce

The economic stakes of securing AI agents are immense. According to industry reports, the global AI market is projected to reach over $1.3 trillion by 2032, with a significant portion of that growth driven by autonomous commerce and automated workflows. However, the cost of cybercrime is also rising, with estimates suggesting it could exceed $10 trillion annually by 2025. The complexity of the payments ecosystem—involving platforms, merchants, payment providers, and banks—means that a single point of failure in an AI agent’s logic could have a cascading effect.

Mastercard’s Chief Digital Officer, Pablo Fourez, noted that when bad actors exploit emerging technologies, the cost of remediation and support is exceptionally high for financial institutions. By implementing standardized "verifiable intent," the industry can reduce the overhead associated with dispute resolution and fraud management.

For merchants, these standards offer a path toward higher conversion rates. If consumers trust that their AI agents can autonomously hunt for deals and execute purchases safely, the volume of automated transactions is likely to surge. Conversely, without these protections, the "trust gap" could stifle the adoption of agentic tools.

Official Responses and Industry Reactions

The announcement has garnered attention from across the technology and financial sectors. While the initial working groups are led by FIDO, Google, and Mastercard, the broader industry is expected to join the effort to ensure interoperability across different operating systems and payment networks.

Andrew Shikiar of FIDO highlighted that the goal is to create a "fit for purpose" foundation. "Preexisting models weren’t built to contemplate actions performed on a user’s behalf," he noted. This sentiment is echoed by cybersecurity analysts who argue that "identity" in the AI age must be redefined to include "delegated identity."

Consumer advocacy groups have expressed cautious optimism, noting that while the technology promises convenience, the "privacy-preserving frameworks" mentioned by Google will be essential. The ability for users to have "recourse in the event of a dispute" is a critical component of the FIDO mandate, ensuring that if an agent makes a mistake—such as purchasing the wrong item or misinterpreting a price—there is a clear, standardized path for resolution.

Analysis: The Challenges of Real-World Adoption

Despite the technical promise of AP2 and Verifiable Intent, the road to global adoption is fraught with challenges. Technical standards typically take years to move from proposal to widespread implementation. The FIDO Alliance is attempting to compress this timeline, but several hurdles remain:

Interoperability: For these standards to work, they must be supported by everyone from small online boutiques to global tech giants like Apple and Amazon.
User Experience: If the security checks for AI agents are too cumbersome, users will bypass them. The industry must find a balance between "invisible security" and "informed consent."
Regulatory Alignment: As governments in the EU and the US move toward stricter AI regulation (such as the EU AI Act), these industry standards will need to align with legal requirements regarding transparency and algorithmic accountability.
Edge Cases: AI agents are inherently non-deterministic. Creating a cryptographic proof for an agent that might "hallucinate" or misinterpret a complex instruction is significantly harder than securing a static transaction.

The example provided by Google’s Stavan Parikh—an agent waiting for a specific pair of sneakers to drop in price—illustrates the potential. If the agent successfully executes the purchase at $99.99, the system works. But if the agent misinterprets a "limited time offer" and spends $150, the "Verifiable Intent" framework must have the built-in logic to block the transaction or provide the user with an immediate way to void it. A toy version of such a guardrail check is sketched below.
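To make the guardrail behavior concrete, here is a small sketch under an assumption not stated in the article: that each agent purchase resolves into one of three outcomes, autonomous approval within the mandate, a step-up check (such as a fresh biometric or passkey prompt) when the mandate is exceeded, and an outright block when the request falls outside the mandate entirely. The class and method names are hypothetical.

```python
# Toy guardrail evaluation; the names, thresholds, and three-way outcome are
# illustrative assumptions, not Mastercard's actual Verifiable Intent logic.
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    APPROVE = auto()   # within the user's mandate: the agent may proceed autonomously
    STEP_UP = auto()   # exceeds the mandate: require a fresh biometric or passkey check
    BLOCK = auto()     # outside the mandate: refuse the transaction outright


@dataclass
class SpendingGuardrail:
    item_keyword: str      # what the user authorized the agent to buy
    max_price_usd: float   # hard cap for autonomous execution

    def evaluate(self, item: str, price_usd: float) -> Decision:
        if self.item_keyword not in item.lower():
            return Decision.BLOCK
        if price_usd <= self.max_price_usd:
            return Decision.APPROVE
        return Decision.STEP_UP


# The sneaker scenario from the article, with a $100 cap set by the user.
guardrail = SpendingGuardrail(item_keyword="sneakers", max_price_usd=100.00)

print(guardrail.evaluate("Running sneakers, size 10", 99.99))   # Decision.APPROVE
print(guardrail.evaluate("Running sneakers, size 10", 150.00))  # Decision.STEP_UP
print(guardrail.evaluate("Concert tickets", 45.00))             # Decision.BLOCK
```

One plausible design, though not confirmed by the announcement, is for the step-up path to reuse the FIDO Alliance's existing passkey checks, so the over-limit $150 purchase becomes an explicit user decision rather than an unauthorized charge.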
The Future of the Agentic Economy

The initiative by the FIDO Alliance, Google, and Mastercard represents a proactive shift in the cybersecurity landscape. Rather than waiting for a major "agent-based" fraud crisis to occur, the industry is attempting to build the locks before the doors are even fully installed. As AI agents become more autonomous, the definition of a "transaction" will shift from a discrete event to a continuous process of delegated authority.

The development of these standards is not merely a technical necessity but a foundational requirement for the next phase of the digital economy. If successful, the work of these groups will ensure that when a user tells an AI agent to "take care of it," they can do so with the confidence that their digital proxy is acting with their full, verified, and protected intent.