The rapid democratization of software development through generative artificial intelligence has introduced a critical security vacuum: a new investigation reveals that thousands of applications created via "vibe-coding" are exposing sensitive corporate and personal data to the open internet. Research conducted by the cybersecurity firm RedAccess has identified more than 5,000 web applications built using AI development tools that possess virtually no authentication or security controls. These applications, often created by non-technical staff to streamline business processes, have become a primary vector for the leakage of medical records, financial statements, and internal strategic documents.

As AI tools like those powering Lovable, Replit, Base44, and Netlify allow users to describe an application into existence (a process colloquially known as vibe-coding), the traditional barriers to software creation have vanished. However, this ease of use has come at the expense of fundamental security hygiene. The RedAccess study suggests that while the AI tools are proficient at generating functional code, they frequently fail to implement or enforce the guardrails needed to protect the data those applications handle.

The Scale of Exposure: 5,000 Vulnerable Gateways

The investigation, led by Dor Zvi, cofounder of RedAccess, focused on applications hosted directly on the domains of major AI coding platforms. Using advanced search engine queries on Google and Bing, the research team located thousands of publicly accessible URLs for apps created by users who likely believed their projects were private or internal. Of the 5,000 apps identified, approximately 40 percent were found to be leaking highly sensitive information. The breadth of the exposed data is staggering, encompassing several sectors of the global economy.
Among the verified exposures were:

- Healthcare Data: Detailed work assignments for hospital staff containing personally identifiable information (PII) of medical professionals.
- Corporate Strategy: Internal "go-to-market" presentations and strategy documents from established firms.
- Financial Records: Detailed advertising spend logs, sales figures, and assorted financial ledgers.
- Logistics and Retail: Cargo shipping records and full logs of customer service chatbot interactions, including the names and contact details of thousands of consumers.

In the most severe instances, the lack of security allowed researchers to gain administrative privileges over the applications. This access would theoretically allow a malicious actor to delete data, modify the application's logic, or lock out the original creators.

The Mechanism of Discovery: Indexing the Vibe

The ease with which RedAccess located these vulnerabilities highlights a systemic issue in how AI-coded apps are deployed. Most vibe-coding platforms offer a one-click publishing feature that hosts the application on the platform's own subdomain (e.g., [app-name].replit.app or [user-id].lovable.app). Because these subdomains are public-facing, search engine crawlers index them automatically unless a "noindex" directive or an authentication layer is applied.

For a non-technical user, such as a marketing manager or a sales lead, the priority is often the "vibe" or the immediate utility of the tool. Security is frequently an afterthought or is assumed to be handled by the platform provider. This disconnect creates a "security through obscurity" fallacy: users believe that because they haven't shared the URL, the application is safe. In reality, the predictable structure of these subdomains makes them easy targets for automated scripts and curious researchers.

Furthermore, RedAccess discovered that the Lovable platform had been abused, without its knowledge, to host phishing sites.
These sites, designed to impersonate major global brands such as Bank of America, FedEx, and McDonald's, leveraged the perceived legitimacy of the AI platform's domain to deceive victims. The problem is thus twofold: the accidental exposure of legitimate data and the intentional misuse of the tools for cybercrime.

A Chronology of the Investigation and Response

The timeline of the RedAccess discovery began in late 2024, when the firm noticed a spike in anomalous web applications appearing in global search results. By early 2025, the team had formalized its search methodology, focusing on four primary platforms: Lovable, Replit, Base44 (a Wix-owned tool), and Netlify. On the Monday of the week the findings were publicized, RedAccess contacted the affected companies to share its data.

The response from the tech industry was a mix of defensive posturing and a shift of responsibility toward the end user. Amjad Masad, the CEO of Replit, responded via the social media platform X, stating that the accessibility of public apps is "expected behavior." He emphasized that Replit lets users make apps private or public with a single click, suggesting that the exposure is a result of user configuration rather than a platform flaw. Similarly, a spokesperson for Wix's Base44 argued that the platform provides "robust tools" for security and that disabling these controls is a "deliberate, straightforward action."

RedAccess countered these claims with anonymized communications from users it had contacted directly. In several instances, business owners expressed shock that their data was public, thanked the researchers for alerting them to the breach, and immediately took the apps offline. This suggests that for many users, the "choice" to remain public was not a conscious one but rather the product of unawareness of default settings.
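The crawler opt-out signals described earlier, an X-Robots-Tag response header or a robots meta tag, are easy to check for programmatically. Below is a minimal, hypothetical sketch in Python; the function name and inputs are illustrative and not part of any platform's API. A one-click deployment that sets neither signal (and requires no login) is exactly the kind of app that search engines will index.

```python
import re

def is_search_indexable(headers: dict, html: str) -> bool:
    """Return True if nothing in the response tells crawlers to stay away.

    Checks the two standard opt-out signals: an X-Robots-Tag response
    header and a <meta name="robots"> tag in the page body.
    """
    # HTTP header names are case-insensitive, so compare them lowercased.
    robots_header = next(
        (v for k, v in headers.items() if k.lower() == "x-robots-tag"), ""
    )
    if "noindex" in robots_header.lower():
        return False

    # Look for <meta name="robots" content="...noindex...">.
    # (Assumes name= precedes content=, which is fine for a sketch.)
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    if meta and "noindex" in meta.group(1).lower():
        return False

    return True

# A deployment that sets neither signal is fair game for crawlers.
print(is_search_indexable({}, "<html><body>Internal dashboard</body></html>"))      # True
print(is_search_indexable({"X-Robots-Tag": "noindex, nofollow"}, "<html></html>"))  # False
```

In practice, a governance team would fetch each known subdomain and run a check like this against the live response, flagging any app that is both indexable and unauthenticated.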
The Evolution of Shadow IT: From Spreadsheets to AI Apps

The phenomenon of "Shadow IT" (the use of information technology systems, devices, software, applications, and services without explicit IT department approval) has existed for decades. In the 2000s, it manifested as unauthorized Excel macros; in the 2010s, as personal Dropbox accounts holding corporate files. In the 2020s, AI vibe-coding has escalated Shadow IT to a new, more dangerous level.

Previously, creating a functional web application required a developer who understood the basics of the Software Development Life Cycle (SDLC), including testing, deployment, and security. AI tools have bypassed this cycle entirely. Now a department head can create a custom CRM or a data-tracking tool in an afternoon without ever consulting the Chief Information Security Officer (CISO). "Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check," Dor Zvi noted. Without that oversight, the traditional security gates, such as code reviews and penetration testing, are completely absent.

Fact-Based Analysis: The "Amazon S3" Parallel

The current crisis bears a striking resemblance to the "leaky S3 bucket" epidemic that plagued Amazon Web Services (AWS) several years ago. In those cases, major corporations such as Verizon and WWE inadvertently exposed millions of customer records because their Amazon S3 storage buckets had been left publicly readable, often through confusing configuration menus. While Amazon eventually updated its interface to make public access harder to enable and added prominent warnings, the initial years were defined by a blame game between the provider and the customer. Security experts argue that AI coding platforms are currently in their "S3 moment."
While the platforms technically provide the tools for security, the user interface and the "low-friction" marketing of the products encourage a speed-over-safety mindset that inevitably leads to data leaks.

The implications for global data privacy regulations are significant. Under frameworks like the General Data Protection Regulation (GDPR) in Europe or the California Consumer Privacy Act (CCPA) in the United States, responsibility for data protection lies with the "data controller," the organization using the app. If a marketing team uses an AI-coded app to store customer data and that app is breached, the company could face massive fines regardless of whether the AI platform was "at fault."

Broader Implications for the Future of AI Development

The findings by RedAccess serve as a cautionary tale for the "AI-first" era of business. While the productivity gains of vibe-coding are undeniable, they introduce a systemic risk that many organizations are currently unequipped to handle. For the cybersecurity industry, the discovery signals a shift in focus: traditional firewalls and endpoint protection are ineffective against a vulnerability that lives in a custom-built, third-party-hosted AI application. Organizations may need to implement stricter AI governance policies, including scanning for unauthorized subdomains associated with AI coding tools and educating non-technical staff on the basics of web security.

Furthermore, there is a growing call for AI coding platforms to adopt "Security by Design." This could include:

- Mandatory Authentication: Requiring a login, by default, for any app that connects to a database.
- Automated Security Scanning: Using AI to check the generated code for common vulnerabilities (such as SQL injection or cross-site scripting) before allowing deployment.
- Enhanced Visibility: Providing IT departments with tools to see every application employees create on these platforms using corporate credentials.
As the line between "user" and "developer" continues to blur, the responsibility for security must be shared. Until AI coding tools prioritize data protection as much as they prioritize ease of use, the "vibe" of modern programming will remain a precarious one, built on a foundation of exposed data and invisible risks. The 5,000 apps discovered by RedAccess are likely only the tip of the iceberg, representing a fraction of the thousands more hosted on private domains that remain uncatalogued and unsecured.