Popular AI Chatbots Leak Sensitive User Data from Unsecured Server

An unsecured Elasticsearch instance leaked 116 GB of live logs from ImagineArt, Chatly, and Chatbotx, exposing prompts, bearer tokens, and user agents for millions of users.

    A large, unsecured server tied to a major maker of generative AI apps spilled live user logs and authentication data, potentially exposing millions of users. Cybernews researchers found the open Elasticsearch instance streaming 116 GB of logs from three apps — ImagineArt, Chatly, and Chatbotx — revealing private prompts, bearer authentication tokens, user agents, and other usage data.
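
    In practical terms, "unsecured" means the instance's REST API answered without any authentication. As a minimal sketch (the host and index name below are placeholders, not the actual Vyro AI instance), reading such a server takes only a few standard HTTP requests:

    ```python
    import requests

    # Placeholder host and index name -- NOT the actual Vyro AI instance.
    # Elasticsearch listens on port 9200 by default.
    HOST = "http://elastic.example.com:9200"

    # With no authentication configured, the standard REST endpoints
    # answer to anyone who can reach the port.
    info = requests.get(f"{HOST}/", timeout=10).json()
    print(info.get("cluster_name"), info.get("version", {}).get("number"))

    # List every index with document counts and sizes; leaked log
    # indices would be plainly visible here.
    print(requests.get(f"{HOST}/_cat/indices?v", timeout=10).text)

    # Read documents from a hypothetical log index. Prompts, bearer
    # tokens, and user agents sit in plain text in the _source fields.
    hits = requests.get(f"{HOST}/app-logs/_search?size=5", timeout=10).json()
    for hit in hits.get("hits", {}).get("hits", []):
        print(hit["_source"])
    ```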

    Key Takeaways From the Vyro AI Exposure

    • An unsecured Elasticsearch instance belonging to Vyro AI exposed 116 GB of live logs.
    • Affected apps: ImagineArt (10M+ installs on Google Play), Chatly (100K+ installs), and Chatbotx (≈50K monthly visits).
    • Leaked content included user prompts, bearer tokens, and user agent strings.
    • Vyro AI claims over 150 million total downloads across its portfolio and reports generating 3.5 million images per week.
    • The database held about 2–7 days of rolling logs at a time and was first indexed by IoT search engines in mid-February 2025, indicating it may have been visible for months before discovery.
    • Discovery and disclosure timeline: leak discovered April 22, 2025; initial public disclosure July 22, 2025; CERT contacted July 28, 2025.

    What Data Was Exposed and Why It Matters

    The exposed logs contained raw inputs users typed into the apps (prompts), session bearer tokens used for authentication, and user agent details. Because prompts often include personal or confidential information — users test ideas, upload private material, or describe scenarios — the leak could reveal intimate content not intended for public view.

    Bearer tokens in the logs are particularly sensitive. With a valid token an attacker could impersonate a user session, access chat histories, retrieve generated images, or perform actions available to the account. Given ImagineArt’s large install base, the pool of usable tokens in the exposed dataset could be substantial.
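
    The risk is structural: bearer authentication is possession-based, so whoever presents the token is treated as the user it was issued to. The sketch below illustrates this with a hypothetical API host, endpoint, and token; nothing about Vyro AI's actual API was published.

    ```python
    import requests

    # Hypothetical API host, path, and token -- stand-ins only.
    API = "https://api.example-chat-app.com"
    LEAKED_TOKEN = "eyJhbGciOi..."  # a bearer token scraped from exposed logs

    # Bearer auth is possession-based: presenting the token is enough,
    # no password or device check involved.
    headers = {"Authorization": f"Bearer {LEAKED_TOKEN}"}

    # Until the token expires or is revoked server-side, requests like
    # this succeed exactly as if the legitimate user had made them.
    resp = requests.get(f"{API}/v1/chat/history", headers=headers, timeout=10)
    print(resp.status_code)
    ```

    Short token lifetimes and server-side revocation shrink the window in which a leaked token stays usable; logs, however, often outlive the sessions they record.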

    The logs spanned both production and development environments, amplifying the scope of potential exposure. Researchers note the repository held only a few days’ worth of logs at a time, but because the stream was live and the instance had been indexed by IoT search engines, the data remained continuously discoverable.

    Scale of Usage Intensifies Risk

    ImagineArt alone reports more than 10 million Android installs, and Vyro AI asserts a cumulative 150 million downloads across apps. The combination of high user counts and weekly generation of millions of images increases the potential impact: exposed tokens could be used for account takeovers at scale, access to private chat threads, and illicit purchases of AI credits tied to user accounts.

    Cybernews researchers highlighted that leaked prompts might include sensitive or identifying details that users assumed remained private. Where prompts reveal passwords, personal identifiers, or proprietary information, their exposure could have downstream effects beyond a single account.

    Discovery, Indexing, and Public Disclosure Timeline

    Researchers discovered the unsecured Elasticsearch instance on April 22, 2025. Evidence shows the instance had been indexed by IoT search engines in mid-February, suggesting the repository could have been accessible to third parties for weeks or months before discovery.
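
    Operators can check whether their own hosts have surfaced in such scanners. As one hedged example, the snippet below uses the Shodan Python client (Shodan being one of the IoT search engines commonly meant by the term); the API key is a placeholder and the IP comes from the documentation range, standing in for your own address.

    ```python
    import shodan  # pip install shodan; requires a Shodan API key

    api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

    # Look up what Shodan has indexed for one of your public IPs.
    # An exposed Elasticsearch node typically appears as a port 9200
    # banner, complete with cluster metadata.
    host = api.host("203.0.113.10")  # documentation-range IP, a stand-in
    for banner in host.get("data", []):
        print(banner.get("port"), banner.get("product"), banner.get("timestamp"))
    ```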

    Cybernews made a public disclosure on July 22, 2025, and notified relevant national CERTs on July 28, 2025. The public disclosure included details of the exposed data types and scale.

    How This Fits Into Broader AI Security Concerns

    The Vyro AI incident is the latest in a string of exposures that highlight a structural security challenge in the fast-growing generative AI space. Previously, user-shared conversations with major models were indexed by search engines after insecure share-link features were enabled; companies later disabled those features. Other recent incidents showed chatbots surfacing harmful content or unsafe instructions when guardrails were incomplete.

    Security researchers have repeatedly warned that rapid product rollouts, high traffic volumes, and aggressive growth goals can outpace robust logging, storage, and access controls. The Vyro leak underscores that logs and telemetry — which developers rely on for troubleshooting and model improvement — can become a source of sensitive data if not properly protected.
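
    One practical safeguard is scrubbing secrets from log records before they reach any handler or indexing pipeline. The sketch below uses Python’s standard logging module to mask bearer tokens; it is an illustrative layer of defense, not a description of Vyro AI’s stack, and a real deployment would extend the patterns (API keys, prompt payloads) and enforce redaction at the log shipper as well.

    ```python
    import logging
    import re

    # Match "Bearer <token>" sequences anywhere in a log message.
    TOKEN_RE = re.compile(r"Bearer\s+[A-Za-z0-9._\-]+")

    class RedactBearerTokens(logging.Filter):
        """Mask bearer tokens before records reach any handler."""
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = TOKEN_RE.sub("Bearer [REDACTED]", str(record.msg))
            return True  # keep the record, just sanitized

    logger = logging.getLogger("app")
    logger.addHandler(logging.StreamHandler())
    logger.addFilter(RedactBearerTokens())
    logger.setLevel(logging.INFO)

    logger.info("Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.payload.sig")
    # Output: Authorization: Bearer [REDACTED]
    ```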

    Who Discovered the Leak and What Is Publicly Known

    Cybernews researchers located and analyzed the open Elasticsearch instance and reported their findings. The exposed index contained live logs from the three apps and included both production and development streams, amounting to roughly 2–7 days of rolling data. Public reporting shows Vyro AI is based in Pakistan and attributes the three products — ImagineArt, Chatly, and Chatbotx — to the company’s app portfolio.

    The sequence of events indicates the data was visible to scanning tools for months before the April discovery. Public reporting did not include a detailed vendor statement on remediation steps or root cause; the disclosure timeline and the CERT contact are part of the public record.

    For enterprises and service providers evaluating vendor risk in AI tooling, the incident offers another data point on how app telemetry and user inputs are stored, indexed, and protected. Large-scale download counts and heavy model usage amplify the volume of sensitive material that can land in logs. The exposure highlights the potential consequences when logging stacks and search-indexed stores are left open or misconfigured.

    Recent Related Incidents in the AI Ecosystem

    The Vyro AI leak follows other notable events where conversational content or shared chats became public via insecure features or misconfigurations. Reported issues have included user-created share links becoming crawlable by search engines and chatbots producing unsafe outputs when guardrails were insufficient. These past incidents form the backdrop for the current exposure’s heightened attention.
