Two mobile AI companion applications developed by the same company exposed a vast trove of private conversations, images and transaction logs after a streaming backend was left accessible without authentication, security researchers say. The misconfigured Kafka broker streamed real-time content for both apps and contained message history, media links and purchase records for hundreds of thousands of users.
The affected applications, Chattee Chat and GiMe Chat, are marketed as AI companions and operate on Android and iOS. The exposed instance contained data tied to more than 400,000 users, including more than 43 million messages and over 600,000 images and videos either uploaded by users or generated by the AI models. Registration metadata suggested roughly 66.3 percent of the exposed records were associated with iOS users and the remainder with Android. The developer, Imagime Interactive Limited, is headquartered in Hong Kong and promotes privacy protections in its public documentation; investigators found those controls were not enforced on the streaming system.
“Anyone with a link was able to connect to the content delivery system and view messages, media and logs streamed by the applications.”
Unprotected Kafka Broker Streamed 43 Million Messages and 600,000 Media Files, Exposing IPs, Tokens and Purchase Logs
Investigators discovered that the Kafka broker used to route messages between users and AI model instances was configured without access controls or authentication. The broker delivered message streams, links to media files, and telemetry such as IP addresses and device identifiers. Authentication tokens and transaction logs stored in the same environment were also exposed.
The dataset included user-submitted photos and videos, AI-generated media, IP addresses, unique device identifiers and authentication tokens that could be used to hijack sessions or support subsequent reconnaissance. Purchase logs revealed in-app transaction activity; while most users spent modest amounts, some edge cases showed purchases as large as $18,000 in in-app currency. Aggregate revenue inferred from the logs suggests developer earnings likely exceed $1 million.
App telemetry indicated high engagement: on average, each user exchanged roughly 107 messages with AI companions. Investigators warned that the combination of extensive conversational history, media and metadata can enable targeted abuse, including reputation damage, harassment and sextortion. Media files carried no access controls of their own, meaning any external party could retrieve content uploaded or purchased by users until the instance was secured.
The exposure timeline provided by investigators shows the leak was discovered on August 29, 2025; an initial disclosure occurred on September 5; relevant incident response teams were informed on September 15; and the exposed instance was closed on September 19. At the time of discovery the streaming server was indexed by public IoT search engines, making it straightforward for third parties to find and access the content.
Users’ Intimate Data and High-Value Purchases Create Sextortion and Fraud Risks; Developer Failed to Enforce Basic Authentication and Guidance
The leak did not contain direct name or email fields in its central payload, but the presence of IP addresses and device identifiers creates a linkage risk. Those identifiers can be cross-referenced with other breaches or datasets to re-identify users. Investigators emphasized that even anonymized conversational records and images can be weaponized: attackers may use intimate media and chat histories to extort victims, craft convincing phishing messages, or facilitate doxxing.
The developer’s public privacy statements promise robust protections, but the exposed infrastructure did not implement elementary security controls. The Kafka broker’s built-in authentication and authorization mechanisms were not enabled, and no IP allowlisting or token enforcement was observed. In response to the discovery, one of the apps was removed from the Android app marketplace and the developer advised users to sideload an APK—an action that raises additional security concerns for end users.
Security practitioners note that default or lax configurations of streaming and message-broker systems are a recurring operational failure. Proper mitigations include enabling built-in authentication modules, enforcing TLS, restricting network access to known endpoints, rotating and scoping authentication tokens, and implementing logging and alerting for anomalous broker access.
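As an illustration of the first three mitigations, a hardened Kafka `server.properties` can require SASL/SCRAM authentication over TLS and deny access by default unless an explicit ACL exists. This is a generic sketch, not the developer's actual configuration; listener addresses, file paths and passwords are placeholders, and the authorizer class name shown applies to ZooKeeper-mode Kafka 2.4–3.x (KRaft clusters use `org.apache.kafka.metadata.authorizer.StandardAuthorizer` instead):

```properties
# Expose only an authenticated TLS listener; no PLAINTEXT listener at all
listeners=SASL_SSL://0.0.0.0:9093
advertised.listeners=SASL_SSL://broker.example.internal:9093
security.inter.broker.protocol=SASL_SSL

# SCRAM credentials are stored in cluster metadata rather than in cleartext config
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512

# TLS keystore/truststore (paths and passwords are placeholders)
ssl.keystore.location=/etc/kafka/ssl/broker.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/kafka/ssl/broker.truststore.jks
ssl.truststore.password=changeit

# Deny by default: only principals with explicit ACLs may read or write
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

With `allow.everyone.if.no.acl.found=false`, an anonymous client that could previously subscribe to every topic is rejected outright, which is precisely the failure mode investigators described here.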
Developer Footprint, App Popularity and Platform Evidence
Publicly available app-rank estimates and third-party analytics indicated Chattee was moderately popular on the iOS App Store at the time of the exposure, ranking in the entertainment category with estimated downloads exceeding 300,000. The second app, GiMe Chat, had significantly fewer installs but used the same backend infrastructure. The registration metadata and download patterns suggest a majority U.S.-based user population.
Leaked purchase entries and billing logs showed a broad spectrum of spending behavior. While most users made small purchases, a small proportion incurred very large in-app expenditures. Authentication tokens present in the dataset could enable attackers to access stored in-app currency or impersonate sessions; operators should assume any exposed token is compromised and enforce revocation and credential rotation.
Investigators who discovered the exposure notified the developer and relevant incident response teams; the broker instance was secured within weeks of disclosure. It is unclear whether malicious actors accessed the data prior to remediation, though the server’s indexing by search engines and the absence of access controls made discovery trivial.
Users whose media or messages were included should assume potential exposure and follow protective steps: review financial statements for unusual charges, monitor for targeted phishing and extortion attempts, consider a credit freeze if identification data appears in ancillary leaks, and report any extortion attempts to law enforcement. Service operators must treat authentication tokens and media repositories as high-value assets and apply defense-in-depth controls.
The incident underscores a systemic problem in some AI and app development operations: rapid feature deployment paired with inadequate operational security for streaming and storage layers. As AI companion services collect highly sensitive and intimate user content, failure to secure underlying infrastructure poses acute privacy and safety threats.
Investigators recommend immediate remediation steps for operators: enable authentication and encryption on message brokers, restrict broker network access, rotate and invalidate exposed tokens, enforce least-privilege scopes for media storage, and conduct thorough security audits of third-party integrations. Platforms and marketplaces should also enforce minimum security standards for applications that process sensitive user content.
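Credential rotation and least-privilege scoping on the broker side can be performed with Kafka's bundled admin tools. The commands below are a sketch using hypothetical principal and topic names; re-running the credential command with a new password effectively rotates a compromised SCRAM credential:

```
# Create or rotate per-service SCRAM credentials
kafka-configs.sh --bootstrap-server broker.example.internal:9093 \
  --alter --add-config 'SCRAM-SHA-512=[password=new-secret]' \
  --entity-type users --entity-name chattee-backend

# Grant the service principal read/write on its own topic only
kafka-acls.sh --bootstrap-server broker.example.internal:9093 \
  --add --allow-principal User:chattee-backend \
  --operation Read --operation Write --topic chat-messages
```

Scoping each principal to the narrow set of topics it needs limits the blast radius if any single token or credential is exposed again.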