
AI chatbot apps leak user prompts and tokens in massive data exposure

Written by Farah Amod | Sep 30, 2025 11:38:28 PM

An open server tied to three popular generative AI apps has exposed sensitive data from potentially millions of users.


What happened

Cybernews researchers uncovered a security lapse involving Vyro AI, the company behind ImagineArt, Chatly, and Chatbotx. An unprotected Elasticsearch database linked to Vyro AI was found leaking 116GB of user logs in real time, including prompts submitted by users, authentication tokens, and user agent details. These logs were traced to both production and development environments, making the exposure wide-reaching.

The breach could affect millions: ImagineArt alone has over 10 million installs on Google Play, and Vyro AI claims more than 150 million app downloads overall. Exposed tokens could be used to hijack user accounts, access private conversations, generate AI content, or make unauthorized purchases of AI credits.


Going deeper

The exposed instance held an estimated two to seven days of logs at a time, and the data may have been visible to attackers since February 2025. The Elasticsearch instance was indexed by IoT search engines, suggesting prolonged public exposure. Researchers noted that leaked tokens, if not rotated, could give attackers ongoing access to sensitive user data and activity across the apps.

Beyond account takeover risks, leaking user prompts presents a significant privacy concern. Conversations with AI tools often involve personal reflections, emotional disclosures, or confidential questions: data that users may not expect to be stored, much less exposed.

Vyro AI is based in Pakistan and has not issued a public statement in response to the disclosure.


What was said

Cybernews researchers stressed that the scale and type of data involved make the leak particularly serious. “Monitoring user behaviour, extracting sensitive information, hijacking accounts—these are all plausible outcomes from the exposed data,” they explained.

The team also warned that stolen prompts and tokens could be used to purchase AI credits fraudulently or impersonate users online. The leak has triggered fresh concerns about how AI startups handle security in the race to dominate the market.


FAQs

What is Elasticsearch, and why was it involved in this leak?

Elasticsearch is a data storage and search engine. When it is misconfigured, for example left open without authentication, it can expose large datasets to the public internet, as happened in this case.
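
A minimal sketch of what "open without authentication" means in practice, assuming the third-party Python requests library; the host name is a placeholder, and a check like this should only ever be run against infrastructure you own:

    import requests

    # Placeholder address; substitute your own deployment.
    ES_HOST = "https://elasticsearch.example.internal:9200"

    resp = requests.get(ES_HOST, timeout=5)
    if resp.status_code == 200:
        # An unauthenticated 200 on the root endpoint returns cluster metadata,
        # which means anyone on the internet can also read the indexed data.
        print("OPEN:", resp.json().get("cluster_name"))
    elif resp.status_code == 401:
        print("OK: credentials are required.")
    else:
        print("Unexpected status:", resp.status_code)

A properly secured cluster answers anonymous requests with 401; the exposed instance in this incident evidently answered with data, which is also how IoT search engines came to index it.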


What are bearer tokens, and why are they risky if leaked?

Bearer tokens are digital keys that authorize user access. If stolen, they can let attackers impersonate users, view private chats, or make unauthorized purchases.
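
To make the risk concrete, here is a hypothetical sketch in Python (the endpoint URL and token value are invented for illustration, not Vyro AI's actual API). The key point is that the server validates only the token itself, so whoever holds it is treated as the user:

    import requests

    # Hypothetical captured token; possession alone is the proof of identity.
    token = "eyJhbGciOi..."

    resp = requests.get(
        "https://api.example.com/v1/conversations",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    # A 200 response here would hand the attacker the account's private chats.
    print(resp.status_code)

This is why rotating (invalidating and reissuing) tokens after a leak matters: it is the only way to cut off an attacker who already holds a copy.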


How can I tell if my AI app data was leaked?

Vyro AI has not announced a breach-notification process, but users of ImagineArt, Chatly, or Chatbotx should monitor for unusual account activity and consider resetting credentials or deleting their accounts if concerned.


Why are AI prompts considered sensitive?

Prompts often contain personal thoughts, health concerns, legal questions, or emotional reflections that users share with AI under the assumption of privacy.


What should AI companies be doing differently to prevent leaks like this?

They should implement strong authentication, limit data retention, rotate access tokens regularly, and undergo independent security audits before scaling access to millions of users.
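
As one concrete illustration of limiting what gets retained, here is a minimal sketch, using only Python's standard logging and re modules, of a filter that scrubs token-like strings from log records before they reach any log store such as Elasticsearch. The regular expression is an illustrative assumption, not a description of Vyro AI's pipeline:

    import logging
    import re

    # Matches "Bearer <token>" sequences; adjust for the token formats you issue.
    TOKEN_RE = re.compile(r"Bearer\s+[A-Za-z0-9._\-]+")

    class RedactingFilter(logging.Filter):
        def filter(self, record: logging.LogRecord) -> bool:
            # Scrub secrets from the message before any handler formats it.
            record.msg = TOKEN_RE.sub("Bearer [REDACTED]", str(record.msg))
            return True

    logger = logging.getLogger("app")
    handler = logging.StreamHandler()
    handler.addFilter(RedactingFilter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # Emitted as: "auth header was: Bearer [REDACTED]"
    logger.info("auth header was: Bearer abc123.def456")

Combined with short-lived, regularly rotated tokens, a control like this limits the blast radius of an exposed log index: even records that do leak contain nothing an attacker can reuse.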