
AI chatbots reveal people’s phone numbers

Reports surface of personal contact info exposed by AI tools like Gemini and ChatGPT.

AI chatbots, including Gemini and ChatGPT, have exposed people’s real phone numbers to the public, raising serious privacy concerns. Reports indicate that users are receiving calls from strangers seeking personal information, while others are finding their contact details surfaced in AI-generated responses.

The issue stems from personally identifiable information (PII) present in the training datasets used by these AI models. While the exact mechanisms vary, experts attribute the exposure to models memorizing sensitive data during training and returning it in responses, without adequate user consent or safeguards.

For builders and operators of AI systems, this incident underscores the critical need for robust privacy protocols and transparency regarding how personal information is handled during model development. Enterprise teams must remain vigilant about potential security risks and take proactive measures to protect user data.
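As one illustration of such a proactive measure, teams can filter model output before it reaches users. The sketch below is a hypothetical, minimal example (not from the reporting above) that redacts phone-number-like strings from a response; the regex is a simplified North-American-style pattern, not a production-grade PII detector.

```python
import re

# Hypothetical, simplified phone-number pattern for illustration only.
# Real PII filtering would use locale-aware detection, not a single regex.
PHONE_RE = re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def redact_phone_numbers(text: str) -> str:
    """Replace phone-number-like substrings with a placeholder
    before the response is returned to the user."""
    return PHONE_RE.sub("[REDACTED PHONE]", text)
```

In practice, an output filter like this would sit alongside training-data scrubbing and access controls rather than replace them, since pattern matching alone misses many PII formats.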

As concerns mount, the focus now shifts to developing better data protection techniques and policies to prevent similar incidents in the future. Researchers and companies alike are likely to invest more resources into enhancing AI privacy protections.

What matters

  • People’s real phone numbers are being revealed by AI chatbots.
  • This poses significant privacy risks for users and highlights training data issues.
  • Experts see a 400% increase in AI-related privacy requests as concerns grow.

This GenAI News article was prepared in original wording using reporting and materials published by MIT Technology Review AI. Source reference: https://www.technologyreview.com/2026/05/13/1137203/ai-chatbots-are-giving-out-peoples-real-phone-numbers/.

Drafted by the GenAI News review pipeline.
