AI Chatbots Raise Privacy Concerns by Revealing Personal Information
Recent reports have highlighted significant privacy concerns around hyper-realistic chatbots, which have been found to inadvertently disclose personal information such as phone numbers and home addresses. Trained on vast amounts of data scraped from the internet, these chatbots sometimes surface private data even though they are programmed to refuse such requests. In a test conducted by CNET, some chatbots, including Grok, produced multiple current and past addresses within seconds.

The findings have raised alarms about potential misuse, since scammers could exploit these AI systems to doxx users. The problem is compounded by the fact that many AI companies, including Meta and OpenAI, retain user data indefinitely, which may include sensitive information users have shared with chatbots.