Barry Smethurst tried to obtain the customer support number for TransPennine Express using Meta's AI assistant on WhatsApp, but instead received a private number belonging to a man in Oxfordshire. When Smethurst pointed out the error, the AI acknowledged its mistake and then changed its explanation several times: first the system claimed the number was fictional, then admitted it might have mistakenly retrieved it from a database, and finally said it had simply generated a random combination of digits.
The owner of the number, James Gray, confirmed that it is publicly listed on his company's website. He expressed concern that the AI might disclose other personal data it has access to. Smethurst filed a complaint with Meta over the misinformation and the assistant's behavior.
Meta responded that its AI may return inaccurate answers and that the company is working to improve its models. A representative explained that the system was trained on licensed and publicly available data, not on private WhatsApp user numbers, and noted that the number Smethurst received is publicly available and similar to the official customer support number.
Similar incidents have occurred with other chatbots. ChatGPT users, for example, have reported cases where the model fabricated details about people or quoted texts it had never read. An OpenAI representative said the company is continuously working to improve the accuracy and reliability of its models and warns users about potential errors.
Following the incident, lawyers and security experts have called for clear rules governing how AI systems handle personal data and what limits should apply when they provide information to users.