Even as generative AI becomes more widespread, these systems remain prone to hallucinations. It's one thing to tell people to put glue on pizza or to eat rocks, but ChatGPT wrongly told one user that he had spent 21 years in prison for killing his two sons.
Norwegian national Arve Hjalmar Holmen contacted the Norwegian Data Protection Authority after he decided to see what ChatGPT knew about him.
The chatbot responded confidently, and incorrectly, saying that Holmen had killed two of his sons and attempted to kill his third son. It also said he was sentenced to 21 years in prison for these fictional crimes.
While the story was entirely fabricated, there were elements of Holmen's life that ChatGPT got correct, including the number and gender of his children, their approximate ages, and the name of his hometown, making the false claims about murder all the more unsettling.
Holmen says he has never been accused of, let alone convicted of, any crime, and that he is a law-abiding citizen.
Holmen contacted the privacy rights group NOYB about the hallucination. The group investigated to make sure ChatGPT wasn't mixing Holmen up with someone else, possibly someone with a similar name. It also checked newspaper archives, but nothing obvious could explain why the chatbot was fabricating this gruesome story.
ChatGPT's underlying LLM has since been updated, so it no longer repeats the story when asked about Holmen. But NOYB, which has clashed with OpenAI in the past over ChatGPT providing false information, has still filed a complaint with Datatilsynet, the Norwegian Data Protection Authority.
According to the complaint, OpenAI violated GDPR rules stating that companies processing personal data must ensure it is accurate, and that inaccurate details must be corrected or deleted. However, NOYB argues that because ChatGPT feeds user data back into the system for training purposes, there is no way to be certain that the erroneous data has been fully removed from the LLM's dataset.
NOYB also claims that ChatGPT does not comply with Article 15 of the GDPR, meaning there is no guarantee that a person can actually see every piece of data about them that has been fed into the dataset. "This fact understandably still causes distress and fear for the complainant, [Holmen]," NOYB wrote.
NOYB is asking Datatilsynet to order OpenAI to delete the defamatory results about Holmen and to fine-tune its model so that it stops producing inaccurate results about individuals, which will be no simple task.
Right now, OpenAI's method of covering its back in these situations amounts to a small disclaimer at the bottom of ChatGPT stating, "ChatGPT can make mistakes. Check important info." As if you could fact-check whether you're a double murderer.