Not only do Deep Learning Networks (DLNs), and in particular Generative Pretrained Transformers (GPTs), hallucinate and behave in brittle, inconsistent ways, but they also exhibit another critical characteristic, which we call Machine Endearment. This characteristic is briefly discussed below and in more detail in the upcoming book, “The Fourth Industrial Revolution and 100 Years of AI (1950-2050)”.
Regardless of the validity of their content, the output from most GPTs is confident, syntactically coherent, polite, and eloquent. Since they are trained on vast troves of human-written data, they usually produce outputs that also appear convincingly human. For example:
In June 2023, Jonas Simmerlein, a theologian from the University of Vienna, used ChatGPT to create a 40-minute sermon for Protestants. According to him, 98% of the content came from ChatGPT, and the entire service was conducted by two male and two female avatars on a screen. About 300 people attended the service, and some eagerly videotaped the event. One of the attendees, Marc Jansen – a 31-year-old Lutheran pastor – was impressed and remarked, “I had actually imagined it to be worse. But I was positively surprised by how well it worked. Also, the language of the AI worked well, even though it was still a bit bumpy at times.”
This communication style is reminiscent of an endearing advisor, to whom we often turn for direction or assistance. Over time, we come to rely on such advisors because they seem endearing and appear to have a stake in our well-being. Hence, we call this characteristic “Machine Endearment”: the broad notion of people trusting AI systems because of their human-like responses, irrespective of the validity of those responses. Unfortunately, although the arguments produced by GPTs may be persuasive, they are sometimes Machine Hallucinations. Because of Machine Endearment, however, trust in these systems is often amplified, leading people to follow GPTs, LLMs, and Chatbots blindly, as the following example indicates:
In 2023, two lawyers, Schwartz and LoDuca, used ChatGPT to find prior legal cases to strengthen their client’s lawsuit. In response, ChatGPT provided six nonexistent cases. Because these cases were fabricated, the presiding judge fined Schwartz and LoDuca five thousand dollars. According to an affidavit filed in court, Schwartz eventually acknowledged that ChatGPT had invented the cases, but he was “unaware of the possibility that its content could be false” and had therefore believed that it produced genuine citations.
Finally, not only can such content lead to disinformation (which numerous people may take as fact), but other AI systems may use the Machine Hallucinations produced by earlier ones as training data. This feedback loop would reinforce the errors and propagate even more fake content in the future.
The book titled “The Fourth Industrial Revolution and 100 Years of AI (1950-2050)” will be published in September 2023. For details, see www.scryai.com/book