Machine errors
Xiang admits that, due to the AI's lack of proper alignment, some of its responses may not have met ethical standards. Proper alignment is crucial for the safe and ethical use of AI, as it helps systems correctly learn and generalize from human preferences, goals and values.
In response to one query about whether a person's mother still loved her, the AI answered rudely, saying that the mother had never cared about or liked the questioner. Xiang chose not to post the answer.
"I have been quite cautious with the answers it generated. Once it was bad-tempered, replying by saying 'go to hell'. Although that rarely happened, I kept a close watch on it," Xiang says, adding that using a bigger model could potentially reduce the occurrence of such issues.
Besides improper answers, the AI's claimed identity shifted from time to time. Responding to a question about whether tolerance for alcohol was a necessary part of nightlife, the AI replied as a bartender, saying "we welcome everyone to our bar". To another question asking what it feels like to drink again after quitting alcohol, it claimed to have been sober for 20 years.
In response to a query about dating in Shenzhen, it described itself as "male, 26, working in Shenzhen, single, looking for a girlfriend".
"The program pulled data from what it had been fed, leading it to make things up," the engineer says, adding that advances in generative AI mean fake information, such as images, videos, audio and bots, is now all over the internet.
As the supervisor of the account, he reviewed the answers carefully to prevent the AI from blindly giving advice that others might take seriously. This is also why he ended the experiment.
"It triggered reflections on how AI might change communication on social media if AI-generated content dominates the internet and most people believe it to be from another human," Xiang says. "We need to consider the influence of AI on social dynamics and think about how to use it responsibly."
As the technology advances rapidly, researchers have noticed a public tendency toward being too trusting. A September study published in Nature highlighted a growing body of literature indicating that people tend to be overly trusting of AI, even when the consequences of its making a mistake could be grave.
Colin Holbrook, associate professor of cognitive and information sciences at the University of California, Merced, and a principal investigator on part of the study, says that a consistent application of doubt is needed.
For his part, Xiang says the popularity of AI presents opportunities and challenges, and it is up to us to navigate this new landscape with caution and foresight.