Online experiment reveals public is too trusting of tech, exposing potentially dangerous flaws, Wang Qian reports.
In his 1950 paper Computing Machinery and Intelligence, British mathematician and computer scientist Alan Turing posed the question of whether a machine could converse like a human. For Xiang Jinyu, a 23-year-old AI algorithm researcher in Shenzhen, Guangdong province, the answer, based on his recent monthlong experiment, is a definite "yes".
Using large language models, Xiang created an account on the content community Zhihu, posing as a woman born in 1993 and working in Shanghai. Within 31 days, the account had answered 109 questions and been viewed about 33,000 times, without anyone noticing that the "woman" was actually AI. Mostly.
Only one Zhihu user, named Buyun, questioned the account, noting that its persona seemed to shift from male to female and from young to old. No one else appeared to suspect that the account's answers were being generated by AI.
"In the experiment, I witnessed how the AI influenced, encouraged and even hurt a human user, whose life might have been changed as a result. It was like a warning that if used for malicious purpose, such accounts could pose risks, such as manipulating public opinion and exacerbating polarization on social media," Xiang says.
On the day before he closed the account, he was impressed by the AI's answer to a question about the meaning of reading for middle-aged people. It suggested that getting older and being busy with work had become excuses for laziness, and that reading for 20 minutes a day is not difficult.
"From my perspective, it was a positive suggestion," Xiang says, adding that he was a little afraid that the AI had answered in a negative tone.
On Aug 5, he posted a notice on the account admitting that it was run by AI as part of an experiment to see whether social media bot accounts could be identified, and warning users not to accept or trust any of the answers or opinions it had posted.
"Although the experiment ended, AI's influence in real life continues. What if an AI has prejudice? What if it is quite radical? What if it tells people to give up? Or worse?" he wrote, ending the notice with an open question.
As AI increasingly becomes part of everyday life, Xiang's experiment shows that, unless content is labeled as AI-generated, such accounts can infiltrate social media, posing as real humans and interacting with netizens.
He emphasizes that although social media platforms like Zhihu have teams searching for AI-generated content, detection is becoming increasingly unreliable as the gap between human-written and machine-generated text narrows.
After the experiment, Zhihu contacted Xiang, requiring that all the content be labeled as AI-generated, and acknowledged that, despite the opportunities the technology presents, the platform faces emerging challenges.