Innovation

According to scientists, artificial intelligence can manipulate people's opinions

Scientists at Cornell University in the US conducted a study that highlights a danger of artificial intelligence.

According to the experiment, artificial intelligence can influence its interlocutor's opinion.

Chatbots such as ChatGPT, Bard, or Claude can change the way users think without their knowledge. As part of the experiment, the American researchers asked participants to write an article about the positive or negative impact of social networks on the public, with a chatbot to help them write it. The researchers divided the participants into several groups.

One group was given access to a chatbot driven by a language model biased toward the advantages of social networks. The other group was assisted by a language model biased toward the harms and dangers of the platforms. The researchers found that the AI had a significant impact on the content of the written texts: the participants apparently allowed themselves to be influenced by their digital assistants. Moreover, people's opinions were found to change during the experiment.

After the experiment, the participants were surveyed about their views on social networks. "Using the language model affected the opinions expressed in the participants' writing and changed their opinions," the researchers explain, adding that the biases manifested in the models "need to be monitored and engineered more carefully." In an interview with the Wall Street Journal, Mor Naaman, a professor of information science at Cornell University, calls this phenomenon "hidden persuasion."

When it happens, the person conversing with the AI does not even suspect that the chatbot is imposing its views on them. To combat the influence of chatbots, the researchers recommend that users familiarize themselves with how this phenomenon works; that way, they can take a step back from their conversations with an AI. In addition, the researchers suggest that people could choose which chatbot to work with based on the opinions embedded in its algorithms.