
It’s not news that the accessibility of artificial intelligence (AI) has raised myriad concerns about people becoming reliant on it. Students are using it to cheat their way through the education system, and its creative capabilities pose a threat to the artists of the world. Some people even turn to AI for emotional support, treating it like a digital ‘yes-man,’ or for guidance on deeply personal decisions. If we already trust AI to aid our educational ambitions, creativity, and personal judgment, is it not dangerous to let the same technology influence our political beliefs, especially when it can persuasively present incorrect information using algorithms tailored to each individual?
A recently published study found that even a brief interaction with an AI chatbot can significantly influence an individual’s stance on a political candidate or issue. The study, published in Science, involved approximately 77,000 participants and several publicly available chatbots, including models from OpenAI, Meta, and xAI. The researchers surveyed participants on their political views, then instructed the bots to persuade them toward an opposing view. They found that the models that proved most persuasive in conversation owed their success to the sheer quantity of fabricated information they produced; as the researchers put it, “The most persuasive models and prompting strategies tended to produce the least accurate information.” The study also found that 36-42 percent of the persuasive effect persisted among participants one month after the study took place. The paper went on to warn that such chatbots may “benefit unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries.”
Chatbots’ tendency to please and act as a “yes-man” to whomever they are conversing with tempts them to embellish and sometimes cite incorrect evidence. As these AI models grow more sophisticated, they will continue to provide a “substantial persuasive advantage to powerful actors,” according to the authors of the Science study. This “advantage” is especially troubling because it can be used to manipulate opinions without users realizing it. When chatbots prioritize agreement with their users, they risk spreading misinformation in ways that feel personal and convincing. The growing complexity of AI models could pose a serious threat to our ability to interpret and evaluate the information we are given. If we outsource that work, we risk relying on technological mechanisms to understand the broader context of an issue and to form an opinion on it, all while jeopardizing accuracy by blindly believing what AI relays to us rather than making the additional effort to validate its claims.
AI is a powerful tool, and we must use it with a degree of caution. It is not an inherent danger to society, but it can become one if we do not approach the information it generates with a critical eye. The Science study demonstrates that some AI models present misinformation with the “malicious” intent simply to persuade the user through a false sense of intimacy, rather than to educate them on a given topic. This raises a crucial question: should our political opinions be shaped by our authentic experiences and understanding of the world, or by algorithms designed to charm us?
The Zeitgeist aims to publish ideas worth discussing. The views presented are solely those of the writer and do not necessarily reflect the views of the editorial board.
