Generative AI and the Undemocratization of Truth

The Verge / Alli Johnson / Image: Nano Banana Pro

As many dystopian stories have predicted, we are entering an age in which we cannot trust what we see. Two years ago, I remember laughing at the trend of AI-generated videos of Will Smith eating spaghetti. That specific prompt has since become a popular benchmark for generative AI’s capabilities, and in just a few years the advances have become hard to believe. Fabricated evidence has always existed, and there are still ways to distinguish AI-generated media from reality, but the technology has advanced to the point where it is practically impossible to spot a fake at a glance without prior suspicion.

The sheer flood of AI-generated content, from articles and bot accounts to videos and photos, is rendering the internet increasingly unusable for its basic functions. Predictably, this has only magnified the fragmentation of information ecosystems that drives political polarization. AI-generated slander is routinely used to push political agendas, and deepfakes of people making remarks they never actually made regularly go viral. Examples include Elon Musk reposting a fake ad in which Kamala Harris appears to describe herself as a “diversity hire” and Joe Rogan insisting that a deepfaked video of Tim Walz was real.

More insidiously, media institutions themselves are reporting on AI-generated political slander as if it reflects real trends. In October, Fox News and Newsmax reported on and mocked fake videos of Black Americans complaining about the pause of SNAP benefits. These videos were broadcast to millions of viewers, presented as real evidence of the supposed entitlement of SNAP recipients, and played into racist tropes about “welfare queens.” A now-deleted article covering these incidents was even published under the title “Snap Recipients Threaten to Ransack Stores in Response to Government Shutdown.” The same desire to manufacture incriminating evidence against one’s opponents was on display at the release of OpenAI’s Sora 2, when one of the most liked videos, created by a detractor of OpenAI, depicted fake CCTV footage of OpenAI CEO Sam Altman stealing GPUs from a store.

Some see this fake footage as an effective demonstration of the dangers of unchecked AI generation, one that will push Altman to consider regulation, but that hope is naive. The “democratization” of these tools will lead to the undemocratization of access to truth. As we all naturally grow skeptical of what we see online, the only means of acquiring true information will be trusted sources and institutions with rigorous documentation, or interpersonal networks. Access to both will be more readily available to the wealthy and powerful, and in many cases will be outright monopolized by them. I personally rely extensively on reporting from The Financial Times, Reuters, and AP News to learn about current events, but that reliance rests entirely on my presumption of their veracity and commitment to truthful reporting, the same presumption that many people extend to Fox News. No media institution is presently accepted as authoritative across the political spectrum.

I regret to say that I don’t see a way out of this, even with extensive regulation. Google presently applies an invisible SynthID watermark to every image generated with its Nano Banana software; the watermark can be identified through Google’s own AI tools or made visible by increasing the contrast of the image. But we should not be comfortable with a situation in which the primary means of judging whether an image is AI-generated is that same company’s AI. We cannot rely on these companies to benevolently police themselves. It may be time to simply accept that we can no longer wholly trust what we see with our own eyes on the internet.
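For readers curious what “increasing the contrast” looks like in practice, below is a minimal sketch in Python using the Pillow library. The filename and the enhancement factor are illustrative assumptions, and a crude contrast boost is only an informal check, not a reliable way to detect SynthID or any other watermark.

    # Minimal sketch: exaggerate an image's contrast so faint, low-amplitude
    # patterns become easier to spot by eye. The filename is a placeholder and
    # the factor of 5.0 is arbitrary; this is an informal check, not a
    # dependable watermark detector.
    from PIL import Image, ImageEnhance

    image = Image.open("suspect_image.png").convert("RGB")
    boosted = ImageEnhance.Contrast(image).enhance(5.0)
    boosted.save("suspect_image_contrast.png")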

The Zeitgeist aims to publish ideas worth discussing. The views presented are solely those of the writer and do not necessarily reflect the views of the editorial board.