
There is a particular kind of policy failure where the cure and the disease become indistinguishable. In theory, age verification mandates are designed to keep minors safer online. Discord claims its new verification system would do exactly that, but its exposed code reveals it screens ordinary users indiscriminately for terrorism and espionage connections. The vendor Discord chose to build the system, Persona Inc., is funded by Peter Thiel, co-founder of Palantir, one of the world’s most powerful AI surveillance companies. One has to ask: who is this infrastructure truly protecting?
Under the new system, users would be required to submit either a government-issued ID or a video selfie for facial age estimation. Discord claims that the facial scans never leave the device and that uploaded IDs are deleted immediately after age confirmation. Yet when researchers later found Persona’s code sitting exposed on a government-authorized server, they discovered it was not just checking ages: it was running 269 distinct verification checks, screening users indiscriminately against terrorism and espionage watchlists, and assigning risk scores to individual profiles. This seems rather excessive for “child safety.”
What makes Discord’s position particularly hard to defend is the timing. Months before the Persona partnership, hackers had already accessed the government IDs of over 70,000 Discord users through a previous third-party vendor. Discord’s response was not to reduce its dependence on third-party infrastructure but to build more of it, as if to push the responsibility for handling user data further away from itself.
The company’s push for age verification is even more suspect considering how counterproductive it is for business. Discord imposed these verification requirements on all 200 million users despite facing a legal obligation in only a handful of markets: the UK, Australia, and Brazil. The vast majority of Discord users reside in jurisdictions where no regulator forced these changes. The predictable result was immediate: searches for “Discord alternatives” jumped 10,000 percent overnight, and longtime subscribers to Nitro, Discord’s main source of revenue, began publicly canceling. Why would a platform knowingly torch its main revenue base over child safety concerns, alienating stakeholders and users alike, especially when it has filed to go public in the near future?
For many, this move came as a shock: Discord has generally been well-managed and maintained good faith with its user base. Some might argue that the push for age verification is a continuation of Discord putting its users first, altruistically sacrificing revenue for child safety. But Discord is still a for-profit company whose primary, if not only, goal is to create value for its shareholders. Moreover, consistent reports show Discord is nowhere near the most problematic platforms for harboring online child predators; Snapchat, WhatsApp, MeetMe, and more obscure messaging apps rank far worse, and none of them plan to pursue anything like Discord’s biometric verification. Why, then, has Discord never justified this move with a broader call to action, urging other platforms and parents to share accountability for child safety?
The more coherent explanation is that verification infrastructure has immense long-term value, value that personalized advertising revenue makes legible, and that it far exceeds whatever Nitro cancellations cost in the short term. Discord has already begun testing unskippable video ads, branded “Quests,” that pause if users tab away, guaranteeing full impressions for brands, with opt-in personalized promotions available through privacy settings. The Nitro cancellations were a calculated short-term cost; the infrastructure is a long-term asset.
When the backlash came, Discord cut ties with Persona, and its CTO issued an apology acknowledging the company had “missed the mark.” The episode was reframed as a vendor problem, a due diligence failure correctable with better processes. That framing is deliberately narrow. The question was never whether Discord picked the wrong vendor. The project has not been abandoned; Discord has only delayed the global rollout to the second half of 2026, promising “additional verification options and greater vendor transparency.”
Personalized advertising and cross-platform tracking by companies like Google and Meta are already a fraught tradeoff that most users accept by default rather than by choice. Biometric data is a different category entirely: not just more personal, but permanently so. A password can be changed. A face cannot. The first platform to monopolize verified biometric profiles at scale would possess something unprecedented in the history of consumer data: an immutable, government-corroborated identity layer sitting beneath every interaction, every ad, and every political inference drawn from behavioral patterns. Peter Thiel, a man who has stated on record that he sees no obvious answer to whether the human race should continue to exist, is practically foaming at the mouth at the prospect of turning biometric data into surveillance power and profit.
“Child safety” does not justify that tradeoff. It does not even seriously engage with it. Discord’s rollout, and the broader legislative push enabling it, asks users to accept permanent surveillance infrastructure in exchange for protection from a threat that existing, far less invasive tools could address. That is not a reasonable bargain. A hard line needs to be drawn — on Discord, and on every platform that tries to make you think it is.
The Zeitgeist aims to publish ideas worth discussing. The views presented are solely those of the writer and do not necessarily reflect the views of the editorial board.
