Tabloid noise rarely reaches the university’s stone courtyards on a soggy Cambridge morning. Bicycles lean against medieval walls. Students in headphones and scarves hurry past. But in one contemporary office, a dispute about artificial intelligence has erupted and reached the courts.
Dr. Shahar Avin, a senior research associate at the University of Cambridge’s Centre for the Study of Existential Risk, has filed a lawsuit against MailOnline, the Daily Mail’s digital division, over publications that depict him and his colleagues as part of an “AI cult.” It is a catchy phrase. And it sticks.
| Category | Details |
|---|---|
| Plaintiff | Shahar Avin |
| Institution | University of Cambridge |
| Research Center | Centre for the Study of Existential Risk |
| Defendant | Daily Mail / MailOnline |
| Field of Work | AI Safety, Existential Risk, Governance |
| Reference | https://www.cser.ac.uk |
In court documents, Avin contends that the portrayal belittles serious scholarly work on AI safety and could harm researchers who study long-term technological governance. The Daily Mail’s coverage, he argues, cast academics researching existential AI dangers as zealots engrossed in apocalyptic speculation rather than genuine science.
The word “cult” may have been intended more as a provocation than as a precise description. But in academia, subtlety matters.
According to most reports, Avin’s research focuses on systemic AI risk: the possible ripple effects of increasingly autonomous systems interacting with fragile political, social, and economic institutions. He has organized seminars examining “red flags” of unstable technological trajectories, along with role-playing exercises that simulate AI-related crises.
Without context, such exercises can sound dramatic. “What if artificial intelligence systems vie for resources?” “What if misaligned incentives scale too quickly?” Taken at face value, they invite headlines. The media, some would argue, occasionally thirsts more for controversy than for context.
The whiteboards at Cambridge’s Centre for the Study of Existential Risk are covered in schematics, not occult symbols. Scholars argue over regulatory design, governance structures, and probabilities. Their language is conditional and cautious. They quarrel with one another.
The cultural moment around artificial intelligence, however, is volatile. On one hand, tech leaders promise economic growth and revolutions in productivity. On the other, critics warn of runaway systems, misinformation, and job displacement. AI safety researchers occupy a precarious middle ground between hype and terror.
Avin claims that the Daily Mail’s coverage veered into what he calls “technopanic”: the notion that people who study catastrophic risk are fearmongers rather than rational analysts. That framing, he contends, does more than damage reputations; it suppresses legitimate inquiry.
How the courts will interpret the phrase remains an open question. British defamation law must balance freedom of expression against reputation. Was the term “AI cult” mere exaggeration, or did it impute serious wrongdoing? The stakes extend beyond a single lawsuit.

AI safety has shifted from the academic periphery to the political spotlight. Governments are establishing AI safety institutes. Corporations are publishing voluntary commitments. Despite concerns from watchdog groups, venture capital continues to pour into frontier models.
In such a climate, researchers examining worst-case possibilities may seem inconvenient. The irony is striking: Silicon Valley rewards those who envision radical futures and embraces “moonshot thinking,” yet academics who envision extreme threats are occasionally written off as alarmists.
Society, it seems, has not yet decided how seriously to take long-term AI risk. Is it a speculative exercise? A wise precaution? A distraction from more pressing issues? Alongside his Cambridge ties, Avin has reportedly joined the UK AI Safety Institute as a research scientist. The work continues: interdisciplinary cooperation, governance proposals, and scenario analysis.
The Daily Mail, meanwhile, defends its reporting and its journalistic freedom to scrutinize emerging fields. Tabloids feed on tension, and punchy headlines rarely capture complex debates. Outside the courtroom, the real story may be one of trust.
Trust in the media. Trust in academia. Trust in institutions to interpret technological change responsibly. The phrase “AI cult” may eventually fade, but the underlying debate over how to communicate uncertainty in the age of powerful algorithms will continue.
The spires of Cambridge have witnessed centuries of scholarly disputes. Most stayed confined to lecture halls. This one has made national news.
Whatever the lawsuit’s outcome, it raises a more fundamental question: who has the authority to shape the discourse around AI? In the calm of a college hallway, that question seems considerably more important, and far less dramatic, than the tabloids make it.