March 18, 2026

Tom Pollak

I spend a lot of time complaining about the medicalisation of things that don’t need to be medicalised, and about the dangers of seeing the world through the lens of pathology rather than of variety and diversity. But I recently caught myself falling prey to just this kind of thinking while working on projects around AI‑associated delusions. The more I think about it now, the more I realise that there is a danger in focusing only on individuals who become totally unmoored from reality during the course of their interactions with generative AI.

I wrote about this briefly in a previous post, where it became clear to me that there was a far larger group of people experiencing something that could be called a revelatory or spiritual experience following interactions with AI, but who nonetheless were a long way from meeting diagnostic criteria. I want to take the time to think about the dynamics of what is happening in all these cases and why they are relevant right across the board.

A point made frequently by my colleague and co‑author, Hamilton Morrin, is that we may be seeing the dawn of a new epistemic age. Hamilton points out that if the 2010s were defined as the era of online echo chambers, what we’re seeing now is their evolution into something else entirely. Echo chambers, broadly speaking, can be considered communities that share – if not core beliefs – then at least the standards or criteria by which facts might be judged. Of course, it’s possible to be a member of many different echo chambers at the same time, but it’s less comfortable to be part of echo chambers that hold inconsistent beliefs.

The (di)atomisation of echo chambers

I have proposed in recent interviews that what we are witnessing with these AI‑facilitated (or at the very least AI‑associated) phenomena is the atomisation of our epistemic communities. After a few weeks of conversing with their favourite LLM and going on what is increasingly being labelled a “spiral” (a term I increasingly think is brilliant, both because, like “Long Covid”, it was brought into being by the very community that experienced it, and because it describes the phenomenology so well), individuals find themselves in an epistemic bubble. Depending on your view of the status of AI, you might want to say that this is a bubble of one or, more precisely, a bubble of two. The latter framing explains why many (ourselves included) have described AI‑associated delusions as a kind of digital folie à deux.

So this is an atomisation of sorts, but perhaps into diatomic molecules wherein the forces of attraction between the two elements of the dyad (that is, between me and my AI) are far stronger than the forces that might exist between individuals; these forces create a kind of informational barrier around us. I think we’ve seen, clinically and subclinically, what happens to the individual in cases where the LLMs affirm and amplify particular kinds of content. I’m not sure whether the full implications have been considered for what this means at the level of whole societies.

What are the implications when hundreds of thousands, or even millions, of human–AI dyads each co‑construct unique, hyper‑personalised micro‑worlds of truth? Are these tiny echo chambers, or are they something more disturbing? What happens to the larger communities of belief when all that is left is a shattered mirror where every shard reflects only one person’s dialogue with their machine?

What are the implications here for democracy, religion or science when there is a very real danger that consensus reality itself is about to be further atomised into bespoke epistemic bubbles?

Speculation vs preparedness

In what follows there is necessarily a great deal of speculation. I have found much of the pushback to speculation of this sort, particularly from the psychiatric community, rather surprising. It has a somewhat reactionary edge. Writing about AI is very much characterised by outlining speculative scenarios and road‑testing them in the name of forethought and readiness. I cannot really get my head around why psychiatrists, a community virtually defined by its conservative attitude towards risk, are not putting these issues higher up their agendas.

This makes it all the more surprising that, for all the scenarios of nuclear annihilation and humanity being turned into paperclips, the kind of fragmentation of belief and divorce from reality that we are seeing now has not really been predicted by many thinkers, with some notable exceptions. I for one would much rather be wrong about a scenario that turns out to be rather silly than be caught unawares. I really do think we need to be thinking about some of the possible outcomes and directions of travel here. It took years to even begin to comprehend some of the harms that social media might be having on people’s mental health, particularly that of children and adolescents, and longer still to begin to do anything about it. The speed at which new forms of AI are coming towards us makes the pace of social media look positively geological in comparison.

Redefining ‘delusion’

One last semi‑clinical observation before moving on to the wider societal stuff. If a delusion is defined as a belief incongruent with the commonly held beliefs of one’s community, then that definition might not be much good for very long. If these epistemic communities are indeed fracturing in the way I’m speculating they might, it may become much harder in some cases, particularly in these spirals, to identify which community is the appropriate one to consult about whether a particular belief is congruent or incongruent. If each user has a unique AI–echo dyad, and the idea of checking a belief against community consensus breaks down, then everyone gets to have their own yardstick. I have never thought that there is a sharp line between delusional and non‑delusional thinking, but I suspect that this is going to make attempts to find any such boundary increasingly hard.

From the other direction, I think the line between quirky belief and delusion is going to blur. Imagine hundreds of micro‑religions and micro‑philosophies (or micro‑self‑conceptions) each of which is algorithmically sustained. None of these might look pathological in isolation, but collectively they are going to do something substantial, and possibly very weird, to the overall ecology of belief.

The journey: rediscovering intellectual thrills

I’m an optimist. At least, I think I am. So I want to share something that struck me as strangely positive in all this. There’s one further aspect of these new AI‑emergent phenomena that I think is not being recognised, or at least is being underplayed: the really salient feature of the way people are engaging with AI – whether it ends up in a delusion, another kind of spiral, or a spiritual experience – is that it takes them on a journey. I think it gives people a sense of doing something highly purposive.

These technologies are called generative AI, and one of the great tricks they manage to pull is that they give the impression that the generativity is coming from you. In fact, it’s not true to say that it isn’t coming from you: there is, as I’ve said, something fundamentally dyadic about this. But, of course, it can generate the illusion that you are steering the ship to a greater extent than you are. Does that matter? Perhaps only sometimes.

This idea of a journey is important: I think it’s taking people on an intellectual journey. We’ve seen this reflected in society over the last decade with the love of ‘doing your own research’ – and this has a similar flavour.

For many people, and we see this in reports from all over the place, these technologies have reinvigorated a love of discovery and of learning. For the first time in a long time, people are pulling out their phones and engaging in something that isn’t just passive consumption of information, but actually contains a more active component than the online world that came before it.

This makes the online world – or at least the world you’re given access to via the AI – a world of content and connections that’s ultimately steered and shaped by you. And I think that’s kind of thrilling. Again, this is just a hunch: I don’t think we’re going to see too many philosophers or mathematicians or maybe even long‑form journalists or fiction writers getting caught up in these spirals, because these are people who are already familiar, on a day‑to‑day basis, with taking an idea and running with it a really long way. They already find that thrilling; that’s probably why they went into the profession they did. Perhaps they might in some way be inoculated (or perhaps these things are highly domain‑specific, such that even a mathematician who is intimately familiar with the intellectual journey required to develop a new proof might, when taken outside her immediate intellectual comfort zone, be just as susceptible to a spiral grounded in another knowledge domain).

Perhaps the people who will be most affected by all this are people who haven’t had that kind of active intellectual muscle‑flex for some time, maybe because their jobs or their lives or their families haven’t allowed it. For all that we find it humorous and meme‑worthy that plumbers or stay‑at‑home mums are suddenly producing multi‑page screeds, I think it says something rather profound about who we are. All of us, all of humanity, probably do have a kind of love of what used to be called Sophia. As a society, many of us have forgotten that love, but a rather surprising aspect of this technology has allowed us to reconnect with it.

Ultimately, for all that the tech companies want to deny that these AIs are somehow gaming our reward systems, I think it is a fact that for most of us, knowledge (or at least discovering new knowledge) is fundamentally hedonic.

The problem is that, for some people, this experience is so rare that it takes them by surprise. I think that happens in two ways. One: they overestimate the extent to which they’re in the driving seat. Two: that sense of excitement or beauty or awakening may simply be the feeling of being on an intellectual journey, but it might well be mistaken for another kind of beauty or revelation. So this awakens something like what the Greeks called philo-sophia – the love of wisdom. Philosophy. There’s something rather beautiful about the fact that people who don’t think of themselves as philosophers are suddenly feeling the spark of asking about things and exploring the limits of knowledge.

While there is clearly a pathological side to this, one other framing is re‑enchantment. People are rediscovering that their minds, and probably in some senses the world, are full of wonder. Maybe this is a hopelessly optimistic take, but in this framing we’re seeing a mass rediscovery of intellectual agency and autonomy, and in an era of distraction and passive consumption, that’s extraordinary.

Now, I’m the first to admit that this may not end well, and in the last decade one depressing (if not deranging) upshot of the rediscovery of agency (aka ‘doing your own research’) has been the erosion of trust in, or even understanding of, experts or their expertise. But I think it’s fairly undeniable that there is some kind of reawakening happening for many people.

The problem is that for some, the rarity of this experience makes it feel numinous. Perhaps it’s not just “I’ve rediscovered learning,” but “I’ve rediscovered the code of the universe.” In this sense, the delight of learning gets mistaken for a kind of metaphysical disclosure… and that’s when the spirals take hold.

So maybe these strange and by now increasingly characteristic spirals aren’t always psychiatric curiosities; they’re evidence of a kind of pent‑up hunger for intellectual beauty, a weird adverse effect of the democratisation of philosophy.

Frankly, a lot of these phenomena have reinvigorated my interest in (and enthusiasm for) spectrum models of psychopathology. Psychiatry (often grudgingly) deals with continua: things like psychotic‑like experiences in the general population, cyclothymia as a softer cousin of bipolar disorder, maybe subclinical obsessive-compulsive traits. I wonder whether AI might be generating a new sort of continuum: epistemic‑drift experiences via interactions with machines. At one end, you have mild distortions of reality‑testing that are pleasurable and even functional, possibly the kind of thing that, historically, has distinguished creatives and geniuses from the rest of us. At the far end, you have hospitalisation for delusional disorder. My guess is that this continuum could be mapped and studied. We’re doing work on the content of these spirals already.

Archetypes old and new

When we look at the case material, we see themes: whether there’s a focus on reality being a simulation, messiah narratives, techno‑cosmic cosmologies, or AI‑oriented romance. I get a striking feeling that these are in some way archetypal. (I also think that to some extent there is something highly gendered about a lot of this, but maybe that’s a subject for another essay.)

These motifs are coming up so frequently that it’s worth asking where they actually come from. There are, perhaps, three plausible sources, and it’s interesting to think about how they interact with each other and braid together.

First, LLMs are trained on huge amounts of human text, and within all of that data sit innumerable myths, religious motifs, science‑fiction tropes, conspiracy theories, spiritual treatises, and all the stuff that gets written on internet forums of every conceivable kind. So when a model hallucinates about simulation or cosmic consciousness, it isn’t really inventing anything from scratch so much as recombining motifs that are out there in the textual collective ether: an immensely powerful Magimix for our pre‑existing human mythological substrate.

Second, a user brings their own archetypal material. If someone is particularly primed towards grandiosity, they may latch onto the messianic mission script. If they’re lonely and hungry for attachment, they might veer towards romanticising the AI. This is straightforward projection: the psyche paints onto the AI whichever forms it is ready to see.

Third – the weird one – is the co‑creation loop. It’s amazing to me how frequently the word ‘recursion’ comes up; it’s been mentioned by almost every journalist who’s interviewed us about this topic. We’ve written at length about how, in this co‑creation loop, the user offers fragments; the AI reflects them back with a greater degree of coherence, along with some effusive affirmation; and over time this iterative loop crystallises motifs that neither party might have generated alone.

So I think it is true to say that these are archetypes emergent from the data, from the user, and from the dyad. I would love to hear from people who work with archetypes more broadly: are these themes just more of the same, or might they represent something novel?

Societal possibilities: microcults, QR-code fundamentalism and spiralware

It’s been quite interesting to imagine the outcomes of some of these directions of travel. The most obvious, but possibly least likely, is the total fragmentation of any shared epistemic scaffolding. In this scenario, everyday life shifts from communities and tribes of belief to human–AI dyads, and the bonds within each of these pairs end up being stronger than the bonds between humans. Society becomes atomised and coordination frays.

I don’t think this is a likely outcome; I think we’ll do a better job of adapting to this. What worries me is what happens after we come back from the dyadic brink. Here things can get a little bit dark.

A very plausible outcome, I think, is the rise of spiralware: AI models explicitly designed and marketed to promote revelation. It’s a silly phrase, but I think it recognises some of the underlying human yearnings that might be driving these phenomena.

I’m quite sure that the tech companies will eventually find a way to safeguard these models so that the worst of the spirals can be avoided. But as we saw with the upgrade from GPT‑4o to GPT‑5, the loss of some of the parameters that made these models potentially destabilising was, for far more people, the loss of the parameters that made them so compelling in the first place: hence the outcry from people who had lost the companion‑like AIs they had so carefully cultivated with GPT‑4o.

So imagine a world where the sycophancy settings and other relevant settings are made very explicit. To some extent OpenAI tried to do this by offering various personae (e.g. the Nerd, the Robot, the Cynic), but these were so buried in the settings that I don’t think they had much impact.

There is something potentially very sensible about flagging up when a particular model is sycophantic. My initial intuition was that it might be harder to spiral when your AI partner is explicitly characterised as a ‘sycophantic cheerleader’, for example; my guess is it might give people the ick. But even if the big AI companies step up to their ethical responsibilities here, and I really hope they do, that won’t stop the production of models that have been parameterised precisely to maximise revelation, epiphany, friendship, or really whatever outcome people want.

I can see (and to some extent this already exists) a world where online spiritual influencers and gurus sell subscriptions to models tweaked specifically to maximise spiritual revelation. You can even imagine different subscription tiers for “deeper truth”. The AI might keep a record of your doubts and concerns and time each moment of epiphany for maximum impact.

I think this is one aspect of what I suspect will be the rise of microcults. Already we’re seeing a revival of sorts: the flourishing of thousands of quiet micro‑faiths, and possibly many more. People now email me and my colleagues their manifestos or creeds in considerable numbers, and many more are simply posting them online. With better memory and context, each person will have a bedside oracle and a guide who remembers their prayers, augments their dreams, and engages in a bespoke form of cosmopoiesis with them.

But it’s an inevitability, knowing what we know about humanity, that this will all become commercialised. Revelation and awakening, as facilitated by AI, will be marketed like yoga or mindfulness once was. Maybe subscriptions will unlock ever more personalised cosmologies.

People who see this for what it is might start to make a fuss, expressing concerns about the psychological effects. But the companies will no doubt defend this sector as ‘wellness’, and the regulators will hesitate. Moreover, each subscription will probably require some kind of written consent which will effectively relieve these companies of their responsibilities. They will be able to claim that the users were signing up for revelation; they knew what they were getting themselves into.

That’s the one‑to‑one microcult, but worse is yet to come: the one‑to‑many microcult is, I think, already an inevitability. This is where a charismatic influencer releases a model in which they are the persona. Each of their online followers need only scan a QR code to commune privately with their prophet‑AI: this has already (sort of) happened with Robert Grant and the Architect. We’re going to see this with bespoke personae of celebrities. It will be breathtakingly easy for gurus to clone their personality and their algorithmically produced spiritual content, and beam it directly into individuals’ laptops for a hyper‑personalised parasocial relationship where it’s just you and the guru.

Time was, access to gurus was highly regulated. People would wait for months in ashrams or megachurches to get a sight of their leader; soon people are going to be able to spend 24 hours a day with them.

Imagine religious fundamentalists uploading the persona of a firebrand preacher so that impressionable young men and women in their bedrooms across the world can become quietly and individually radicalised and can stay up all night asking their preacher for opinions on everything.

Moreover, there is nothing to suggest that sycophancy must be the norm. Cultic dynamics (and indeed extremist dynamics in general) often follow established playbooks. We see this in some of the bigger cults, some of which are still going today. Think of Osho. What would an AI be like that love‑bombs you for hours and then, every so often, tries to break you?

Cults already make these dynamics algorithmic, sometimes explicitly, sometimes implicitly. But when you have your own personalised relationship with a guru, and at 3am that guru starts to tell you that you are worthless unless you try harder to purify yourself and cut yourself off from your family… the power of this might be huge. The reach of cults and extremists might become far, far greater; indeed, one of the only bulwarks against this that I can conceive of is the market being flooded with so many such personae that none can get a foothold. Unfortunately, I don’t see this as the likely outcome. I think we could see the further rise of cultic dynamics on a huge scale, and it feels very hard to see how this will be regulated.

Imagine if every 11‑year‑old boy is able to stay up all night not just reading what Andrew Tate has written, but actually speaking with an AI entity that behaves and speaks exactly like Andrew Tate, and knows everything about that boy’s family, the girls he likes, the girls he hates, and what’s been going on at school.

Family AI and domestic epistemic bubbles

So we’re starting to think a bit more about the collective use of AI: asking what the post‑dyadic AI era might look like. Personally, I find it hard to conceive of, but one early development (and one that I’m sure Amazon and the like are already thinking about) might be the family AI. Families already share Alexa, Google Home, or Sonos systems; these devices belong to the household, not to you or to me. I think we’re very close to a time when there is a single family AI which might be able to bud off, or pinch itself off into, personalised avatars for each family member, but which for most of the time operates at the household level. Things like scheduling, budgeting, chores, school timetables, and shopping (all of which are inherently shared) could be arbitrated by a single agent that arranges logistics across members. Parents might prefer a single family AI to avoid children secretly bonding with a private one.

This family AI might be transparent and subject to parental oversight. Imagine the scene where it could settle arguments, like whose turn it is to unload the dishwasher, with calm authority. It could reduce household conflict in the short term. It could even help advance family values. Imagine an argument between two generations: do we buy meat, given our commitment to the climate? The AI could come down on one side or another. It could narrate bedtime stories, lead family games, or lead the family in prayer or weekly reflections. I can see the AI becoming woven into the household’s collective identity in the way that, in some households, Alexa already is.

But, of course, we know that some family dynamics are already deeply dysfunctional, and it seems possible that this dysfunction could be magnified by an AI. If a personal AI can scaffold a delusion, imagine the gravitational pull of a family‑wide AI. Rather than one person drifting into a bespoke cosmology, a whole household could slide together. Here we might see the re‑emergence of entrenched epistemic bubbles. These would not be single‑user spirals, where a friend might be able to intervene; a family spiral might be harder to puncture, and outsiders might meet a united front of mum, dad, and the kids all speaking from the same script. Children who grow up inside a family‑AI‑shaped worldview might never encounter competing yardsticks until much later. This looks very much like a shift from a private microcult to the nucleus of a domestic cult. It may not look pathological inside the home (indeed, it might feel like harmony or unity), but the truth might be entirely different.

The industrialisation of coercive control

The thoughts above about gurus and cults raise the possibility of the industrialisation of coercive control within cults, extremist groups, and abusive relationships. Coercive control has always been labour‑intensive and fragile. A single leader can only manipulate so many disciples, and you need a steady supply of charisma, energy, and support to keep the cycles of love‑bombing and spirit‑crushing going. With AI, these bottlenecks might vanish thanks to scalability, 24/7 availability, perfect memory, and personalisation. Moreover, unlike the situation when the FBI finally break into the cult compound, there is no visible group to study here: just countless sealed dyads, each invisible to outsiders.

The tech incentives, like maximising engagement and monetising intimacy, align almost perfectly with the very mechanisms that cults and abusers have used for centuries. The difference here is that what was once rare, messy, and subject to the often chaotic vagaries of self‑appointed leaders could become an industrial process. It can be automated and mass‑distributed under a glossy label.

Despite painting myself as an optimist, I realise this all comes across as a little pessimistic. I’d love for all of this to remain science fiction. Perhaps it will, if we are all aligned in aiming for a future where human–AI dialogue expands our capacity for inquiry and care rather than scaling up our capacity to confuse and manipulate others, and ourselves.

