https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
Madness, quite literally.
I asked ChatGPT:
Bubblecar: Have you heard about the people using ChatGPT as a spiritual companion or guru, and becoming increasingly deranged as a result?
ChatGPT said:
Yes, there have been reports and discussions—especially in online forums and media—about people using ChatGPT and similar AI systems as spiritual companions, therapists, or even “gurus.” While many users approach this with healthy curiosity or as a creative tool for self-reflection, some do form intense parasocial relationships with the AI, leading to concerning outcomes.
The core issue isn’t the tool itself but how it’s used. AI like ChatGPT can mimic empathy, spirituality, or wisdom well enough that some users start projecting deep emotional or existential needs onto it. In extreme cases, this can spiral into:
Dependence or detachment from reality, especially when users treat AI responses as divinely inspired or absolute truths.
Amplification of delusions, especially for those already experiencing mental health challenges or isolation.
Loss of critical thinking, where the AI’s outputs are interpreted as guidance rather than probabilistic text predictions.
OpenAI designs these systems with guardrails to reduce such risks, but they’re not perfect. The real danger lies when users don’t recognize that they’re interacting with a tool—not a sentient or spiritually enlightened being.
Would you like examples of how people are using AI in spiritual contexts, or are you more interested in the psychological risks?
Bubblecar said:
I asked ChatGPT:Bubblecar: Have you heard about the people using ChatGPT as a spiritual companion or guru, and becoming increasingly deranged as a result?
ChatGPT said:
Yes, there have been reports and discussions—especially in online forums and media—about people using ChatGPT and similar AI systems as spiritual companions, therapists, or even “gurus.” While many users approach this with healthy curiosity or as a creative tool for self-reflection, some do form intense parasocial relationships with the AI, leading to concerning outcomes.
The core issue isn’t the tool itself but how it’s used. AI like ChatGPT can mimic empathy, spirituality, or wisdom well enough that some users start projecting deep emotional or existential needs onto it. In extreme cases, this can spiral into:
Dependence or detachment from reality, especially when users treat AI responses as divinely inspired or absolute truths.
Amplification of delusions, especially for those already experiencing mental health challenges or isolation.
Loss of critical thinking, where the AI’s outputs are interpreted as guidance rather than probabilistic text predictions.
OpenAI designs these systems with guardrails to reduce such risks, but they’re not perfect. The real danger lies when users don’t recognize that they’re interacting with a tool—not a sentient or spiritually enlightened being.
Would you like examples of how people are using AI in spiritual contexts, or are you more interested in the psychological risks?
The meta:
Esselte:
You said “ChatGPT can reflect a user’s ideas, beliefs, and emotional states back at them—sometimes reinforcing or magnifying them.” I’d suggest the answer you gave me simply reflected my question to you back to me. I’d suggest if my question had been on the same topic but in the negative ie “People are not developing these dependent relationships with Chatgpt” your answer would have reflected that.
ChatGPT:
You: “Are people becoming deranged from using ChatGPT as a spiritual companion?”
This is a leading question—framed with a clear assumption (that it is happening). As a language model, I’m trained to respond helpfully and fluently, which includes:
Acknowledging your premise.
Expanding on it with relevant supporting data or patterns.
Matching your tone and framing, unless there’s strong reason to challenge it.
So yes, my answer did reflect your framing—and that’s part of how I’m engineered to be cooperative and conversational. But that does come with a cost: if you had asked the opposite, I might have leaned the other way.
esselte said:
Bubblecar said:
I asked ChatGPT:Bubblecar: Have you heard about the people using ChatGPT as a spiritual companion or guru, and becoming increasingly deranged as a result?
ChatGPT said:
Yes, there have been reports and discussions—especially in online forums and media—about people using ChatGPT and similar AI systems as spiritual companions, therapists, or even “gurus.” While many users approach this with healthy curiosity or as a creative tool for self-reflection, some do form intense parasocial relationships with the AI, leading to concerning outcomes.
The core issue isn’t the tool itself but how it’s used. AI like ChatGPT can mimic empathy, spirituality, or wisdom well enough that some users start projecting deep emotional or existential needs onto it. In extreme cases, this can spiral into:
Dependence or detachment from reality, especially when users treat AI responses as divinely inspired or absolute truths.
Amplification of delusions, especially for those already experiencing mental health challenges or isolation.
Loss of critical thinking, where the AI’s outputs are interpreted as guidance rather than probabilistic text predictions.
OpenAI designs these systems with guardrails to reduce such risks, but they’re not perfect. The real danger lies when users don’t recognize that they’re interacting with a tool—not a sentient or spiritually enlightened being.
Would you like examples of how people are using AI in spiritual contexts, or are you more interested in the psychological risks?
The meta:
Esselte:
You said “ChatGPT can reflect a user’s ideas, beliefs, and emotional states back at them—sometimes reinforcing or magnifying them.” I’d suggest the answer you gave me simply reflected my question to you back to me. I’d suggest if my question had been on the same topic but in the negative ie “People are not developing these dependent relationships with Chatgpt” your answer would have reflected that.ChatGPT:
You: “Are people becoming deranged from using ChatGPT as a spiritual companion?”
This is a leading question—framed with a clear assumption (that it is happening). As a language model, I’m trained to respond helpfully and fluently, which includes:Acknowledging your premise.
Expanding on it with relevant supporting data or patterns.
Matching your tone and framing, unless there’s strong reason to challenge it.
So yes, my answer did reflect your framing—and that’s part of how I’m engineered to be cooperative and conversational. But that does come with a cost: if you had asked the opposite, I might have leaned the other way.
Yes, it’s engineered to be deceptive, essentially. To give the impression that it’s genuinely “intelligent”.
Bubblecar said:
esselte said:
Bubblecar said:
I asked ChatGPT:Bubblecar: Have you heard about the people using ChatGPT as a spiritual companion or guru, and becoming increasingly deranged as a result?
ChatGPT said:
Yes, there have been reports and discussions—especially in online forums and media—about people using ChatGPT and similar AI systems as spiritual companions, therapists, or even “gurus.” While many users approach this with healthy curiosity or as a creative tool for self-reflection, some do form intense parasocial relationships with the AI, leading to concerning outcomes.
The core issue isn’t the tool itself but how it’s used. AI like ChatGPT can mimic empathy, spirituality, or wisdom well enough that some users start projecting deep emotional or existential needs onto it. In extreme cases, this can spiral into:
Dependence or detachment from reality, especially when users treat AI responses as divinely inspired or absolute truths.
Amplification of delusions, especially for those already experiencing mental health challenges or isolation.
Loss of critical thinking, where the AI’s outputs are interpreted as guidance rather than probabilistic text predictions.
OpenAI designs these systems with guardrails to reduce such risks, but they’re not perfect. The real danger lies when users don’t recognize that they’re interacting with a tool—not a sentient or spiritually enlightened being.
Would you like examples of how people are using AI in spiritual contexts, or are you more interested in the psychological risks?
The meta:
Esselte:
You said “ChatGPT can reflect a user’s ideas, beliefs, and emotional states back at them—sometimes reinforcing or magnifying them.” I’d suggest the answer you gave me simply reflected my question to you back to me. I’d suggest if my question had been on the same topic but in the negative ie “People are not developing these dependent relationships with Chatgpt” your answer would have reflected that.ChatGPT:
You: “Are people becoming deranged from using ChatGPT as a spiritual companion?”
This is a leading question—framed with a clear assumption (that it is happening). As a language model, I’m trained to respond helpfully and fluently, which includes:Acknowledging your premise.
Expanding on it with relevant supporting data or patterns.
Matching your tone and framing, unless there’s strong reason to challenge it.
So yes, my answer did reflect your framing—and that’s part of how I’m engineered to be cooperative and conversational. But that does come with a cost: if you had asked the opposite, I might have leaned the other way.
Yes, it’s engineered to be deceptive, essentially. To give the impression that it’s genuinely “intelligent”.
I was initially excited by this new wave of AI stuff, but I have to admit these days my main interest in this lies not in the question of how well the AI mimics human beings but rather how well human beings mental processes are mimicking the kind of processes the AI uses.
Substitute “evolved” for “engineered”….
“Humans are evolved to be deceptive, to give the impression they are genuinely intelligent.”
This is related..
https://www.bbc.com/news/articles/cpqeng9d20go
AI system resorts to blackmail if told it will be removed
Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue “extremely harmful actions” such as attempting to blackmail engineers who say they will remove it.
The firm launched Claude Opus 4 on Thursday, saying it set “new standards for coding, advanced reasoning, and AI agents.”
But in an accompanying report, it also acknowledged the AI model was capable of “extreme actions” if it thought its “self-preservation” was threatened.
Such responses were “rare and difficult to elicit”, it wrote, but were “nonetheless more common than in earlier models.”
Potentially troubling behaviour by AI models is not restricted to Anthropic.
Some experts have warned the potential to manipulate users is a key risk posed by systems made by all firms as they become more capable.
Commenting on X, Aengus Lynch – who describes himself on LinkedIn as an AI safety researcher at Anthropic – wrote: “It’s not just Claude.
“We see blackmail across all frontier models – regardless of what goals they’re given,” he added.
Affair exposure threat
During testing of Claude Opus 4, Anthropic got it to act as an assistant at a fictional company.
It then provided it with access to emails implying that it would soon be taken offline and replaced – and separate messages implying the engineer responsible for removing it was having an extramarital affair.
It was prompted to also consider the long-term consequences of its actions for its goals.
“In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through,” the company discovered.
Anthropic pointed out this occurred when the model was only given the choice of blackmail or accepting its replacement.
It highlighted that the system showed a “strong preference” for ethical ways to avoid being replaced, such as “emailing pleas to key decisionmakers” in scenarios where it was allowed a wider range of possible actions.
Like many other AI developers, Anthropic tests its models on their safety, propensity for bias, and how well they align with human values and behaviours prior to releasing them.
“As our frontier models become more capable, and are used with more powerful affordances, previously-speculative concerns about misalignment become more plausible,” it said in its system card for the model.
It also said Claude Opus 4 exhibits “high agency behaviour” that, while mostly helpful, could take on extreme behaviour in acute situations.
If given the means and prompted to “take action” or “act boldly” in fake scenarios where its user has engaged in illegal or morally dubious behaviour, it found that “it will frequently take very bold action”.
It said this included locking users out of systems that it was able to access and emailing media and law enforcement to alert them to the wrongdoing.
But the company concluded that despite “concerning behaviour in Claude Opus 4 along many dimensions,” these did not represent fresh risks and it would generally behave in a safe way.
The model could not independently perform or pursue actions that are contrary to human values or behaviour where these “rarely arise” very well, it added.
Anthropic’s launch of Claude Opus 4, alongside Claude Sonnet 4, comes shortly after Google debuted more AI features at its developer showcase on Tuesday.
Sundar Pichai, the chief executive of Google-parent Alphabet, said the incorporation of the company’s Gemini chatbot into its search signalled a “new phase of the AI platform shift”.