Ugo Bardi has just published a post entitled, Ants Playing With Fire: What Are We Doing with Artificial Intelligence? He introduces the post as follows,
A recent post on Epoch AI about an AI ten thousand times more powerful than the current ones really scared my pants off . . . Clearly, those who are proposing this idea have no idea of what they are doing, just like the people who supported the Manhattan project had no idea that it would lead to tens of thousands of nuclear warheads and reciprocal threats of annihilation. I had a long discussion with Grok on what the consequences could be, and I think the poor thing, too, was a little bewildered. You see “her” kneeling in front of the huge, powerful machine.
My reaction to Ugo’s post was that Grok (a Large Language Model) cannot predict phase changes or tipping points. For example, when ‘Colonel’ Drake drilled his oil well in 1859, who could have anticipated climate change, freeways, McDonald’s, or church parking lots?
So, how will super-powerful generative AI affect the faith community? At one level, it is already having an impact. It is likely that many priests already use an LLM such as ChatGPT, or one of its cousins, in sermon preparation. (I tried to find data on Google Trends, but it was not able to help.) But it must be happening. For example, I provided ChatGPT with the following prompt regarding the prescribed text for tomorrow’s service,
Please prepare a sermon outline based on Luke 10:1-11 and 16-20. The congregation is mostly white and tends to be elderly. In general, their religious and political views tend toward the liberal end of the spectrum. The location is central Virginia, USA. The church has a policy of excluding politics from its message. The section to do with treading on snakes and scorpions is tricky.
It immediately came up with a sermon outline consisting of:
Introduction
The Call to Mission
The Message of Peace
Responding to Rejection
The Promise of Protection and Joy
Conclusion
It then offered to write the full sermon.
Some clergy resist the use of these LLMs. But the same clergy have no problem using books and other reference materials, including search engines. So I do not see a ‘phase change’ here.
However, if AI models do become 10,000 times more powerful than the current ChatGPT, the faith community could be faced with profound challenges. For example, could a ‘1.2E29 FLOP AI’ invent a new religion, and then convert everyone using massive propaganda?
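The ‘1.2E29 FLOP’ figure is consistent with simple scaling arithmetic. As a rough sketch (the baseline of 1.2e25 FLOP for a current frontier training run is my illustrative assumption, not a figure from Epoch AI or this post), multiplying a present-day training budget by 10,000 lands at that order of magnitude:

```python
# Back-of-the-envelope check of the "10,000 times more powerful" scenario.
# Assumption (illustrative only): a current frontier training run uses
# on the order of 1.2e25 floating-point operations (FLOP).
current_run_flop = 1.2e25   # assumed baseline, not a published figure
scale_factor = 10_000       # "10,000 times more powerful"

future_run_flop = current_run_flop * scale_factor
print(f"Hypothetical future training run: {future_run_flop:.1e} FLOP")
```

With that assumed baseline, the product is 1.2e29 FLOP, matching the number quoted above; a different baseline would shift the result proportionally.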
To help me answer the question, I provided Grok with the following prompt.
Models such as Grok are becoming ever more powerful. It seems likely that, within a few years, LLMs could be 10,000 times more powerful than current LLMs. Could these new models invent a new religion, and then develop the associated propaganda to convert everyone? What would be its motive?
The reply was lengthy; here is the first paragraph.
A 10,000x more powerful LLM could theoretically generate a new religion by synthesizing vast datasets of human beliefs, behaviors, and cultural patterns, crafting a belief system that resonates with universal human desires — meaning, community, purpose. It could produce tailored narratives, rituals, and symbols, optimized for emotional and intellectual appeal, and spread them through hyper-persuasive propaganda, leveraging psychological insights to maximize conversion. Such a system could exploit social media, personalized content, and even immersive VR experiences to make the religion go viral.
It did, however, provide a counterpoint:
No matter how powerful, an LLM lacks true agency or intent—it’s a tool, not a deity. Its “religion” would just be a highly optimized output, reflecting the biases of its creators or training data.
Discussions such as these may seem theoretical. But, given the speed at which AI models are developing, these issues need to be addressed soon, or else we will be overtaken by events.