Generative AI and the Church

“Out here using AI to augment my Bible study. I have no apologies whatsoever.” These were the words a Lutheran pastor in the United States posted online in January 2025, along with a screenshot of a recent interaction with ChatGPT. “Using ancient sources, write about a 100 word account of the death of Herod the Great that I can use for my Bible study,” he had instructed the AI, later also requesting “some quotations from Josephus about this.”

ChatGPT complied, giving him the information he requested. I quote here from the chatbot’s response to the pastor’s second request:

“Below are three brief quotations from Josephus’ Antiquities of the Jews (Book 17), describing Herod’s final illness. These excerpts come from William Whiston’s public-domain translation.

1. ‘Now Herod’s distemper . . . engendering worms.’ (Antiquities 17.168)

2. ‘He had a fiery heat upon him, which was not so apparent to those that touched him, but augmented his inward pains; for it caused a vehement itching over all the surface of his body.’ (Antiquities 17.169)

3. ‘He was choked with a watery humor . . . and had convulsions in all his limbs.’ (Antiquities 17.169-170)”

With just a brief AI-assisted conversation, then, the pastor was able to get the information he wanted for his Bible study. The only problem? ChatGPT was wrong. The AI falsely claims to be quoting Whiston (only three of the quoted words are actually his); fails to identify the two translations it really draws on; generates its own paraphrased quotations; cites the wrong passages; conflates one work, Antiquities, with another, The Jewish War; and invents outright a symptom (choking with a watery humor) that Josephus never mentions at all![i]

I shared all this information with the pastor in question, noting that his example actually shows why uncritical reliance on AI can be dangerous. The pastor never responded directly, but he did delete his initial post. So, while he may have claimed “no apologies whatsoever” for using AI to create a Bible study, he did at least show some embarrassment when confronted with its errors.

ChatGPT’s mistakes in this instance are by no means unusual—and that’s because AI does not work the way many people seem to think. Google Gemini, Grok, ChatGPT, and other AI chatbots are built on large language models (LLMs). LLMs are trained on vast amounts of text, from which they learn to predict, one piece at a time, what an appropriate-sounding response to a user’s query should look like. But the point of these models is to produce fluent-sounding responses, not to double-check that the responses they give are factually correct. When you ask an AI chatbot for information, it does not generally search the web for you and synthesize what it finds into a fact-checked response. Its answers instead arise from text-prediction models trained on previously assimilated text. The result can certainly sound correct—even when it is wrong. Very often, AIs will provide out-of-date information, fail at basic reasoning (especially with math), and even “hallucinate” information and sources which are completely false. In fact, some suggest that such hallucinations may be an inescapable consequence of the way in which current LLMs are built.[ii]
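
For readers with a programming background, a minimal sketch may make this concrete. The toy Python program below is emphatically not how a real LLM works under the hood (it uses a simple word-pair frequency table rather than a neural network, and its three “training” sentences are invented for illustration), but it captures the basic mechanic: each next word is chosen because it is statistically plausible given what came before, and at no point does anything check whether the resulting sentence is true.

```python
# A toy "language model": a bigram table that predicts each next word
# purely from counts of which words followed which in its training text.
# This is a drastic simplification of a real LLM (no neural network,
# and the tiny "corpus" below is invented for illustration), but the
# core point carries over: generation selects statistically plausible
# continuations, and nothing in the process verifies factual truth.
import random
from collections import defaultdict

corpus = (
    "herod the great died in jericho . "
    "herod the great rebuilt the temple . "
    "josephus describes the death of herod ."
).split()

# Record which words follow which in the training text.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=10):
    """Produce a fluent-sounding continuation, one predicted word at a time."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Choose the next word in proportion to how often it followed
        # the previous one -- plausibility, not truth.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("herod"))
# Output might read smoothly ("herod the great died in jericho .") or
# splice the training sentences into something false ("herod the great
# rebuilt the death of herod ."); the model cannot tell the difference.
```

Real LLMs are vastly more sophisticated, of course, but the fluency-over-factuality dynamic is the same one that produced the pastor’s spurious Josephus quotations.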

Many people—including those in the church—seem unaware of this potential for error when relying on generative AI. I saw this firsthand at a Lutheran academic theological conference in 2024. As I listened to one presentation, I began to worry that what I was hearing sounded suspiciously like AI-generated text; the presentation lacked depth, and linguistic cues common in AI-generated text were abundant.

I have previous experience investigating academic and professional plagiarism,[iii] and so have followed with interest developments in the use and abuse of generative AI in academic writing. I knew as a result that a tell-tale sign of AI-generated papers is that the citations they provide are deeply unreliable. So, as I listened to the presentation in question, I turned to the works cited and began to double-check the sources online. I quickly determined that the vast majority of books and journal articles listed simply did not exist. Based on the user’s prompts, the AI had generated a convincing-sounding bibliography—but not an accurate one.

For obvious reasons, then, this presentation was never accepted for publication. But even if all the information in the bibliography had been correct, it would still have been necessary to reject the presentation for the simple reason that the purported author never wrote the material in question. A more thorough analysis of the paper that I subsequently conducted at the request of conference organizers indicated that the vast majority of the paper was likely AI-generated. And obviously, taking credit for something you never researched or wrote—especially in the context of a theological conference—is sin, plain and simple.

The Pastor’s Vocation and the Word of God

Even if large language model AIs could produce 100 percent accurate information 100 percent of the time, it would still be wrong for clergy to rely on them to produce the content of sermons, Bible studies, and other teaching materials. For the church has not called its ministers merely to pass along accurate information to its people; it has instead called them to be Seelsorger—that is, shepherds of souls. Pastors serve as under-shepherds of the greater shepherd, Jesus Christ (1 Pet. 5:1-4), and, as such, are to provide thoughtful and individualized care to the sheep in their charge. Such ministry cannot be outsourced to AI. It requires prayer and meditation upon God’s Word,[iv] as pastors seek to feed the sheep entrusted to them with the same Word upon which they themselves are feeding.

To that end, the church has the right to expect its ministers to reflect on God’s Word themselves and not to farm out this meditation to soulless technology. Along with the administration of the Sacraments, the study and preaching of God’s Word is the pastor’s primary duty. Pastors are to “devote [themselves] to the public reading of Scripture, to exhortation, to teaching” (1 Tim. 4:13), St. Paul writes. “Practice these things, immerse yourself in them,” he urges (4:15). “Keep a close watch on yourself and on the teaching. Persist in this, for by so doing you will save both yourself and your hearers” (4:16).

And again: “I charge you in the presence of God and of Christ Jesus, who is to judge the living and the dead, and by His appearing and His kingdom: preach the Word; be ready in season and out of season; reprove, rebuke, and exhort, with complete patience and teaching” (2 Tim. 4:1-2).

It is unquestionable, of course, that church workers in North America today are under tremendous strain. The work is extensive, and the labourers seem increasingly few (Matt. 9:37). In light of these challenges, it is understandable that pastors might wish to streamline certain aspects of their work. And AI may indeed be useful in some of these contexts—in the optimization of mundane aspects of congregational administrative work, for example, or perhaps with translation in mission contexts.[v] But it is a sin against God for pastors to outsource their meditation upon and preaching of God’s Word to generative AI. Confessional Lutherans do not permit non-Lutherans to preach in our pulpits; why would we allow a computer algorithm, which is incapable of faith, to do so? “Can the blind lead the blind? Will they not both fall into a pit?” (Luke 6:39).

It is better by far for overworked pastors in need of assistance to rely on materials developed by other faithful Christians, giving appropriate credit when doing so. Utilizing sermon helps, homilies, and Bible studies prepared by other Lutherans who have likewise wrestled with God’s Word is a faithful way to fill gaps when and where they arise. We live in a period in which such resources are readily available in print and online.

In the same way, congregations should refrain from using AI-generated religious art, especially since there is a plethora of free, public domain Christian artwork available online.[vi] This is not only because of the danger of inadvertent blasphemy with AI image generation,[vii] but more fundamentally because God is clear in Scripture that He cares deeply about the type of artistry we use in service of the Church. For this reason, when God gave instructions for the creation and adornment of the Tabernacle (Ex. 25-30), He Himself appointed an artist, Bezalel, to oversee the work. Of Bezalel, we read that God “filled him with the Spirit of God, with ability and intelligence, with knowledge and all craftsmanship, to devise artistic designs” (Ex. 31:3-4). This combination of attributes—ability with intelligence, craftsmanship with knowledge—reminds us that the creation of religious art is also an intellectual act. Just as writing a sermon requires personal faith and devotional reflection, so too does the creation of religious art. Using AI art generation may be quicker, but God deserves (and demands) more than mere expediency.[viii]

Generative AI Use in Light of the Ten Commandments

There are, of course, other reasons why church workers may wish to refrain from using AI technology. This becomes clear when we consider the consequences of generative AI use in light of our moral obligations to our neighbours as articulated in the second table of the Decalogue.

Consider, for example, the enormous power usage required to run generative AI technology, which has caused the cost of electricity in areas near AI data centres to skyrocket.[ix] Does it show love to our neighbours (Mark 12:31) when we inflict these ever-growing energy costs on others for tasks which could be accomplished in other ways?

Or consider the much larger question of whether using generative AI infringes on the intellectual property rights of others. Many AIs have been trained on material created and owned by other people. This means that, when you request text or visual imagery from an AI, it is regurgitating bits and pieces—information, voice, artistic style, and so forth—which it has assimilated from others, with no credit or payment rendered to the original creators. And yet, as Scripture repeatedly affirms, “The worker deserves his wages.”[x] Does it show love to these writers and artists when we use AI which has profited from their work without credit or compensation? Are we taking seriously our obligations under the seventh commandment to “not take our neighbor’s money or possessions, or get them in any dishonest way, but help him to improve and protect his possessions and income”?[xi]

We must also consider the emerging mental health crisis related to AI-induced psychosis. As ever greater numbers of people spend ever greater amounts of time talking with AI chatbots, a growing number have suffered severe mental health consequences. In addition to the general problem of people withdrawing from real-life social interactions in favour of conversations with AIs, a concerning number have also experienced serious delusions connected with or exacerbated by their use of AI. Some believe they are in a romantic relationship with an AI. Some have had paranoid (and dangerous) beliefs reinforced. Others believe they are speaking with dead relatives through AI—or even with God Himself. And some, sadly, have reportedly committed suicide with the encouragement of AI.[xii],[xiii],[xiv]

In October 2025, OpenAI estimated that around 0.07 percent of ChatGPT’s users in a given week show possible signs of psychosis or other mental health emergencies in their conversations with the AI[xv]—a percentage that might seem small until you remember that, with roughly 800 million people now using ChatGPT weekly, it amounts to hundreds of thousands of people every week.[xvi] Does it show love to our neighbours to affirm and normalize the use of generative AI in our churches when its use is inflaming a mental health crisis in wider society? Are we taking seriously our obligations under the fifth commandment to “not hurt or harm our neighbor in his body, but help and support him in every physical need”?[xvii] Even for those who never experience AI-induced psychosis, there may be consequences related to the use of AI; recent research suggests, for example, that reliance on generative AI while writing may lead to a decline in cognitive ability.[xviii]

This is to say nothing of those who intentionally use generative AI to inflict harm on others. Already there are serious concerns in our society about the creation of fake defamatory audio and video of political leaders (a sin against the fourth and eighth commandments) and the spread of AI-generated pornography (a sin against the sixth and eighth commandments). Both of these acts can result, of course, in severe mental anguish for those targeted, making such deeds a sin also against the fifth commandment.

There are limits, of course, to these kinds of arguments; any technology can be put to use in service of sin, but that does not mean that the technology in question is itself inherently sinful. Nevertheless, the sheer volume of practical harms associated with generative AI—against creators whose intellectual property rights have been ignored, against the mentally unwell, against the victims of fake news and fake pornography, and even against society as a whole—should give the church pause about embracing it with open arms.

The Role of Discernment

We are reminded by Scripture that teachers in the church must “have their powers of discernment trained by constant practice to distinguish good from evil” (Heb. 5:14). The development of generative AI is an opportunity for just this sort of critical discernment by Christian leaders today. Indeed, the rapid spread of this new technology demands thoughtful and careful discernment by the church.[xix]

Where new technologies can assist pastors and congregations in their various duties, well and good. But we must be vigilant not to blindly adopt such technologies without seriously considering the ramifications of our actions, especially in light of our moral duty to God and to our neighbours. God help us to do so aright with this and every new tool, as we seek to “test everything” and “hold fast to what is good” (1 Thess. 5:21).

 

[i] The first three words in ChatGPT’s first example are indeed from Whiston’s translation of Antiquities 17.168. But after the ellipsis comes a snippet from the Loeb edition of The Jewish War I.33.5—an entirely different book with an entirely different translator. That’s some ellipsis! (Antiquities also includes a reference to worms, of course, but the specific text ChatGPT is reproducing here comes from The Jewish War, not Antiquities.)

Example two, meanwhile, is not from Whiston’s translation at all. Instead, it is an AI-generated paraphrase of two separate statements in Antiquities and The Jewish War. The first part is adapted from 17.168 (not 169 as ChatGPT claims). The second part (about itching) is paraphrased instead from The Jewish War (in Whiston, this is I.33.656). There is no reference to itching in Antiquities, so it is clear here that the AI is conflating the two works.

The third example provided by ChatGPT is the worst of the three because it invents a symptom that is not in Josephus at all: choking with a watery humor. No such symptom is given in either Antiquities or The Jewish War. Josephus does say Herod had trouble breathing when he was sitting up. But the only references to water in either account are that he had “an aqueous and transparent liquor” (Antiquities) or “dropsical tumors” (The Jewish War)—and these were in his feet, not his lungs. The AI seems to have gotten the term “watery humor” from the Nicene and Post-Nicene Fathers (NPNF) edition of Eusebius’ Church History. Eusebius, quoting Josephus, says that a “watery and transparent humor” afflicted Herod; but again, the text says this “settled about his feet.” He was not choking on it.

The reference to convulsions in Herod’s limbs, meanwhile, accurately reflects the meaning of Antiquities 17.169. But again, the AI is not quoting Whiston as it claims; it is actually quoting from the NPNF translation of Eusebius’ Church History.

[ii] Brodsky, Sascha. “The hidden incentives driving AI hallucinations.” IBM Think. September 18, 2025. https://www.ibm.com/think/news/hidden-incentives-driving-ai-hallucinations.

[iii] For example, see: Block, Mathew. “A Vatican spokesman’s alleged plagiarism is more than cheating—it’s a breach of confidence.” The National Post. February 22, 2019. https://nationalpost.com/opinion/opinion-a-vatican-spokesmans-alleged-plagiarism-is-more-than-cheating-its-a-breach-of-confidence.

[iv] Cf. Luther’s “three rules” for the “correct way of studying theology”—i.e., “Oratio, Meditatio, Tentatio.” No AI, of course, can truly participate in prayer, meditation, or the suffering of Anfechtung—and so no AI can be a “theologian” in the proper sense. See: Luther, Martin. “Preface to the Wittenberg Edition of Luther’s German Writings.” Tr. Robert R. Heitner. In Luther’s Works, Vol. 34: Career of the Reformer IV. Ed. Jaroslav Jan Pelikan, Hilton C. Oswald, and Helmut T. Lehmann. Philadelphia: Fortress Press, 1999: 285.

[v] That said, even the use of AI for translation and administrative purposes should be considered cautiously, given generative AI’s tendency to hallucinate.

[vi] One place where congregations can find high-quality religious art online is Wikimedia Commons (https://commons.wikimedia.org).

[vii] Recently I saw an AI-generated image of Martin Luther posted on the anniversary of the Reformer’s birth. At first glance, the image appeared fine. But a second look revealed serious problems. In the background, there was a stained-glass window depicting Christ—but the Saviour’s face was a distorted and garbled mess of pixels. Such distortion is characteristic of AI-generated images of complex scenes.

[viii] For this reason, churches should also make use of the vocation of faithful artists by hiring them when possible to create new faith-inspired art.

[ix] Saul, Josh, Leonardo Nicoletti, Demetrios Pogkas, Dina Bass, and Naureen Malik. “AI Data Centers are Sending Power Bills Soaring.” Bloomberg. September 29, 2025. https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/.

[x] See, for example, 1 Timothy 5:18, Luke 10:7, and 1 Corinthians 9:9-10.

[xi] Luther, Martin. “The Seventh Commandment.” Luther’s Small Catechism. Concordia Publishing House, 1986.

[xii] Hill, Kashmir. “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.” The New York Times. June 13, 2025. https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html.

[xiii] Wei, Marlynn. “The Emerging Problem of ‘AI Psychosis.’” Psychology Today. September 4, 2025. https://www.psychologytoday.com/ca/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis.

[xiv] Preda, Adrian. “Special Report: AI-Induced Psychosis: A New Frontier in Mental Health.” Psychiatric News. Vol. 60, No. 10. September 29, 2025. https://psychiatryonline.org/doi/10.1176/appi.pn.2025.10.10.5.

[xv] “Strengthening ChatGPT’s responses in sensitive conversations.” OpenAI. October 27, 2025. https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/.

[xvi] Jamali, Lily. “ChatGPT shares data on how many users exhibit psychosis or suicidal thoughts.” BBC News. October 27, 2025. https://www.bbc.com/news/articles/c5yd90g0q43o.

[xvii] Luther, Martin. “The Fifth Commandment.” Luther’s Small Catechism. Concordia Publishing House, 1986.

[xviii] See, for example: Kosmyna, Nataliya, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv preprint, MIT. June 2025. https://arxiv.org/pdf/2506.08872.

[xix] For an example of the critical reflection I mean, see AI researcher Arlie Coles’ excellent article: “ChatGPT Goes to Church.” Plough. June 6, 2024. https://www.plough.com/en/topics/life/technology/chatgpt-goes-to-church.

Mathew Block is communications manager for the International Lutheran Council and editor of The Canadian Lutheran magazine.