The Year of our Lord 2026 is just around the corner. Let’s hope it’s not The Year AI Took Over the Planet. Let’s hope AI is not a ticking time bomb. If it did take over, that might be an overall improvement to the way things are run here on Planet Earth — in the short run, at least. It’s the long term, the unintended consequences, that worry us. To deal with this looming threat, the LDS Church has issued its own AI policy, denoted 38.8.47 in the Handbook’s arcane numbering system and titled “Appropriate Use of Artificial Intelligence.” Let’s dig in.
First, in a glaring omission, the policy does not say “artificial intelligence should not be used to take over the world.” That’s because the policy isn’t really worried about things like that. It is only concerned with uses and abuses of AI within the Church. LDS leaders frequently opine on political and social issues (always described as moral issues to justify the comments), so it would not be out of line for LDS leaders to issue a broader caution about the possible malign uses of AI. But they didn’t. So let’s see what they did, in fact, talk about. I’ll put quotes from the policy in bold font for convenience.
The introduction notes that “AI should be used responsibly” and “AI cannot substitute for the individual effort or divine inspiration required for personal spiritual growth or genuine relationships with God and others.” I have no doubt young tech-savvy LDS teens will soon figure out it’s easier to ask AI to summarize Isaiah in four paragraphs than to labor through all 66 chapters (although in a modern translation rather than the clunky KJV it will be a lot easier this year). The LDS policy warns against such a techie shortcut.
The policy also warns that “members should not use AI to create or disseminate anything that is false, misleading, illegal, or harmful.” They might have also stated that principle applies with or without the use of AI, but I think they are worried that members, relying on AI, might *unintentionally* circulate or publish false or misleading information using AI. As opposed to the merely human-based LDS circulation of false or misleading material. If there was a policy pledging the Church as an institution to honest, candid, and transparent communication, the caution against *members* using false or misleading communication would have more bite.
Moving along, the section titled “Learning and Teaching” adds that “AI can be a useful tool to enhance learning and teaching,” then gives a couple of fluffy paragraphs describing all the non-AI ways members are supposed to study the scriptures, write talks, and teach. So this section doesn’t really give the average member much direction on how to use or not use AI in doing LDS stuff (reading scriptures, writing a talk, teaching a class).
The next section, “Relationships with God and Others,” tells us that “interactions with AI cannot substitute for meaningful relationships with God and others.” Oh, I don’t know, I find online interactions do, in fact, often substitute for some interactions with real humans. Maybe we are talking about three levels of interaction here: (1) direct interaction with real humans; (2) online interactions with real humans; and (3) interactions with bots and AI. I’m guessing, based on the talk Elder Bednar gave a couple of years ago, that LDS leaders are worried LDS teens and young adults will find AI boyfriends and girlfriends more attractive and easier to deal with than the real-life versions. You can read section 38.8.47.2 yourself and decide what is really going on.
The final section, “Callings and Assignments,” is probably the most helpful. “When used appropriately, AI can be an effective tool to assist [leaders and members] in their duties.” It notes that “AI can be helpful for research, editing, translation, and similar tasks.” It cautions that “leaders should not rely upon AI to provide advice to members on medical, financial, legal, or other sensitive matters.” And it issues the reasonable warning that “sensitive information, such as Church records, personal member data, or confidential communications, should not be entered into AI tools that are not provided or managed by the Church.”
So re-read that second quotation in the above paragraph. Research, editing, and translation sounds like stuff the curriculum and translation departments at the COB do, which the policy is saying is okay. It does not come out and say that members can use AI to write talks or compose a lesson, which is roughly similar to research and editing, albeit at a simpler level. But it does not say either that members should *not* do that.
When (already?) members start doing this — reading AI-generated talks over the pulpit — some GA is going to address it in Conference, no doubt. Here’s the thing: the overall quality of talks and information in LDS talks will probably go up if this is done. But sometimes false and misleading info will be included as well, given that AI has a real problem filtering out phony facts and misleading claims. The other place the average member will likely encounter AI is in articles in LDS magazines and on the sprawling LDS.org site. “Slop” is the term for AI-generated content that is now flooding the Web, and we all see it all the time now, whether we realize it or not. I doubt LDS slop will be any better than secular slop.
So, human readers, what do you make of all this?
- What do you make of the new LDS AI policy?
- Have you encountered AI-generated LDS talks or articles?
- Have you heard anyone explicitly state over the pulpit that AI wrote their talk?
- Have you used AI to help you write a talk or lesson? Was the resulting product improved?
- Are all the doom and gloom warnings about AI just techie anxiety or is this really going to change the world for the worse? Or maybe the better?

There’s also a write-up at the LDS Newsroom about the new policy:
https://newsroom.churchofjesuschrist.org/article/general-handbook-enduring-guidance-artificial-intelligence
I think this is already insinuated by your post, but the only reason the quality of sacrament meeting talks would perhaps “increase” with chatbots is that many members write such uninspired and rambling talks to begin with that the dull, bland mediocrity of LLMs would be an actual improvement for them. Yet that’s merely an improvement from a D essay to a C; I still don’t want to listen to either during my short time on this earth.
But then again, we have been training members to write like LLMs for years now: blandly just regurgitating conference talks rather than adding anything new to the conversation. That is, the problem isn’t that members might use machines to write their talks, but that they have been taught to write like machines all along.
I recently read something about the young girl who writes talks for Bednar. Is having a ghostwriter different than just using ChatGPT? Bednar supposedly gives her a few prompts and stories to include, and she has learned how to adapt a writing style to sound like a right-wing conservative bully. Will she soon lose her job to ChatGPT to save money, or does Bednar view a ghostwriter as integrous while AI is not?
Not quite on topic, but I’ve noticed a very cringe thing happening in our family text thread where my boomer dad has all of a sudden discovered AI. Several times he’s sent generic well-wishes or celebrations regarding someone’s birthday, and it is so obviously AI-generated as to come off as quite insulting. When people can easily see that AI was used to generate content, especially intimate, personal content, it seems to send a signal that the relationship is not worth that much time.
Also, I use AI in a very specific way, especially when writing. I will feed my content through an AI and explicitly tell the LLM to not change any of my voice/tone/wording, but merely to clean up grammar, spelling, and punctuation. I like that approach, as I feel it maintains the essence of what and who I am.
But I also think this AI collective freak-out is no different than the advent of other technologies that came before us. I think people were worried when printed text came out that a loss of the appreciation of craftsmanship and art would occur, and that by making images and texts widely available, it would degrade the art form. Eventually, we will find some happy medium.
I haven’t explicitly encountered any AI at church, but it wouldn’t surprise me to know that a sizable minority of talks are already being at least partially written by AI. I do see it being used at work *a lot*, and nowhere more than by our R&D department. The number of times I’ve had to push back against an assertion of fact that was supported exclusively by “ChatGPT said” scares me. I don’t mind the use of AI at church or at work when it comes to brainstorming or polishing text. I don’t trust it to be knowledgeable about anything. (We’ve all seen the memes made of its disastrously wrong responses.)
JB is spot on that LDS culture has spent decades training us to give talks and lessons that are dull, mechanical regurgitations. And AI can probably already do that better than humans can. Last Sunday my ward got everyone from Sr Primary age and up together for a 2nd hour meeting where we watched an incredibly boring video from the area presidency and then paused it periodically to ask for comments. It was the worst 2nd hour meeting I’ve been to since the last second hour meeting I went to. (I go to primary every week, where the instruction is equally infantile, but at least the audience is, in fact, 6 years old.)
I’m convinced that the keys to giving a good talk are: 1) a bit of natural talent, 2) a lot of time thinking about what to say, and 3) saying something that you feel passionately about. I’d rather listen to someone talk passionately about something I don’t care about than hear them talk dispassionately about something I do care about.
One thing I’ll say for AI: it can write a perfectly serviceable talk for a kid to deliver in Primary in about 30 seconds, which is quite helpful when you find out your kid is scheduled to talk just as you’re walking out of the chapel and into the hallway to go to class.
Also, there are stories in the scriptures where there is very little to nothing in the way of easily accessible artwork representing the scenes or themes of those stories (Joseph escaping from an oft-nude Potiphar’s wife is certainly not one of them as it seems to have been a favorite of male painters for centuries – go figure). Getting AI to create relevant artwork in the style of an artist of your choosing (I like several 19th century realist painters) is really a boon for lesson preparation if you’re the kind of teacher who likes to use visual aids.
Not a Cougar, I disagree regarding the usefulness of AI-generated artwork and illustrations. My high-school age teen has a teacher who seems to have recently discovered AI and is enjoying it a little too much. He frequently inserts AI-generated illustrations into lesson materials, either as legitimate visual aids to amplify a point or as (poorly attempted) humorous attention-getters during class lectures. Some of these images find their way into materials sent home to parents. Though the content is harmless and inoffensive, the general look and feel of AI-generated “art” is so deep in the Uncanny Valley that it’s uncomfortable and cringeworthy in appearance, and it instantly sets off internal BS alarms (in those who still have functioning ones). Among the teen crowd, these images always elicit eye-rolls, groans, jeers, and wisecracks from classmates; younger people can instantly identify the artificiality and recognize it as something not to be trusted. Church-related AI-generated illustrations I’ve seen are just weird, and often doctrinally incorrect. It pretty much screams “everything in this Church is all made up,” which, whether you agree or not, is probably not the right message to convey to a Primary class.
Recently AI was able to successfully take three differing drafts of my MIL’s obituary and integrate them in a way that made everyone happy. I didn’t tell them until afterwards that it was AI that wrote it and not me, but nobody seemed to care, at least not publicly. At work we’re openly using AI as much as possible, and people are worried sick about losing their jobs. The only thing the company requires is that somewhere there is a human in the loop, so that AI isn’t monitoring decisions that AI is making.
I didn’t take the time to read the new AI policy but eventually they’ll have to be more explicit when AI gets more sophisticated. Will sex robots – a specific type of AI – be considered adultery from a church point of view, without consent from a spouse? What about with consent or participation of the spouse? Recorded hymn accompaniment is already used frequently, is that really different than AI writing a talk? What about AI generated ward and stake boundaries and AI assigned missionaries? I bet it’s already happening.
I think church HQ will use AI as much as possible to save money and time and to replace jobs but that they will try to preserve the human aspect of it at the local level as much as possible. AI to assign missionaries, absolutely. AI to assign ministering companions, no.
I don’t know. I’ve used AI to do research and it is quite good at searching tons of sources and then compiling a very digestible report for me to read. It’s also really good at helping me organize thoughts or challenge me in a way I didn’t think of before. I can’t say I’ve ever had any remotely fruitful, consistent response from God in my prayer practices over the years. If the church says we shouldn’t use AI as a replacement for relationship with God, then shouldn’t God actually show a little more interest than he currently does? It’s all well and good to say that he loves us and knows our names, but that is an exceedingly low bar. My kids would say I love them because I am present with them, show genuine interest, engage with them, help them, etc. I don’t know that God meets me where I’m at as I generally feel I’m just left to figure everything out on my own even after asking for his help.
As one who embraces AI as a learning tool, I think the tensions we are noting are real—and not new. Every major communication technology, from the printing press to PowerPoint, has raised similar concerns. AI is simply the latest mirror reflecting those concerns back at us.
This policy suggests to me that AI should be an assistant, not a replacement for moral judgment, understanding and inspiration. Writing a talk with AI help can fit squarely in that frame—if the speaker remains responsible for the message, checks sources, and thoughtfully seeks meaning rather than outsourcing it. But outsourcing the hard job of thinking is something many members are already quite skilled at, and AI is certainly not to blame.
I think that many of you are right that quality will likely rise on average. Many talks are already compilations of quotations and ideas assumed to be true by the speaker; AI simply accelerates that process. But the dangers of epistemic laziness, and of uncritically using prose or clichés that may introduce distortions, can be amplified with AI. So bad talks will likely be a bit worse.
I strongly believe that the solution isn’t banning the tool; it’s cultivating an atmosphere of learning and discernment. Teaching members how to verify, revise, and spiritually wrestle with content is far more consistent with a faith that values agency and learning “by study and also by faith.”
Jack, different strokes I suppose. My experience with AI-created art has been perfectly adequate for the intended purpose of providing some sort of visual representation of the material. Maybe the class’s collective tummy will be in knots at the sight of a simple digital painting of a fairly generic Philemon reading a letter from Paul, but I have my doubts. Of course, I absolutely do try to use original artwork wherever possible (and some of it is pretty bad — especially Book of Mormon artwork; the Book of Mormon Art Catalog website does its best to post all the art it can find, but some of it is very amateurish), but sometimes the story I’m trying to tell just doesn’t have any representative artwork.
Todd S., I grew up with one of Bednar’s speechwriters. She worked for him in the president’s office when he was at BYU-Idaho and continued to write for him when he became an apostle. She stopped when her family responsibilities became too much. She isn’t a young girl any more than I am a young man. Middle age comes for us all.