The Year of our Lord 2026 is just around the corner. Let's hope it's not The Year AI Took Over the Planet. Let's hope AI is not a ticking time bomb. If it did take over, that might be an overall improvement to the way things are run here on Planet Earth, at least in the short run. It's the long term, the unintended consequences, that worry us. To deal with this looming threat, the LDS Church has issued its own AI policy, denoted 38.8.47 in the Handbook's arcane numbering system and titled "Appropriate Use of Artificial Intelligence." Let's dig in.
First, in a glaring omission, the policy does not say "artificial intelligence should not be used to take over the world." That's because the policy isn't really worried about things like that; it is only concerned with uses and abuses of AI within the Church. LDS leaders frequently opine on political and social issues (always framed as moral issues to justify the comments), so it would not be out of line for them to issue a broader caution about the possible malign uses of AI. But they didn't. So let's see what they did, in fact, talk about. I'll put quotes from the policy in quotation marks for convenience.
The introduction notes that "AI should be used responsibly" and that "AI cannot substitute for the individual effort or divine inspiration required for personal spiritual growth or genuine relationships with God and others." I have no doubt tech-savvy LDS teens will soon figure out it's easier to ask AI to summarize Isaiah in four paragraphs than to labor through all 66 chapters (although reading a modern translation rather than the clunky KJV will make that a lot easier this year). The LDS policy warns against such a techie shortcut.
The policy also warns that "members should not use AI to create or disseminate anything that is false, misleading, illegal, or harmful." They might have added that this principle applies with or without the use of AI, but I think they are worried that members, relying on AI, might *unintentionally* circulate or publish false or misleading information. As opposed to the merely human-based LDS circulation of false or misleading material. If there were a policy pledging the Church as an institution to honest, candid, and transparent communication, the caution against *members* using false or misleading communication would have more bite.
Moving along, the section titled "Learning and Teaching" adds that "AI can be a useful tool to enhance learning and teaching," then gives a couple of fluffy paragraphs describing all the non-AI ways members are supposed to study the scriptures, write talks, and teach. So this section doesn't really give the average member much direction on how to use (or not use) AI in doing LDS stuff like reading scriptures, writing a talk, or teaching a class.
The next section, “Relationships with God and Others,” tells us that “interactions with AI cannot substitute for meaningful relationships with God and others.” Oh, I don’t know, I find online interactions do, in fact, often substitute for some interactions with real humans. Maybe we are talking about three levels of interaction here: (1) direct interaction with real humans; (2) online interactions with real humans; and (3) interactions with bots and AI. I’m guessing, based on the talk Elder Bednar gave a couple of years ago, that LDS leaders are worried LDS teens and young adults will find AI boyfriends and girlfriends more attractive and easier to deal with than the real-life versions. You can read section 38.8.47.2 yourself and decide what is really going on.
The final section, “Callings and Assignments,” is probably the most helpful. “When used appropriately, AI can be an effective tool to assist [leaders and members] in their duties.” It notes that “AI can be helpful for research, editing, translation, and similar tasks.” It cautions that “leaders should not rely upon AI to provide advice to members on medical, financial, legal, or other sensitive matters.” And it issues the reasonable warning that “sensitive information, such as Church records, personal member data, or confidential communications, should not be entered into AI tools that are not provided or managed by the Church.”
So re-read that second quotation in the above paragraph. Research, editing, and translation sound like the stuff the curriculum and translation departments at the COB do, which the policy says is okay. It does not come out and say that members can use AI to write a talk or compose a lesson, which is roughly similar to research and editing, albeit at a simpler level. But neither does it say that members should *not* do that.
When members start reading AI-generated talks over the pulpit (if they aren't already), some GA is going to address it in Conference, no doubt. Here's the thing: the overall quality of LDS talks will probably go up if this happens. But some false and misleading info will get mixed in as well, given that AI has a real problem filtering out phony facts and misleading claims. The other place the average member will likely encounter AI is in articles in LDS magazines and on the sprawling LDS.org site. "Slop" is the term for the AI-generated content now flooding the Web; we see it all the time, whether we realize it or not. I doubt LDS slop will be any better than secular slop.
So, human readers, what do you make of all this?
- What do you make of the new LDS AI policy?
- Have you encountered AI-generated LDS talks or articles?
- Have you heard anyone explicitly state over the pulpit that AI wrote their talk?
- Have you used AI to help you write a talk or lesson? Was the resulting product improved?
- Are all the doom-and-gloom warnings about AI just techie anxiety, or is this really going to change the world for the worse? Or maybe the better?

There’s also a write-up at the LDS Newsroom about the new policy:
https://newsroom.churchofjesuschrist.org/article/general-handbook-enduring-guidance-artificial-intelligence
I think this is already insinuated by your post, but the only reason the quality of sacrament meeting talks would perhaps "increase" with chatbots is that many members write such uninspired and rambling talks to begin with that the dull, bland mediocrity of LLMs would be an actual improvement for them. Yet that's merely an improvement from a D essay to a C; I still don't want to listen to either during my short time on this earth.
But then again, we have been training members to write like LLMs for years now: blandly regurgitating conference talks rather than adding anything new to the conversation. That is, the problem isn't that members might use machines to write their talks, but that they have been taught to write like machines all along.
I recently read something about the young girl who writes talks for Bednar. Is having a ghostwriter different from just using ChatGPT? Bednar supposedly gives her a few prompts and stories to include, and she has learned to adapt her writing style to sound like a right-wing conservative bully. Will she soon lose her job to ChatGPT to save money, or does Bednar view a ghostwriter as having integrity while AI does not?