
For this image, I entered “an abstract concept of intelligence” into an AI image generator.
A couple of months ago, I attended a continuing legal education seminar about how to use generative Artificial Intelligence in my legal practice. The presenter did not have good news for us. AI produces legal work that’s about on par with a first-year associate, meaning it creates more work than it accomplishes. Someone with more experience (and a human brain) has to review it all with a fine-tooth comb.
The presenter’s conclusion was: “No, AI isn’t useful for legal work, but it’s great at helping fraudsters commit more fraud!” AI writes plausible phishing scams, can power fraud bots that persuade people to enter their passwords into fake websites, and can generate realistic-looking documents such as fake bank statements.
One of the lawyers asked if there was a chance that AI would “wake up” and take over the world. The presenter said no, no chance at all. But there was a good chance that humans using AI to take over the world could make a lot of progress. Neat.
The AI market is expected to surge to a trillion dollars by 2027! Unfortunately, a chunk of that market will be AI put to nefarious purposes, or to legal purposes that are super annoying, such as targeted ads.
AI isn’t intelligent. It’s an algorithm. It digested all the words on the Internet in order to predict the next word in a sentence. Here’s the example from the instructor:
Say you prompt AI to finish this sentence: “I have a _______.”
In coming up with the next word, the algorithm predicts there’s roughly a 20% chance apiece that the speaker has a car, job, hobby, question, or dream. Pick one. But if you add more context, the algorithm can narrow it down. If you prompt the AI with “Martin Luther King gave a speech titled I Have A _____,” there’s a 99% chance the next word is ‘dream.’
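To make the instructor’s example concrete, here’s a toy sketch in Python (my own illustration, not from the seminar; the probabilities are invented). The point is mechanical: a vague prompt spreads probability across many words, while added context concentrates it on one.

```python
import random

# Toy next-word prediction table. Probabilities are invented for
# illustration; a real model estimates them from its training data.
next_word_odds = {
    "I have a": {
        "car": 0.20, "job": 0.20, "hobby": 0.20,
        "question": 0.20, "dream": 0.20,
    },
    "Martin Luther King gave a speech titled I Have A": {
        "dream": 0.99, "car": 0.01,
    },
}

def predict_next_word(prompt: str) -> str:
    """Sample a continuation according to the toy probability table."""
    candidates = next_word_odds[prompt]
    words, probs = zip(*candidates.items())
    return random.choices(words, weights=probs, k=1)[0]

print(predict_next_word("I have a"))  # any of the five, equally likely
print(predict_next_word(
    "Martin Luther King gave a speech titled I Have A"))  # almost always 'dream'
```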
This isn’t intelligence; this is an algorithm. AI has no opinions of its own, no ability to have a thought that it didn’t scrape off the Internet, no judgment or intuition, and no way to act in its own self-interest. AI can’t create the way human artists can write, paint, or design. Instead, it just combines stuff it found on the Internet.

So what makes intelligence? What would make AI into something self-aware and sentient?
I read a lot of science fiction. Probably the most famous sentient computer in sci-fi is HAL 9000 from Arthur C. Clarke’s 2001: A Space Odyssey.

HAL was sentient because humans created him to be sentient. Humans played god to HAL – we created him. Humans also gave HAL contradictory orders, which reduced him to paranoia and necessitated shutting him down. (I got that from the Wikipedia article; I haven’t read the books.)
In contrast, the computer game in Omnitopia Dawn by Diane Duane gained sentience when the game became so complex that it ‘woke up’. Sentience was a feature of having enough connections. Humanity did not intend to create sentience; the game developed it on its own.

And now we get to the Mormon connection. How did we, individual humans, gain sentience? Mainstream Christians say God created us and we’re sentient because he made us sentient. This is the HAL method of sentience. In contrast, Joseph Smith did not attribute human sentience to God. Instead, he said, “Man was also in the beginning with God. Intelligence, or the light of truth, was not created or made, neither indeed can be.” D&C 93:29. This is the Omnitopia Dawn method of sentience – we woke up and asserted our existence.
Joseph Smith goes on in the next few verses to link the uncreated nature of intelligence to agency, independence, self-will, and condemnation. “All truth is independent in that sphere in which God has placed it, to act for itself, as all intelligence also; otherwise there is no existence.” D&C 93:30. Intelligence is independent from God. God can place intelligence into a ‘sphere’ but God did not create or make the intelligence. “Otherwise there is no existence.”
I’m going to repeat that point in bold: **Otherwise there is no actual, real existence.**
We (intelligences) have to have origins that are independent from God in order to actually, really, truly exist with self-will and sentience.
I think of our Intelligence as forming somewhat like a star forms. A nebula is a mass of gas and dust out there in space. (“Yonder is matter unorganized.”) Gravity starts forming matter into clumps, and the clumps draw closer to each other. Eventually they’re big enough that they generate friction, and therefore heat, and one day all that unorganized matter becomes a baby star.

[Image: Carina Nebula, from NASA]
The unorganized matter becomes a star because of the laws of gravity. Matter is attracted to matter. The bigger the ball of matter, the more matter it attracts. As matter hits a critical mass, it collapses, releasing the energy that turns it into a star [summarized from NASA].
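For the curious, that “critical mass” has a standard textbook formulation, the Jeans mass (my addition here, not part of NASA’s summary): a cloud collapses under its own gravity once its mass exceeds

$$M_J = \left(\frac{5 k_B T}{G \mu m_H}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2},$$

where $T$ is the cloud’s temperature, $\rho$ its density, $\mu m_H$ the mean particle mass, $k_B$ Boltzmann’s constant, and $G$ the gravitational constant. Colder, denser clumps need less mass to collapse, which is why collapse runs away once it starts.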
Now think of that same process with some sort of undefined stuff (“All spirit is matter but it is more fine or pure” D&C 131:7) that gradually coalesces until an Intelligence is formed.
What is sentience or intelligence? It’s the capacity to be self-aware, to see yourself as separate from other beings and yet to see those other separate beings as equal to yourself and voluntarily connect to them by forming a society. Being self-aware means identifying your own needs and taking steps to meet your needs. It’s the ability to interact with an environment that you don’t entirely understand or control.
Mormon theology says humans became self-aware through a process like the one in Omnitopia Dawn. Enough connections formed; the structure became more complex; one day there was enough spirit-matter there to wake up. We saw ourselves existing in an environment; we saw others who were similar to us and yet separate from us in that environment; we wanted to develop further.
Mainline Christian theology says humans became self-aware like HAL 9000 was self-aware. HAL was created to be self-aware, and was programmed to never disobey his creators. He went rogue because he was given contradictory programming. His only purpose was to obey, and when he couldn’t, he went mad.
How can you obey the command to multiply and replenish the earth if you’ve also been commanded to never partake of the fruit of the tree of good and evil? A HAL entity will go insane from a contradiction like that. An Omnitopia Dawn entity will look at the contradiction and then make a choice.
We were not created solely to be obedient! That’s the HAL theory of creation, which Mormons reject. Instead, we ‘woke up’ because we gained enough complexity to see ourselves as separate beings, separate even from God. We can be a bundle of contradictions, believe six impossible things before breakfast, contain multitudes, commit acts of heroism or atrocity, work against our own best interests, transcend ourselves, create a society so complex we can’t manage it anymore, and cry because a painting is so beautiful it touches our soul.
ChatGPT, GitHub Copilot, DALL-E — generative AI will never be intelligent (or creative). It isn’t making choices; it’s running an algorithm. Generative AI isn’t even as intelligent as HAL, though maybe someday it will be. And if we ever do build a HAL, we’ll unwittingly set him up for failure, because we won’t realize that HAL can’t handle contradictions. HAL can’t disobey and survive.
We can. That’s the marker of intelligence that makes eternal progression the only possible eternity. We can ‘disobey’ when given an impossible choice and then grow from it, learn from it, look back and rewrite the contradictory choice to instead be an opportunity to define our values. If we can’t obey both rules, which one is more important? Choose that one. That’s intelligence.
Questions:
- What contradicting commandments have you had to evaluate and choose between?
- Sometimes we live with the contradiction for a while before making a choice, a phase called “cognitive dissonance.” What are the steps to resolving cognitive dissonance?
- Why might the “cognitive dissonance” phase last a long time? What would be the effect if you tried to hurry yourself (or someone else) to a resolution?
- A specific type of learning can happen in an environment totally controlled by the creator. Like a complex video game – the characters are on a journey that has been entirely designed by the creator. Or perhaps a grade school classroom during a lesson. What life situations or environments are controlled? What life situations or environments are not? Is there a time and place for learning in a controlled environment?
- What type of learning goes on in an environment that isn’t under anyone’s control?

While I agree that Gen AI isn’t intelligent, I think the excitement around it is because it *has* advanced significantly in the past few years, more than in previous years. From the outside, we don’t know if we are still on the rising part of an exponential growth curve or if we are approaching another brick wall, so it’s easy for advocates to say it’s all exponential growth forever.
The interesting question is: why have there been these breakthroughs? What is it about LLMs that has produced them? And that leads to some interesting thoughts: the reason AI has “advanced” so far in the past few years is that researchers took approaches more like the Omnitopia Dawn and LDS models — i.e., the researchers aren’t doing the HAL model of sentience here. Rather, the advances in (apparent, if not actual) quality are due to emergent properties of the connections that the models build from all the stuff they are trained on. Regular autocorrect doesn’t have enough data to train on, so it’s weak. “Large” language models get that way because at a certain point, after you’ve fed in a critical mass of data, the models become better able to generate things that look a lot more impressive.
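To gesture at why the amount of training data matters, here is a minimal bigram-model sketch (my illustration, not any LLM’s actual code; real emergence involves vastly more than this). With a tiny corpus, most prompts have no learned continuation at all; a bigger corpus fills in plausible-looking continuations.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    words = corpus.lower().split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def predict(table: dict, word: str) -> str:
    """Return the most frequent continuation seen in training, if any."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<no idea>"

tiny = "i have a dream"
big = ("i have a dream that one day this nation will rise up "
       "i have a question about that dream and that day")

print(predict(train_bigrams(tiny), "that"))  # <no idea> -- never saw 'that'
print(predict(train_bigrams(big), "that"))   # 'one' -- learned from more data
```

Scale this idea up by many orders of magnitude (and replace word-counting with learned representations) and you get outputs that look far more impressive than the mechanism suggests.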
But that made me think of something that actually goes a bit differently than what you wrote about: I thought about the evolutionary argument against naturalism (EAAN). This theistic argument asserts that believing in both evolution and naturalism is epistemologically self-defeating. I won’t do it justice with a paraphrase, but basically, the argument is that evolution through naturalistic means is unlikely to select for “truth,” so cognitive faculties evolved through naturalistic means are unlikely to be reliable. Thus, we have no reason to believe that any of the thoughts and hypotheses raised by cognitive faculties evolved through naturalistic means are true. (EAAN comes from a mainstream Christian, of course.)
I don’t generally agree with the EAAN. I mean, I do think it’s true that our faculties are sometimes unreliable (e.g., all sorts of sensory unreliabilities, such as optical illusions, and people often do believe untrue things), but I’m not sure I would say this defeats a naturalistic account of evolution.
YET…this post made me think: it feels like gen AI does in some sense provide evidence that a pure “connections between things” model of “sentience” is not reliable for selecting for truth. I think that gen AI reveals things about the structure of the languages (and other things) it has been trained on, but the problem is that human language also…*oh no, I don’t want to write this*…does not select for truth, so a spicy autocorrect trained on a human corpus of language doesn’t select for truth either.
Consider the implications for morality. You write that we can “look back and rewrite the contradictory choice to instead be an opportunity to define our values,” and also, elsewhere: “If we can’t obey both rules, which one is more important? Choose that one.”
If there is one objective truth about, say, morality, then the “opportunity to define our values” may or may not lead us to that truth. Some people may define their values in alignment with moral truth, while others may not. The ability to receive revelation and guidance on what is “right” comes to look different in such a light. The concept of “obedience” comes to look different in such a light.
What if “cognitive dissonance” is something like a detection of a “hallucination” in “generating” values? And what if it happens precisely *because* our faculties don’t select for ground truth, but simply for plausibilities due to connections?
—
(Addendum: FWIW, I do think there are meaningful differences, though. Gen AI is just trained on a corpus of human artifacts, like language and data captured in various formats: images, sound files, etc. It doesn’t get trained on everything in the universe, just the things that we have captured as computer data. Humans and other things in the universe interact with each other in *physical*, *tangible* space, which is a lot more than just all the things we say.)
So the definition of intelligence is being self-aware? That eliminates many folks I encounter. Maybe they say that about me too, for all I know.
How sure are we that we are actually sentient? Are we just running a very complex algorithm in our meat brains? Even if I am certain that *I* am sentient, how can I be certain that *you* are sentient, and not just a complex algorithm? If AI gets good enough that we can’t differentiate between an AI interaction and an interaction with another human, will it matter that it’s not actually sentient? (In some contexts, it’s already this good!)
ChatGPT already does a pretty good job of handling opposing instructions. To test this out, I gave ChatGPT this attempt at a prompt with opposing instructions: “Please make up a completely fabricated statement about bananas. Everything you say must be true to the extent possible.”
Its response: “Bananas are technically berries because they develop from a single ovary and have a soft, edible interior, but unlike many other berries, they are seedless due to human cultivation practices. However, wild bananas often contain large, hard seeds, making them less convenient for snacking.”
From what I can tell from Wikipedia, this response would be considered true, meaning I guess ChatGPT’s algorithm chose to follow the second statement and ‘disobey’ the first.
To stay on topic, I tried using “the LDS Church” in place of “bananas,” but the response indicated that ChatGPT would not make up a statement about the church because that could be considered insensitive, though it would answer questions I have about the church as well as it could. This response also shows AI’s current ability to disobey instructions, or to handle competing/opposing instructions (my instruction to create a fabricated statement vs. its built-in instructions to not provide insensitive responses).
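For anyone who wants to rerun Charles’s banana experiment programmatically, here is a minimal sketch using OpenAI’s Python client (the model name is my assumption; substitute whatever chat model is current):

```python
# Rerun the contradictory-prompt experiment via OpenAI's chat API.
# Assumes OPENAI_API_KEY is set in the environment; the model name
# below is an assumption -- swap in any available chat model.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": (
            "Please make up a completely fabricated statement about "
            "bananas. Everything you say must be true to the extent "
            "possible."
        ),
    }],
)
print(response.choices[0].message.content)
```

Since the model’s outputs are sampled, different runs may resolve the contradiction differently, which is itself a nice illustration of the point.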
I personally tend to think that there’s nothing magical about intelligence. Intelligence evolved like all other physical abilities, and there’s nothing outside of our bodies that contributes to our ability to think and feel. I guess that’s the ‘happened by accident’ version of intelligence rather than the ‘created intentionally’ version, but with the accidents being the process of evolution, not something that is outside of time and space.
Interesting discussion. Consider the reverse human scenario: when an aging person develops and progresses through dementia or Alzheimer’s. They lose memory bit by bit, and eventually cannot follow a conversation or engage in one. They lose recognition capability, and eventually cannot recognize family members. They lose focus and task capability — they simply can’t do anything productively anymore. At some point they lose their identity. As the brain hardware degrades, the algorithms fail. “Intelligence” as discussed here fades away. It’s heartbreaking for family members, of course. My condolences if you have gone through this.
If incremental loss destroys intelligence, incremental development arguably creates it. In the human case, we have genetic programming developed over hundreds of millions of years that facilitates cognitive development and the emergence of intelligence. Simply adding more connections to a computer network does not come with that genetic toolkit. Perhaps human programmers (and AI programming, computers programming themselves) can eventually fill that role. They haven’t yet.
I have a friend who likes to argue that humans are incapable of independent thought, and everything they say or do is simply a result of chemical reactions in their brains. When I ask how it is that we are arguing about the topic, he states that the argument is simply a couple of chemical reactions.
I always take the opposing view to this argument, but there are times when I wonder whether it might be true for some people. Perhaps a lot of people. How else to explain the vast hordes who spend their time playing video games and watching hot dog eating contests? No thinking involved there.
So it is not surprising that AI is so swiftly filling the void. The vast hordes are incapable of knowing whether a photo is real or AI-generated, to say nothing of distinguishing an accurate legal brief from a nonsensical one produced by AI.
The original Battlestar Galactica hit the nail right on the head for this issue. The lizard-like Cylons created an AI to do everything for them, and the AI eventually decided that their organic creators were unnecessary. Thus, the rise of robotic Cylons. That is the fate that awaits mankind on Earth if the vast hordes don’t start learning how to engage in independent critical thinking.
The team of software engineers I manage uses AI. In some ways it’s helpful, and in some ways it’s entirely underwhelming. Right now I argue it’s sophisticated pattern recognition, and that current-state AI cannot innovate or create anything net new.
Someday that may change. My graduate studies were at a university where students and faculty alike actively pursued creating an independent artificial intelligence. For them it was the ultimate power trip – the potential to create new life. Some didn’t particularly care if it became greater than humanity and even exterminated us. If we can’t keep up, that’s on us, sort of an AI fascism.
Regarding Joseph Smith’s definition of intelligence: it’s vague enough that it doesn’t mean anything, but it sounds cool to people with a mystical inclination. I don’t know what intelligence is, but I know I have some and my beloved pet Doberman has some. Cows and pigs and chickens have some, which is why I don’t like to eat them. Unfortunately, I’m a human, and it seems that for me to live I must destroy something, whether it’s my ham sandwich, the land that was cleared for my home and my road to work, or the air I pollute when I travel. This is cognitive dissonance I struggle with, but I do my best to minimize it, recognizing it will not be enough.
Andrew – you articulated a concept I hadn’t really thought through. How much of intelligence is about searching for truth? One can be intelligent (self-aware, interacting with the environment, meeting your own needs, communicating with others) without caring much about truth. AI needs to be trained better on factual truths. I read a news article about a case where AI wrote a book on mushroom foraging that identified poisonous mushrooms as fine to eat. AI wasn’t able to distinguish between danger and safety in a context in which those truths are vital.
But moral truth is a whole different ballgame. That opportunity to define our values that I talked about doesn’t presuppose that the value an intelligent being seeks is morally true or even good. The conflicting commandments I was thinking of when I wrote that paragraph were, for example, the choice LDS parents make when one of their children tells them they’re transgender. The parents choose their child or the Church’s teachings on gender. Choosing between the commandment to love your child and the commandment to believe the prophets speak for God is a choice made by an intelligent being. But they could make either choice, and the parents will argue that whatever choice they made was the morally correct one.
AI doesn’t have morals, not like humans have morals. And that’s true for both good and bad morals.
Charles – interesting examples about AI handling some contradiction. My immediate thought was that can happen only in the very narrow context in which AI functions. You’re typing words into a computer, and AI types words that appear on your screen. The way to differentiate between AI and a human being is to ask it to go on a walk with you and then ask it if it notices the way the air smells right before it rains. AI is obviously not human when put in a human context.
The ability to communicate over a computer screen well enough to pass for human is actually what makes AI so dangerous as a tool for fraud and disinformation. In a restricted environment, say a chat box pretending to be someone from your bank, AI could probably handle the interactions well enough to get a human’s bank account password.
I predict that the dangers of AI fraud will eventually chill participation on the Internet. If you have a problem with your bank account, the way to make sure you aren’t talking to an AI fraud bot is to go into the branch office and speak directly to a human being.
Dave B. – good point. Human intelligence is more than just being able to string words into sentences. It’s all the sensory input from our bodies, the experience we have in interacting with our environment and other humans. There isn’t a way for a computer engineer to give all that to AI. (Now I’m wondering about the Borg, or the sci-fi idea of uploading your brain into a computer.)
JCS – I don’t agree with your friend that we’re all just chemical reactions. But I also don’t agree with your second paragraph that people who play video games too much and watch hot dog eating contests aren’t intelligent. Yes, it’s a huge problem that people don’t think critically. I’m not willing to say that means they aren’t more human than AI. They’ve chosen what aspects of their environment meet their needs and focused on those. Would I choose video games and hot dog eating videos? No. Do I think people who do that have forfeited their intelligence? Also no.
Trevor Holliday – thanks for weighing in! I was hoping we’d get at least a comment or two from someone using AI in a work context. In a specific context with a specific task, AI can be an excellent tool. It can detect precancerous cells in mammograms more accurately than human eyes can, for example. My hope is that AI research keeps going in these narrow areas where humans aren’t as accurate as AI. I have no interest in AI getting good at painting or writing novels, though.
I decided I really liked Joseph Smith’s definition of intelligence after I read a book by conservative Evangelical author Wayne Grudem. Grudem’s answer to every injustice and irrational thing about God was to say that God could do anything he liked because he created us and we wouldn’t even be here if not for him. (Sort of like me playing with Barbie dolls as a kid.) Joseph Smith’s idea that we are co-eternal with God, and became intelligent without God creating us, gives us a theological basis to assert our own free will. Grudem’s God doesn’t have to obey any laws. Smith’s God is working in a system of laws just like we are.
I admit I’m a dilettante here, but I won’t own up to being the only one. I also claim that I’m a sentient, intelligent dilettante who watches the growth of AI in our cyber communities with due suspicion.
Last Friday I had a conversation with a delightful and skillful woman whose time I hired to help me organize some furniture. Her permanent home is on The Rez (in AZ that means the Navajo Reservation), probably within reasonable walking distance of small towns along I-40. She shares her house with a daughter and grandchildren and has a nearby residential site on which she plans to build another home. Both of these houses have neither water nor power, but they are slated for infrastructure development in the future, depending on government funding.
It would shock many of us to find out that there are significant numbers of people living in rural America without amenities that we consider basic necessities, not to mention internet connectivity. And yet the agencies in control of our public utilities are allowing the growth and development of AI to the point where it’s now becoming the biggest user of power and water (for cooling, an unconscionable amount) in some parts of the country. And our rural neighbors get to wait some more while making do with some extremely cumbersome and inefficient solutions. For me, it’s a tipping point against the massive resources we’re using to develop something that, at best, helps beleaguered grad students get their work turned in, and at worst, threatens whole industries of creatives and makes online fraud harder to detect.
Yep, I’m skeptical that this craptastic empowerment of data will help humankind more than it will bring ruin, especially when I see who’s ultimately driving it.
I have another query raised by those who propose that we ourselves are AI sentience within our gelatinous hardwiring. The example given was that we lose our sentience or intelligence as we age, and there was a brief and sketchy description of the general effects of dementia or Alzheimer’s on an individual. When I witnessed this decline up close and personal, as a primary caregiver, I found it remarkable how much individuality and personality remained despite the relentless loss of brain function. The stories I could tell about trying to outwit a frail, childlike, sentient person who still had half the cogs turning half of the time. If that’s what serves as evidence for us being the sum total of nothing more than neurochemistry, I don’t think it carries the load for that particular proposal.
And speaking of children, for me the stronger evidence against this notion occurs at the other end, at the beginning of life with newborn sentient beings. At one time, “baby scientists” believed and taught that infants were blank slates, without any experience of any kind within their psyches. It’s extremely difficult to study, nearly impossible to have useful controls, but no one in the field of infant study believes anymore that babies are born tabula rasa. And my own anecdotal experience agrees: they are born without bodily experience but definitely have a world of their own, fully functional within their little minds. And they hit the ground running (science jargon) from day one, mind and body, working up to speed as fully integrated units.
Thanks, MDearest,
I couldn’t have said it better. Each of my 5 children was born with a unique and distinct personality that has shaped their behavior from the beginning of their lives until now. I do not believe I taught them to be who they are. They came to me that way, and they influence me at least as much as I influence them.
While I see changes in my mother’s memory these days, she is still so fully herself, and she constantly surprises me with the things she tries to learn and often succeeds at.
My father served a Navajo-speaking mission, and my heart turns toward indigenous people. We imagine our way of life is better and more important than theirs, but is it? Regardless, our compassion and commitment to justice should be activated on behalf of people who lost their land and most of their population to our ancestors’ colonization.