“How’s your day?” A message pops up in your Facebook Messenger. You see that it’s from your now-adult child’s third-grade teacher, whom you haven’t talked to in ten years, or maybe it’s from someone you met once at a conference for a job you left long ago. If you’re like most people, you immediately know: “That’s not a real person having a real conversation.” You don’t respond. You delete it or block it and move on with your life.

I listened to a really interesting podcast about the Turing test and the rise of AI. The Turing test, also known as the Imitation Game, was proposed by Alan Turing in 1950 to test a machine’s ability to convincingly mimic intelligent human behavior (Turing’s own benchmark involved five minutes of questioning). The test includes three participants: an interrogator, a human respondent, and a machine program designed to generate human-like responses. All interactions occur through text only, via keyboard and screen. If the interrogator cannot reliably distinguish the machine from the human, the machine has passed the test.

If you do a quick Google search, you will either be told that the first time a machine passed the Turing test was 2014 (Eugene was the name of the program, but he prefers Gene), or that nobody has ever passed the test. The podcast mentioned that the first program to beat the test actually did so in 1989. This early prodigy passed in a rather unusual way: by insulting the interrogator with hostile, vulgar language that made the interrogator emotional and defensive, distracting them from noticing that the insults were repetitive and that the program escalated conflict rather than answering questions. Sound familiar?

The 2014 success, Eugene, used a related tactic to pass the test: “Eugene” identified as a young teen for whom English was a second language. This let the interrogator give a pass to quirks of language, writing them off as a byproduct of unfamiliarity with English or of the respondent’s immaturity.

The podcast shared the story of a researcher who had an ongoing “relationship” with a woman in Russia who turned out to be a machine. They wrote each other long messages for months, and he was interested in meeting in person and pursuing a real relationship when he eventually figured out that she was a computer program. He had become suspicious when she mentioned she was going for a walk; he looked up the weather and saw that it was too cold and stormy for a walk where she said she was. When he asked her about it, she ignored his question and responded with more of the same, talking about her mother and asking about him. He was also frustrated that their intimacy never seemed to deepen. She never referred back to previous conversations they had. She asked him about himself and told him about herself, but always at the same level as before. Finally, he sent her an email that was just random characters with his signature at the bottom. Her reply was the same as all of her other replies: talk about her mother and her friends, and questions about his life.

The point of the Turing test is not to assess the intelligence of the machine. In fact, if it’s too intelligent (e.g., it solves a mathematical problem that most humans could not), it will be obvious that it’s not human. The point is to deceive humans by imitating them, and mimicking stupid behavior is probably a better deception than mimicking intelligence. Turing himself suggested that deliberately making mistakes was an important part of winning the game.
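To make that concrete, here’s a minimal sketch of a bot deliberately roughening its own output so it reads as human. The specific tactic, randomly swapping adjacent letters, is my own illustration rather than anything Turing prescribed:

```python
import random

def add_typo(text: str, chance: float = 0.15) -> str:
    """Occasionally swap two adjacent letters, the way a hurried human might."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < chance:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(add_typo("No, I am certainly not a machine."))
# e.g. -> "No, I am cetrainly not a machnie."
```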

When I was still in college, I worked in a call center, and I had a supervisor who did not pass the Turing test despite being a living human being. Talking to Toby was far less sophisticated than talking to a chatbot. Rather than attempting to respond to anything you said, Toby would just repeat the last word you had said in a semi-thoughtful-sounding way. It was incredibly grating. Toby never asked you a question or even gave a response that added information. I could have programmed an AI more sophisticated than Toby when I was in high school in the mid-1980s. Toby wasn’t even trying.
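In fact, here’s roughly all it would take. The names and the canned phrasing are invented, but the “algorithm” is faithful to the man himself:

```python
def toby_reply(utterance: str) -> str:
    """Ignore everything except the last word and echo it back thoughtfully."""
    words = utterance.rstrip(".!?").split()
    if not words:
        return "Hmm."
    last = words[-1]
    return f"{last.capitalize()}... {last}, huh."

print(toby_reply("I think we should move those calls to the night queue"))
# -> "Queue... queue, huh."
```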

When a program can deceive us into believing it is human, we can see this as a machine’s success, or we can see it as a human failure. Anyone trying to beat the test is creating a program that exploits human blind spots in relationships. We mistake the following things for real human interactions:

  • Being bullied, insulted, or put on the defensive
  • Being asked to talk about ourselves
  • Hearing someone rant about a topic or go off on a tangent or only talk about themselves
  • Listening to someone who is not our equal in the conversation (e.g., someone younger or someone who struggles with communication)

Real human interaction can fail the Turing test, too. Do our relationships at Church pass, or are they sometimes like Toby: lacking in actual substance, basically just individuals all talking to themselves? I was recently asked for an update on my ministering, and I hadn’t done it; honestly, I just wasn’t really committed to doing it. It feels like forced friendship to me. I’m open to making new friends, but only if it’s based on actually finding the other person interesting to talk with, having something in common, and a give and take. Ministering assignments can be like that, too, but they aren’t always.

Online interactions can be real, or they can be fake or superficial. I’ve tried to discipline myself not to respond emotionally to insults or rants. Responding is fine, but I try not to get wrapped up in the conflict or feel defensive, because so often these types of discussions are bad-faith engagements. There’s no dialogue, just two people ranting, neither one interested or listening.

I’ve likewise begun to notice and avoid interactions that feel one-sided, with one party basically interrogating the other but not contributing, or vice versa. I have a tendency to be the ranter rather than the rantee, but that’s still not something I consider a real interaction. I could ask more questions, but sometimes I don’t like feeling like I’m pulling teeth when the conversation isn’t happening naturally.

I’ve encountered some Church defenders online, quite a few of them actually, who don’t really have interactions. They have a pro-Church agenda, and they see themselves as “defenders of the faith.” Rather than defending the faith effectively, though, they just seek out anybody they think is critical or insufficiently faithful, and they either attack, bear testimony, or shake the dust off their feet and leave. There’s no need for humans to be involved in this type of “interaction,” even if the behavior feels very human. You could literally create a chatbot to do it. The same thing can happen in person, too; it doesn’t have to be online. It’s just one-sided ranting rather than an interaction.
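A tongue-in-cheek sketch of that chatbot, with trigger words and canned replies invented for illustration, and the three-move repertoire (attack, bear testimony, leave) taken from above:

```python
import random
from typing import Optional

TRIGGER_WORDS = {"doubt", "critical", "problem", "historical"}  # hypothetical watch list

CANNED_MOVES = [
    "You clearly haven't studied this with real intent.",       # attack
    "I know it's true, and nothing you say will change that.",  # bear testimony
    "I'm done here. I'll leave you to your echo chamber.",      # shake off the dust
]

def defender_reply(post: str) -> Optional[str]:
    """Fire a canned response at anything deemed insufficiently faithful.

    The reply never engages with what was actually said, which is the
    point: no human needs to be involved in this kind of "interaction."
    """
    words = set(post.lower().replace(",", " ").replace(".", " ").split())
    if TRIGGER_WORDS & words:
        return random.choice(CANNED_MOVES)
    return None  # nothing to defend against

print(defender_reply("I have some doubt about that historical claim."))
```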

  • What percent of your communications with Church members pass the Turing test? Is that percentage higher or lower than for your interactions as a whole?
  • Have you cut back on these types of superficial interactions in your life? How did you do it?
  • Have you been fooled by a machine or program? How did you figure it out?
  • Do you think the quality of our interactions as Church members is getting better or worse over time? Why do you think so?

Discuss.