
ChatGPT, Consciousness, and the Human Mind

ChatGPT cannot replace or even duplicate the human mind. It can only imitate it.

03/01/23

John Stonestreet

Kasey Leander

In 1950, British mathematician, computer scientist, and codebreaker Alan Turing created a test to determine if machines could think. A computer and a human being would send written responses to a human interrogator. If the responses from the computer were indistinguishable from the responses of a real person, Turing argued, that machine should be considered “intelligent.” 

Seventy-three years later, the software company OpenAI has come closer than anyone in history to passing Turing’s test. Chat Generative Pre-trained Transformer, or ChatGPT, is a groundbreaking language model that replies to nearly any prompt with a coherent, well-reasoned response almost indistinguishable from what a human might produce. 

ChatGPT is, to borrow a phrase, “breaking the internet.” So far, what it has produced ranges from the impressive to the hilarious. It is also forcing a series of existential crises. For example, teachers are scrambling to distinguish the work of their students from the work of compelling AI counterfeits. The tech industry now faces what The New York Times calls “an AI arms race,” as competitors like Google apply their own AI to search engines and ad generators. The technology has made searching for errors in code, sifting through mountains of data, and summarizing complex issues in a few paragraphs exponentially easier and more user-friendly. 

Users can thank a revolution in machine learning for this technology. Computers of the not-so-distant past, as Stephen Shankland described at CNET, were “famously literal, refusing to work unless you follow exact syntax and interface requirements.” For example, a Google search for “tacos” would pull up everything tagged with the word “taco,” usually ranked by popularity. 

AI programs such as ChatGPT, however, compile information differently. Programmers “train” an algorithm to distinguish between real-world categories, including the contextual usage of words across the internet. The algorithm “learns” what those words mean and then uses them more appropriately when given a prompt. The result is a program that speaks with something like the collective knowledge of the internet, which means its output can land anywhere between constructive and misleading and, depending on what it is asked, between beneficial and hilarious. 
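For readers curious about the mechanics, here is a minimal, purely illustrative sketch in Python. It is emphatically not how ChatGPT itself is built (real language models are vastly larger and more sophisticated), but it captures the basic idea described above: instead of matching keywords literally, a program counts how words are used in context in its “training” text and then uses those statistics to continue a prompt. The tiny corpus and every name in this sketch are made up for illustration.

from collections import Counter, defaultdict

# Tiny made-up "training text," for illustration only.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count which words tend to follow which in context.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_prompt(word, length=4):
    # "Prompting": extend the text by repeatedly picking the word that
    # most often followed the previous one in the training text.
    output = [word]
    for _ in range(length):
        counts = next_word_counts.get(output[-1])
        if not counts:
            break
        output.append(counts.most_common(1)[0][0])
    return " ".join(output)

print(continue_prompt("the"))  # prints: the cat sat on the

The point of the toy example is simply that the program never “understands” cats or mats; it reproduces patterns of usage it has seen, which is why larger systems can sound knowledgeable while still being confidently wrong.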

For example, when asked to write a Shakespearean poem about Einstein in jean shorts, ChatGPT returned this gem:  

In days of yore, when Albert pondered long, 

In contemplation of the cosmos strong, 

His denim-clad legs, the air did bask, 

As he sought the secrets of the natural task. 

Despite reports to the contrary, neither ChatGPT nor any form of AI produced so far represents a new form of consciousness. It is unlikely that ChatGPT will be the program that ultimately destroys humanity, as the late Stephen Hawking feared about AI. At its best, ChatGPT is a useful tool that is able to curate and make sense of incredible amounts of information in a user-friendly way.  At its worst, ChatGPT could further erode our collective ability to think critically and has the potential to be dramatically, and even harmfully, wrong.  

The bigger concern, however, is the same one that accompanies all new technologies to some degree. What will this technology do to our understanding of what it means to be human? As many have noted, simulating a mental process is a far cry from actual consciousness.  

Though ChatGPT can certainly pass Turing’s test, consciousness cannot be reduced to intelligence, and intelligence cannot be reduced to processing information. That sort of reductionism is inherent within a worldview that reduces all of reality, including who we are as human beings, to matter. Though computers are reducible to their constituent parts, humans are not. We imagine. We relate. We don’t merely imitate; we create. ChatGPT, however creatively, only imitates. This is evidenced by a clear progressive bias in its output. For example, it is willing to promote drag queen story hour but not to argue against it. 

A few years ago, a movie about Turing’s life was made, called The Imitation Game. Imitation is just what we’re seeing here. ChatGPT cannot replace or even duplicate the human mind. It can only imitate it. Today’s AI is far more advanced than Deep Blue, the chess-playing computer that defeated Garry Kasparov more than 25 years ago, but it falls just as far short of who we are. As David Gelernter put it at the time in Time: 

How can an object that wants nothing, fears nothing, enjoys nothing, needs nothing and cares about nothing have a mind? … What are its après-match plans if it beats Kasparov? Is it hoping to take Deep Pink out for a night on the town? It doesn’t care about chess or anything else. It plays the game for the same reason a calculator adds or a toaster toasts: because it is a machine designed for that purpose. 

This Breakpoint was co-authored by Kasey Leander. For more resources to live like a Christian in this cultural moment, go to colsoncenter.org. 
