1.4: AI copies patterns; it doesn’t think
AI text generators analyze patterns across such huge quantities of text that, in replicating those patterns, they can sound pretty smart, as you’re probably aware. (Hence all the fuss.) It can be tempting to start seeing a chatbot as an intelligent conversation partner. But we can’t afford to forget that there’s no conscious being coming up with those words.
Keeping in mind one or two examples where AI really doesn’t get it can help us stay skeptical of AI outputs. When you see chatbots produce nonsense, you start to understand that there’s no one home upstairs. The New York Times article Let Us Show You How GPT Works – Using Jane Austen by Aatish Bhatia shows what it looks like when you gradually train small versions of AI text generation systems, called large language models, in the style of Harry Potter, Star Trek: The Next Generation, Shakespeare, Moby Dick, or Jane Austen.
Let’s take the Harry Potter version. Before training, the user types in “Hermione raised her wand,” and the language model continues “.Pfn“tkf^2JXR454tn7T23ZgE——yEIé\mmf’jHgz/yW;>>QQi0/PXH;ab:XV>”?y1D^^n—RU0SVGRW?c>HqddZZj:”
That’s its random guess as to what comes next.
Then it goes through several rounds of training, ingesting text from Harry Potter and adjusting its internal prediction numbers to match patterns in that text.
Eventually, when the user writes “Hermione raised her wand,” the model continues in a recognizably Harry Potterish way:
"Professor Dumbledore never mimmed Harry. He looked back at the room, but they didn't seem pretend to blame Umbridge in the Ministry. He had taken a human homework, who was glad he had not been in a nightmare bad cloak.”
Yep, it’s echoing the books and movies with main characters’ names, a reference to the Ministry (of Magic), and “nightmare bad” cloaks that suggest magic and evil. But there’s no such word as “mimmed.” And why does homework care if some person is wearing a cloak? This is where I start to chuckle.
If you kept training a system like this, it would eventually give you sentences that might be hard to tell apart from genuine Harry Potter sentences. But the system would still just be matching patterns and predicting next words.
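If you’re curious what “adjusting its internal prediction numbers” could look like, here is a minimal sketch in Python of the same core idea at toy scale. It is emphatically not how ChatGPT works internally (real systems use neural networks with billions of learned weights), but it does show the two steps described above: counting patterns in some text, then using those counts to predict next words, with no understanding anywhere. The sample text and the starting word are made up for illustration.

```python
import random
from collections import Counter, defaultdict

def train(text):
    """'Training' here just means counting which word follows which word."""
    words = text.split()
    next_word_counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        next_word_counts[current_word][next_word] += 1
    return next_word_counts

def generate(model, start_word, length=12):
    """Repeatedly pick a plausible next word. Only the counts are consulted; meaning never is."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            # The model has never seen this word, so it falls back to a purely random guess.
            word = random.choice(list(model.keys()))
        else:
            # A weighted guess based on the patterns counted during "training."
            word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

# A made-up scrap of training text, purely for illustration.
sample_text = (
    "Hermione raised her wand and Harry raised his wand "
    "and the room went quiet and Hermione looked at Harry"
)
model = train(sample_text)
print(generate(model, "Hermione"))
```

Run it a few times and you’ll get different, vaguely story-flavored word salad each time: fluent-seeming echoes of the patterns in the sample text, and nothing more. Scaling the same prediction idea up to billions of weights and mountains of text is what makes ChatGPT sound smooth, but it doesn’t add a mind.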
So when you see AI produce a smooth, polished sentence that sounds just like sophisticated academic writing, remember the Harry Potterish gobbledygook. The lights might be on, but nobody’s home. Check whether the text is actually empty of meaning, or just plain wrong. If it does make sense and matches reality, remember: that’s partly luck. The system produces true sentences the same way it produces nonsense.
More Silly Examples:
- My 12-year-old son’s question was “Why does pepperoni pizza dominate literature?” ChatGPT went with it, declaring “Pepperoni pizza's dominance in literature can be attributed to several factors.”
- I asked it to write about the connection between snails and cheese. It said, “Ecologically, snails and cheese exhibit a symbiotic relationship mediated through their respective environments and the intricate ecosystems they inhabit.” Snails and cheese somehow help each other out?
- How about the essential connection between hip hop and potato mashers? ChatGPT says “The act of mashing, much like the act of mixing and sampling in hip hop, requires skill, precision, and an understanding of how to integrate diverse components into a cohesive whole.”
- I asked ChatGPT about the “essential connection” between kiwi fruit and Call of Duty. It said “their essential connection lies in their shared narrative of globalization, cultural commodification, and the modern challenge of balancing digital and physical well-being.” Hey, that’s an elegantly formed sentence. The rhythm sounds nice. But nobody really had anything to say about that fruit and that video game.
You might find that you get a better intuitive sense of this through your own experiments:
- Think of two random things that you’re pretty sure have no essential connection.
- Bring up any of the more sophisticated chat systems. If you’d like to use one without logging in, try Perplexity (click on “focus” and choose “Writing”) or ChatGPT. Other options include Gemini, Claude, and Copilot.
- You can copy the following prompt, edit it, or write your own. Substitute your picks for X and Y, like an unusual fruit and a video game or a musical style and a particular kitchen tool. Prompt: “In a sophisticated, authoritative academic style, explain the essential connection between X and Y.”
- Read the chatbot’s output. How does it sound? Does it make any sense? Do you have an emotional reaction to seeing fancy text seeming to argue for something so arbitrary that isn’t really your opinion or any human’s opinion? Is it annoying, exciting, impressive, eye-rolling, weird, or…? What does this experiment suggest to you about how we should approach AI text? What’s your takeaway? (If you’d rather run the experiment from code, there’s a sketch just after this list.)
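If you like to code, the same experiment can be scripted. The sketch below assumes you have the official openai Python package installed (pip install openai) and an API key in the OPENAI_API_KEY environment variable; the model name and the two example topics are placeholders to swap for your own. Any chat model or provider would work, and the point is the same as in the browser: the prompt alone, not anyone’s actual opinion, drives the confident-sounding answer.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Substitute your own two unrelated things for x and y.
x, y = "kiwi fruit", "potato mashers"
prompt = (
    "In a sophisticated, authoritative academic style, explain the "
    f"essential connection between {x} and {y}."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```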
Remembering that chatbots are mindless can seem counterintuitive. In an interview with journalist Elizabeth Weil, computational linguist and critic of AI hype Emily Bender says chatbots are “machines that can mindlessly generate text…But we haven’t learned how to stop imagining the mind behind it.” Maybe we can learn, if we make it a practice to remind ourselves.
Here are a few more readings and a video that emphasize the weird combination of chatbot fluency and lack of understanding.
- Let Us Show You How GPT Works – Using Jane Austen by Aatish Bhatia, The New York Times
- TikTok video on ChatGPT as predictive text (3 minutes, by @mor10webn)
- Computers are getting better at writing (7 minutes, from Joss Fong at Vox.com)
- Shaped like Information or Let Our Algorithm Choose Your Halloween Costume by Janelle Shane, author of the AI Weirdness blog and the book You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place.
- You Are Not a Parrot (And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this), by Elizabeth Weil in New York Magazine
- Large Language Models like ChatGPT say The Darnedest Things by Gary Marcus and Ernest Davis


