Student Essay Critiquing a New York Times Article on the Dangers of AI


    Note: The sample essay below is shared with the student's permission under a CC BY NC 4.0 license.

    Lily Raabe

    Professor Mills

    College of Marin

    ENGL 150

    15 April 2024

    How Dangerous Is AI?

Is there a safe way to use and develop AI, or will it inevitably be our downfall? The article “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills” by Yuval Harari argues that AI, specifically large language models, could use its mastery of human language to destroy democracy, culture, and possibly humanity. Although the argument is compelling, Harari offers little evidence to support his claims and does not give a strong rebuttal to the counterargument that AI potentially has limitless benefits.

In the article, Yuval Harari discusses the rapid development of AI and its potential negative impact on our future. The article begins with a shocking, unsettling statistic. Harari cites a 2022 study in which 700 leading experts and researchers in artificial intelligence were asked about the possible future risks of powerful AI currently in development. Half of the participants said they believe there is a 10% or greater risk of human extinction, or, if not extinction, a similarly permanent and severe disempowerment caused by AI. This is a startling fact: companies are pouring significant amounts of money into something experts believe has a 10% chance of permanently hindering us or wiping us out completely. Harari argues that tech companies are neglecting to protect consumers from this potentially dangerous product and urges us to slow the rate of development so society can prepare for the future of AI.

Harari follows this with another issue that concerns him: AI’s proficiency with language. Harari claims that language is the basis of humanity and that without it, we would not have laws, money, art, or science. So, he asks, what does it mean for humans if artificial intelligence has a better grasp of language than we do? How does AI’s ability to generate language influence stories, images, laws, and policies? Harari also argues that AI can exploit human weakness, bias, and addiction and form intimate relationships with people. Humans could easily be taken advantage of if their weaknesses or personal feelings are preyed upon. He worries that an advanced knowledge of language combined with the ability to manipulate could be detrimental to humanity. He writes, “In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?”

Harari believes that AI’s ability to master language will impact human culture. He points out that what started with AI writing essays for students could quickly turn into “political speeches, ideological manifestos, holy books for new cults.” This, Harari argues, could rewrite human culture. He continues by saying that humans live inside a cultural prism, or cocoon, and that how we see reality is shaped by our cultural lens. He worries that if AI shapes culture, the way we interpret our reality will change for the worse. He supports this claim with the example of AI algorithms in social media. Harari explains that companies use AI to create algorithms that keep people more engaged with the content they view on social media. He argues that AI can create illusions that do not align with reality because the algorithm curates content based on people’s biases. The illusion he alludes to is the belief that Donald Trump did not lose the 2020 election, which Harari argues people hold because of content fed to them by the algorithm. Harari claims that our current societal divide and the possible unraveling of democracy are due to social media and AI algorithms that help push propaganda.

After this, Harari provides a counterargument in his essay: “A.I. indeed has the potential to help us defeat cancer, discover lifesaving drugs and invent solutions for our climate and energy crises. There are innumerable other benefits we cannot begin to imagine.” But he immediately rebuts it, saying, “But it doesn’t matter how high the skyscraper of benefits A.I. assembles if the foundation collapses.” Harari argues that AI development is moving too quickly and that if we don’t create regulations and enforce the safe use of AI, we cannot safely take advantage of the benefits.

    Harari follows this with a call to action. He demands control of AI development before there are extreme consequences and claims democracy is at stake if we continue down this path. He calls for world leaders to address the potential issues with AI and for regulation around large language models. Finally, Harari suggests we prepare institutions for a world with AI and “to learn to master A.I. before it masters us.” 

This was a compelling essay. Harari highlights potential issues that the rapid development of AI could cause. He emphasizes the importance of language to drive home the point that large language models hold more influence over us than we thought and that this influence could have far-reaching consequences for democracy and culture. He also calls attention to how urgently we need to put preventative measures in place so AI does not become uncontrollable. One weakness in the argument is Harari’s response to the counterargument. Harari acknowledges the counterargument but does not offer a convincing response. He does not explain why pursuing the benefits of AI is not worth the risk, nor does he expand on his rebuttal. If AI has the potential to cure cancer or create lifesaving drugs, as he says, isn’t that worth exploring? If AI is being developed anyway, we may as well take advantage of it and use it to develop cures and solutions for worldwide problems. If we use AI in professional settings or in government-funded programs to solve problems, it might also spur politicians to create regulations, which Harari wants.

The other weakness is that there is little evidence to support the claims. Harari cites a survey of experts at the beginning of the argument, but the rest rests purely on his opinion, without evidence based on actual data. He offers no real justification for the idea that AI will use manipulative language and tactics that will lead to the fall of democracy; to convince us, he simply repeats that idea with different phrasing.

    Harari’s argument forces the reader to consider an uncomfortable prospect: AI could potentially eradicate humans. But is this really the case? AI has problems. It can be racist, show extreme bias, and make up sources or information, but could it lead to the fall of democracy, as Harari claims? We need more information about how exactly AI could use language to manipulate, exploit, or harm humans. 

Harari uses borderline fearmongering tactics, writing, “However, simply by gaining mastery of language, A.I. would have all it needs to contain us in a Matrix-like world of illusions, without shooting anyone or implanting any chips in our brains. If any shooting is necessary, A.I. could make humans pull the trigger, just by telling us the right story.” But how would this be possible? Harari is correct that language is the basis of humanity and its culture, and I understand why a robot having mastery of language would be worrying. Still, his argument would be much more convincing if he showed how AI could wield language as a weapon against humanity. The claims seem exaggerated and unsubstantiated. Claiming that AI could manipulate humans into behaving in a way that kills us seems far-fetched. AI is not a free-thinking entity; it has no agenda. Why would it try to manipulate us? What would it gain from that? I wish he would provide evidence or explain his reasoning. I do not believe that Harari’s language is purposefully manipulative; rather, it is a tactic to force readers to recognize unregulated AI as a serious issue. However, his writing comes across as dramatic and hard to believe.

    Despite this, Harari does a fantastic job of convincing readers to reflect on the possibilities of unregulated AI and the importance of language, culture, and human connection. His essay empowers readers to challenge corporations and world leaders and to think critically about the benefits they are promising from AI use.

    Works Cited

    Harari, Yuval, et al. “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills.” The New York Times, 24 Mar. 2023, www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html. 

    Attributions

By Lily Raabe. Anna Mills changed the title (the original title was “Response Essay”). Shared with permission under a CC BY NC 4.0 license.

    Student Essay Critiquing a New York Times Article on the Dangers of AI is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.