2.5: Does using AI do harm? If so, should we stop using it?
Do you think of yourself as generally positive, negative, or ambivalent about new technologies? How about chatbots; do you feel differently about them? In the three semesters I’ve taught college courses focused on AI, I’ve seen a range of attitudes among my students. I try to welcome all of them. I myself am both concerned and excited about AI.
Among the general public and among experts, opinion is divided on whether AI is good for humanity or deeply harmful. Some see AI as humanity’s best hope. Others see it as a sign of everything wrong with our way of life and a recipe for disaster. Many others, maybe a majority, are ambivalent.
So how could AI do harm? Earlier sections have explored chatbot bias and misinformation. Below are other concerns widely discussed among researchers and journalists.
Ethical concerns besides bias and misinformation
Energy use
It's widely accepted that training these systems is much more energy-intensive than traditional computing. Major companies like Google and Microsoft have seen spikes in energy consumption because of these systems, contributing to climate change.
Water use
Both training and running these systems require water at a time when water scarcity is a growing global concern. Newsweek quotes UC Riverside researcher Shaolei Ren’s estimate that even the previous-generation text generator GPT-3 “needs to "drink" a 16-ounce bottle of water for roughly every 10-50 responses it makes.”
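To make that estimate concrete, here is a quick back-of-the-envelope conversion of the quoted figure into per-response terms. This sketch simply restates the quoted range in milliliters; it introduces no new data.

```python
# Convert the quoted estimate -- one 16-ounce bottle of water per
# roughly 10-50 GPT-3 responses -- into milliliters per response.
OZ_TO_ML = 29.5735  # US fluid ounces to milliliters

bottle_oz = 16
low_responses, high_responses = 10, 50

bottle_ml = bottle_oz * OZ_TO_ML  # about 473 mL

# Water per single response, best case and worst case of the quoted range
min_ml = bottle_ml / high_responses
max_ml = bottle_ml / low_responses

print(f"~{min_ml:.1f}-{max_ml:.1f} mL of water per response")
# → ~9.5-47.3 mL of water per response
```

In other words, by this estimate a single response costs somewhere between a couple of teaspoons and a few tablespoons of water, which adds up quickly across millions of daily users.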
Violation of intellectual property rights
Should chatbots even be allowed to train on human writers’ work if they can’t give credit to that work when they paraphrase it? Public debate about AI and intellectual property rights is underway. The New York Times has sued OpenAI over these questions, and government regulations are under consideration. Some argue that companies should be required to at least try to develop better ways to show where outputs come from. Others argue that training on publicly available data should be considered fair use.
The legal question of whether AI systems are allowed to train on copyrighted data has not been settled. Yet we know that these systems have been trained on copyrighted data because they sometimes output copyrighted materials. This raises several ethical problems:
- How can human writers and artists be given credit if a chatbot bases its output on their work?
- Should profits from chatbots be shared with the humans who wrote the text they're trained on?
- People may use AI instead of hiring the very artists and writers whose work was used to train it. How, then, will writers and artists earn a living?
Labor conditions
AI systems rely on human-created training data and human efforts to improve their results and algorithms. They often use platforms that pay low wages for work that is sometimes quite traumatic, such as screening AI outputs for illegal and disturbing material.
Privacy invasion
This is a concern from two angles: whether private data was included without permission in the training of the models, and whether these chatbots will reveal individuals' confidential information in the course of their operation. New York Times journalists, for example, reported that GPT-3 had revealed private email addresses to researchers.
Existential risk
Is it possible that more advanced AI could act against the interests of humans or even kill us? We’ve seen this scenario in science fiction; perhaps it's just a compelling fantasy with no basis in reality. But a surprising number of the experts actually working toward advanced AI are more worried than the general public. Even if AI systems themselves have no anti-human tendencies, they could be misused by bad actors for harm. Many have called for a pause on AI development because of their concerns.
What to do
Clearly, these are concerns worth understanding better before we decide how big of a problem each one is. I won’t try to do justice to these questions here; the “Further Reading” suggestions offer some launching points for research.
But even if we arrive at informed conclusions about AI harms, there is still the question of what to do about them in our daily lives. If you think today’s AI is having bad effects, do you stop using it? Do you use only certain kinds of AI systems? Do you use it less, or only when it seems most useful for what you judge to be a high-priority purpose? Or do you focus on reducing AI’s future impact by advocating for different practices?
Some argue that we shouldn't use systems that were created in unethical ways. Others argue that now that the systems are here, we may find ethical ways to use them as we push for future systems to be created differently.
Is the rise of GenAI inevitable?
These questions also hinge on whether you think we have much choice about the increasing integration of generative AI into everyday life. Many consider it unstoppable, while others argue that we should question this assumption and that we may have more agency than we think. Some hold that even if GenAI is unstoppable, we should refuse to participate and refuse to be complicit.
Broader ethical questions
I don’t know about you, but even as I write this, I feel a bit dizzy and overwhelmed at how much there is to sort out.
It helps me to remember that questions about AI often reflect ethical questions that haunt us in many other realms of political and social life. Philosophers have long wrestled with them. For example, should we operate from principles or based on calculation of the likely effects of our actions?
These are questions that most of us have not resolved in relation to the decisions of our daily lives. Do we purchase clothing and food produced in ethical ways? Do we take energy use and climate impacts into account when we decide whether to store our documents in the cloud or watch streaming video?
Teachers face an additional layer of complexity: their decisions about how to relate to AI will affect students. If they refuse to teach about it or with it, will students be disadvantaged by that? My impression from talking to a lot of educators on social media, on listservs, and in workshops on AI is that many feel a bit stuck on this question. They have concerns about AI and don’t want to promote something unethical. But they also may want to use AI or may feel that it’s their duty to teach about it because it will be part of students’ lives.
My personal position
You may have gotten a sense for my own view (though I try to be fair-minded and balanced), so let me just lay it on the table. I do use AI. The parallel I see is to Internet search. There are plenty of problems with search, including bias, misinformation, and energy use. But few people think we should never have developed the Internet. Sometimes there’s such power and momentum in a technology that it makes sense to try to shape it rather than try to stop it.
AI has different uses and different pitfalls than Internet search, but it’s still useful enough that it will surely be used. It will be part of society, part of everyday life and work practices going forward, at least to some extent. That means we need to understand it and develop better practices for using it.
The harms of AI are real, but they are not set in stone or inevitable. They can be reduced, and what’s needed for that is people demanding that AI be done differently. All these questions hinge on how the systems available to us today were created, and there has been little oversight of chatbot systems to date. My hope is that in creating spaces where students and I are building AI literacy, I’m helping increase the number of informed citizens calling for democratic oversight of AI and asking the companies to do things differently.
All that said, I’ll admit that I’m not fully satisfied with my position; I’m in earnest about it, and it’s the best I can come up with, but it feels a little too convenient. I’ve found a way to justify using AI, and there’s an incentive for me to rationalize doing what I want to do: I find chatbot capabilities amazing. Even their flaws are fascinating to me.
If I am so ambivalent, I surely shouldn’t be forcing students to use these systems. Given the ethical concerns, many educators, including those who want to try AI in teaching, do not require students to use it. Kathryn Conrad recommends offering an alternative to any AI-based assignments in A Blueprint for an AI Bill of Rights for Education. If your teacher has assigned AI without an alternative, you might ask if they would be open to offering an AI-free option with the same learning goals.
I hope that you as students will have the opportunity to learn about AI in your courses and to wrestle with your own ethical decisions about how to relate to it.
Questions
- Which of the concerns discussed above stand out to you and why?
- What more do you need or want to know about AI harms?
- Which way are you leaning on these questions? Do you see yourself using AI, not using it, or limiting your use?
- What, if anything, do you think government should do? Do you see any role for you as a member of society in larger conversations about AI?
- Do you have any questions or comments about my position and the way I’ve framed these issues?
Further reading
- Blueprint for an AI Bill of Rights for Education by Kathryn Conrad
- Some Ethical Considerations for Teaching and Generative AI in Higher Education by Lydia Wilkes
- AI's impact on energy and water usage by Jon Ippolito
- Explained: Generative AI’s environmental impact in MIT News
- IMPACT RISK: an acronym for AI downsides by Jon Ippolito
- What Uses More, an app to help us compare energy use across tasks
- We did the math on AI’s energy footprint. Here’s the story you haven’t heard. by James O'Donnell and Casey Crownhart
- Teaching AI Ethics by Leon Furze
- There Is No Ethical Use of AI by Matthew Cheney
- AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED
- You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills by Yuval Harari, Tristan Harris and Aza Raskin
- Atlas of AI by Kate Crawford


