
2.1: Takeaway 1 - Students Have an Ambivalent Bond with Algorithm-Driven Platforms


    Almost all of the students in our focus groups were aware that platforms pushed content onto their screens. While most said they had no idea how algorithms actually worked, they had definite opinions about the effects personalization had on their online lives. As one self-described novice explained, “I’m really not a guru, but search engine algorithms take what you click on and they make this magical potion that always caters to sites you constantly use for news like CNN instead of BBC.”

    Students were quick to give examples of targeted advertising, where the inner logic of algorithmic persuasion was most visible as the same ads chased them across platforms and devices. They often swapped stories about how some personalization attempts had fallen flat. One student said that a search for political internships in D.C. yielded an internship in Korea. Another said she kept getting ads for flights to a place she had just visited.

    While discussions often began with reactions to advertising behavior, students also talked about algorithms used for personalizing other types of content. According to students, the use of algorithms by social media platforms was pernicious. As one said, sites like Facebook “can serve information and news, almost like propaganda, they can serve up whatever information they want to, and potentially change your worldview — it can be a double-edged sword.”

    For students, getting news through the internet giants revealed a growing tension between receiving the services they wanted and the psychic cost of doing so. One student, resigned to the tradeoff, admitted, “I’m giving up control of my personal data so I can get news I agree with, see my interests reflected in my feed, and shop more efficiently.”

    Resignation and indignation

    An important takeaway from our focus groups was the profound ambivalence — the tangle of resignation and indignation — that almost all students expressed about algorithm-driven platforms that collect data about their personal lives. While many students objected to certain advertising practices platforms used, they were nonetheless resigned to using sites like Google and YouTube. In their words, algorithms were “part of the deal” if they wanted to use “free apps.” Fewer, however, seemed to realize that their motivations to connect and get content could be exploited in ways that extended beyond targeted ads. While some said they knew Google and Facebook were “advertising companies” and “their goal is to tailor advertisements to you,” most found these sites too useful to abandon. Still, for others, algorithms were necessary and welcome, since personalization helped filter the vast number of irrelevant Web results they might otherwise get from a Google search. As one student summed it up about his cohort, “We would rather have this convenience than protect our privacy.”

    Comments like these suggest students have an awareness of the benefits of algorithms for sorting information to surface relevant results. At the same time, they signal students’ awareness of the pitfalls of algorithms that compartmentalize people to influence their actions and exacerbate divides. But exactly what a company might be doing with the data collected about them was often an unknown variable in students’ cost-benefit analysis. While some weighed whether the ability to use certain sites was worth sacrificing their data, many others claimed it was already too late. As one student rationalized, “Your information is 100% out there somewhere, and it’s definitely concerning, but I want an Amazon account, so I can purchase things, and I want a Facebook account, too.” There was widespread consensus among students that these sites helped them stay in touch with friends and family. As one student admitted about his ambivalence:

    I just feel desensitized to it. If I really think about it, I’m like, yeah, that’s messed up, and I don’t want them listening to me, but then I still use those apps and am letting it happen. So obviously it doesn’t matter to me that much. I mean what are you gonna do?

    Contradictory feelings such as these were most evident when students were asked about their “tipping points.”65 When had an algorithm-driven platform gone too far? The potential for conversations to be picked up by devices and trigger related advertising was a common tipping point for them (see sidebar, “When algorithms ‘get creepy’”). Some students said they turned off their device’s microphone to push back against the practice; many others said that they refused to have Alexa in their residences. In the words of one student, “the phone is already doing enough listening.”66 Another added, “It’s interesting that people who are most close to working technologies like Alexa don’t use them.”

    Others questioned how wedded they were to online behemoths after learning that Cambridge Analytica had used algorithms and data from millions of Facebook profiles to build targeted campaigns aimed at swaying voters. Still others referred to “deep fakes”67 and viral hoaxes, such as a video clip of House Speaker Nancy Pelosi deliberately slowed to make her appear inebriated,68 which spread through Facebook, YouTube, and Twitter in May 2019. Some said Facebook’s refusal to take the video down, along with widespread sharing of the video by Trump supporters, had crossed the line for them.

    At the same time, many students shared concerns about echo chambers that deepened social inequalities. A woman of color related algorithmic injustice to the biases that led to over-policing Black communities while failing to prevent mass shootings by Whites from affluent communities. Another student described predictive algorithms as “just a fancy technology-enabled form of stereotyping and discrimination, which is inherently problematic, but it’s easier for us to overlook because it’s happening online and we’re not seeing it.” Still another student related how he had thought about majoring in the field of computer science, but decided against it:

    I don’t like the direction that technology is going. A lot of it can be used for evil, and even though it’s really smart, and it’s like really well implemented and effective for the people who it’s serving, it’s not serving the general population. And that freaks me out.

    Together, these comments suggest that many students, though not all, have a broader understanding of the impact of algorithms on their lives beyond the strictly personal effects of advertising that many were eager to discuss. They were torn by the appeal of using “free sites” while knowing they were being tracked for undisclosed purposes. And almost all of them were still trying to figure out some way to circumvent online surveillance and shield their vulnerability, regardless of how effective their methods may have been. As one student described a sense of resignation combined with indignation: “It’s a horrible totalitarian hellscape, but it’s kind of the best we can reasonably expect.”

    When algorithms “get creepy”


    The rich discussions in our focus groups showed that students across the country had some common concerns about algorithms.*

    In an analysis of session logs and transcripts, the PIL Team coded for eight themes, recording whether one or more students in each of the 16 focus groups (N=103) had raised the concern (see Table 1). If students mentioned a concern more than once in a session, for instance about “the next generation,” it was counted only once for that group.
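
    The tallying behind Table 1 can be expressed in a few lines of code. The sketch below is a minimal illustration, not PIL’s actual coding workflow: it assumes each focus group’s transcript has already been hand-coded into a set of theme labels (the group IDs, theme names, and the tally_concerns helper are hypothetical), counts each theme at most once per group, and converts the counts into percentages of the 16 groups.

```python
from collections import defaultdict

# Hypothetical coded data: for each focus group, the set of themes raised
# at least once during the session. Group IDs and theme labels are
# placeholders standing in for the hand-coded transcripts.
coded_sessions = {
    "group_01": {"listening", "inequality", "personalized_content"},
    "group_02": {"listening", "shared_reality", "next_generation"},
    # ... the remaining 14 groups would be coded the same way
}

N_GROUPS = 16  # total focus groups in the study


def tally_concerns(sessions, n_groups):
    """Return {theme: (count, percent)}: count is the number of groups that
    raised the theme at least once; percent rounds count / n_groups half up,
    as the percentages in Table 1 appear to do."""
    counts = defaultdict(int)
    for themes in sessions.values():
        for theme in themes:  # sets ensure a theme counts once per group
            counts[theme] += 1
    return {t: (c, int(c / n_groups * 100 + 0.5)) for t, c in counts.items()}


# With the full data set, "listening" would yield (14, 88), matching
# row 1 of Table 1: 14 of 16 groups, 88%.
print(tally_concerns(coded_sessions, N_GROUPS))
```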

    More than anything, we heard concerns from students about the “creepiness” of algorithms that violated their privacy. This happened when platforms ‘overheard’ their discussions or shared data with each other to pitch them products.

    I was having a conversation with my friend and joking about somebody being pregnant, and then ads started popping up on my searches for pregnancy tests and supplements. I was laughing because Google got it wrong, but it’s still creepy.

    While students in many groups worried about how the next generation would fare, they themselves were often unfamiliar with how the use of algorithms was developing and expanding. Automated decision-making that could directly affect their lives was particularly disturbing. They also remarked on the disappearance of a shared reality that results from personalized news and information. In many discussions, these societal and personal concerns intersected.

    * PIL researchers Alaina Bull and Jessica Yurkofsky did the coding for this content analysis from 10 October – 4 December 2019.

    Table 1: What worries students about computer algorithms?
    Concerns about Algorithms | Count | Percent
    1. (P) Platforms “listening” across devices or platforms. | 14 | 88%
    2. (S) Algorithms & automated decision-making reinforcing inequalities. | 12 | 75%
    3. (P) Platforms shaping individual content & ads they see. | 12 | 75%
    4. (S) Online users not seeing the same reality. | 11 | 69%
    5. (S) The next generation. | 10 | 63%
    6. (P) Platforms selling personal data to third parties. | 8 | 50%
    7. (P) Permanence of data being collected about them. | 7 | 44%
    8. (S) Older generations using these technologies & adapting to changes. | 5 | 31%

    Count is the number of focus groups (out of 16) in which the concern was discussed at least once; percent expresses that count as a share of all 16 groups.

    (P = Personal concerns, S = Societal concerns)

    References

    65. In Malcolm Gladwell’s bestselling book, The tipping point: How little things can make a big difference (2000), the author describes the tipping point as “the moment of critical mass, the threshold, the boiling point,” p. 304.
    66. While a concern about phones “listening” is widespread, it’s most likely not happening routinely. See Bree Fowler (10 July 2019), “Is your smartphone secretly listening to you?” Consumer Reports, www.consumerreports.org/smar...tening-to-you/
    67. Oscar Schwartz (12 November 2018), “You thought fake news was bad? Deep fakes are where truth goes to die,” The Guardian, www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
    68. Charlie Warzel (26 May 2019), “The fake Nancy Pelosi video hijacked our attention. Just as intended,” The New York Times, www.nytimes.com/2019/05/26/opinion/nancy-pelosi-video-facebook.html
