Humanities LibreTexts

6.1: Keywords and Definitions

  • Algorithm — a set of logical rules used to organize and act on a body of data to solve a problem or accomplish a goal, usually carried out by a machine. An algorithm is typically modeled, trained on a body of data, and then adjusted as the results are examined. Because algorithms are generally processed by computers and follow logical instructions, people often think of them as neutral or value-free, but the decisions humans make as they design and tweak an algorithm, and the data on which it is trained, can introduce human biases that are compounded at scale. People who interact with an algorithm may also find ways to influence its outcomes, as when a marketer pushes a website up in search results through search engine optimization (SEO).
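    As a minimal illustration (not drawn from this text), consider a toy ranking algorithm. The scoring rule and its weights are hypothetical, but they show how a human design choice, not a neutral fact about the data, determines the outcome:

    ```python
    # A minimal sketch of an algorithm: a fixed set of logical rules
    # applied to a body of data. The "relevance" rule below is a
    # human-chosen design decision, not a neutral property of the pages.

    def rank_pages(pages):
        """Rank pages by a simple, hypothetical relevance score."""
        def score(page):
            # Human-chosen weights: links count twice as much as keyword hits.
            return 2 * page["links"] + page["keyword_hits"]
        return sorted(pages, key=score, reverse=True)

    pages = [
        {"name": "A", "links": 1, "keyword_hits": 5},
        {"name": "B", "links": 4, "keyword_hits": 0},
    ]
    print([p["name"] for p in rank_pages(pages)])  # ['B', 'A'] — B scores 8, A scores 7
    ```

    A marketer who learns the rule can game it (add links to page B), which is the essence of SEO as described above.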

    Algorithmic justice — the application of principles of social justice and applied ethics to the design, deployment, regulation, and ongoing use of algorithmic systems so that the potential for harm is reduced. Algorithmic justice promotes awareness and sensitivity among coders and the general public about how data collection practices, machine learning, AI, and algorithms may encode and exacerbate inequality and discrimination.

    Algorithmic literacy — a subset of information literacy, algorithmic literacy is a critical awareness of what algorithms are, how they interact with human behavioral data in information systems, and an understanding of the social and ethical issues related to their use.

    Artificial intelligence (AI) — a branch of computer science that develops ways for computers to simulate human-like intelligent behavior: interpreting and absorbing new information, recognizing patterns, and improving at problem-solving. Examples include robotics, speech recognition, facial recognition, and the identification of objects such as traffic signs, trees, and human beings that self-driving cars require. AI relies on machine learning capabilities and training data. Humans are involved in creating or collecting sets of training data (e.g., low-wage workers abroad are employed to identify objects on computer screens to provide data for autonomous vehicle navigation). Bias may be built into machine learning (e.g., when criminal justice data sets are used for risk assessment in predictive policing). Machines can be trained to learn from experience, but common sense and recognizing context remain difficult, limiting the ability of computer programs to perform tasks such as distinguishing hate speech from colloquial humor or sarcasm.

    Attention economy — because each person’s attention is a limited resource, companies (both platforms and the people who use platforms to sell, entertain, or persuade) compete to capture and hold it. This rewards clickbait and influences the design of algorithms and platforms to maximize time spent online.

    Big data — a set of technological capabilities developed in recent years that, used in combination, allows large volumes of fine-grained and exhaustive data drawn from multiple sources to be gathered, combined, and analyzed continuously.

    Data exhaust — information incidentally generated as people use computers, carry cell phones, or have their behavior captured through surveillance; it becomes valuable when acquired, combined, and analyzed in great detail and at high velocity.

    Machine learning — the use of algorithms, data sets, and statistical modeling to build models that recognize patterns, make predictions, and interpret new data. The purpose of machine learning is to automate analytical model-building so that computers can learn from data with little human intervention.
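    A minimal sketch of the idea, with made-up numbers: a statistical model (here, a one-variable least-squares line) is fit to training examples and then used to make a prediction about new data. All names and figures are illustrative, not drawn from this text:

    ```python
    # Machine learning in miniature: "learn" a model's parameters from
    # training data, then apply the model to data it has not seen.

    def fit_line(xs, ys):
        """Learn slope and intercept from example (x, y) pairs."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                 / sum((x - mean_x) ** 2 for x in xs))
        intercept = mean_y - slope * mean_x
        return slope, intercept

    # Training data: hours studied vs. test score (made-up numbers).
    hours = [1, 2, 3, 4]
    scores = [52, 54, 58, 60]
    m, b = fit_line(hours, scores)
    print(round(m * 5 + b))  # 63 — predicted score after 5 hours of study
    ```

    The model is only as good as its training data: if the examples are skewed, the predictions will be too, which is how bias enters the systems described in the entries above.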

    Personalization — the process of displaying search results or modifying the behavior of an online platform to match an individual’s expressed or presumed preferences, established through creating digital profiles and using that data to predict whether and how an individual will act on algorithmically selected information. This process drives targeted digital advertising and has been blamed for exacerbating information silos, contributing to political polarization and the flow of disinformation. Ironically, to consider information “personal” implies it is private, but personalization systematically strips its targets of privacy.

    Platform — an ambiguous term that means both software used on personal computers and software deployed online to provide a service, such as web search, video sharing, shopping, or social interaction. Often these systems use proprietary algorithms to mediate the flow of information while enabling third parties to develop apps, advertising, and content, thus becoming digital spaces for the individual performance of identity online, data-driven persuasion (commercial as well as political), and group formation through social interaction. In this report, we use the term to refer to “internet giants” such as Google, YouTube, Instagram, and Facebook and others mentioned by students in our focus group sessions.
