25.7: Flawed Reasoning and Prejudice

    Many of the cognitive biases, fallacies, and other pitfalls in reasoning studied in earlier chapters help foster and maintain stereotypes and prejudices. We will now examine some of them.

    Belief Perseveration

    Belief perseveration (8.5) is the common tendency to retain a belief even after our original reasons for thinking it true have been undermined. Some beliefs are so thoroughly entrenched that they are impervious to evidence that would discredit them. Once we acquire a belief, it can be very difficult to get rid of it, even when the evidence clearly tells us that we should. The next two cognitive biases help explain why this is so.

    Confirmation Bias

    Many studies (as well as a bit of careful observation) document our tendency to look for and remember positive evidence that supports our beliefs while not seeking (or even avoiding) negative evidence that could tell against them. Confirmation bias (18.5) is the selective tendency to look for, notice, and remember confirming or positive evidence (that supports what we think) while ignoring or downplaying disconfirming or negative evidence (which suggests that what we think is wrong). For example, if Wilbur already thinks that women are bad drivers, he is more likely to notice or remember cases where women drove badly than cases where they did not.

    Confirmation bias leads to bad reasoning because it means that we never put our views to the test. Hence there will be little chance of finding out that we are wrong, even when we are. This bears on the topics in this chapter because if we have stereotypes, we will tend to look for (or notice) evidence that confirms them, and we won’t look for (or notice) evidence that might show they are mistaken. So, even if we encounter group members who don’t fit our stereotype (i.e., people who disconfirm it), we may overlook or ignore them.

    Subtyping

    Sometimes when people interact with members of a stereotyped group, it is so obvious that the group member doesn’t fit the stereotype that it is impossible for them to ignore it. But even this often fails to lead to any change in their stereotype. Instead, they maintain their stereotype by appealing to subtypes.

    A subtype is a stereotype for a small and atypical subset of a group. For example, research shows that prejudiced white people have a stereotype of black people as lazy and hostile. When someone like this meets a black person who obviously doesn’t fit the stereotype—say someone who worked their way up from a life of poverty to be CEO of a large, successful corporation—the stereotyper often classifies the other person as a member of a subtype, like Black Businessperson. This allows them to maintain the stereotype in the face of contradictory information (“this person doesn’t fit the profile, but they’re an exception—most black people do”). There are also gender subtypes, like career woman, feminist, housewife, jock, player, and punk.

    Sample Size

    When we draw inferences from small or unrepresentative samples, our conclusions will be unreliable (15.2.3). Some stereotypes no doubt arise from such hasty or unwarranted generalizations. In any large group, there are bound to be some members with negative characteristics. If we happen to interact with just these members of a group, it is easy (though unwarranted) to generalize to the entire group from this faulty sample.
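
    To see just how noisy small samples are, here is a minimal simulation sketch (the 10% trait rate and the sample sizes are made-up numbers, not from the text): it draws random “encounters” with group members and shows how badly a handful of encounters can misstate the group’s true rate.

    ```python
    # A minimal sketch (not from the text) of why small samples mislead.
    # Suppose 10% of a large group has some trait, and we "meet" only a
    # handful of members at random before generalizing.
    import random

    random.seed(0)
    TRUE_RATE = 0.10  # actual share of the group with the trait (made-up)
    for n in (5, 20, 1000):
        # one random sample of n encounters
        sample = [random.random() < TRUE_RATE for _ in range(n)]
        observed = sum(sample) / n
        print(f"n={n:4d}: observed rate = {observed:.2f}  (true rate = {TRUE_RATE:.2f})")
    # With n=5 or n=20 the observed rate can easily come out as 0.00 or 0.40;
    # only the large sample reliably hovers near 0.10.
    ```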

    Heuristics

    The various cognitive heuristics studied in Chapter 17 (like availability and representativeness) have associated patterns of faulty reasoning called biases, and these too can lead to bad reasoning about other people.

    Two Meanings of ‘Bias’

    Before seeing how this works, it is important to be clear that we are dealing with two different meanings of the word ‘bias’. When we studied heuristics, we also examined their associated biases; for example, the representativeness heuristic often leads to a bias to think that a conjunction can be more probable than either of its conjuncts. In this context, we often speak of a ‘cognitive bias’, and ‘bias’ means a systematic tendency to reason badly in a certain way; for example, we have a bias, a tendency, to think conjunctions can be more probable than their conjuncts, and we tend to overlook regression to the mean. By contrast, when we discuss prejudices, we are likely to talk about ‘biases’ towards groups. Here the word ‘bias’ means pretty much the same thing as ‘prejudice’. Context will make it clear which sense of the word is involved.
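
    As a quick illustration of why the conjunction bias really is an error, here is a small counting sketch (the population and counts are hypothetical, chosen only for illustration): the people who satisfy a conjunction are a subset of the people who satisfy either conjunct, so the conjunction’s probability can never exceed either conjunct’s.

    ```python
    # A minimal counting sketch (illustrative numbers, not from the text) of
    # why a conjunction can never be more probable than either conjunct.
    population = 1000
    bank_tellers = 50        # hypothetical count of bank tellers
    feminist_tellers = 20    # bank tellers who are ALSO feminists (a subset)

    p_teller = bank_tellers / population                    # 0.05
    p_feminist_and_teller = feminist_tellers / population   # 0.02
    # The conjunction picks out a subset of the conjunct's cases, so its
    # probability can at most equal, and never exceed, the conjunct's.
    assert p_feminist_and_teller <= p_teller
    print(p_teller, p_feminist_and_teller)
    ```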

    Availability

    We use the availability heuristic (17.2) when we base judgments of frequency on the ease with which examples can be recalled or imagined. This heuristic makes us inclined to assume that things that are easier to remember or imagine are more likely to occur.

    Often, we remember certain things because they really do occur frequently, and when this is the case, the available sample will be good enough for the rough-and-ready inferences of everyday life. But things may be available in memory or imagination for reasons having little to do with their true frequency or probability. In such cases, the availability heuristic will lead us to rely on a sample (the cases we easily remember) that is unrepresentative in various ways.

    If we expect members of a group to behave in a certain way (as we will if we are strongly influenced by a stereotype of the group), we will be more likely to notice and remember behavior that matches our expectations. If Wilbur thinks that women are bad drivers, he will be more likely to notice when women do drive badly than when they don’t (or than when men drive badly), and he is more likely to remember those occasions where they do. Such cases will be more available to Wilbur when he thinks about women drivers, and this will lead him to think that a larger percentage of women are bad drivers than is the case.

    Availability may also explain some cases of the out-group homogeneity bias. If we only interact with a small subgroup of a group, or if we rely on media reports that deal only with a small segment of the group, then our sample will be small and unrepresentative. It will easily come to mind, however, and so we may conclude that most members of the group are like the members of the small (and quite possibly atypical) subgroup that we know about.
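
    A toy model can make this mechanism vivid. The sketch below (the recall probabilities are assumptions, chosen only for illustration) tags stereotype-consistent events as more memorable, and shows how the recalled “sample” then overstates their true frequency.

    ```python
    # A minimal toy model (assumed numbers, not from the text) of how
    # availability skews frequency estimates: stereotype-consistent events
    # are more memorable, so they dominate the recalled "sample".
    import random

    random.seed(1)
    TRUE_RATE = 0.10            # actual rate of the behavior in the group
    RECALL_IF_CONSISTENT = 0.9  # chance of remembering a stereotype-consistent event
    RECALL_OTHERWISE = 0.2      # chance of remembering any other event

    events = [random.random() < TRUE_RATE for _ in range(10_000)]
    recalled = [e for e in events
                if random.random() < (RECALL_IF_CONSISTENT if e else RECALL_OTHERWISE)]
    estimate = sum(recalled) / len(recalled)
    print(f"true rate {TRUE_RATE:.2f} vs availability-based estimate {estimate:.2f}")
    # With these (made-up) recall rates the estimate lands near 0.33 -- a
    # threefold overestimate driven purely by what is easy to remember.
    ```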

    Representativeness

    We use the representativeness heuristic (17.3) when we conclude that the more like a representative or typical member of a category something is, the more likely it is to be a member of that category. Put in slightly different words, the likelihood that x is an A depends on the degree to which x resembles your typical A. So here we reason like this: x seems a lot like your typical A; therefore, x probably is an A. For example, Linda resembles your typical feminist (or at least a stereotype of a typical feminist), so (many of us conclude) she is likely to be a feminist. Sometimes this pattern of inference works, but as we have seen, it can also lead to very bad reasoning.

    This can lead us to believe that someone is a member of a group when they are not (“they fit the group stereotype, so they probably belong to the group”). We also frequently make the opposite mistake. Someone is in the group so, we (mistakenly) conclude, they probably fit the stereotype.
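
    The first mistake ignores base rates: when a group is rare, most people who fit its stereotype are not in the group at all. A minimal Bayes’ rule sketch (all three probabilities here are hypothetical) makes the point:

    ```python
    # A minimal Bayes' rule sketch (hypothetical numbers) showing that fitting
    # a group's profile need not make group membership probable.
    def posterior(prior, p_fit_given_in, p_fit_given_out):
        """P(in group | fits profile) via Bayes' rule."""
        p_fit = prior * p_fit_given_in + (1 - prior) * p_fit_given_out
        return prior * p_fit_given_in / p_fit

    # Assume 2% of people are in the group, 80% of members fit the profile,
    # and 10% of non-members also happen to fit it.
    print(posterior(prior=0.02, p_fit_given_in=0.80, p_fit_given_out=0.10))
    # ~0.14: even a good fit to the profile leaves an 86% chance that the
    # person is not in the group, because the group is rare.
    ```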

    Anchoring and Adjustment

    An anchor (17.5) is a sort of reference point. Often, we know that an anchor is too high or too low, and we try to adjust for it. But very often we don’t adjust enough. So, although we don’t simply accept the anchor as given, it still skews our final judgment; a high anchor often leads us to overestimate something, and a low anchor leads us to underestimate it.

    If we frequently hear that members of a group are unusually dumb or dishonest, we may assume that this claim is too extreme. But it may still provide an anchor, so when we adjust away from it, we don’t adjust enough. We don’t think the group is as dishonest as we are told, but we may still conclude that they are a bit more dishonest than average—even when they aren’t. The anchor biases our final judgment.
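
    A simple way to model insufficient adjustment (a standard toy model; the numbers are invented) is to assume the final judgment moves only part of the distance from the anchor toward the truth:

    ```python
    # A toy model (invented numbers) of insufficient adjustment: the judgment
    # travels only part of the way from the anchor to the truth, so a high
    # anchor still leaves an overestimate and a low anchor an underestimate.
    def adjusted_judgment(anchor, truth, adjustment=0.6):
        """Move only `adjustment` of the distance from anchor to truth."""
        return anchor + adjustment * (truth - anchor)

    truth = 50  # suppose the group is actually average on some 0-100 scale
    print(adjusted_judgment(anchor=90, truth=truth))  # 66.0 -- still well above 50
    print(adjusted_judgment(anchor=10, truth=truth))  # 34.0 -- still well below 50
    ```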

    Memory Effects

    Memory is an active, constructive process. We fill in gaps in memory in ways that help us make sense of the world. For example, in one study people heard a fictitious story about a dictator. In one version, he was called ‘Gerald Martin’; in another he was called ‘Adolph Hitler’. The story didn’t mention Jews, but many of the people who heard the Hitler version of the story later thought—“remembered”—that it contained the sentence ‘He hated the Jews and persecuted them’. People in the other group did not. People in the first group filled in details based on their knowledge of Hitler. We can also fill in details about a person based on our beliefs about the groups they belong to. Here our stereotypes affect the way we edit memories and fill in details.

    Collective Memory

    Much as individuals have memories that are stored in their brains, societies have “collective memories” (8.10) that are embodied in their beliefs, legends, and stories about the past. Social scientists have found that collective memories change over time, and that they are often quite different from the original events that gave rise to them.

    Sometimes people in power, particularly in totalitarian societies, set out to revise collective memory. There are many techniques for doing this, including rewriting textbooks, constantly repeating the rewritten version of history, and forbidding discussion of what really happened. But it also occurs, in a less conscious way, in more open societies. It often happens that certain groups (e.g., Native Americans) are portrayed in inaccurate ways in a group’s collective memory. This can certainly affect how people will think about that group, what images of it will be available, and how they will feel about it.

    Illusory Correlation

    We believe in an illusory correlation (15.4) when we mistakenly think that two things go together when in fact they are unrelated. For example, Wilbur may think that being a woman and being a bad driver tend to go together, i.e., that these two things are positively correlated. Illusory correlations often result from an overemphasis on positive cases: Wilbur is likely to notice the cases where the two things go together (cases where women drive badly) and to ignore cases where they don’t (cases where women drive well, or cases involving men’s driving). He ignores evidence that tells against his view that the two things are related. It is often easier to think of positive cases where two things go together than to think of negative cases where they don’t, especially if we already think that the two things typically accompany each other.

    There are many reasons why people hold stereotypes, but belief in illusory correlations often reinforces them. Thus, people may believe that members of some race or ethnic group tend to have some characteristic—usually some negative characteristic like being lazy or dishonest—which is just to say that they believe that there is a (positive) correlation between race and that characteristic.
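
    A small 2×2 table (with invented counts) shows how this works: the full table contains no correlation at all, but attending only to the stereotype-confirming cell makes one seem to appear.

    ```python
    # A minimal sketch (invented counts) of an illusory correlation: in the
    # full 2x2 table the trait is unrelated to group membership, but focusing
    # on the eye-catching "positive" cell suggests otherwise.
    counts = {("woman", "bad"): 100, ("woman", "good"): 900,
              ("man",   "bad"): 100, ("man",   "good"): 900}

    for group in ("woman", "man"):
        bad = counts[(group, "bad")]
        total = bad + counts[(group, "good")]
        print(f"{group}: P(bad driver) = {bad / total:.2f}")
    # Both rates are 0.10, so there is no correlation. Someone who notices
    # only the 100 (woman, bad) cases -- the stereotype-confirming cell --
    # sees a pattern that the full table does not contain.
    ```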

    The Validity Effect

    Researchers have found that the mere repetition of a claim will lead many who hear it to think that it is more likely to be true than they would have supposed if they hadn’t heard it before. This is called the validity effect (18.7): mere repetition makes a claim seem more likely to be true (or “valid”). The effect occurs with true statements, false statements, and statements that involve expressions of attitudes.

    The validity effect often leads people to believe things without giving them any thought. If we hear a claim over and over from the time we are small children, we will have some tendency to believe it simply because we have heard it so often. So, if people repeatedly hear that Jews are materialistic, without hearing others say that they aren’t, they will tend to believe it.

    The Just World Hypothesis

    Many people think that the world is basically fair and just. People usually get pretty much what they deserve and deserve what they get. The psychologist Melvin Lerner called this the just-world hypothesis (18.8).

    Lerner has shown that when people learn about an unfair outcome that is otherwise difficult to explain, they look for a way to blame the person who suffers the misfortune (“they must have done something to bring on their problems”). To the extent that we feel this way, we will tend to think that most people who aren’t doing well are getting what they have coming (until recently, many victims of rape were viewed in this way). So, if a group is treated badly, we may feel, they must have some defects that explain this bad treatment (“Where there’s smoke there’s fire”). In short, we tend to blame the victim, and sometimes this extends to blaming entire groups.

    Wishful Thinking

    We engage in wishful thinking (9.5) when we disregard the evidence and allow our desire that something be true to convince us that it really is true. Our strong tendency toward wishful thinking is one reason why claims by pseudoscientists, advertisers, and others are accepted even when there is little evidence in their favor.

    When things go badly for others, we tend to blame the victim. But when things go badly for us, we often look for someone else to blame; we would rather not think that we are at fault, or even that we had a run of plain bad luck. Scapegoating is blaming others for our problems, especially social or personal problems. Prejudice, stereotypes, and scapegoating reinforce each other: unflattering stereotypes about a group lead to prejudice against its members, and the need to find scapegoats provides an emotional motive for adopting stereotypes.

    Prejudices and scapegoating seem to be more common when times are tough, for example, when the economy is bad. A classic example is the way the Nazis in Germany in the 1930s and ’40s blamed the Jews for many of the country’s social problems (which included a devastated economy). Scapegoating is still very common today, and with the rise of communalism, it may increase.

    Dissonance Reduction

    Most people do not see themselves as evil or sadistic. Consider people who find themselves in a situation where they systematically treat others badly, or even where they simply ignore their plight. It would lead to a good deal of cognitive dissonance (Chapter 19) if they thought to themselves: although I am basically a good person, I am treating these people badly (or ignoring their suffering), and they do not deserve to suffer. One way to reduce this dissonance is to change one’s attitude toward the mistreated group: “Maybe they really do deserve to be in their situation after all”; “Maybe (in extreme cases) they aren’t even quite human.”

    This fits with the common finding that most of the people who worked in Nazi concentration camps came to see their victims as less than human. When you treat someone badly, there is a tendency to derogate them, to think, “well, they deserved it.” When we are the victimizers, blaming the victim helps us reduce dissonance.

    Inert Knowledge

    As with any knowledge, we may know that common stereotypes are inaccurate or oversimplified in various ways, yet fail to keep this in mind outside the setting (e.g., a classroom) where we learned it. When we need it, the knowledge just doesn’t come to mind. We aren’t aware of our situation as one where our knowledge about the limitations of a stereotype is relevant; we simply don’t think about such factors.

    Framing Effects

    We have seen more than once that the way factors are presented or worded can influence how we think about them. A description of a welfare program as one that helps young children get a hot breakfast will evoke very different images from a description of it as an inducement for Welfare Queens to have more children so they will receive bigger checks. In highly emotional issues of the sort that surround race, sexism, sexual orientation and the like, different people will frame things in vastly different ways.

    We have also seen that it is extremely important whether something is framed as a gain (e.g., lives saved) or as a loss (lives lost). Most things can easily be framed either way. For example, a member of a dominant group can frame something as a higher employment rate among a minority group (a gain) or as members of the minority group taking jobs away from members of the dominant group (a loss).

    Status Quo Bias

    The status quo bias is a preference to keep things pretty much the way they are. Unless things are going badly, people often prefer to keep things the same, rather than risk trying something new. So, people in groups that are doing well will have a preference to keep things largely as they are, rather than to risk new policies (e.g., a new affirmative action program) that might make things worse for them.

    Contrast Effect

    The contrast effect occurs when our evaluations of, or judgments about, something are influenced by the contrast between it and the things around it. If you view another group (and its members) as inferior to your own group (and its members, including yourself), then you look better by comparison, and this can increase your sense of self-worth and self-esteem. So, people sometimes derogate other groups so that their own group will, by contrast, look better.

    Self-fulfilling Prophecy

    A self-fulfilling prophecy is the tendency for a person’s expectations about the future to influence that future in a way that makes the expectations come true. Sometimes we have expectations about people that lead us, unwittingly, to treat them in a certain way, and treating them in this way may then lead them to behave in the way that we thought they would.

    For example, if you have a stereotype of African Americans as hostile, you may be inclined to be hostile to John, just because he is black. And this may lead him to react with hostility, even though he would have been friendly if you’d been friendly yourself. Your prediction leads you to act in a way that makes your prediction come true.

    Rosenthal and Jacobson’s work (18.6) is relevant here. Recall that they told grade-school teachers at the beginning of the school year that their incoming students had just been given a battery of tests and that twenty percent of the students should be expected to blossom academically in the coming year. In fact, the students in this group were selected at random. Nevertheless, these twenty percent ended up improving more than the other students. Teachers expected the students in the targeted group to blossom, which led them to act in ways that encouraged the students to do so. This sort of self-fulfilling prophecy is sometimes called the Pygmalion effect, and many studies since have documented its existence.

    Stereotypes as Self-fulfilling Prophecies

    Stereotypes can also serve as self-fulfilling prophecies. If teachers expect students from some groups to perform better than others, this may lead them to treat their students in ways that will make these expectations come true. In a society where people think that women are incapable of careers working with computers, young girls are likely to be treated in a way that suggests they can’t do such work. Furthermore, the interest they display in computers may be discouraged, and they will be encouraged to adopt different interests. Years of such treatment will make it much more difficult for a woman to have a career working with computers.

    Studies by Carl Word, Mark Zanna, and Joel Cooper showed that white interviewers conducting mock job interviews were more at ease with white interviewees than with black ones, and this in turn led the white interviewees to be more at ease and the black interviewees to be less so. The interviewers’ expectation that white interviewees would be easier to talk to led them to behave in ways that tended to make this expectation come true.

    Self-fulfilling prophecies can lead us to behave in ways that lead others to confirm a stereotype. Then, because of confirmation bias, we may focus on this evidence, without thinking to look for evidence that would disconfirm our stereotype, and without asking how others might have behaved if we had behaved differently toward them.

    The Power of the Situation

    We have seen (Chapter 23) the very strong power of social situations to influence how we think about people and how we explain why they do the things they do.

    Conformity Pressures

    As we grow up, we acquire many beliefs and attitudes and norms without ever thinking about them. This process is called socialization. Without it we would never become full members of a culture, which is to say we would never become fully human. But we can be socialized to hold many different beliefs, and if someone grows up always hearing that women are inferior to men (and almost never hearing that they aren’t), they must be an exceptional person not to be influenced by this.

    Situational vs. Dispositional Explanations

    In Chapter 23, we saw the importance of the distinction between internal, dispositional explanations of the causes of someone’s actions, on the one hand, and external, situational explanations, on the other. Dispositional explanations cite internal causes like character traits, whereas situational explanations cite features of the circumstances surrounding the actor.

    The distinction between internal (dispositional) explanations and external (situational) explanations is also important when we think about members of groups. For example, the average score for various minority groups on certain standardized tests (like the ACT) is lower than the average score for white students. But what does this mean? The issue is whether a dispositional explanation (members of the lower-scoring groups just aren’t as smart) or a situational explanation (members of the lower-scoring groups had to attend much poorer schools) is the right one. More generally, explanations that cite stereotypes cite internal, dispositional causes; a group member does certain things because that is just the sort of person he (and the others in the group) are. They are just hostile or lazy or materialistic or bigoted: “He did that because he’s an Okie, and all Okies are bigoted”.

    Blaming the victim also involves giving an internal, dispositional explanation of the victim’s plight. They suffer their misfortune because of the sorts of people they are (“O.k., she didn’t deserve to be raped, but anyone who behaves like she did is asking for trouble”). But in many (though of course not all) cases, people suffer misfortunes because of features of their circumstances that are beyond their control, and here external, situational explanations will be more accurate (“She just happened to be there when the prison escapee came along”).

    The Ultimate Attribution Error

    Thomas Pettigrew called our tendency to give dispositional explanations for the negative or unsuccessful behavior of an entire group of people the ultimate attribution error. Although we often give dispositional explanations for the failures of the members of an outgroup, we tend to give external, situational explanations of their positive behavior and successes (“He was just lucky,” “She must have gotten some special break because she was a woman”).

    There is evidence that when a member of a stereotyped group behaves in a way that disconfirms a negative stereotype, we tend to offer a situational explanation of her behavior; we reason: “Almost anyone would do the same thing in that situation, and so, it doesn’t really tell us much about the person involved.” This is yet another way in which we can retain our stereotype in the face of disconfirming evidence. For example, Wilbur is convinced that women are bad drivers. One day he sees Sue think quickly, then steer out of the way of an oncoming truck. But he reasons (situationally), “anyone in her position would be so frightened that they would be really focused and alert, and so they’d do the same thing.”


    This page titled 25.7: Flawed Reasoning and Prejudice is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jason Southworth & Chris Swoyer via source content that was edited to the style and standards of the LibreTexts platform.
