
20.7: Giving Explanations


    The last step of our scientific process is incorporating what we have learned into what we already know in order to better understand the world around us. In many ways, this is the easiest part. We thought something might be true, and now we either know it isn’t or we have stronger reasons for believing that it is. Our new understanding in turn lets us interact with the world in new ways that can have a profound and positive impact.

    Learning about things and understanding how they work can often be rewarding in and of itself, but it is vital if we are to deal successfully with the world around us. If we understand how things work, we will be able to make more accurate predictions about their behavior, and this will make it easier for us to influence how things will turn out. If you understand how your computer works, you will be in a much better position to fix it the next time it breaks down.

    On a more global scale, we have seen time and time again that the results of the process outlined in this chapter can have a profound impact on life as we know it. In ancient times, diseases were often attributed to supernatural causes, e.g., demons, but such theories did not provide very effective ways to treat or prevent disease. The work of Louis Pasteur and others toward the end of the nineteenth century led to the germ theory of disease. This theory allows us to understand the causes of many diseases and to explain why they spread in the ways they do. And this understanding in turn led to vaccines and other measures that allowed us to eliminate some diseases and curtail the spread of others.

    The stakes for successful understanding can be very high. That is why we need to be careful not to overstate what we know.

    The Explanation Reflex: Telling More Than We Can Know

    The benefits of understanding create a strong desire in us to be able to explain things. Way back in Chapter 1, we learned that this desire is called the explanation reflex. It can be very hard to accept that we don’t understand something, or that we don’t understand it well enough to be able to make informed decisions. This is why we often prefer a bad explanation to no explanation at all. The willingness to accept any explanation over no explanation is what leads us to accept illusory correlations and the illegitimate causal arguments discussed in section 20.6. The explanation reflex is good, as it is what drives us to investigate in the first place. We just need to be careful that we don’t let it lead us to overstate what we know.

    The Illusion of Explanatory Depth

    The pull to understand is often so strong that it can lead us to believe we understand far more than we do, and in far greater detail than we do. Researchers Leonid Rozenblit and Frank Keil have labeled these interrelated beliefs the illusion of explanatory depth. We’ve all had the experience of listening to a parent, uncle, or some other blowhard talk about a complex issue (single-payer healthcare, the electoral college, or car repair), only to be left feeling that they don’t quite understand what they’re talking about. And it’s likely that they don’t. They have a superficial understanding of the subject that they have rounded up in their minds to a much deeper level of knowledge. What is important to remember is that we all do this to some extent, certainly more than we realize.

    Rozenblit and Keil first demonstrated the illusion of explanatory depth not by asking people to explain complex systems, but by focusing on everyday objects. Subjects were asked to rate the extent to which they understood how zippers, toilets, and ballpoint pens work. They were then asked to explain, in as much detail as they could, how those objects work. It turned out that virtually nobody could explain these objects in much detail, which led the subjects to revise their ratings of how well they understood them. This might seem funny (and it kinda is), but you should stop reading for a second and see how well you can explain how these things work.

    The illusion of explanatory depth should be setting off alarm bells for you by now. If you’ve spent most of your life mistaken about how well you understood how zippers work, what makes you think you understand the far more complicated systems about which you have strong beliefs?

    We can take a key lesson away from this discussion. When someone has a strong opinion about something, we should always ask for an explanation of the matter (this goes for our own beliefs as well as those of other people). The next time your uncle is pontificating about why a single-payer healthcare system won’t work, don’t argue with him. Instead, ask him to explain how such a system works, what exactly its problems are, how his preferred system works, and how it addresses the problems with single-payer.

    Be Mindful of Bias

    Lastly, it is important to keep in mind all the ways we are susceptible to error when integrating new information into our past understanding. In past chapters, we have discussed the various ways in which we have trouble processing criticism and accurately assessing information that runs counter to our current views. Remember also that we are susceptible to confirmation bias, belief perseverance, and all the other biases and pitfalls discussed in sections IV, VI, and VII.

    The key to success on the explanatory level is to remember that we are fallible. Investigation might reveal our current beliefs to be flawed. That’s okay. Every time we’re confronted with disconfirming information, we have an opportunity to refine our views and come to a more accurate understanding. This is a good thing.


    This page titled 20.7: Giving Explanations is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jason Southworth & Chris Swoyer via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.