
6.2: Probability and Decision Making - Value and Utility


    The future is uncertain, but we have to make decisions every day that have an effect on our prospects, financial and otherwise. Faced with uncertainty, we do not merely throw up our hands and guess randomly about what to do; instead, we assess the potential risks and benefits of a variety of options, and choose to act in a way that maximizes the probability of a beneficial outcome. Things won’t always turn out for the best, but we have to try to increase the chances that they will. To do so, we use our knowledge—or at least our best estimates—of the probabilities of future events to guide our decisions.

    The process of decision-making in the face of uncertainty is most clearly illustrated with examples involving financial decisions. When we make a financial investment, or—what’s effectively though not legally the same thing—a wager, we’re putting money at risk with the hope that it will pay off in a larger sum of money in the future. We need a way of deciding whether such bets are good ones or not. Of course, we can evaluate our financial decisions in hindsight, and deem the winning bets good choices and the losing ones bad choices, but that’s not a fair standard. The question is, knowing what we knew at the time we made our decision, did we make the choice that maximized the probability of a profitable outcome, even if the profit was not guaranteed?

    To evaluate the soundness of a wager or investment, then, we need to look not at its worth after the fact—its final value, we might say—but rather at the value we can reasonably expect it to have in the future, based on what we know at the time the decision is made. We’ll call this the expected value (sometimes called expectation value). To calculate the expected value of a wager or investment, we must take into consideration:

    1. the various possible ways in which the future might turn out that are relevant to our bet,
    2. the value of our investment in those various circumstances, and
    3. the probabilities that these various circumstances will come to pass.

    The expected value is a weighted average of the values in the different circumstances, weighted by the probability of each circumstance. Here is how we calculate expected value (EV):

    EV = P(O1) x V(O1) + P(O2) x V(O2) + ... + P(On) x V(On)

    This formula is a sum; each term in the sum is the product of a probability and a value. The terms ‘O1, O2, ..., On’ refer to all the possible future outcomes that are relevant to our bet. P(Ox) is the probability that outcome #x will come to pass. V(Ox) is the value of our investment should outcome #x come to pass.

    Perhaps the simplest possible scenario we can use to illustrate how this calculation works is the following: you and your roommate are bored, so you decide to play a game; you’ll each put up a dollar, then flip a coin; if it comes up heads, you win all the money; if it comes up tails, she does. (In this and what follows, I am indebted to Copi and Cohen’s presentation for inspiration.) What’s the expected value of your $1 bet? First, we need to consider which possible future circumstances are relevant to your bet’s value. Clearly, there are two: the coin comes up heads, and the coin comes up tails. There are two outcomes in our formula: O1 = heads, O2 = tails. The probability of each of these is 1/2. We must also consider the value of your investment in each of these circumstances. If heads comes up, the value is $2—you keep your dollar and snag hers, too. If tails comes up, the value is $0—you look on in horror as she pockets both bills. (Note: value is different from profit. You make a profit of $1 if heads comes up, and you suffer a loss of $1 if tails does—or your profit is -$1. Value is how much money you’re holding at the end.) Plugging the numbers into the formula, we get the expected value:

    EV = P(heads) x V(heads) + P(tails) x V(tails) = 1/2 x $2 + 1/2 x $0 = $1

    The expected value of your $1 bet is $1. You invested a dollar with the expectation of a dollar in return. This is neither a good nor a bad bet. A good bet is one for which the expected value is greater than the amount invested; a bad bet is one for which it’s less.
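    Since expected value is just a probability-weighted sum, the calculation is easy to express in code. Here is a minimal Python sketch of the EV formula (the function name and structure are my own illustration, not part of the text), checked against the coin-flip game above:

```python
# Expected value: EV = P(O1) x V(O1) + P(O2) x V(O2) + ... + P(On) x V(On)

def expected_value(outcomes):
    """outcomes: (probability, value) pairs covering every relevant
    future circumstance; the probabilities should sum to 1."""
    return sum(p * v for p, v in outcomes)

# The coin-flip game: heads pays $2, tails pays $0, each with probability 1/2.
ev = expected_value([(0.5, 2.00), (0.5, 0.00)])
print(ev)  # 1.0 -- you staked $1 expecting exactly $1 back
```

    Comparing the result to the amount staked gives the standard just described: a bet is good when the expected value exceeds the stake, and bad when it falls short.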

    This suggests a standard for evaluating financial decisions in the real world: people should look to put their money to work in such a way that the expected value of their investments is as high as possible (and, of course, higher than the invested amount). Suppose I have $1,000 free to invest. One way to put that money to work would be to stick it in a money market account, which is a special kind of savings deposit account one can open with a bank. Such accounts offer a return on your investment in the form of a payment of a certain amount of interest—a percentage of your deposit amount. Interest is typically specified as a yearly rate. So a money market account offering a 1% rate pays me 1% of my deposit amount after a year. (It’s more complicated than this, but we’re simplifying to make things easier.) Let’s calculate the expected value of an investment of my $1,000 in such an account. We need to consider the possible outcomes that are relevant to my investment. I can only think of two possibilities: at the end of the year, the bank pays me my money; or, at the end of the year, I get stiffed—no money. The calculation looks like this:

    EV = P(paid) x V(paid) + P(stiffed) x V(stiffed)

    One of the things that makes this kind of investment attractive is that it’s virtually risk-free. Bank deposits of up to $250,000 are insured by the federal government. (They’re insured through the FDIC—Federal Deposit Insurance Corporation—created during the Great Depression to prevent bank runs. Before this government insurance on deposits, if people thought a bank was in trouble, everybody tried to withdraw their money at the same time; that’s a “bank run”. Think about the scene in It’s a Wonderful Life when George is about to leave on his honeymoon, but he has to go back to the Bailey Building and Loan to prevent such a catastrophe. Anyway, if everybody knows they’ll get their money back even if the bank goes under, such things won’t happen; that’s what the FDIC is for.) So even if the bank goes out of business before I withdraw my money, I’ll still get paid in the end. (Unless, of course, the federal government goes out of business. But in that case, $1,000 is useful maybe as emergency toilet paper; I need canned goods and ammo at that point.) That means P(paid) = 1 and P(stiffed) = 0. Nice. What’s the value when I get paid? It’s the initial $1,000 plus the 1% interest. 1% of $1,000 is $10, so V(paid) = $1,010. Plugging in, the expected value of the money market investment is:

    EV = (1 x $1,010) + (0 x $0) = $1,010

    That’s not much of a return, but interest rates are low these days, and it’s not a risky investment. We could increase the expected value if we put our money into something that’s not a sure thing. One option is corporate bonds. For this type of investment, you lend your money to a company for a specified period of time (and they use it to build a factory or something), then you get paid back the principal investment plus some interest. (Again, there are all sorts of complications we’re glossing over to keep things simple.) Corporate bonds are a riskier investment than bank deposits because they’re not insured by the federal government. If your company goes bankrupt before the date you’re supposed to get paid back, you lose your money. (Probably. There are different kinds of bankruptcies and lots of laws governing them; it’s possible for investors to get some money back in bankruptcy court. But it’s complicated. One thing’s for sure: our measly $1,000 imaginary investment makes us too small-time to have much of a chance of getting paid during bankruptcy proceedings.) That is, P(paid) in the expected value calculation above is no longer 1; P(stiffed) is somewhere north of 0. What are the relevant probabilities? Well, it depends on the company. There are firms in the “credit rating” business—Moody’s, S&P, Fitch, etc.—that put out reports and classify companies according to how risky they are to lend money to. They assign ratings like ‘AAA’ (or ‘Aaa’, depending on the agency), ‘AA’, ‘BBB’, ‘CC’, and so on. The further into the alphabet you get, the higher the probability you’ll get stiffed. It’s impossible to say exactly what that probability is, of course; the credit rating agencies provide a rough guide, but ultimately it’s up to the individual investor to decide what the risks are and whether they’re worth it. (Historical data on default rates for companies at each rating level are available from the agencies.)

    To determine whether the risks are worth it, we must compare the expected value of an investment in a corporate bond with a baseline, risk-free investment—like our money market account above. Since the probability of getting paid is less than 1, we must have a higher yield than 1% to justify choosing the corporate bond over the safer investment. How much higher? It depends on the company; it depends on how likely it is that we’ll get paid back in the end.

    The expected value calculation is simple in these kinds of cases. Even though P(stiffed) is not 0, V(stiffed) is; if we get stiffed, our investment is worth nothing. So when calculating expected value, we can ignore the second term in the sum. All we have to do is multiply P(paid) by V(paid).
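    Given that shortcut, we can work out how high a bond’s yield must be before it beats the risk-free baseline: set P(paid) x V(paid) equal to the money market’s $1,010 and solve for the required payout. Here is a small Python sketch of that calculation (my own illustration); it reproduces the figures derived in the next paragraph:

```python
# Break-even yield for a bond that pays either V(paid) or nothing:
# we need P(paid) * V(paid) >= $1,010, the risk-free EV computed above.

def breakeven_yield(p_paid, principal=1000.0, baseline_ev=1010.0):
    """Smallest yield at which the bond's EV matches the risk-free EV."""
    v_paid = baseline_ev / p_paid            # payout required if not stiffed
    return (v_paid - principal) / principal  # yield as a fraction of principal

for p in (0.99, 0.95, 0.90):
    print(f"P(paid) = {p:.2f}: need a yield above {breakeven_yield(p):.1%}")
# P(paid) = 0.99: need a yield above 2.0%
# P(paid) = 0.95: need a yield above 6.3%
# P(paid) = 0.90: need a yield above 12.2%
```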

    Suppose we’re considering investing in a really reliable company; let’s say P(paid) = .99. Doing the math, in order for a corporate bond with this reliable company to be a better bet than a money market account, they’d have to offer an interest rate of a little more than 2%. If we consider a less-reliable company—say one for which P(paid) = .95—then we’d need a rate of a little more than 6.3% to make this a better investment. If we go down to a 90% chance of getting paid back, we need a yield of more than 12% to justify that decision. (Considerations like these were apparently the spark that lit the fuse on the financial crisis of late 2008. On September 15th of that year, the financial services firm Lehman Brothers filed for bankruptcy—the largest bankruptcy filing in U.S. history. The stock market went into a free-fall, and the economy ground to a halt. The problem was borrowing: companies couldn’t raise money in the usual way with corporate bonds. Such borrowing is the grease that keeps the engine of the economy running; without it, firms can’t fund their day-to-day operations. The reason companies couldn’t borrow was that investors were demanding too high a rate of interest. They were doing this because their personal estimations of P(paid) were all revised downward in the wake of Lehman’s bankruptcy: that was considered a reliable company to lend to; if they could go under, anybody could.)

    What does it mean to be a good, rational economic agent? How should a person, generally speaking, invest money? As we mentioned earlier, a plausible rule governing such decisions would be something like this: always choose the investment for which expected value is maximized.

    But real people deviate from this rule in their monetary decisions, and it’s not at all clear that they’re irrational to do so. Consider the following choice: (a) we’ll flip a coin, and if it comes up heads, you win $1,000, but if it comes up tails, you win nothing; (b) no coin flip, you just win $499, guaranteed. The expected value of choice (b) is just the guaranteed $499. The expected value of choice (a) is easily calculated:

    EV = P(heads) x V(heads) + P(tails) x V(tails)
    = (.5 x $1,000) + (.5 x $0)
    = $500

    So according to our principle, it’s always rational to choose (a) over (b): $500 > $499. But in real life, most people who are offered such a choice go with the sure-thing, (b). (If you don’t share that intuition, try upping the stakes—coin flip for $10,000 vs. $4,999 for sure.) Are people who make such a choice behaving irrationally?

    Not necessarily. What such examples show is that people take into consideration not merely the value, in dollars, of various choices, but the subjective significance of their outcomes—the degree to which they contribute to the person’s overall well-being. As opposed to ‘value’, we use the term ‘utility’ to refer to such considerations. In real-life decisions, what matters is not the expected value of an investment choice, but its expected utility—the degree to which it satisfies a person’s desires and comports with their subjective preferences.

    The tendency of people to accept a sure thing over a risky wager, despite a lower expected value, is referred to as risk aversion. This is the consequence of an idea first formalized by the mathematician Daniel Bernoulli in 1738: the diminishing marginal utility of wealth. The basic idea is that as the amount of money one has increases, each addition to one’s fortune becomes less important, from a personal, subjective point of view. An extra $1,000 means very little to Bill Gates; an extra $1,000 would mean quite a lot to a poor college student. The money would add very little utility for Gates, but much more for the student. Initial increases in one’s fortune above zero mean more than subsequent increases. Bernoulli’s utility function looked something like this (The function maps 1 unit of wealth to 10 units of utility (never mind what those units are), 2 units of wealth to 30 units of utility, and so on: 3 to 48; 4 to 60; 5 to 70; 6 to 78; 7 to 84; 8 to 90; 9 to 96; 10 to 100. The mapping comes from Daniel Kahneman, 2011, Thinking, Fast and Slow, New York: Farrar, Straus and Giroux, p. 273.):

    [Figure: Bernoulli’s utility function, plotting wealth (1–10 units) against utility (10–100 units); the curve rises steeply at first and flattens out as wealth increases.]

    This explains the choice of the $499 sure-thing over the coin flip for $1,000. The utility attached to the first $499 is greater than the additional utility of the further $501 one could potentially win, so people opt to lock in the gain. Utility rises quickly at first, but levels out at higher amounts. From Bernoulli’s chart, the utility of the sure-thing is somewhere around 70, while the utility of the full $1,000 is only 30 more—100. Computing the expected utility of the coin-flip wager gives us this result:

    EU = P(heads) x U(heads) + P(tails) x U(tails)
    = (.5 x 100) + (.5 x 0)
    = 50

    The utility of 70 for the sure-thing easily beats the expected utility from the wager. It is possible to get people to accept risky bets over sure-things, but one must take into account this diminishing marginal utility. For a person whose personal utility function is like Bernoulli’s, an offer of a mere $300 (where the utility is down closer to 50) would make the decision more difficult. An offer of $200 would cause them to choose the coin flip.
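    To see these comparisons worked out, here is a Python sketch encoding the Kahneman/Bernoulli mapping above. It assumes, as the discussion implicitly does, that 1 unit of wealth corresponds to $100, and it rounds the $499 sure-thing to 5 units; the names are my own illustration:

```python
# Bernoulli-style utility table (Kahneman 2011, p. 273):
# keys are wealth in $100 units, values are units of utility.
UTILITY = {0: 0, 1: 10, 2: 30, 3: 48, 4: 60, 5: 70,
           6: 78, 7: 84, 8: 90, 9: 96, 10: 100}

def expected_utility(outcomes):
    """outcomes: (probability, wealth in $100 units) pairs."""
    return sum(p * UTILITY[w] for p, w in outcomes)

# Coin flip for $1,000: EU = .5 x 100 + .5 x 0 = 50.
print(expected_utility([(0.5, 10), (0.5, 0)]))   # 50.0

# Sure-thing offers of roughly $500, $300, and $200 against that EU of 50:
for units in (5, 3, 2):
    print(f"${units * 100} sure: utility {UTILITY[units]} vs. EU 50")
```

    The sure $500 (utility 70) beats the wager, $300 (utility 48) makes it a near-tie, and $200 (utility 30) loses to it, just as described above.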

    It has long been accepted economic doctrine that rational economic agents act in such a way as to maximize their utility, not their value. It is a matter of some dispute what sort of utility function best captures rational economic agency. Different economic theories assume different versions of ideal rationality for the agents in their models.

    Recently, this practice of assuming perfect utility-maximizing rationality of economic agents has been challenged. While it’s true that the economic models generated under such assumptions can provide useful results, as a matter of fact, the behavior of real people (homo sapiens as opposed to “homo economicus”—the idealized economic man of the models) departs in predictable ways from the utility-maximizing ideal. Psychologists—especially Daniel Kahneman and Amos Tversky—have conducted a number of experiments that demonstrate pretty conclusively that people regularly behave in ways that, by the lights of economic theory, are irrational. For example, consider the following two scenarios (go slowly; think about your choices carefully):

    (1) You have $1,000. Which would you choose?
    (a) Coin flip. Heads, you win another $1,000; tails, you win nothing.
    (b) An additional $500 for sure.

    (2) You have $2,000. Which would you choose?
    (a) Coin flip. Heads you lose $1,000; tails, you lose nothing.
    (b) Lose $500 for sure. (For this and many other examples, see Kahneman 2011.)

    According to the Utility Theory of Bernoulli and contemporary economics, the rational agent would choose option (b) in each scenario. Though they start in different places, for each scenario option (a) is just a coin flip between $1,000 and $2,000, while (b) is $1,500 for sure. Because of the diminishing marginal utility of wealth, (b) is the utility-maximizing choice each time. But as a matter of fact, most people choose option (b) only in the first scenario; they choose option (a) in the second. (If you don’t share this intuition, try upping the stakes.) It turns out that most people dislike losing more than they like winning, so the prospect of a guaranteed loss in 2(b) is repugnant. Another example: would you accept a wager on a coin flip, where heads wins you $1,500, but tails loses you $1,000? Most people would not. (Again, if you’re different, try upping the stakes.) And this despite the fact that expected value and utility clearly point to accepting the proposition.

    Kahneman and Tversky’s alternative to Utility Theory is called “Prospect Theory”. It accounts for these and many other observed regularities in human economic behavior. For example, people’s willingness to overpay for a very small chance at a very large gain (lottery tickets); also, their willingness to pay a premium to eliminate small risks (insurance); their willingness to take on risk to avoid large losses; and so on. (Again, see Kahneman 2011 for details.)

    It’s debatable whether the observed deviations from idealized utility-maximizing behavior are rational or not. The question “What is an ideally rational economic agent?” is not one that we can answer easily. That’s a question for philosophers to grapple with. The question that economists are grappling with is whether, and to what extent, they must incorporate these psychological regularities into their models. Real people are not the utility-maximizers the models say they are. Can we get more reliable economic predictions by taking their actual behavior into account? Behavioral economics is the branch of that discipline that answers this question in the affirmative. It is a rapidly developing field of research.

    Exercises

    1. You buy a $1 ticket in a raffle. There are 1,000 tickets sold. Tickets are selected out of one of those big round drums at random. There are 3 prizes: first prize is $500; second prize is $200; third prize is $100. What’s the expected value of your ticket?

    2. On the eve of the 2016 U.S. presidential election, the poll-aggregating website 538.com predicted that Donald Trump had a 30% chance of winning. It’s possible to wager on these sorts of things, believe it or not (with bookmakers or in “prediction markets”). On election night, right before 8:00pm EST, the “money line” odds on a Trump victory were +475. That means that a wager of $100 on Trump would earn $475 in profit, for a total final value of $575. Assuming the 538.com crew had the probability of a Trump victory right, what was the expected value of a $100 wager at 8:00pm at the odds listed?

    3. You’re offered three chances to roll a one with a fair die. You put up $10 and your challenger puts up $10. If you succeed in rolling one even once, you win all the money; if you fail, your challenger gets all the money. Should you accept the challenge? Why or why not?

    4. You’re considering placing a wager on a horse race. The horse you’re considering is a long-shot; the odds are 19 to 1. That means that for every dollar you wager, you’d win $19 in profit (which means $20 total in your pocket afterwards). How probable must it be that the horse will win for this to be a good wager (in the sense that the expected value is greater than the amount bet)?

    5. I’m looking for a good deal in the junk bond market. These are highly risky corporate bonds; the risk is compensated for with higher yields. Suppose I find a company that I think has a 25% chance of going bankrupt before the bond matures. How high of a yield do I need to be offered to make this a good investment (again, in the sense that the expected value is greater than the price of the investment)?

    6. For someone with a utility function like that described by Bernoulli (see above), what would their choice be if you offered them the following two options: (a) coin flip, with heads winning $8,000 and tails winning $2,000; (b) $5,000 guaranteed? Explain why they would make that choice, in terms of expected utility. How would increasing the lower prize on the coin-flip option change things, if at all? Suppose we increased it to $3,000. Or $4,000. Explain your answers.


    This page titled 6.2: Probability and Decision Making - Value and Utility is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Matthew Knachel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.