Any moral situation—any situation that morality governs—has three basic components:
1. An Actor: you, me, Jesus, Buddha, Malala Yousafzai, etc.
2. An Act: killing, helping, stealing, healing, saving, protecting, tormenting, etc.
3. The Consequences of the Act: people died, a person is alive who otherwise wouldn’t have been, a person’s burden was lightened, a family was protected, someone suffered a great deal of physical and psychological pain.
We have three basic systems which correspond to these three components. Whichever component of the moral action or situation you think is most morally relevant suggests which of these moral systems you tend to employ in moral reasoning.
1. Consequentialist:
- look at the consequences
2. Deontologist:
- look at the type of action: lying, helping, killing, healing, etc.
3. Virtue Theorist:
- look at the actor’s character
There are other ethical systems, as I discuss in the lecture, but these are the main three.
Buddhist ethics is a form of virtue theory, but it emphasizes different virtues than Aristotle does and has a different means of identifying what is virtuous and what is not (what is vicious, or is a vice). Aztec ethics is also a virtue tradition, but one which emphasizes the community rather than individual character. Confucian ethics is similar: it’s a virtue tradition, but it emphasizes communal values.
Karmic ethics is a “what goes around comes around” ethic, where at least one primary motivation for acting morally is that there are consequences to our actions. It might be in the next life or in the afterlife or the like, but bad actions have bad consequences that have a way of finding their way back to us some day. The same goes for good actions and good consequences.
Divine command theory is the view that the ethical is whatever God commands us to do, and the morally wrong actions are those we’ve been commanded not to do. Filling in the content here is tough, and must rely on some form of revelation, such as the Bible or Qur’an.
Finally, care ethics is the view that what is important in ethical deliberation (decision making) is that we show proper care for those around us. Responding appropriately to the vulnerabilities of others is of paramount importance, and abstract considerations like justice or fairness take a back seat. Arguably, care ethics is again a form of virtue theory. Care ethics arose out of a feminist response to male psychologists who thought it obvious that someone who placed more emphasis on immediate relational needs, and less on abstract notions like justice and duty, was “less fully developed.” Care ethicists argued that these people are not less developed, but simply developed differently.
This is an attempt to characterize the feminine approach to ethics—an approach which tends to come more naturally to women, but isn’t necessarily associated with a particular gender or sex.
Let’s go a little more in-depth on each of our three main theories...
Aristotelian Ethics or more broadly “Virtue Ethics”
Aristotle maintains that ethics is the study of how to be successful in life, how to flourish as a human being, and so what to do in life. Ethics is very broad. It’s the study of how humans should act to be the best humans they can be.
With this in mind, Aristotle thinks that an ethical life is a life in which doing what is right comes naturally. An ethical life is like training for a sport: just as an archer trains herself through repetition so that, when it comes time to shoot in the midst of a competition or battle, shooting comes naturally to her, we may need explicit guidance and rules early in life, but after a while the rules become less important and our own well-trained moral judgment becomes more important. Sport training similarly involves explicit guidance from a coach or trainer until we become so good that running with good form, aiming properly, or throwing accurately just comes naturally: we no longer have to think about it.
What does it mean to train ourselves well? To develop habits to do the right thing and make the right judgments in the right circumstances. Habits, when we’re talking about ethical or moral habits, are called virtues. We call it virtue ethics because the focus of the theory is on training ourselves well—on instilling virtues in ourselves and in others. We’re trying to be people of good moral character, people to whom doing the right thing is second nature.
How do we know what the virtuous is? Easy, just find the mean between two extremes (this is Aristotle’s somewhat controversial method for identifying the virtuous habits). If we have too little courage, we’ll be cowards. If we have too much, we’ll go running into battle without a plan and be killed immediately. Instead, we want enough courage to be able to do the right thing at the right time, and enough restraint to know when to hold back and bide our time—we want a mean (or middle amount) of courage. Similarly, we don’t want to be lazy, but we also don’t want to be ruthlessly ambitious—we want proper ambition. We don’t want to be overly shy, but we also don’t want to be shameless and incorrigible—we want to be modest. Hopefully you can get the basic idea here.
What’s the easy test that an Aristotelian or virtue theorist can use to find out what to do in a given situation? How do we know what is right or promotes flourishing? One answer is that we can apply the “WWxD Test.” What’s that, you ask? It’s taken from the phrase “What Would Jesus Do?” found on bracelets in the ’90s, but instead of just Jesus, we put the variable x because the moral exemplar (the moral example we follow as a role model) might be someone different for different people. If you’re Muslim, it’s “what would Muhammad do?” If you’re Buddhist, it’s “what would Buddha do?” You simply replace the x with whomever you think is a moral exemplar—a virtuous person. Put that virtuous role model in your place and ask what they would do in the same situation. What sorts of things might they focus on? What might convince them to do one thing rather than another? What would they think are good reasons to act in one way or another?
Another virtue ethics test or heuristic is called the “Disclosure Test.” Just imagine, before you do something, that everyone you know and love will find out that you did it. Would that be okay with you? For this test to work, as a disclaimer, you’ll need to ignore or set aside things that are private and embarrassing and focus on things that are potentially public actions—actions that might be right or wrong. If you wouldn’t want everyone to find out that you stole a bit of cash from work, then don’t do it. If you wouldn’t want everyone to find out that you’ve been harassing a coworker repeatedly, then don’t do it. If your parents would be ashamed to find out that you got in a fight at school, then just walk away from the confrontation.
Buddhist virtue ethics and Jewish virtue ethics and so on are all virtue traditions that simply put the emphasis on different virtues. Equanimity (being even-keeled and difficult to disturb from a state of inner peace) is a virtue for a Buddhist, but not for Aristotle, who focused on more social and political virtues. Study and obedience are virtues in the Jewish tradition, but a Buddhist monk isn’t so much concerned with study as with training. It’s a bit more complicated than this in that there are more subtle differences between these groups of virtue theorists, but they’re all close cousins.
Kantian Ethics or more broadly “Deontological Ethics”
Deontological ethics focuses—instead of on virtues and moral character—on rights, duties, and general principles or rules. The focus here is on the act itself. Was it a murder? Then it was wrong. Same with lying, cheating, stealing, and torturing. No matter the intended consequences, no matter the character of the actor, no matter the actual consequences: stealing is always wrong.
Diving right in, we can posit two special rules that will guide our actions as a Kantian deontologist (someone who follows the ethical theory of Kant). First, we should never treat our circumstances as exceptional. If there’s a moral rule, then I must follow it no matter what. After all, I’m not special or worthy of special consideration just because I’m me. Instead, I must follow the rules like everyone else. Every time we act, in fact, we should pretend that we’re broadcasting a new moral law to everyone in the world: in these circumstances, do this sort of thing! If we buy conflict-free diamonds, we’re in some way affirming to the world that everyone should try to buy conflict-free diamonds. If we drive a gas-guzzling SUV simply because we like the look and feel of it and not out of any practical necessity, then we’re in some sense telling the world that it’s okay for everyone to do it. Remember: you aren’t exceptional, the rules apply to you in the same way they apply to anyone else.
Second, you’re supposed to treat everyone only as ends in themselves and never only as means to an end. In short: don’t use people. Don’t manipulate others into doing what you want them to do even though they wouldn’t—if they stopped and thought about it—want to spend that time doing that. Don’t lie to get other people to believe something which motivates them to help you with your projects when they themselves have their own projects to attend to. Again, you aren’t special, so don’t disrespect the values, goals, and projects of others in service of your own values, goals, and projects. For example, don’t use manipulative sales techniques to try to get someone to spend more money than they need to on a product just so you can pocket the profits.
These two rules (along with a third we won’t discuss) are called Kant’s “Categorical Imperative.” Never mind for now why it has such an odd name; the idea is that we have one basic duty required of us as rational actors—as beings who act because of reasons they have for acting and not only because of urges and appetites. That duty is to treat other rational actors with the same respect with which you want them to treat you. From that basic duty come lots of individual duties—like don’t lie, cheat, and steal. These “perfect duties” form the boundary lines of life: the “do not cross” lines. We can do as we like as long as we aren’t lying, cheating, stealing, torturing, breaking promises, etc., etc. Duty doesn’t require us to buy strawberry as opposed to cookie dough ice cream, but it does require that we buy rather than steal that ice cream.
The easy way to think about it is the following. The categorical imperative breaks down into two “tests” or “heuristics”: every action must be universalizable and reversible. When I buy cookie dough ice cream, I’m saying it’d be okay if anyone in similar circumstances bought cookie dough ice cream. I’m not special; I’m doing something that anyone could morally do. So my action is universalizable. I can make a universal rule out of it: “go ahead, buy cookie dough ice cream.” If I lie to my teacher to get out of taking a test, though, I can’t be willing to universalize the rule. To say it’s fine if anyone lies to their teachers to get out of taking tests is to say that education is a mere game and anyone can bend the rules however they like. It’d undercut the very reasons we get an education in the first place (aside from the fact that our parents want us to go). More evocatively, to cheat on a test is to undermine the very practice of taking tests: if everyone cheated, tests would be useless and there would be no tests to take in the first place, since teachers would stop giving them. Similarly, if everyone lied all the time, then no one would ever believe anything anyone said, and therefore we couldn’t really lie effectively, since no one would believe us anyway.
Okay, so some actions are universalizable and others are not—we can’t consistently extend the right to do the same sort of action to everyone else. Similarly, some actions are reversible and others aren’t (just like some basketball jerseys are reversible and others are not, am I right?). If I hit you in the face for no reason, that is not a reversible action, since I wouldn’t want you doing the same to me. If I help you get up after you’ve fallen off your skateboard in the park, then that action is clearly reversible because I would definitely want someone to help me up if I had fallen.
Kant went a step further than this. You not only have to do the actions demanded by duty (by the Categorical Imperative) but you also have to do them because they are demanded by the categorical imperative. You should act because acting in that way is your duty and not because it will really impress that cute guy or gal across the room or because it will bring you esteem or fame or financial backing. Easy for Kant to say, but it’s a pretty difficult standard to live up to.
Utilitarianism or more broadly “Consequentialism”
Utilitarianism is a particular form of consequentialism—the idea that the morally important aspect of an action is its consequences. For a utilitarian consequentialist in particular, the type of consequence we’re interested in is maximizing relative well-being or happiness (utility). We want each action to bring about the highest possible amount of well-being relative to alternative actions we could have performed. I could travel across the world to deliver food to starving children, or I could simply wire money to the people already there who can get the food to them. Choosing between these two courses of action, I should weigh which one creates the most happiness or well-being.
It’s a bit more complicated than that, though, since we also often prevent bad things from happening rather than only ever producing more or less happiness. We also sometimes produce unhappiness in the interest of preventing even greater unhappiness, prevent good things from happening in the interest of preventing bad things from happening, etc., etc. It gets quite complex.
Either way, though, the idea is quite simple: an action is good if and only if it produces the most possible utility and/or prevents the most possible disutility relative to the alternative actions available to us at the time. If you want to know if an action is good, look at the alternatives and weigh the likely outcomes of each.
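That weighing of alternatives can be sketched as a toy calculation. Everything here is invented for illustration—the action names echo the food-delivery example above, but the probabilities and utility numbers are made up, and real utilitarian deliberation is nowhere near this tidy:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one possible action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical options; all numbers are invented for the sake of the sketch.
actions = {
    "travel and deliver food yourself": [(0.9, 50), (0.1, -10)],
    "wire money to aid workers already there": [(0.95, 80), (0.05, 0)],
}

# The act-utilitarian recipe: compare the alternatives and pick the one
# whose likely outcomes carry the highest expected utility.
best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best)  # → "wire money to aid workers already there"
```

The point of the sketch is only the shape of the reasoning: list the available alternatives, estimate the outcomes of each, and choose the action that maximizes utility relative to the others.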
To complicate it even further, we should ask ourselves a question: should we act so as to every single time produce the most utility possible, or should we instead act according to a set of principles or rules that will—if we follow them consistently—produce the most utility? This is the difference between Act Utilitarianism—which says that you should do the calculation every time you act—and Rule Utilitarianism—which says that you should follow general rules which serve as signposts to guide you toward producing more utility than harm.
One more complication: are all pleasures or states of happiness equal? People on drugs are often quite happy, but the pleasures of hiking in a beautiful park or grasping the unity and holistic beauty of a long solo in a jazz song surely are higher pleasures, right? Shouldn’t we put more weight on the pleasure of attaining understanding of complex truths than on that of eating a chili dog? John Stuart Mill thought we should: higher pleasures, he argued, are “worth more” than lower pleasures, and so our actions are better off for producing more higher pleasures than lower pleasures. Maybe, though, this is classist and Eurocentric nonsense. Maybe Beethoven isn’t any better than Bachata even though it’s more complex.