Tuesday, December 28, 2010

Current Thinking on Value

How many distinct meanings does the word 'value' have? Right now, I think the answer is four, but I have the feeling this list is both incomplete and not as compact as it could be. I'd like to make sense of all uses of 'value' in as few basic meanings as possible. Here's what I have now:

Value as Attitude
Mary values her alone time.
Pavel values books printed in the year he was born.
These sentences focus on a particular attitude Mary or Pavel has toward something. If we can shift the word order and still talk about this particular type of value, we get:
Mary's alone time is valued by Mary.
  or maybe
Mary's alone time has value to Mary.
  or
Mary's alone time is valuable to Mary.
  or even
Mary's alone time has value.
In the last example, "to Mary" is not explicitly mentioned but may be discernible by context. Now what's interesting here is that we can make statements which superficially sound like things have the property of "being valuable" in themselves, but really these things "have value" purely because someone has a particular attitude toward them.

This is the sort of value people have in mind when they say "value requires a valuer" or that there can't be "value without a valuer."

Value, the Facilitating Relationship
Sobriety has value for avoiding automobile accidents.
Grass has value for slowing erosion.
No attitude necessary. This sort of value could hold true in a world without any sentient beings whatsoever.

I'm tempted to count value-as-attitude as a subclass of value-as-facilitating-relationship, with personal desire fulfillment (or similar) as the thing being facilitated, but consider this asymmetry:
Carlos values the painting Starry Night by Van Gogh.
The painting Starry Night has value for exciting Carlos' visual taste.

The painting Ecstasy by Parrish has value for exciting Carlos' visual taste.
Carlos has never seen Ecstasy and so he does not value it.
We might say that Ecstasy has value for Carlos but not value to Carlos.

Non-relational Value

Does a painting — or anything — have value "in itself" without any explicit or implicit reference to a personal attitude, facilitated outcome, etc? This notion is often called "intrinsic value" (though that same term is also confusingly used to mean end-value as opposed to means-value).

This is the sort of value that comes up, for example, in environmentalist arguments that a lush forest which no conscious being ever knows about would still have value just as itself.

I strongly suspect non-relational value is not only fictional, but a confused idea.

Quantitative Value

Math, basically. I don't know how to fit "X has greater value than Y when X is 5 and Y is negative" into any of the above categories. Maybe it came out of quantifying the strength of value-as-attitude in the form of monetary prices?

At any rate, I'm still ruminating on these categories and will likely revise my view in coming months.

Sunday, December 19, 2010

Contrast Classes

If we don't take anything for granted, it's possible to doubt just about everything. Descartes tried to discard all assumptions and found the only thing he couldn't doubt was that he, the doubter, must exist.1 This fact would hold even if all of his memories and sense experience were somehow being faked (as happens in some science fiction).

Normally, we don't question such basic assumptions about the world. If I ask you whether you drive a white car, I don't expect you to answer, "I'm not sure. I might drive a white car or I might be accessing an artificial memory of a white car." Fake memories were outside the scope of the question.

Walter Sinnott-Armstrong suggests that all beliefs are justified (or not) relative to some set of alternative beliefs.2 For example, the belief that you are wearing a cotton shirt may be justified out of the following contrast class:
{ cotton shirt, silk shirt, no shirt }
...but not this wider contrast class:
{ cotton shirt, silk shirt, perfectly faked sensory experience of wearing a cotton shirt }
...and maybe not this contrast class:
{ cotton shirt, imitation cotton shirt }
Similarly, you are easily justified in believing you hold a $5 bill in your hand as opposed to a $1 bill or a $20 bill, but you probably aren't justified in believing you hold a genuine $5 bill as opposed to a very expert counterfeit of a $5 bill. According to Sinnott-Armstrong:
"Someone, S, is justified in believing a proposition, P, out of a contrast class, C, when and only when S is able to rule out all other members of C but is not able to rule out P."3
See how that applies to the paper money example? You would be justified in believing you hold a $5 bill if — by simple inspection — you rule out the possibility that it's a $1 bill or a $20 bill, leaving only the possibility it is a $5 bill. However, you wouldn't be justified if counterfeits are also under consideration and you lack the skill to rule that possibility out.
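Since the definition is just a quantified condition over the members of the class, it is mechanical enough to write down as a little procedure. Here is a minimal Python sketch of it; the can_rule_out predicate and the bill labels are my own illustrative stand-ins for whatever inspection skills a believer actually has, not anything from Sinnott-Armstrong:

# Sketch of Sinnott-Armstrong's definition: S is justified in believing P
# out of contrast class C when and only when S can rule out every other
# member of C but cannot rule out P itself.
def justified(p, contrast_class, can_rule_out):
    others_ruled_out = all(can_rule_out(q) for q in contrast_class if q != p)
    return others_ruled_out and not can_rule_out(p)

# The paper money example: simple inspection rules out $1 and $20 bills,
# but not an expert counterfeit.
by_inspection = lambda q: q in {"$1 bill", "$20 bill"}

print(justified("$5 bill", {"$1 bill", "$5 bill", "$20 bill"}, by_inspection))
# True: justified out of the narrow class

print(justified("$5 bill", {"$1 bill", "$5 bill", "expert counterfeit $5"}, by_inspection))
# False: the counterfeit can't be ruled out by simple inspection

The same function captures the Edward and Jaquelin disagreement below: feed it Edward's class and you get one verdict, feed it Jaquelin's wider class and you get another.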

Note: Other theories of what makes one belief justified (or more justified) compared to another can still benefit from limiting the scope to a particular contrast class without sticking to Sinnott-Armstrong's definition of "ruling out" everything else.

How This Helps

The notion of contrast classes captures what we intuitively mean by saying a person is 'justified' in a belief, even when we're not counting scenarios of extreme deception. We can acknowledge that Descartes' thought experiment has merit when we're considering things at that level, but explain why it isn't usually relevant.

Some disagreements about whether a belief is justified can be explained by differences in contrast classes. If Claude boards a train which has two stops left (A and B), his friend Edward may be justified in believing Claude will leave the train at Stop B after noticing Claude remained on board at Stop A. However, Claude's other friend, Jaquelin, may disagree with Edward because she knows Claude is prone to jumping off trains between stops. Edward had the class { Stop A, Stop B } in mind while Jaquelin had the class { Stop A, Stop B, between stops } in mind. It's important to note that Edward's belief was, in fact, justified relative to the first contrast class...just not the second.

Appropriately Weak

This understanding of the meaning of 'justified' is negative. Alternatives are ruled out, but the last remaining possibility — the proposition that is justified relative to the class — is not necessarily ruled in as correct. Something outside the class could be correct instead.

This shouldn't come as a surprise. Philosophers often talk of "justified, true beliefs" which would be redundant if all justified beliefs were true. I think what we want from 'justified' is not a guide to truth so much as an account of due diligence. Contrast classes are a way to specify which beliefs were discarded (or found to be less justified) on the way to a justified belief.

Justified Actions

Finally, contrast classes make sense of calling a particular action 'justified' when the other available options are worse, even if the same action would not be justified if a better alternative were available.

1. Meditations II.1-3
2. Sinnott-Armstrong, W. (2006). Moral Skepticisms. New York: Oxford University Press, p. 84.
3. Ibid. p. 86.

Wednesday, December 15, 2010

On "Oughts and Ends"

In his paper "Oughts and Ends,"1 Stephen Finlay explains how normative ought-statements can be broken down into non-normative components. In other words: how we can understand statements like "You ought to X" or "You ought not X" without necessarily basing them on a prior 'ought.'

Background

In the eighteenth century, David Hume pointed out that people making moral arguments often jump from is-statements to ought-statements without justifying the sudden change.2 A modern day example might be an observation that urban expansion is likely to drive a particular species to extinction, followed by "therefore we ought to create a nature reserve." But there's a missing step! To make the logic work, it seems something like "We ought to avoid causing extinctions" is needed.

The is-ought problem comes from worrying that ought-statements might always require a prior ought-statement, or that some ought-statements are mysterious brute facts.

Demystifying 'Ought'

Finlay's approach is to defend a plausible interpretation of ought-statements which has the nice side effect of dissolving the is-ought problem (to my satisfaction, anyway).

Must, Should, Could, Shouldn't, Can't

Let's talk about these five other words first! These are all called modal auxiliary verbs because they change the way (or mode) in which the main verb is meant to be understood. For example, they can add information about probability:
"All mortals must die."
"Jones is a punctual guy, so he should arrive any time now."
"The new candidate could win this election."
"According to the weather report, it shouldn't rain tomorrow."
"Dogs can't recite epic poetry."
Or they might add a normative tone to the main verb:
"Drivers must stop at red lights."
"Mary should study for her math final."
"You could take the subway or hire a cab."
"Defendants shouldn't represent themselves in court."
"Congressional candidates can't make jokes like that in public!"
Notice how the first set of sentences passively reports on the probability of things, while the second set has that distinctively action-guiding feel of normativity. Isn't it a little curious that a whole range of modal verbs just happens to have both probabilistic and normative forms? Or, wait, maybe this isn't a coincidence at all!

Watch what happens when we explicitly mention goals (or ends) for the second set:
"[In order that they avoid violating the law], drivers must stop at red lights."
"[In order that she passes the test], Mary should study for her math final."
"[In order that you make it to your appointment downtown], you could take the subway or hire a cab."
"[In order that they are adequately represented], defendants shouldn't represent themselves in court."
"[In order that they be elected], Congressional candidates can't make jokes like that in public!"
Can you see how the modal verbs are tied to probability again? Drivers who don't stop at red lights certainly violate the law. Mary has the best chance of passing her math final if she studies for it. The subway and a cab are both ways for you to most probably make it to your appointment downtown. It's unlikely defendants will be adequately represented if they represent themselves. And there's no chance Congressional candidates will be elected if they make those kinds of jokes in public.

So maybe these five modal verbs always have to do with probability, and optionally relate to some goal. When they do relate to a goal (or end), they gain normative tone! This is Finlay's end-relational theory. It accomplishes, as he puts it:
"a straightforward analysis of instrumental normative language, unifying the language of ordinary modality and normativity, and providing a univocal semantics for two isomorphic sets of terms."
Back to 'Ought'

The previous section was a bit of a trick. It turns out we've already been discussing 'ought'! It works about the same as 'should.'
"Jones is a punctual guy, so he ought to arrive any time now."
"Mary ought to study for her math final."
"[In order that she passes the test], Mary ought to study for her math final."
The first sentence uses 'ought' in a probabilistic, non-normative way. The second uses a normative 'ought.' And the third sentence reveals how the normative 'ought' was formed by relating probability to an end.
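Finlay is offering a semantics, not an algorithm, but the probabilistic reading is concrete enough to sketch. In this toy Python example, the available actions and the probabilities are invented purely for illustration:

# Toy sketch of the end-relational reading: relative to an end E,
# "you ought to do A" roughly tracks which available action makes E
# most probable.
def ought(end, options):
    """Return the option that maximizes the probability of the end."""
    return max(options, key=lambda action: options[action][end])

mary_options = {
    "study":    {"passes_test": 0.85},
    "watch_tv": {"passes_test": 0.30},
}

print(ought("passes_test", mary_options))  # 'study'
# i.e., [in order that she passes the test], Mary ought to study

Swap in a different end and the very same machinery can return a different recommended action, which is exactly the end-relativity discussed next.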

The Many Flavors of 'Ought'

If 'ought' is end-relative, then the full meaning of 'ought' varies from instance to instance, as far as ends may vary. A critic may claim this is a terrible disadvantage compared to another theory which assigns 'ought' a single, full meaning all the time. "However," Finlay says,
"although the end-relational theory recognizes a multiplicity of ways 'ought' is relativized, it also gives a universal semantics for 'ought' itself. 'Ought,' on this view, is no more semantically ambiguous than attributives like 'real', comparatives like 'big', or indexicals like 'here', which despite their complex interaction with context are not difficult to interpret."
Some...Maybe All?

If you're ready to acknowledge that some 'ought's gain their normativity in the manner described by the end-relational theory, progress has been made in demystifying normative 'ought'! Finlay takes things one step further by suggesting normative 'ought' always presupposes ends. He claims the end-relational theory can account for the empirical data and shows how some likely objections fail to show otherwise.

A critic might, for example, try to categorize the end-relational theory as an instrumental theory of 'ought' in the sense that "if you want to do X, you ought to do Y." This would be an inadequate account because we very often use 'ought' in a way that goes against a particular agent's desires. But the end-relational formula of "in order that X, you ought to do Y" avoids this limitation by not binding the meaning of 'ought' to a particular agent's desires.

The end-relational formula may sound too weak to provide practical guidance. If you tell me that I ought to do Y, but I realize the end it would serve is X — and I don't care about X — then I could shrug and go about my business. But it may be the case that X is especially important to me, so you could guide my behavior through an explicitly relativized 'ought.' From the paper:
"'In order that I don‘t kill you, you must come with me' may be end-relational, but this 'must' is not lightly ignored."
Alternatively, you may influence my attitudes or behavior by taking advantage of (exploiting) the elliptical nature of ought-statements. If you forcefully tell me what I 'ought' to do without making any end apparent, I might go along with you! In short, Finlay believes it is "plausible that categorical uses of 'ought' express demands and attitudes just as expressivists claim." 

Hume Revisited

(Going beyond the scope of the paper here...)

As I understand Hume, he was pointing out that something is missing between (I) and (III) in cases like this:
I. Urban expansion is likely to drive a particular species to extinction.
II. ???
III. Therefore, we ought to create a nature reserve.
And as I understand the is-ought problem, the worry is that (II) must either contain an infinite regress of ought-statements or a regress far enough back to hit a brute fact ought-statement. But what about something like this instead:
I. Urban expansion is likely to drive a particular species to extinction.
III. Therefore, [in order that biodiversity is maintained] we ought to create a nature reserve.
There is no separate (II). Instead, we understand the 'ought' in (III) to presuppose an end which is most likely to be brought about by creating a nature reserve. It's a qualified 'ought' brought to life from probability and a goal. The force this goal has on us relies — as Hume would no doubt agree — not ultimately on reason but on desire.


1. http://www-rcf.usc.edu/%7Efinlay/OughtsandEnds.pdf (Preprint PDF)
2. A Treatise of Human Nature III.i.i (last paragraph)

Tuesday, December 7, 2010

What Is Morality, Anyway? (Pt. 4)

...Part 1
...Part 2
...Part 3

If I'm right that moral language refers to a conventional but incoherent set of goals, why bother with it? Instead of arguing about whether a war is wrong or right, we could talk about how the war promotes or opposes specific goals, e.g., "This war greatly increases human suffering" or "This war addresses deep inequality." Discarding moral language would clear up much confusion.

A Federation of Goals

Then again, clear communication might not be the main point of using moral language. Suppose I only care about reducing suffering, not inequality. Maybe I buy slaves and treat them well enough that they're happier as my slaves than they would be on their own. Meanwhile, you care about equality issues beyond any consideration about suffering. If you point out that keeping slaves is bad for human equality, my response would be: "So what?"

Now, suppose you start talking about both suffering and inequality in unified terms. You also use shared terms to praise people who relieve suffering or inequality. Through the power of association, you may start to influence my view of slavery. This is especially effective if lots of people use the unified terminology to associate the goal I didn't care about with the goal I did care about.

I'm suggesting moral language gets its punch from artificially combining the psychological importance of several goals under one way of speaking. Think of it as a union or federation of goals.

The Concern

There's a persistent worry that understanding the nature of ethics will undermine the practice of ethics. While the view I'm expressing here does challenge the existence of any uniquely moral facts, I think it's the goals behind morality we're really committed to in the first place.

"Genocide is wrong" isn't some brute fact; we call genocide wrong because it entails so much harm and injustice...and these facts don't change if we take away the word "wrong."

When people say "Good folks donate to the needy" what are they doing?

Usually either prescribing charity directly or promoting an implied goal and recommending charity as a way to advance that goal. The Expressivists and Prescriptivists are largely right about what's going on in the "doing" part of moral discourse. I just think truth claims play a role too. For example, since moral goals are fairly conventional, a disinterested person (e.g. an observant psychopath) could be stating the fact that charity advances conventional moral goals. 

How This Helps

I hope this view can shed light on what's really going on in moral disputes. If we can sort out which goals are at stake when one person says "good!" and the other says "bad!", it may turn out they're debating facts relative to the same goal (e.g. whether a social program really is better for a particular goal) or expressing a commitment to different goals (e.g. the social program is better for a short-term goal but not for a long-term goal). Either way, the debate can be put in new and hopefully more fruitful terms.

In future posts, I plan to analyze specific moral disputes from this perspective. I'll also be looking at how goal-relative moral naturalism might handle problems in the philosophical literature.

Wednesday, December 1, 2010

What Is Morality, Anyway? (Pt. 3)

...Part 1
...Part 2

The first post in this series began:
When people say "Genocide is wrong" or "Good folks donate to the needy," what are they claiming and what are they doing?
I asked this question to introduce metaethics, but even asking a question which focuses on language acts shows bias toward the answer I already had in mind. Instead, I could have asked a question focused on the nature of goodness, the force of obligation, the process of gaining moral knowledge, etc. These are all valid approaches to metaethics, and all of them quietly set up an advantage for certain kinds of answers. I should also mention that I didn't come up with any of the following ideas. In future posts, I do plan to discuss individual papers which struck me as on-target and shaped my thinking. Now, with disclaimers out of the way, it's high time I answer my own questions.

What is morality?

Morality is the social practice of using a particular set of linguistic expressions to make goal-relative truth claims and influence others to put their attitudes and behavior in line with those goals.

When people say "Genocide is wrong" what are they claiming?

They are claiming it is the case that genocide hinders moral goals.

Is this a true claim? Without some goal specification, it can't be evaluated as true or false. If I phone you to say I'm moving at 60 mph, then — strictly speaking — you can't know what I'm claiming without first knowing that I mean 60 mph relative to the highway I'm driving on (since velocity is always relative). Of course we usually quickly (and correctly) gather from context that 60 mph is speed relative to the road.

Same for moral claims. Strictly speaking, "Genocide is wrong" can't be evaluated without specifying the goal. This might sound impractical and stilted. What would you think if you asked someone, "Is it wrong to kill and eat babies?" and she replied, "Hold on! I can't answer that yet. Can you specify a moral goal first?" You'd probably feel less comfortable hiring her as a babysitter! But it would also sound impractical and stilted to constantly ask people "velocity relative to what?" when they mention their speed. Most of the time, we have a good enough idea from context and we can answer accordingly. We know a person who claims genocide is wrong probably means something along these lines:
  • Genocide hurts people without a justifying reason.
  • Genocide treats one group unfairly for the convenience of another.
  • Genocide involves disgusting actions.
  • Genocide is just plain wrong, or involves things which are just plain wrong.
  • Any combination of the above.
These all imply corresponding moral goals like harm avoidance, fairness, disgust avoidance, and doing moral good. Most people would probably say genocide conflicts with all of these goals, which makes its moral wrongness very clear.

When people say "Good folks donate to the needy" what are they claiming?

If we look behind the obvious social prodding, there's definitely a claim in there that donating to the needy promotes moral goals. Which goals? Ones like alleviating harm, reducing deep inequality, obeying God, or doing moral good. Again, many people would probably say donating to the needy promotes all of these goals, which makes its moral goodness very clear.

Incoherence

Moral dilemmas occur when an action will advance some moral goals at the expense of other moral goals. It's conceivable for a claim like "Genocide is wrong" to be true relative to the goal of social equality, but false relative to the goal of maximizing happiness. "Child sacrifice is wrong" might be true relative to the goal of preserving human life, but false relative to the goal of keeping oaths.1 "War is wrong" may be true for the goal of minimizing harm, but false for the goal of reducing deep inequality.

Terms like 'good,' 'bad,' 'right,' and 'wrong' are incoherent because they've been overloaded with implied goals which sometimes conflict with each other. This is why philosophers have had such a hard time coming up with a single, intuitively-satisfying procedure for evaluating moral claims.
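To make the goal-relativity vivid, here is a toy Python sketch; the goals and the numeric effects are invented, and nothing in the view hangs on them:

# Toy model of goal-relative moral claims. Each verdict is relative to a
# named goal; "wrong" here just means the action hinders that goal.
effects = {
    # action: {goal: net effect on the goal (+ promotes, - hinders)}
    "war": {"minimize_harm": -0.9, "reduce_inequality": +0.4},
}

def wrong_relative_to(action, goal):
    return effects[action][goal] < 0

print(wrong_relative_to("war", "minimize_harm"))      # True
print(wrong_relative_to("war", "reduce_inequality"))  # False
# The same sentence, "War is wrong," comes out true relative to one goal
# and false relative to the other: the incoherence in a nutshell.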

Error

One effective method for drawing out the moral goals someone has in mind is to challenge their claim by saying something like, "Genocide is not wrong!" and see how they respond. (Just be careful with this tactic if you ever plan to run for office.)

But what if the moral goal they have in mind really is just: promoting moral good? This would be the "brute moral facts" approach of non-naturalism, as discussed in part two of this series. My answer is simple: there's no such thing as just plain moral good for anyone to promote. It's a mistaken idea with nothing behind it, like the once-widespread idea of absolute velocity. This is an element of error theory in my view.2

Nothing Personal

Let me emphasize: these are all impersonal goals. I could correctly state that fighting a war is wrong relative to harm avoidance or right relative to equality, even if neither you nor I care about avoiding harm or improving equality in the world.

I'm not sure whether facts about an action promoting or hindering an impersonal goal count as a form of moral realism, but I am leaning toward "yes" since such facts would still hold in a world in which everyone's attitudes are completely different from our own. On the other hand, that world's moral language would most likely be hooked up to much different impersonal goals. So my goal-relative view of morality does seem to be perched on the fence between realism and anti-realism.

...Part 4


1. I have in mind the story of Jephthah in Judges 11.
2. See part two of this series for an explanation of "error theory."

Saturday, November 27, 2010

What Is Morality, Anyway? (Pt. 2)

...Part 1

Last time I discussed whether moral judgments include truth claims and, if so, how these could be claims strictly about the attitudes of individuals or groups. Now I'd like to explore ways judgments could make claims about more than attitudes.

What makes a moral judgment true? Answer #2: Brute moral facts.

What makes it true that '1 + 1 = 2'? It's not my own or anyone else's attitude that '1 + 1 = 2'. According to Theists, what makes it true that God exists? Nothing! Both are examples of brute facts. They're true in a "they just are" way, and any further attempt to explain how they're true isn't going to find some underlying, other kind of fact that makes them true. Perhaps basic moral truths work the same way and we grasp them as clearly as we understand that '1 + 1 = 2'. The view that true moral judgments are not based on any other kinds of facts — including facts about the natural world accessible to science — is called non-naturalism. It is also a very strong version of moral realism because moral facts would still exist even if there were no minds in existence at all.

What makes a moral judgment true? Answer #3: Underlying "natural" facts.

You may have already guessed this view is called moral naturalism. The idea is that there are underlying, other kinds of facts which make moral judgments true.1 Does this mean we could do away with moral language entirely and stick to making truth claims about these underlying facts (whatever they are)? Maybe not. For one thing, moral language serves a social function beyond the truth claim component. But even if we limit the question strictly to the truth claim, it's worth noting the debate about emergent properties in physics; it may be that even when we can describe a thing in terms of lower-level components, we lose something significant by doing so. The whole may be more than the sum of its parts...or at least it's useful to talk about the whole in daily conversation.

Naturalism can be another form of moral realism (depending on how moral realism is defined). The truth of moral judgments doesn't change just because attitudes change. Given a particular understanding of which non-moral facts underlie moral truth claims, it's often possible — at least in theory — for scientific investigation to improve our moral knowledge. However, the selection of which natural facts we humans have linked up to moral language may be a matter of convention, not something that was true about the world before we discovered it.

What makes a moral judgment true? Answer #4: No moral judgments are true.

This is not really a separate answer. An error theory is what results when a person accepts another answer about what makes moral judgments true, but also believes these truth conditions are never met. Examples of error theories:
  • Person A accepts divine subjectivism, but is an atheist.
  • Person B accepts cultural subjectivism, but considers culture to be an incoherent concept.
  • Person C accepts non-naturalism, but thinks a realm of moral facts independent of actual people's concerns is absurd.
The usual way out of error theory is to change to a metaethical view in which moral judgments have obtainable truth conditions. However, some people don't consider such views to represent genuine morality. This can result in someone denying moral language while affirming what substantially amounts to a view other folks do label "morality."

...Part 3
...Part 4


1. Technically, attitudes could fit in this category, but they are conventionally excluded. Other mental facts — such as the experience of pain — can count. Supernatural facts can also count. Who promised philosophical terms would make sense?

Thursday, November 25, 2010

What Is Morality, Anyway? (Pt. 1)

When people say "Genocide is wrong" or "Good folks donate to the needy," what are they claiming and what are they doing?

Metaethics is the branch of philosophy that steps back from the debate over the correct procedure for generating moral judgments1 and asks about the nature of moral judgments (and related concepts like goodness and obligation).

Note: I've read several summaries of metaethical positions and each arrangement differs significantly. This post represents my own attempt to get a handle on things. Criticism welcome.

Do moral judgments include truth claims?

In a previous post about propositions2, I explained how not everything we say is appropriately labeled 'True' or 'False.' We can command, recommend, express feelings, etc. without claiming that something is the case (or isn't the case). Expressivism and Prescriptivism are two views which deny the appropriateness of labeling moral judgments as 'True' or 'False.' Instead, they characterize moral judgments as expressions of emotion or, additionally, as personal demands that other people act a certain way (prescriptions).

An expressivist might interpret "Genocide is wrong" as "Genocide? Yuck!" Meanwhile, a prescriptivist might interpret "Good folks donate to the needy" as "Hey you, donate to the needy!" According to these non-cognitivist views of morality, moral judgments may look like truth claims but this is just for rhetorical effect.

Other views of morality still allow emotional expression and demands to play a part in moral language, but affirm that moral judgments also make claims which can be true or false. Philosophers who defend these cognitivist views of morality point out how some features of moral language are hard to explain if we deny any place for truth claims. I find these cognitivist arguments convincing. And according to the PhilPapers survey3 of philosophers, cognitivism (judgments include truth claims) is much more popular than non-cognitivism (truth claims not included).

What makes a moral judgment true? Answer #1: Attitudes.

It's possible to slightly tweak an expressivist interpretation of "Genocide is wrong" from "Genocide? Yuck!" to "It's the case that I react to genocide with a 'Yuck!'" The new formulation counts as a truth claim, but not a very interesting one. Another person could say "Genocide is not wrong" and not be disputing the first truth claim any more than when one person says "I like chocolate cake" and the other says "I don't like chocolate cake." So according to individual subjectivism, moral judgments are practically all true. Moral disputes may look like truth disputes, but actually they're more like disputes over which bands are good; we talk like we're disputing facts but most of us realize we're really just expressing our personal taste in music.

Cultural subjectivism would interpret "Genocide is wrong" as something like "It's the case that our culture reacts to genocide with a 'Yuck!'" This allows for false moral judgments within a culture. If a 21st-century American or European man says, "Slavery is wrong," he would be making a truth claim which turns out to be true. But if the attitudes of so-called "western culture" were to change back to the way attitudes used to be, the claim "Slavery is wrong" from anyone in the changed culture would be false.

Divine subjectivism would interpret "Genocide is wrong" as something like "It's the case that God reacts to genocide with a 'Yuck!'" This is a lot like individual subjectivism, except only the attitudes of one individual count for making a judgment true. This view has something important going for it: the truth of moral judgments is no longer grounded in human attitudes; cultures can be incorrect about the morality of genocide and slavery. However, if morality is grounded in divine attitudes, there is no way to say one set of divine attitudes would be better than another. This is a problem if divine attitudes can change, or if we want to compare the morality of two imaginable Gods, or if we want to say there's something about genocide and slavery that makes them wrong besides divine attitudes.

I find all of these attitude-grounded interpretations of moral truth claims dissatisfying. When I say "Genocide is wrong," I don't just mean that I, my culture, or God has negative attitudes about genocide. I'm claiming that genocide would fail a moral evaluation even if I, my culture, or God had positive attitudes about it. Philosophers have proposed a number of ways moral judgments might include truth claims which don't (at least directly) depend on attitudes.

...Part 2
...Part 3
...Part 4


1. http://wordsideasandthings.blogspot.com/2010/11/intuitions-and-algorithms.html
2. http://wordsideasandthings.blogspot.com/2010/11/lingo-propositions.html
3. http://philpapers.org/surveys/results.pl

Tuesday, November 23, 2010

Varieties of Justification

Beliefs are justified when they are held for a good reason. Well, then...what constitutes a good reason for holding a belief? In his book Moral Skepticisms, Walter Sinnott-Armstrong makes the following point1:

When someone asks us whether a belief is justified, we often find ourselves wanting to answer both "Yes" and "No," even when all other facts are settled. This ambivalence is a signal that we need to distinguish different ways in which a belief may be said to be justified. These distinctions are often overlooked, but the failure to draw them creates countless confusions in moral epistemology and in everyday life. Let's try to do better.

His distinctions are a series of dichotomies. I'm not sure this is the best way to divide up types of justification, but it's certainly better than using an ambiguous term when more precision would be helpful.

Instrumentally vs. Epistemically Justified

In philosophy lingo, "instrumental" has to do with whatever it takes to get something done, often to the exclusion of other concerns. Think about how tools or instruments help get things done, without regard for whether it's something that should be done from a broader perspective.

Suppose an Atheist is married to a Theist and this is a source of conflict and unhappiness. The Atheist wishes she could manage to believe in God because her family life would be much improved. One day, an oddly credible stranger offers her a pill which will chemically alter the way she thinks so that she will believe God exists. She takes the pill, which works as advertised. Her new belief would be instrumentally justified because it's held for a reason that's good for the instrumental goal of having a happier family life.

The problem with instrumentally good reasons is that they are totally independent of truth. Sinnott-Armstrong's own example was a drug to make a person believe there are aardvarks on Mars for the goal of winning ten million dollars. Most people use "justified" to mean good reasons that have something to do with improving a belief's chance to be true; they intend epistemic justification. Epistemology is the philosophical study of knowledge, which requires beliefs to be true...not just useful.

Permissively vs. Positively Justified

I'm reluctant to call permissive justification a form of justification at all. A permissively justified belief is one that is not held for a bad reason, but not necessarily for any good reason either. If I have no reason to believe an external world exists outside my mind and no reason to believe the opposite, I would be permissively justified in believing either side. Why not just say it's permissible — but not justified — to believe something so long as it's not believed for a bad reason?

Positively justified beliefs are those I indicated in the first sentence of this post: beliefs held for a good reason, that is, held for some reason which is not a bad reason.

Slightly vs. Adequately Justified

A belief is slightly justified when there is some good reason in favor of it being true, but not enough — given the context — to nullify good reasons in favor of something else being true instead.

Suppose a fingerprint is found at a crime scene. If it's matched to a man who isn't likely to have ever been to the scene otherwise, the fingerprint is a good reason to think he is the perpetrator. However, it might not be a good enough reason if there's also reason to think the perpetrator is very smart and the print was on something which could have been brought in to throw off the police. For a very serious crime like murder, the print may not adequately justify the belief the print-owner committed the crime compared to the belief he is being framed.

Slight and adequate justification could probably be split up into several finer-grained distinctions.

Personally vs. Impersonally Justified ...and Wholly Justified

The basic idea here is that a person can be personally justified if she does all the things we can reasonably expect a person to do in order to have good reasons for her beliefs. Unfortunately and despite our best efforts, we are always somewhat vulnerable to false or limited information.

Suppose a practical joker went around to every clock and electronic device in the house and set them all one hour fast, hoping to get a laugh when you leave for work early. You might be personally justified in believing it's time to go to work, but not impersonally justified. The third-person narrative of the facts shows a problem with your belief.

Sinnott-Armstrong gets into Gettier problems, which complicate matters. For this post I'll just say beliefs are wholly justified when they're both personally and impersonally justified.


1. Oxford University Press, 2006. Pg. 63.

Monday, November 22, 2010

Lingo: Propositions

Mark each sentence True or False:

__ "Where are my shoes?"
__ "Ouch!"
__ "Beware the Jabberwock, my son!"

Seems inappropriate to call any of these true or false, right? What about this:

__ "He is the tallest man in town."

Out of context, neither answer would fit. If we knew who "he" is and which town we're talking about, then we'd have an assertion which is either true or false (even if we don't know which), in other words: a proposition. Example propositions:

__ "There were more cows than horses in the United States on November 1, 2010."
__ "Gold is an element."
__ "Mental states are fully determined by brain states."

Think of propositions as declarations that something is the case (or is not the case). Or think of them as statements of fact as opposed to emotional expressions, advice, questions, commands, etc. There's a fair amount of controversy about what exactly counts as a proposition, but this is the general idea.

Sunday, November 21, 2010

Comparing Worlds

Imagine a parallel world which is exactly like ours, except Mars has an Earth-like atmosphere of nitrogen, oxygen, and argon. There's no more (or less) life on Mars than there is in our world, but it could much more easily support transplanted life from Earth.

Which is the better world?

My first inclination is to say, "The world in which Mars has an Earth-like atmosphere is the better world!" After all, the Earth keeps getting more crowded. We could benefit from a nearby planet with breathable air to colonize. But wait! The question wasn't: "Which world is better for humans?" If the two questions were identical, it wouldn't be possible to say one human-less world is better than any other human-less world.

Two other questions which aren't identical to the original:

"Which world is better for conscious beings?" (Including dogs, aliens, etc.)
"Which world is better for living beings?" (Including plants, bacteria, etc.)

Or variations of this sort:

"Which world is better for allowing complex structures?"
"Which world is better for maximizing happiness?"

Even when we can answer these more specific questions, the answers may conflict. A better world for humans might be a worse world for wolves, buffaloes, and whales. A better world for maximizing happiness might be a worse world for maximizing social equality. Can the original question be answered without changing it into something more specific? How could a world be simply better rather than better for some things or ideas?

I've become suspicious of this "simply better" idea. It might be a mistake which comes from so often hearing "good" and "better" without an explicit qualification. We usually communicate accurately anyway because the intended meaning is clear from context. But when the meaning isn't clear, I think it's most appropriate to ask "better how?" If that can't be answered, I would assume the other person is merely expressing a preference rather than making a claim.

So my answer to "Which is the better world?" is "Better how?"

Wednesday, November 17, 2010

Intuitions and Algorithms

Sentence A — My dog chased the ice cream truck.

Sentence B — My chased dog the ice cream truck.

Native English speakers immediately recognize Sentence A as valid and Sentence B as invalid. But if you ask them what exactly is wrong with Sentence B, their answers won't be so quick or unified; we can't necessarily explain how our intuitions give the results they do. Noam Chomsky referred to this intuitive ability as linguistic competence, then set out to discover explicit rules which generate the same answers. A more concrete way to think about this is:

What would it take to write a computer program that makes the same "valid" / "not valid" judgments as native English speakers?

Since computers don't operate on intuition, they have to be told how to process candidate sentences in a very explicit, step-by-step way until a solution is reached. Programmers call this process an algorithm. There are algorithms to sort numbers, compare dates (usually of the calendar sort), and simulate physics in video games. If anyone ever writes an algorithm which kicks out the same linguistic answers as intuition, we still might not know how the intuition works in our minds (there may be multiple ways to generate the same answers), but we would at least know what all the relevant factors are and how they interact with each other. Plus, non-native English speakers could ask a computer for any number of validity judgments without the computer getting annoyed. A computer could even be set up to generate millions of random but valid sentences.
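Just to make "explicit, step-by-step" concrete, here is a toy Python rule that happens to sort the two sentences above correctly. It is nowhere near a real grammar; the vocabulary and the single rule are invented for illustration only:

# A toy version of the "valid"/"not valid" judgment: accept a sentence
# only if a determiner-plus-noun subject appears before the verb.
DETERMINERS = {"my", "the"}
NOUNS = {"dog", "ice", "cream", "truck"}
VERBS = {"chased"}

def looks_valid(sentence):
    words = sentence.lower().rstrip(".").split()
    # Find the verb; everything before it must be a determiner-noun subject.
    try:
        verb_index = next(i for i, w in enumerate(words) if w in VERBS)
    except StopIteration:
        return False
    subject = words[:verb_index]
    return (len(subject) >= 2 and subject[0] in DETERMINERS
            and all(w in NOUNS for w in subject[1:]))

print(looks_valid("My dog chased the ice cream truck."))   # True
print(looks_valid("My chased dog the ice cream truck."))   # False

A genuine competence-matching program would need vastly more rules (or learned structure), but the point stands: every factor the intuition uses would have to be spelled out somewhere.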


There is another kind of intuitive judgment that can be hard to explain: moral judgments.

Situation A — Bill sees Charlene poison David's drink, but declines to tell David. David dies.

Situation B — Bill poisons David's drink. David dies.

Most people will immediately judge Bill's inaction in Situation A as crummy, but Bill's action in Situation B as significantly worse. Why? As with the language judgment at the beginning of this post, the answer to "Why?" won't come as quickly and it won't be as unified.

Imagine if we had a morality algorithm: a step-by-step procedure for making moral judgments that match up with our intuitions. We would know what all the morally relevant factors are and how they interact with each other. We could even program a computer to calculate moral judgments for any given situation.

Wait a minute! What about conflicts between moral intuitions? There are a few ways to handle that.
  • It could be the case that everyone's moral intuition works the same, but we disagree about morally relevant facts. We would just need to make sure we put the true non-moral facts into the computer. For example, we might need to determine the truth of some religious claims.
  • Once we see the morality algorithm works great for most of our deeply held intuitions and tuning it further doesn't seem to help, we might start to trust the algorithm over our own intuitions for the remaining cases. (Philosophers will recognize this state as reflective equilibrium.)
  • If our moral intuitions do sometimes operate in fundamentally opposed ways even when processing the same beliefs about non-moral facts, it might make sense to talk about different moralities. The algorithm could still accurately predict an individual's judgments if it were given the additional inputs of which moralities are under consideration and how they interact with each other.
As I will explain in a forthcoming post,1 I think the last option is the only realistic one. It would require something more like a set of algorithms depending on which kind[s] of morality an individual is using to draw conclusions.
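Here is what that set-of-algorithms picture might look like as a Python sketch. The moralities, weights, and numbers are all invented for illustration; the only point is that "which moralities are in play" becomes an explicit input:

# One scoring rule per morality; the judgment combines whichever
# moralities are under consideration, weighted by how much they count.
moralities = {
    "harm_based":     lambda s: -s["harm_caused"],
    "fairness_based": lambda s: -s["unfairness"],
}

def judge(situation, active, weights):
    """Combine verdicts from the active moralities into one score."""
    return sum(weights[name] * moralities[name](situation) for name in active)

poisoning = {"harm_caused": 10, "unfairness": 8}
score = judge(poisoning, active=["harm_based", "fairness_based"],
              weights={"harm_based": 1.0, "fairness_based": 0.5})
print(score)  # -14.0: strongly negative, i.e., judged wrong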

1. http://wordsideasandthings.blogspot.com/2010/11/what-is-morality-anyway-pt-1.html

Sunday, November 14, 2010

Consequence-Based Morality

What makes a particular action good or bad, morally speaking? Consider lying. Is lying "just wrong" with no further explanation for what makes it wrong? Is lying always wrong? If not, then what makes it sometimes permissible or even right?

One answer is that lying is usually wrong because it usually has bad consequences, but sometimes lying isn't wrong because any bad consequences are outweighed by good consequences. This makes sense of the intuition that it's wrong to lie on tax forms, but not wrong to lie about hiding Jews in the attic.

This view of moral judgments is called consequentialism. While many philosophers and non-philosophers are attracted to the basic idea of consequences determining right and wrong, the devil is in the details.

What counts as a good consequence (or bad consequence)?

If we use consequences to determine moral judgments, the consequences themselves must not ultimately call out for moral evaluation. Otherwise, we'd be stuck in a loop! There are two ways out of the loop:
  • The consequences which ultimately determine the morality of our actions must be so unquestionably morally good (or bad) that there's no need to justify them. Pleasure and pain have long been put forward as moral basics. From Plato's Protagoras: "Then you think that pain is an evil and pleasure is a good: and even pleasure you deem an evil, when it robs you of greater pleasures than it gives, or causes pains greater than the pleasure." In other words, the only time we do question the goodness of pleasure is when its consequence is less pleasure or more pain overall.
  • The determining consequences could be morally neutral in themselves. It strikes me as a little weird to call pain "morally bad" or pleasure "morally good." Sure, we all have strong motivation to seek pleasure and avoid pain for ourselves, but is this a moral motivation? I don't avoid pain for myself because pain is morally bad! It's possible to characterize morally good acts as those which bring about more pleasure than pain overall (and morally bad acts as predominantly causing pain) without putting a moral label on pleasure and pain themselves.
Philosophers have used other consequences in place of pleasure and pain: well-being and suffering, equality and inequality, beauty and ugliness, high information and low information (in a technical sense), desire satisfaction and desire thwarting, companionship and alienation, and others. These are things people have considered always good to increase (or decrease) all other things being equal. Which of these — if any — are appropriate determiners of moral judgments and how we're even supposed to decide between them are perennial hot topics.

Consequences for whom?

Suppose we do settle on pleasure versus pain as the consequences by which true moral judgments are determined. One particularly self-centered way of judging actions is whether they ultimately bring about mostly pleasure or mostly pain for one's self: ethical egoism. As flimsy as that sounds, it is enough to evaluate some desired actions as morally bad and some undesired actions as morally good. For example, I may want to drink whiskey at every opportunity, but if this would bring about more pain than pleasure for me in the long run, it's wrong for me to drink whiskey at every opportunity. Or maybe I find exercise completely unpleasant, but if regular exercise would bring me more pleasure than pain in the long run, it's right for me to exercise.

Morality that's purely self-regarding strikes a lot of people as a contradiction. The Golden Rule isn't, "Do unto others...if it helps you out." Most forms of consequence-based morality take into account consequences to other people. The trouble is figuring out how to do this without extremely counter-intuitive results. Jeremy Bentham's classical utilitarianism draws an analogy between the way individuals seek to increase their happiness over their pain and the way a community of individuals might increase its group happiness over its group pain. This sounds reasonable at first, but it has troublesome implications, like it being better for a few people to be in extreme pain if it means a small increase of happiness for enough other people. Modern utilitarians try to find new ways of counting up consequences that don't have such immoral-seeming results, or in some cases they'll question whether our gut reactions are justified.
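To see that problem in miniature, here is a toy Bentham-style tally in Python. The numbers are invented to exhibit the troublesome implication just mentioned:

# Sum each person's pleasure minus pain and prefer the higher total.
def group_utility(outcomes):
    return sum(pleasure - pain for pleasure, pain in outcomes)

# Option 1: three people suffer terribly; a hundred others gain a little.
option_1 = [(0, 50), (0, 50), (0, 50)] + [(2, 0)] * 100
# Option 2: nobody suffers, nobody gains.
option_2 = [(0, 0)] * 103

print(group_utility(option_1))  # 50
print(group_utility(option_2))  # 0
# Simple summing endorses Option 1, which is exactly the kind of result
# modern utilitarians try to engineer away.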

What we should and shouldn't expect.

Consequence-based morality has a hard time justifying natural rights, as opposed to legally granted rights. Jeremy Bentham famously called natural rights "nonsense upon stilts" (insult standards have fallen in the modern age!). This is because rights are only good so long as they have good consequences and we can always make up a special circumstance in which respecting a right has bad consequences.

Another basic concept in other views of morality which isn't handled as easily by consequentialism is giving people what they deserve. If a particular action will benefit a habitually bad person at the expense of a habitually good person — and the benefit is slightly greater than the expense — then it would be a good action, all else being equal.

It may normally lead to better consequences if we respect rights and give people what they deserve, but these can't be fundamental moral elements if consequences are the only things that ultimately determine right and wrong.