My latest (academic) book, Moral Uncertainty, co-authored with @Tobyordoxford and Krister Bykvist, is out today! It’s open access - download it at https://www.moraluncertainty.com/ or order a hard copy via Amazon or OUP. :)
Here’s an informal history and summary, in tweet form. (1/20)
I first had the core idea of this book way back in early 2009, in an argument in a broom cupboard (yep) with another philosophy grad student. The argument was about vegetarianism (me pro, him against). The case I made was this: (2/20)
Even if you think it’s *unlikely* that animals have moral status, surely, given the existence of seemingly compelling arguments to the contrary, you shouldn’t be *extremely confident* in your view. But the stakes are highly asymmetric. (3/20)
If you eat veggie & eating meat is permissible, well, you’ve only lost out on a bit of pleasure. But if you eat meat & eating meat is impermissible, you’ve done something very wrong. In light of uncertainty about animal ethics, the morally safe option is to eat vegetarian. (4/20)
This is just applying expected-utility reasoning to take into account not just uncertainty about what will *happen*, but also uncertainty about what is of *value* or about what we fundamentally ought to do. (5/20)
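To make that reasoning concrete, here's a toy expected-moral-value calculation in Python. The credence and the "moral value" numbers are purely hypothetical, chosen for illustration; they're not figures from the book:

```python
# A minimal sketch of expected-moral-value reasoning under moral uncertainty.
# All numbers below are hypothetical illustrations.

credence_meat_is_wrong = 0.2  # even a sceptic shouldn't be near-certain it's fine

# Hypothetical moral value of each option under each view:
# if eating meat is permissible, the vegetarian just loses a bit of pleasure;
# if it's impermissible, the meat-eater does something very wrong.
values = {
    "eat meat":   {"permissible": 0.0,  "impermissible": -100.0},
    "eat veggie": {"permissible": -1.0, "impermissible": 0.0},
}

def expected_moral_value(option):
    return (
        (1 - credence_meat_is_wrong) * values[option]["permissible"]
        + credence_meat_is_wrong * values[option]["impermissible"]
    )

for option in values:
    print(option, expected_moral_value(option))
# eat meat   -> -20.0
# eat veggie -> -0.8, so vegetarianism wins despite the low credence
```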
EU theory has a good theoretical basis. It’s also common sense: If there’s some chance of a child playing round a blind corner, you take care when driving round it. Why? Because the stakes are asymmetric: driving fast is a small gain to you; killing a child is very wrong. (6/20)
But, to my knowledge at the time, expected utility theory hadn’t been extended to moral uncertainty. (And indeed I learned that the modern literature exploring the idea was tiny - one book and a handful of articles.) (7/20)
This was particularly striking because the range of applications is so wide. In the economics of climate change, a huge question is how to trade off harms to future generations vs harms to the present. But we could just use EU theory to take that uncertainty into account. (8/20)
Or, consider Singer’s drowning child argument. If he’s wrong and you donate, you’ve still done a good thing. If he’s right, and you don’t donate, it’s like you let a child drown in front of you. Again, asymmetric stakes: under moral uncertainty, the safe option is to give. (9/20)
Anyway, I asked my supervisor, John Broome, about it and he said I should speak to some guy called Toby Ord. We met for coffee in a graveyard; as well as moral uncertainty, he told me about an idea for a project called Giving What We Can…. but that’s another story. (10/20)
I worked on it as my Master’s thesis and then PhD (DPhil), supervised by Krister Bykvist. The idea to co-author a book on the topic was the product of many cocktails in Stockholm in the spring of 2011. This has been a long time coming! (11/20)
We’ve tried to fulfill two aims with this book. The first is to provide an introduction to the topic. If you want to learn about moral uncertainty (as a philosopher, or economist, or psychologist), this is the first port of call. (12/20)
For that reason, we’ve tried to be broad. We’ve covered the meta-ethics, practical ethics and decision theory of moral uncertainty. (Though we still haven’t been able to cover everything.) (13/20)
The second aim is to provide our own novel account. The core argument we give is in favour of an ‘information-sensitive’ account of moral uncertainty. Often, moral views don’t give you quantitative strengths of wrongness that you can compare across different views. (14/20)
For example: Is killing one person to save five more wrong, on Kant’s ethics, than failing to kill one person to save five is on utilitarianism? How could we tell? (15/20)
When we can’t compare strengths of wrongness across different moral views, it’s impossible to apply expected utility theory. What we do is borrow insights from voting theory to show how, even in those circumstances, you can still make better or worse decisions. (16/20)
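To give a flavour of the voting-theoretic idea, here's a toy credence-weighted Borda count in Python. The theories, options, rankings and credences are all hypothetical, and this is only a sketch of one voting-inspired rule, not the book's official account:

```python
# A minimal sketch of a voting-theoretic approach: a credence-weighted Borda
# count over options that moral theories merely *rank* rather than score.
# Theories, rankings, and credences here are hypothetical illustrations.

credences = {"theory_A": 0.6, "theory_B": 0.4}

# Each theory ranks the options from best to worst (ordinal only).
rankings = {
    "theory_A": ["give", "donate later", "keep"],
    "theory_B": ["keep", "give", "donate later"],
}

def borda_scores(rankings, credences):
    scores = {}
    for theory, ranking in rankings.items():
        n = len(ranking)
        for position, option in enumerate(ranking):
            # Borda: best option gets n-1 points, next n-2, ..., worst 0,
            # weighted by the credence assigned to that theory.
            scores[option] = scores.get(option, 0.0) + credences[theory] * (n - 1 - position)
    return scores

print(borda_scores(rankings, credences))
# {'give': 1.6, 'donate later': 0.6, 'keep': 0.8} -> 'give' wins overall,
# even though no intertheoretic comparison of wrongness was needed
```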
That’s our core argument, but we cover a lot else, too. We give novel arguments for thinking that there’s a serious subject matter here: for thinking that there are facts about what you ought to do under moral uncertainty. (17/20)
We give an account of how to make comparisons of value (or degrees of wrongness) across different moral views. We address some other extant problems facing decision-making under moral uncertainty, like the problem of ‘infectious incomparability’ and of fanaticism. (18/20)
We look at the practical implications of taking moral uncertainty into account. We argue that moral uncertainty poses a problem for non-cognitivism. And, finally, we look at the value of gaining moral information, including the value of doing moral philosophy itself. (19/20)
Ok, that’s my tweetstorm done… for now. I’ll take the time to tweet some more moral uncertainty related thoughts in the coming weeks. (20/20)