Meta-ethics and the Multiverse
How the existence of a multiverse might have meta-ethical implications
I had a very enjoyable discussion with Lance S. Bush over the weekend. It was my first stream, so there were some problems I need to improve on (poor audio quality and excessive muttering), but it prompted some thoughts.
My main philosophical preoccupation these days seems to be with issues pertaining to multiverses of various kinds, whereas one of Lance’s is meta-ethics. One of the issues that came up during the chat has inspired me to write this post about how the two interests might (this is a bit half-baked) intersect.
During the chat, Lance mentioned that he had at one time been committed to utilitarianism, and that this was no longer the case. He mentioned, for example, that he is not disposed to treat his daughter impartially as if she were just some arbitrary baby, like an act utilitarian might.
I’ve had a similar journey, but for different reasons. First off, we’re both moral anti-realists. We agree that there are no stance-independent facts about morality. But, like Lance, I did at one time consider myself a utilitarian. Despite being an anti-realist, I still have moral preferences, and sometimes it’s good to have a system by which to operationalise those preferences, as a guide to thinking through tricky moral decisions. Utilitarianism seemed like a pretty reasonable system.
To be honest, it still does, despite the issue Lance raises. A society where people have no special relationships, where everybody is completely impartial, looks like a pretty cold one. We are not adapted to live in such a society, and trying to live up to those ideals is probably a recipe for misery. We need close, special relationships in order to flourish, and we expect our loved ones to be on our side. So a rule utilitarian might give Lance a pass for being partial to his daughter. And I’m sure Lance has considered all this and has a good response, so that’s just a digression giving my own reaction to the question.
There are also issues I won’t go into concerning extreme cases where utilitarianism starts to look implausible (Utility Monsters, Repugnant Conclusions and so on). My attitude here is that, as an anti-realist, I can always choose to depart from utilitarianism if I want to. Utilitarianism is a useful framework for thinking through difficult moral decisions, but it is not determinately true. If there are cases where the results of utilitarianism seem unacceptable, then I’m comfortable with overriding it. Overall, my morality comes from my core moral preferences. On the whole, I have a preference for maximising utility. But if there are extreme cases where maximising utility leads to results I am sure I do not prefer, then there is nothing that compels me to stick with utilitarianism.
My problem with utilitarianism these days is more that it seems to be a little bit in tension with my views on personal identity and the existence of many worlds. Even in cases where utilitarianism appears to give the answers that feel right, it does not seem to work as an explanation of why they are right.
Suppose I’m thinking about whether I should give money to some charitable cause or not. On utilitarianism, to do so might increase utility, so I should. And that feels like the answer I want to get to. The problem is that if there are many worlds where all possibilities are realised, then there is a world where I give to charity and a world where I don’t, and so, overall, across all worlds, utility is the same. I’m not choosing an outcome, globally. If I give the money to charity, it doesn’t stop there being a world where I don’t. If I keep the money, it doesn’t stop there being a world where I donate. It feels like I’m not making a choice that has any global consequences; I’m only choosing which world I want to live in. It’s like a choose-your-own-adventure book where the text is static and all outcomes exist, and I’m just choosing a path through it for myself.
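To spell out the toy arithmetic behind that intuition (just a sketch, under the crude assumptions that branch utilities can simply be added up and that every branch is equally real, which glosses over how branch weights actually work):

$$U_{\text{global}} \;=\; \sum_{\text{branches } b} U(b)$$

Since every branch exists whichever way I decide, the sum comes out the same whatever I choose; the decision only selects which branch this continuation of me inhabits.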
This does not feel like utilitarianism any more. It feels more like virtue ethics, because it’s a choice centred on me as a moral agent, and not on the consequences. This does not necessarily have any strong implications for what I should choose. Perhaps the overriding virtue I value in myself is my preference for utility maximisation within my own world, so I choose to live in the world where I have made choices which maximise utility, just like a utilitarian would.
Maybe it doesn’t matter very much, practically. But these considerations have made me much more sympathetic to virtue ethics than I used to be.

Good discussion you had with Lance! Now when we have our conversations I’ll hear an Irish accent in your responses. :-)
It seems like everyone goes through their utilitarian phase. But as a moral anti-realist myself, I’ve reached the point where both consequentialist and deontological philosophies feel more like attempts to justify the answer we want than something anyone seriously uses to decide how to act.
Virtue ethics isn’t perfect either, but at least it has no conceit that it’s finding some objective answer about good or bad. It’s more about identifying what it takes for someone to live the good life, or in some cases, what the signs are that someone is living the good life.
Maybe I lack imagination, but I’ve just never felt the pull of the many-worlds interpretation on any of this. The other versions of me, if they’re there, feel too far outside my experience or control.
“I know, Frank, thank you. What everyone gets wrong about an infinite number of possible universes is that it doesn’t mean every possible thing you can ever imagine can exist in one. There are an infinite number of numbers between zero and one, but not a single one of them is two. Some paths will never be, not in this reality or any other; this is sometimes the most painful thing to accept.”
Dr Ling Xi taking aim at what most people get wrong about the many-worlds hypothesis.
(From my cyberpunk fiction)