@Jacob Bowden December 19, 2024

On 18 November 2023, I submitted the following article for the Cambridge Meridian Office 24-hour sprint competition:

In discussion groups, I have noticed that differences in judgement about what should be prioritised often stem from disagreements over certain difficult metaphysical questions.

For example, in a recent discussion I posed the question of whether a ‘sentient virus’ could exist. If such a virus were possible, it would pose a considerable threat, and whether we think it is possible depends upon our answers to difficult questions in the philosophy of mind. As to the first of these two claims: a sentient virus would be aware that killing its host is antithetical to its interests[1], and so we might rightly worry that, were it to come into existence, it would inflict great suffering upon other sentient organisms for as long as possible.[2] As to the second: there is great debate and little consensus amongst philosophers as to what sentience consists in, what produces it, and therefore what the necessary conditions are for sentience to arise in, or become associated with, a thing.

This is a crude example, but there are many more. Indeed, the only things that members of the Xrisk community seem to agree on are that we ought to be doing the most good that we can, and that this includes considering the interests of future people (and even these points of agreement are debatable).

Despite these kinds of ambiguities, however, we still want to ensure that we are having the biggest impact we can with our research, work, donations and resource allocation.

But how can we ensure that we are doing so?

Answering this question requires explaining how we can reasonably ground a belief that what we are focusing on is more likely than any other option to be one of the biggest risks to sentient life, given what we know. This ‘given what we know’ caveat is necessary, because there is no other grounding on which we can base the decision to prioritise one area over another. But, when the metaphysical bases influencing our assessment are weak or unclear, we still do not want to do nothing. Nor, indeed, do we want to pick between potential options arbitrarily. So how do we choose?

I suggest that, in deciding what to prioritise, we take into account the likelihood of the relevant influencing metaphysical claims being true.
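One way to picture this suggestion is as a simple expected-value calculation that marginalises over whether the relevant metaphysical claim is true. The sketch below is purely illustrative and is not from the article: the function name and every number in it are hypothetical, chosen only to show how credence in a claim can be folded into a risk comparison.

```python
def expected_risk(credence_p: float,
                  p_event_if_true: float,
                  p_event_if_false: float) -> float:
    """Expected probability of a catastrophic event e,
    averaging over whether the metaphysical proposition p
    that e depends on is true."""
    return (credence_p * p_event_if_true
            + (1 - credence_p) * p_event_if_false)

# Hypothetical numbers: a 'sentient virus' scenario where we give
# credence 0.05 to viral sentience being possible, and where the
# catastrophe has probability 0.2 if it is possible, and 0 otherwise.
risk_a = expected_risk(0.05, 0.2, 0.0)   # 0.05 * 0.2 = 0.010

# A rival risk resting on a claim we find far more plausible,
# even though the conditional catastrophe probability is lower.
risk_b = expected_risk(0.9, 0.05, 0.0)   # 0.9 * 0.05 = 0.045

# Under this weighting, the second risk would be prioritised.
assert risk_b > risk_a
```

The point of the sketch is only that the likelihood of the underlying claim enters the comparison on the same footing as the conditional risk itself: a scenario resting on a shakier metaphysical claim can still dominate, but only if its conditional risk is high enough to compensate.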

My argument is as follows:

P1. The truth or falsity of a proposition p often influences the probability that a catastrophic event e takes place.

P2. In many of these cases, whether p is true or false is far from clear.