@Jacob Bowden | Created: June 3, 2024 | Last edited: August 16, 2024

A version of this article was submitted to the Cambridge Meridian Office 24-hour research sprint competition, whose deadline was 18 November 2023.

Let us say that a certain metaphysical question is ‘difficult’ when there is little consensus, even amongst philosophers, as to its answer. Uncertainty about the likelihood of a certain threat sometimes stems from uncertainty about the answer or answers to a certain difficult metaphysical question or set of questions. Consider:

The intelligent virus

A nasty virus that is, furthermore, sentient, self-aware, and intelligent. It consciously optimises its chances of spreading and knows that killing its host would be antithetical to its interests. As such, it keeps its host alive for as long as it possibly can, whilst nevertheless still inflicting great suffering upon them over the course of its lifespan, and it spreads rapidly.

It is clear that the existence of the intelligent virus would pose quite a significant threat to us. Also clear, however, is that whether it will come into existence depends on whether it could come into existence – and the latter question is metaphysical. What is more, the answer to this metaphysical question is unknown, and there is little consensus amongst philosophers as to what it is.

We will want to put the intelligent virus somewhere in our hierarchy of things to worry about. How, then, can we do so, when it seems that philosophy leaves us completely in the dark as to whether and to what extent we should worry about it?

In this article, I will outline my proposed solution: whenever assessment of the likelihood of a certain threat depends on how we answer a certain difficult metaphysical question, we should refer to survey data on the answers given to this question by relevant experts.

Let’s return to the intelligent virus: whether it could come into existence is a difficult metaphysical question. There is great debate and little consensus amongst philosophers regarding what is required for something to have ‘phenomenal consciousness’ – that is, for there to be something that it is like to be it. For some of the puzzles in this debate, I encourage the reader to see Frank Jackson’s ‘Epiphenomenal Qualia’ (1982). Because there would have to be something that it is like to be the intelligent virus, and philosophers are unsure as to what phenomenal consciousness requires, whether the virus could come into existence is an open question.

Nevertheless, as I have said, we will want to weigh up the risk of the intelligent virus coming into existence against that of the other threats that we face (crudely, to put it in our ‘hierarchy of things to worry about’). Below, I will sketch how we might do so.

First, let me explain why this is a problem. One might simply suppose that, when philosophical uncertainty underpins the assessment of the risk of a threat, we should rank the risk according to our own philosophical intuition. Call this the ‘intuition view’. Suppose that my intuitions regarding the philosophy of mind are such that I deem there to be a 1% chance that the intelligent virus could come into existence (to re-emphasise: this is not a question of whether it would, but of whether it could). Suppose also that I have the intuition that, if the intelligent virus could come into existence, then there would be a 50% chance that it will do so within the next 100 years. According to the intuition view, I would then be right to assess the likelihood that the intelligent virus will come into existence at 0.5% (the likelihood that it could, multiplied by the likelihood that it will, given that it could).
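The intuition view’s arithmetic can be sketched as a minimal calculation. The figures below are the illustrative ones from the example above, not real estimates:

```python
# Intuition view: combine one person's own philosophical intuitions.
# P(virus comes into existence) = P(it could exist) * P(it will exist | it could)
p_could = 0.01            # my intuition: 1% chance the virus is metaphysically possible
p_will_given_could = 0.50  # my intuition: 50% chance it arises within 100 years, if possible

p_will = p_could * p_will_given_could
print(f"{p_will:.3%}")  # prints 0.500%
```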

The intuition view is an unsatisfactory account of how we can weigh up the risk of threats whose assessment relies on answers to difficult metaphysical questions. It treats everyone’s assessment of the metaphysical possibility as equally likely to be accurate, and it makes it very difficult for us to agree, even roughly, about how much of a risk the potential threat poses to us. When the metaphysical question is difficult, our intuitions about its answer will diverge widely, and so our assessments of the risk of the threat in question will diverge correspondingly. This is undesirable because reducing many of these risks will require that we work together, and hence that we agree, at least roughly, about their significance. Consider again the intelligent virus: unless we converge to some extent in our assessment of the risk that the virus will come into existence, we will be unable to implement measures to reduce that risk effectively. Such a measure might be, for example, increasing security against the spread of (certain kinds of) viruses, and almost any such measure would require cooperation; we will have to converge in our assessments of risks if we are going to reduce them.

I propose a method by which we can achieve such convergence. This method, I believe, is the best and most reasonable way of placing risks whose assessment depends on answers to difficult metaphysical questions in our collective hierarchy of things to worry about. The method is this: whenever assessment of the likelihood of a certain threat depends on how we answer a certain difficult metaphysical question, we should refer to survey data on the answers given to this question by relevant experts. Applied to the intelligent virus: if, in answer to the question, ‘Could a virus be intelligent in the ways described?’, 2% of philosophers (or, better, philosophers of mind) replied, ‘Yes’ (and 98% replied, ‘No’), then we assess the likelihood that the intelligent virus could come into existence to be about 2%. If, in answer to the further question, ‘If a virus could be intelligent in the ways described, what would the likelihood be that it would come into existence within the next 100 years?’, surveyed philosophers (and perhaps also relevant scientists) gave an average assessment of 50%, then we assess the likelihood that it will come into existence, given that it could, to be 50%. Bringing these together, we would assess the risk of the intelligent virus coming into existence within the next 100 years to be 1% (2% multiplied by 50%).
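The survey method’s aggregation can be sketched in the same way. The survey figures here are hypothetical, chosen to match the example: the possibility estimate is the fraction of experts answering ‘Yes’, and the conditional estimate is the mean of their probability assessments:

```python
# Survey method (a sketch with made-up survey data).
yes_answers = 2       # philosophers answering "Yes, a virus could be intelligent"
total_answers = 100   # philosophers surveyed

# Hypothetical per-expert assessments of P(will exist in 100 years | could exist)
conditional_estimates = [0.4, 0.5, 0.6]

p_could = yes_answers / total_answers  # fraction answering "Yes" -> 2%
p_will_given_could = sum(conditional_estimates) / len(conditional_estimates)  # mean -> 50%
p_will = p_could * p_will_given_could
print(f"{p_will:.1%}")  # prints 1.0%
```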

This method is superior to the intuition view because it allows us to converge in our assessments of the likelihood of risks that rely on answers to difficult metaphysical questions. For intuitive reasons, it is also better than simply ignoring the potential for such threats completely. What is more, there is little reason to suppose that deploying the method I propose will lead us consistently into error, at least compared to any other method. Assigning arbitrary valuations to the risk of a certain threat, for example, would leave us entirely reliant upon good luck, and relying on our own intuitions, as I have shown, would leave us unable to coordinate in the face of these risks. We could survey the general public rather than relevant experts, but there is little reason to suppose that the average layperson is more likely than the philosopher to be correct about the answer to a difficult metaphysical question (or more likely than the philosopher and the relevant scientist about the more empirical question), and in fact there is good reason to suppose the reverse.

So it would seem that this is the best option we have: when assessment of the likelihood of a certain threat depends on how we answer a certain difficult metaphysical question, we should refer to survey data on the answers given to this question by relevant experts.

Uncertainty about the likelihood of a certain threat sometimes stems from uncertainty about the answer or answers to a certain difficult metaphysical question or set of questions, and I have illustrated this with the example of the intelligent virus. Nevertheless, we will want to put these threats in our hierarchy of things to worry about. In this article, I have outlined my proposed solution to this dilemma: whenever assessment of the likelihood of a certain threat depends on how we answer a certain difficult metaphysical question, we should refer to survey data on the answers given to this question by relevant experts.

References

Jackson, F. (1982). ‘Epiphenomenal Qualia’. Philosophical Quarterly, 32(127), 127–136.