On Virtual Minds
Failing to persuade Pete
I enjoyed the opportunity to join Keith Frankish and Pete Mandik on their YouTube channel yesterday. I’ve had a few chats with Keith before, and have had long social media correspondences with Pete, but it was my first time actually talking to him. I took the chance to discuss a topic we had disagreed about in the past: virtual minds.
The Chinese Room and the Systems Reply
The background to this is the famous Chinese Room thought experiment from the recently deceased John Searle. Searle imagines himself in a room, being fed symbols from outside that he doesn’t understand. But he does have a rulebook, describing some complex set of operations he should perform on receipt of such symbols, culminating in the production of a new sequence of symbols he passes to the outside. In this manner, we imagine he is conducting a conversation in Chinese, despite having no understanding of what is being said. This, to Searle, shows that computers cannot understand if all they do is follow rules (such as algorithms or computer programs).
A common reply to Searle is to suggest that if the person in the room does not understand, the system, of which he is but a part, understands perfectly well. Searle’s next move is to suggest that he memorises and internalises all the rules, so that the distinction between the system and the person disappears. Searle now is the system, so if the system understands, then Searle understands. Searle maintains that he still doesn’t understand, so neither does the system.
Virtual Minds
From my perspective, the most attractive response to Searle’s move is inspired by an analogy to computing, and specifically to the concept of virtual machines. It is possible to have a physical computer emulate or simulate a virtual computer, so that, effectively, the substrate of the virtual computer is the software of the physical computer, which in turn has a physical substrate. The physical computer might be running a Windows operating system, while the virtual machine might be running some version of Linux. The two computers speak and understand different languages, in effect.
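The layering involved can be made concrete with a deliberately toy sketch (the interpreter, its instruction names, and the guest program here are all invented for illustration; real virtual machines and hypervisors are vastly more sophisticated):

```python
# Toy illustration of virtualisation: a "host" program (Python) interprets
# a tiny stack-machine "guest" whose instruction set the host does not speak
# natively. The guest's substrate is host software, which in turn runs on a
# physical substrate. A simplified sketch, not a real emulator.

def run_guest(program):
    """Execute a list of guest instructions on a simulated stack machine."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown guest instruction: {op}")
    return stack.pop()

# The host and the guest "speak" different languages: Python has no native
# PUSH or MUL statement, yet it can implement a machine that does.
guest_program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run_guest(guest_program))  # computes (2 + 3) * 4
```

The point of the sketch is only that two distinct computational systems, with different instruction sets, can coexist in one physical machine, with one implemented entirely in terms of the other.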
I think something like this would be the case if Searle internalised and applied the rules in order to have a conversation in Chinese. By doing so, I think he instantiates a second, virtual mind, and it is that mind which understands Chinese. Searle’s response to this, I think, is more or less to dismiss this crazy idea with an incredulous stare. As such we need not concern ourselves with Searle. What’s more interesting to me is why a very smart, erudite and thoughtful philosopher like Pete Mandik would not be so keen on this response.
Pete’s Qualms
It turns out, unsurprisingly, that Pete has some very smart, erudite and thoughtful reasons for rejecting the virtual mind response. For a full account, watch the video, but I think they can be summarised as follows.
1. A resistance to realism about what functions are actually being performed.
2. An argument that this would contradict one of the tenets of physicalism (fine-grained supervenience).
3. A rhetorical instinct that we should reject rather than accommodate the assumptions and intuitions of phenomenalists (those who take seriously phenomenal consciousness, qualia, the hard problem, etc.).
The remainder of this post discusses and outlines my response to these issues.
Anti-realism about functions
Pete interprets the virtual mind response as claiming that what is really happening when Searle conducts his conversation in Chinese is that there are determinately two minds being implemented. In a move reminiscent (at least to me) of Putnam’s argument that a rock implements every finite state automaton, or of Searle’s own argument that a wall could be interpreted as implementing the WordStar word processing program, Pete resists realism about what functions are actually being implemented by a physical system. What is real is the physical system. The functions are just interpretations we layer onto it, which are not determinately there. As such, it cannot be the case that there really are two minds being implemented by Searle’s brain.
Pete therefore prefers to talk about Searle as a physical system. Searle, the physical system, really does understand Chinese, and really does understand English, because he can correctly answer questions posed in those languages. At least, to a point, as we shall see. As such, when Searle claims that he does not understand Chinese, he is simply mistaken.
I agree with Pete on some of this, and broadly I accept that it is a coherent response. In particular, I agree that there is no fact of the matter about what functions a physical system is implementing. However, I don’t see the virtual minds response as requiring the two minds to be determinately there. If all there is is the physical system, we are still left with some freedom in how we describe it, in how we carve it up into parts so we can talk sensibly about it. We are none of us Laplace’s demon, so some sort of coarse-graining is necessary to navigate the world. In my view, therefore, what is at stake is what sorts of descriptions are most useful.
A description of Searle as possessing two minds is most useful, in my view, because there is much more going on here than whether Searle believes he can speak Chinese. In the scenario we are considering, there is no reason at all that the Chinese speaker should have the same beliefs, memories, desires, preferences, goals or personality traits as Searle himself. The Chinese speaker could, for example, identify as female, have memories of growing up in Sichuan province in the 1980s, have little interest in philosophy, and be charming, modest and funny.
Searle won’t necessarily know any of this. And the Chinese speaker won’t necessarily know anything about Searle. Even if there is one physical system, there are two clusters of mental attributes being implemented by that system. As Pete acknowledged during the conversation, Searle may be completely unable to translate between English and Chinese, something we would normally expect someone who can speak both languages to be able to do. For Pete, this just means that we must accept the surprising result that it is possible to understand two languages without being able to translate.
For my part, I don’t see any reason to resist the impulse to regard these as distinct agents, and indeed, distinct minds. This is a useful way to carve things up, allowing us to get a handle on how the system will behave. For example, it is no longer surprising that Searle cannot translate; it is perfectly natural.
The story that there is only Searle as a physical system, with no minds really there, is therefore not a useful one. A more useful story, indeed the better story, is that there are two minds.
Fine-grained supervenience
In his paper Supervenience and Neuroscience, Pete introduces and discusses the principle of fine-grained supervenience (FGS), which implies that, on physicalism, every mental property must be implemented by a distinct physical property. Pete takes this principle to rule out virtual minds, because if there is one set of physical properties, there is then one set of mental properties. If the physical properties of Searle’s brain underlie Searle’s mind, then there is no room to accommodate a second mind without bringing in additional physical properties, which would perhaps require a second brain.
At least, that is how I think the argument works. I am trying to be charitable, but I just don’t see this at all. I have no problem with the FGS principle, but I don’t see that virtual minds contradict it.
The set of physical properties instantiated by a brain is pretty much limitless. One can conceptually carve a brain up in all sorts of arbitrary ways and call each way a property. It has the property of having that one atom over there jiggle that way, and the property of having these 37 atoms over there jiggle that way. Etc. Given the structural complexity of the brain, the combinatorics of how one could construct arbitrary properties are virtually inexhaustible. There’s plenty of room.
The physical properties on which Searle’s English-speaking mind supervenes exist at one level of description. We might, for the sake of argument, describe it as supervening on physical neurons. The physical properties on which the Chinese speaker supervenes exist at another level of description. If Searle’s mind supervenes on physical neurons, then the Chinese speaker’s mind supervenes on the mental properties corresponding to Searle’s representation of virtual neurons, and then ultimately on whatever physical neurons those representations supervene on.
It all, ultimately, supervenes on physical neurons, but the properties we are discussing are at different levels because different numbers of physical neurons are involved. Where some mental property of Searle involves some configuration of physical neurons, a similar mental property of the Chinese speaker would require orders of magnitude more, because each virtual neuron will presumably require many physical neurons to represent and manipulate it.
FGS requires that there be distinct physical properties for distinct mental properties. And, in virtual minds, there are!
Rejecting the phenomenalist assumptions
Pete seems to be consistent in preferring to reject phenomenalist assumptions rather than trying to accommodate them. This is an interesting rhetorical strategy, and I do genuinely admire this sort of skeptical contrarianism. We should indeed be skeptical of our intuitions regarding far-out thought experiments.
This has come up before in a disagreement about the Knowledge Argument, where I try to accommodate the intuition that Mary would learn something new when she leaves the room, and Pete prefers to insist that she wouldn’t. The pattern repeats itself here, where I try to accommodate the intuition that Searle does not understand Chinese, and Pete prefers to insist that he does (even if Searle doesn’t realise it).
However, if we should be skeptical of our intuitions, this cuts both ways. It’s no better to assert that Mary doesn’t learn something new, or to assert that Searle does understand Chinese, than to assert the converse. If physicalism or functionalism really did critically depend on which intuitions were true, then this would be a wash. We would be left at an impasse, with conflicting intuitions and no way forward.
Interestingly, the critical importance of such intuitions is something that Pete tacitly seems to accept. In this video, he says something to this effect—that the game is over as soon as we concede this much to the phenomenalists. But this is just where I disagree with him.
I think by instead accommodating the phenomenalist intuitions, as I have done here and in my post on the Knowledge Argument, there is in principle some hope of progress. If phenomenalist intuitions can be accommodated, then it turns out that it doesn’t matter either way whether Mary learns or Searle understands, and this is enough to defuse either argument, whatever our intuitions about these cases.
Of course, it could be said that such tactical rhetorical considerations don’t really matter, because people rarely seem to change their minds on these entrenched philosophical positions. Whatever its tactical success rate, this is nevertheless why I have an aesthetic preference for my approach. If it could be shown that Pete’s approach were more effective, I might change my mind.

Very nice post! I agree with most of it. Although not the Mary stuff, naturally.
It was real fun to hear you and Pete talk on TP!
I do not have time right now to view the video, but here is a tangential response: LLMs and machine translation have reached the point where they can do all of the tasks required to implement a Chinese Room—not perfectly or even undetectably, but to the point where at least I would regard it as tendentious to insist the gap cannot be closed. Given that, it seems difficult to insist that the human-operated Chinese Room is impossible in principle.
Of course, Searle did not say that implementing a Chinese Room was impossible, but if it is possible, then it seems he must fall back on the position that it does not need understanding to work. That might be the case—whether current AI understands anything is debated today—but if it does not, then Searle simply chose the wrong sort of task to make his point; he might as well have chosen an electronic calculator / optical scanner combination and asked whether it understood arithmetic. This shows, I think, how much the argument, when it was presented, depended on pumping anti-computationalist intuitions shaped by the technology of the time, and it needs a different, more difficult task to remain relevant today.