Sunday, March 22, 2009

Chinese Room

Today I was reading once again about the Chinese Room argument by Searle (check out the Wikipedia article, which is pretty good). I always thought that the argument made no sense, because Searle argues that "it is obvious that the guy inside of the room does not understand Chinese", when for me it was "obvious" that it was the complete room, and not any of its parts (like the human inside), that understood Chinese. Reading about it today, I discovered that my argument had actually been made before, and it's called the "systems reply". Searle has a very good response to that reply (in my opinion), which consists in imagining a new Chinese room where the guy inside just memorizes all the rules in his head, so that there is nothing else but the guy. He claims it is obvious that this guy still does not understand Chinese, and I have to admit that I'm having a hard time refuting Searle here.

But far from admitting defeat, I kept reading, only to discover a new version of Searle's argument that he published in 1990, in which he "tries" to be more formal. His argument is constructed from three premises (which are supposed to be taken as "obviously true"), and a direct conclusion from them:

(A1) Programs are formal (i.e. syntactic)
(A2) Minds have mental contents (i.e. they are semantic)
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics

Therefore:

(C1) Programs are neither constitutive of nor sufficient for minds.

Well, I have to admit that if we assume A1, A2, and A3, then C1 follows quite nicely. However, let's look a bit more closely at the premises. A3 is essentially the Chinese Room argument itself, and most of the texts I've been reading refer to it as the only controversial one. Looking at them in detail, though, I came to think that the one that is actually hard for me to accept is A2. What does it mean that minds use semantics while programs use only syntax?
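
Just to convince myself that the inference itself is valid, here is a toy formalization of the argument (this is my own reading of the premises, with predicate names I made up; it's a sketch, not anything Searle wrote). It should type-check in Lean 4, which only confirms that all the real work is done by the premises:

```lean
-- Toy formalization of Searle's 1990 argument, under my own (strong)
-- reading of the premises. Predicate names are mine, not Searle's.
theorem searle_C1 {Entity : Type}
    (Program Mind Syntactic Semantic : Entity → Prop)
    -- A1: programs are purely formal/syntactic
    (A1 : ∀ x, Program x → Syntactic x)
    -- A2: minds have mental (i.e. semantic) contents
    (A2 : ∀ x, Mind x → Semantic x)
    -- A3: being merely syntactic is not sufficient for being semantic
    (A3 : ∀ x, Syntactic x → ¬ Semantic x) :
    -- C1: nothing that is just a program is a mind
    ∀ x, Program x → ¬ Mind x :=
  fun x hProgram hMind => A3 x (A1 x hProgram) (A2 x hMind)
```

The derivation is a two-step syllogism, so if you reject the conclusion you have to reject one of the premises; for me, that premise is A2 (or at least the notion of "semantic" it relies on).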

It seems to me that the big hole in both the Chinese Room argument and in Searle's new formulation is his assumption, without any further explanation, that "minds" are semantic. If we could explain what we mean by "minds are semantic", then we would be able to encode that in the form of a program. Thus, the key here is to define semantics. If we take the definition of semantics from logic, as a "mapping between two symbol systems", then minds are semantic because we have a "symbol system" in our minds, and we know the mapping between that symbol system and another symbol system, the real world. Thus, we "know" what the symbols in our mind mean. However, if we go that route, then a program running in a robot with sensors can do exactly the same, since its sensors can provide the bridge between its internal symbol system and the real world.
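
Something like the following toy sketch is what I have in mind (all the names and sensor readings are made up for illustration; this is not a serious grounding system): an internal symbol system, a simulated sensor standing in for the real world, and a grounding map that links the two. Under the "mapping between two symbol systems" definition, even this little program would already have a (very modest) semantics:

```python
# Toy sketch of "semantics as a mapping between two symbol systems".
# Everything here (symbol names, sensor readings) is invented for illustration.

# Internal symbol system: purely syntactic tokens the program manipulates.
INTERNAL_SYMBOLS = {"RED_THING", "GREEN_THING"}

def read_color_sensor(world_object: str) -> tuple:
    """Stand-in for a real sensor: returns an (r, g, b) reading from the 'world'."""
    readings = {"apple": (0.9, 0.1, 0.1), "leaf": (0.1, 0.8, 0.2)}
    return readings[world_object]

def ground(reading: tuple) -> str:
    """The grounding map: external readings -> internal symbols.
    This mapping is where the internal symbols get their (modest) meaning."""
    r, g, _b = reading
    return "RED_THING" if r > g else "GREEN_THING"

if __name__ == "__main__":
    for thing in ("apple", "leaf"):
        symbol = ground(read_color_sensor(thing))
        print(f"world object {thing!r} maps to internal symbol {symbol}")
```

Of course this example is trivially simple, but the point is only that the "mapping between symbol systems" reading of semantics is something a program connected to sensors can implement.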

Digging through more articles on the internet, I discovered that this argument has also been made before, and it's called the "robot reply". Searle replies that a robot would only convert the inputs from its sensors into symbols and then just manipulate them syntactically, thus never achieving semantics. Well, I would reply with two things: first, humans do the same (we map our senses to neuronal impulses and then operate with those); and second, and most importantly, Searle is only defending A3 there, while I'm actually attacking A2.

Conclusion: after trying to find out why Searle's Chinese Room argument was wrong, I realized that it does not actually matter whether the Chinese Room is right or wrong, since the key question here is whether minds are semantic, or purely syntactic like computer programs. Until someone proves me wrong, from today on I'm convinced that the human mind is also purely syntactic. Thus, I don't think semantics is required for intelligence (if we assume that humans are intelligent, of course :)).

5 comments:

  1. Interesting article, Santi.

    I think you're right to question A2 here. Any attempt to define semantics rigorously probably either:

    1) Involves a form of hypercomputation.

    OR

    2) Involves second order logic.

    It's the second option that I think most philosophers would go for, although Penrose might opt for (1).

    If the universe operates according to a computable set of laws (with perhaps some coin tossing thrown in), then neither option can be an explanation for observed intelligent behaviour. Furthermore, even if the universe doesn't follow computable laws, (1) and (2) still fail if the physics relevant to the brain's operation (in a functional sense) is a computable set of laws.

    But all of modern physics is computable (although perhaps with some randomness added). If physics were to be discovered that was not computable, this would be a bigger breakthrough than either Newtonian or Einsteinian physics. This isn't to say it will never be found, but rather that it is not something that should be claimed without overwhelming proof. Certainly the odds that such physics is relevant to the brain's operation are slim, to say the least.

    A couple of other observations. Regarding Searle's response to the systems reply: firstly, it's necessary to establish that what Searle is imagining destroys the validity of our intuitions regarding psychology. Any program capable of passing the Turing test would necessarily be gigantic in size (either in memory or code). Memorizing such a program and all its data would certainly be impossible for a normal mortal. So the thought experiment is really imagining human-like beings with planet-sized brains. This means everyday ideas about psychology should not be used as evidence!

    Secondly, we need to observe that Searle's argument essentially assumes that any brain can only sustain one mind. Although this might be true for human-sized brains, it's certainly not obvious for planet-sized nervous systems. We can observe that people often recognize a fact at one level of their thinking whilst simultaneously not recognizing it at another. This demonstrates that minds cannot be thought of as monolithic and indivisible.

    Without this assumption, Searle's response just doesn't hold up.

  2. Good point about the supersized brains required for Searle's response to the systems reply. I think you are right. Once we move beyond the "ordinary human", any kind of intuitive reasoning is invalid (and Searle certainly appeals to intuition when he claims that "obviously the human that memorized all the rules still does not understand Chinese").

  3. I would say that programs are also semantic, i.e. they have an operational semantics.

    Searle indirectly gave a solution to this topic in his work on "The Construction of Social Reality", where he states that brute facts can be translated into institutional facts using rules of the form "X counts as Y in C". For instance, waving a hand usually means a greeting, but in an English auction it may count as a bid.

    What he proposed is just a way to interpret facts, and since interpretation is what gives semantics to syntactic formulae in logic, in my humble opinion we may conclude that semantics is just the interpretation of some facts. In the human case, our concepts might be seen as the interpretation (or translation into a semantic representation in our brain) of some neuronal impulses.

    I agree that mere syntactic translation does not provide semantics, but I believe we might be on the right track towards machines that understand concepts the way we do.

    In the case of the Chinese Room, we might use a purely syntactic translator between Chinese and English, but a grammar-based translator would obtain better results. In the first case the guy has to interpret a bad translation; in the second, the interpretation is easier because the translation follows grammar rules, though the guy still does not understand Chinese. Whereas in the first case the trick may easily be discovered, in the second he might fool the other participant. This topic seems closely related to semantic alignment.

  4. I forgot to make clear that, just as it is not necessary to know machine language to create a semantically correct computer program, it is also not necessary to know how the human brain represents concepts internally; only the appropriate translation is needed. The fact that some automatic translation programs have been achieved suggests that understanding might be computable. However, as different computational solutions to the same problem may vary in complexity, I would say that translations between different representations of the same concept(s) may also vary in complexity.

    I would conclude that we are achieving machine understanding of a subset of our concepts with logic. However, to pursue human understanding in machines, we need to break free from the limitations of logic, because logic is a mind product and not a mind constructor.

  5. This post originated from a question I sent to the cognitive science mailing list at Georgia Tech, and the very first thing they asked me was: "what do you mean by semantics?". So, I think I need to work on that part. :)

    About the operational semantics reply, my question here would be: can a computer program create the operational semantics of itself? If not, I'd say that such semantics is assigned by an external programmer.

    And btw, I find that last remark ("logic is a mind product and not a mind constructor") quite interesting!
