But far from admitting defeat, I kept reading and discovered a newer version of Searle's argument, published in 1990, in which he "tries" to be more formal. The argument is built from three premises (which we are supposed to accept as "obviously true") and a conclusion drawn directly from them:
(A1) Programs are formal (i.e., syntactic).
(A2) Minds have mental contents (i.e., they are semantic).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.
Therefore:
(C1) Programs are neither constitutive of nor sufficient for minds.
Well, I have to admit that if we assume A1, A2, and A3, then C1 follows quite nicely. But let's look a bit more closely at the premises. A3 is essentially the Chinese Room argument itself, and most of the texts I've been reading treat it as the only controversial premise. Looking at them in detail, however, I came to think that the one I find hard to accept is actually A2. What does it mean to say that minds use semantics while programs use only syntax?
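Just to convince myself that the inference itself is valid, here is a minimal formalization of my own, a sketch rather than anything Searle wrote. Note that for C1 to follow, A3 has to be read strongly, as "whatever is purely syntactic has no semantics"; all the predicate names below are mine.

```lean
-- A minimal sketch of Searle's 1990 argument, under my own strong reading of A3.
-- The predicate names (IsProgram, Syntactic, Semantic, IsMind) are my invention.
example (Thing : Type)
    (IsProgram Syntactic Semantic IsMind : Thing → Prop)
    (A1 : ∀ x, IsProgram x → Syntactic x)       -- A1: programs are formal (syntactic)
    (A2 : ∀ x, IsMind x → Semantic x)           -- A2: minds have semantic content
    (A3 : ∀ x, Syntactic x → ¬ Semantic x)      -- A3 (strong reading): syntax alone yields no semantics
    : ∀ x, IsProgram x → ¬ IsMind x := by       -- C1: no program is (sufficient for) a mind
  intro x hProg hMind
  exact A3 x (A1 x hProg) (A2 x hMind)
```

So the logic is fine; whatever is wrong has to be in the premises themselves.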
It seems to me that the big hole in both the Chinese Room argument and in Searle's new formulation is his assumption, without further explanation, that "minds" are semantic. If we could spell out what we mean by "minds are semantic", then we should be able to encode that in the form of a program. The key, then, is to define semantics. If we take the definition used in logic, where semantics is a mapping between two symbol systems, then minds are semantic because we have a "symbol system" in our heads, and we know the mapping between that symbol system and another symbol system, the real world. In other words, we "know" what the symbols in our minds mean. However, if we go that route, then a program running on a robot with sensors can do exactly the same, since its sensors provide the bridge between its internal symbol system and the real world.
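To make the "mapping between two symbol systems" idea concrete, here is a toy sketch of my own; it is not a real robot API, and the sensor, the symbols, and the thresholds are all made up for illustration. A reading from the world grounds an internal symbol, which is all the "semantics" the definition above asks for.

```python
# Toy illustration of semantics as a mapping between two symbol systems:
# readings from a (simulated) sensor on one side, internal symbols on the other.

def read_temperature_sensor() -> float:
    """Hypothetical sensor: pretend the thermometer reports 3 degrees Celsius."""
    return 3.0

# The robot's "internal symbol system": a handful of discrete symbols.
SYMBOLS = ["COLD", "MILD", "HOT"]

def ground_symbol(reading_celsius: float) -> str:
    """The mapping that gives the symbols their meaning: each symbol is defined
    by the range of real-world readings it refers to."""
    if reading_celsius < 10.0:
        return "COLD"
    if reading_celsius < 25.0:
        return "MILD"
    return "HOT"

if __name__ == "__main__":
    reading = read_temperature_sensor()
    symbol = ground_symbol(reading)
    # The symbol now "means" something: it is tied to a state of the world
    # through the sensor, which is exactly the bridge described above.
    print(f"sensor reading {reading} C -> internal symbol {symbol}")
```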
Digging further into articles on the internet, I discovered that the argument above has been made before; it is known as the "robot reply". Searle's answer to it is that a robot would merely convert its sensor inputs into symbols and then manipulate them syntactically, thus never achieving semantics. Well, I would reply with two things: first, humans do the same (we map our senses to neural impulses and then operate on those); and second, and most importantly, Searle is only trying to defend A3, while I'm actually attacking A2.
Conclusion: after trying to find out why Searle's Chinese Room argument was wrong, I realized that whether the Chinese Room is right or wrong does not really matter, since the key question is whether minds are truly semantic or merely syntactic, like computer programs. Until someone proves me wrong, from today on I'm convinced that the human mind is also purely syntactic. Thus, I don't think semantics is required for intelligence (assuming, of course, that humans are intelligent :)).