Saturday, April 25, 2009

SSH + Screen

This one is not about AI, but about something extremely useful I learned today. Have you ever been running an experiment on a server over SSH when all of a sudden the connection drops? Damn! You lose the experiment... try this:

1) ssh to your server
2) screen
3) launch your experiment
4) press "Ctrl-a" and then "d" (to detach)

Now you can quit your SSH session and the experiment keeps running on the server. Whenever you want to check how it is going, log back in and:

5) screen -ls (to list your screen sessions)
6) screen -r (to reattach)

Boom! You are back in your experiment! Amazingly easy, and amazingly useful. Check the site where I got the info from, here: http://www.rackaid.com/resources/linux-tutorials/general-tutorials/using-screen/
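To make it concrete, here is roughly what a whole session looks like. This is just a sketch: the session name "exp1" and the command "python experiment.py" are placeholders for whatever you actually run.

    laptop$ ssh user@server
    server$ screen -S exp1            # start a named screen session
    server$ python experiment.py      # launch the experiment inside screen
    # press "Ctrl-a" then "d" to detach; the experiment keeps running
    server$ exit                      # the SSH connection can now drop safely

    # later, from anywhere:
    laptop$ ssh user@server
    server$ screen -ls                # lists sessions, e.g. "1234.exp1 (Detached)"
    server$ screen -r exp1            # reattach to the running experiment

If you only have one session, a plain "screen -r" is enough; naming sessions with -S just makes life easier when you have several experiments running at once.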

Sunday, March 22, 2009

Chinese Room

Today I was reading once again about the Chinese Room argument by Searle (check out the Wikipedia article, which is pretty good). I always thought the argument made no sense, because Searle argues that "it is obvious that the guy inside the room does not understand Chinese", when for me it was "obvious" that it was the complete room, and not any of its parts (like the human inside), that understood Chinese. Reading about it today, I discovered that my argument had actually been made before, and it's called the "systems reply". Searle has a very good response to it (in my opinion), which consists of imagining a new Chinese room where the guy inside just memorizes all the rules in his head, so that there is nothing left but the guy. He claims it is obvious that he still does not understand Chinese, and I have to admit that I'm having a hard time refuting Searle here.

But far from admitting defeat, I kept reading, only to discover a new version of Searle's argument, published in 1990, in which he "tries" to be more formal. The argument is built from three premises (to be taken as "obviously true") and a conclusion that follows directly from them:

(A1) Programs are formal (i.e. syntactic)
(A2) Minds have mental contents (i.e. they are semantic)
(A3) Syntax by itself is neither constitutive nor sufficient for semantics

Therefore:

(C1) Programs are neither constitutive nor sufficient for minds.
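Just to make the inference explicit, here is a tiny Lean sketch of one possible reading of the syllogism. The predicate names are mine, and I'm reading "X by itself is not sufficient for Y" as "nothing that has only X thereby has Y"; under that (admittedly strong) reading the conclusion follows mechanically.

    -- Hypothetical formalization of one reading of Searle's 1990 syllogism.
    -- All predicate names are placeholders; "sufficient for" is read as implication.
    theorem searle_C1 {Agent : Type}
        (Program OnlySyntax Semantics Mind : Agent → Prop)
        -- A1: programs are formal: a program has nothing but syntax.
        (A1 : ∀ x, Program x → OnlySyntax x)
        -- A2: minds have mental contents: having a mind requires semantics.
        (A2 : ∀ x, Mind x → Semantics x)
        -- A3: syntax by itself is not sufficient for semantics:
        --     nothing that has only syntax thereby has semantics.
        (A3 : ∀ x, OnlySyntax x → ¬ Semantics x) :
        -- C1: programs are not sufficient for minds.
        ∀ x, Program x → ¬ Mind x :=
      fun x hp hm => A3 x (A1 x hp) (A2 x hm)

Of course, all the philosophical weight is hidden in how you read the premises; the sketch only shows that, once you accept them in this form, C1 is immediate.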

Well, I have to admit that if we assume A1, A2 and A3, then C1 follows quite nicely. However, let's look a bit more closely at A1, A2 and A3. A3 is essentially the Chinese Room argument, and most of the texts I've been reading refer to it as the only controversial premise. Looking at them in detail, though, I came to think that the one that is actually hard for me to accept is A2. What does it mean that minds use semantics while programs use only syntax?

It seems to me that the big hole in both the Chinese Room argument and in Searle's new formulation is his assumption, without any further explanation, that "minds" are semantic. If we could explain what we mean by "minds are semantic", then we would be able to encode that in the form of a program. Thus, the key here is to define semantics. If we take the definition of semantics from logic, a "mapping between two symbol systems", then minds are semantic because we have a "symbol system" in our minds and we know the mapping between that symbol system and another one, the real world. Thus, we "know" what the symbols in our mind mean. However, if we go that route, then a program running in a robot with sensors can do exactly the same, since its sensors can provide the bridge between its internal symbol system and the real world.

Digging further into articles on the internet, I discovered that this argument has also been made before; it's called the "robot reply". Searle responds that a robot would only convert its sensor inputs into symbols and then manipulate them syntactically, thus never achieving semantics. Well, I would reply with two things: first, humans do the same (we map our senses to neural impulses and then operate on those); and second, and most important, Searle is only defending A3 here, while I'm actually attacking A2.

Conclusion: after trying to find out why Searle's Chinese Room argument was wrong, I realized that it does not matter whether the Chinese Room is right or wrong, since the key question is whether minds are semantic or purely syntactic like computer programs. Until someone proves me wrong, from today on I'm convinced that the human mind is also purely syntactic. Thus, I don't think semantics is required for intelligence (if we assume that humans are intelligent, of course :)).

Tropezar con la misma piedra

Axxon recently published a science fiction story I wrote a couple of years ago. If you can read Spanish and you like science fiction, check it out here. It's a bit long, but I had a lot of fun writing it. Just as a teaser, I can tell you it's about time travel.