I was thinking about captchas. You know, those sequences of letters that you have to enter in order to post a comment to this blog, for example. And I was thinking what a sad statement it is from an AI perspective that computers aren't able to read these.
But what this basically says is that current OCR (Optical Character Recognition) software is incapable of reading some slightly mangled characters with a noisy background. How sad. I remember buying my first flatbed scanner that came with OCR software and thinking that I would easily be able to scan in a page of text from a book and have the OCR software read it in. It turned out the OCR software was only about 80 or 90 percent accurate. And manually fixing the last 10 or 20 percent was almost as much trouble as typing in the whole thing from scratch.
This was perhaps 15 years ago, and I assume that OCR technology has come a long way since then. But apparently not far enough to read a simple captcha. Because if you think about it, all that is necessary is to have a script that grabs the captcha image, runs it through an OCR program to get the text, and then inserts the text in a field on the webpage. Sounds pretty simple. Except that the OCR software is apparently not up to the task.
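The three-step script described above can be sketched in a few lines of Python. This is only an illustration of the idea, not a working captcha breaker: it assumes the third-party Tesseract OCR engine (via the `pytesseract` wrapper), the `Pillow` and `requests` libraries, and the URL and form field names are hypothetical placeholders.

```python
# Sketch of the captcha-breaking pipeline: fetch image -> OCR -> submit.
# Assumes pytesseract/Pillow/requests are installed; URLs and field
# names below are hypothetical placeholders, not a real site.
import io
import re

def normalize_ocr_text(raw: str) -> str:
    """Strip whitespace and punctuation from the OCR output, since
    captchas are typically short alphanumeric strings."""
    return re.sub(r"[^A-Za-z0-9]", "", raw)

def solve_captcha(image_bytes: bytes) -> str:
    # Imported lazily so the pure helper above works even without
    # the OCR stack installed.
    from PIL import Image       # third-party: Pillow
    import pytesseract          # third-party wrapper around Tesseract
    image = Image.open(io.BytesIO(image_bytes))
    return normalize_ocr_text(pytesseract.image_to_string(image))

if __name__ == "__main__":
    import requests  # third-party
    # Step 1: grab the captcha image (hypothetical URL).
    img = requests.get("https://example.com/captcha.png").content
    # Step 2: run it through the OCR program to get the text.
    guess = solve_captcha(img)
    # Step 3: insert the text in the field on the webpage.
    requests.post("https://example.com/comment",
                  data={"captcha": guess, "comment": "..."})
```

Of course, the whole point of a captcha is that the distortion and noisy background are chosen precisely to defeat the `image_to_string` step, which is why this simple pipeline fails in practice.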
OK, but eventually there will be OCR software capable of performing this task that will run on a home PC. Then what? Well, we won't be able to use a captcha to distinguish between a human and a machine. So some clever person will have to come up with a different way. But if you take this process to its logical extreme, you end up with a Turing test. So at that point, can you say that you have an AI?
I've always thought that the concept of the Turing test is flawed. The concept is absurdly vague and unscientific. I wonder whether Turing ever expected anyone to take him seriously. The whole thing seems like a joke that people have accepted as if it were some profound Truth.
There are parts of being human which have nothing to do with intelligence. An AI could be perfectly intelligent without having a physical body that resembles a human, and it would therefore be completely unaware of many areas of human experience that have no relevance to its existence. Should we then conclude that this AI is not intelligent?
My suspicion is that the first true AI will exist in some sort of cyber-space rather than in the form of a robot. That AI could exist in some sort of simulated world. Or it could exist as an intelligent agent (IA) on the web. Either way, it would not have a body at all.
Even if it were a robot, it could be some sort of industrial robot used in manufacturing. In that case it would probably not be humanoid at all. Such a robot could be quite intelligent without passing a Turing test. It might not even be able to speak in a human language. Would that disqualify it from being intelligent?
It seems that the Turing test is really a sort of "racist" test of intelligence. It tests whether the AI can duplicate the intelligence of a member of the human race, but clearly there are other types of intelligence. And certainly it is to be anticipated that an advanced AI could be far more intelligent than any human.
At that point humans would fail the AI's own version of the Turing test. From this advanced AI's perspective, humans would be considered biological machines of limited information processing capability - but certainly not intelligent. Perhaps much in the same way that we look upon other animals. You wouldn't consider a dog or a monkey as intelligent - would you?