Recently a British AI program won an award for coming close to passing the Turing Test. The Turing Test is the idea that a computer can be said to be intelligent if, during a five-minute conversation, it can fool you into believing you are talking to a human rather than a computer.
You can speak to the 'bots at this address:
http://www.jabberwacky.com/
My problem with this form of test is this: why should "holding a conversation like a human" be confused with "intelligence"? I am not being facetious, as I'm sure the kind of intelligence Turing was talking about is present in almost every human. The problem is that a computer's "social life" is likely to be far more limited than a human's. If a computer were asked "What did you have for breakfast?", wouldn't it be a far more intelligent response to say "I didn't have anything, I'm a computer" rather than lie or give an evasive answer?
That answer would mean the computer fails the Turing Test, but it doesn't seem to me to be proof of a lack of intelligence.