
Thread: Turing Test

  1. #1 Turing Test 
    Forum Freshman PiMaster
    Join Date
    Jul 2010
    Posts
    14
    Recently my father and I got into a debate about artificial intelligence. He didn't believe we'd ever be able to fully simulate human intelligence, and argued that even if we did, it wouldn't actually be intelligent. Obviously, I thought the exact opposite.

    An interesting thought on this comes from the classic book, "Xenocide" by Orson Scott Card (if you haven't read it, I recommend it). At one point in the book, Miro and Ender discuss free will and whether it exists at all. Ender pointed out that if something with "free will" is created by someone or something else, the creator is almost like a puppet master: the master programs his puppets to behave in set ways. Computer programs are created by people, so they don't have any free will. He also pointed out that people are created by their environment, and therefore don't really have free will either - it's only a script they play out.

    This raises the question: if we're ever able to simulate human intelligence, what's the difference between a "simulation" and the real thing? How would we know whether the simulation is genuine, or whether the program is just acting out a preset script given to it by the programmer?


    3.14159265358979323846264338327950288419716939937510

    http://i240.photobucket.com/albums/ff149/leonskennedy666/grammarnazi.jpg


  2. #2  
    Time Lord Pong
    Join Date
    Apr 2008
    Posts
    5,305
    ...and if we were simulations, smart enough to internally churn over ideas, shouldn't we also simulate a mechanical certainty that we're "the real thing"?


    ***

    I think you forgot that we may make things run unpredictably. For example some gardeners will just plant a lot of random stuff and "let nature decide". A pattern emerges, and the gardener works with that as it evolves, trying not to spoil it with vain intervention. Open source software may evolve this way. I'm pretty sure some AI models are essentially unpredictable in nature also.

    I applied the example of gardening to dispel today's notion of AI as scripted software. That's just the flavour of the month, much as a century ago people questioned whether clanking steam-driven mechanical men could be real men.


    A pong by any other name is still a pong. -williampinn

  3. #3  
    Forum Freshman Siggy
    Join Date
    Aug 2010
    Location
    Santa Fe, NM
    Posts
    5
    Quote Originally Posted by Pong
    I'm pretty sure some AI models are essentially unpredictable in nature also.

    I applied the example of gardening to dispel today's notion of AI as scripted software. That's just the flavour of the month, much as a century ago people questioned whether clanking steam-driven mechanical men could be real men.
    Absolutely. It's not the flavor of the month -- it's the flavor of the '60s. The idea of "scripting" or "cracking" intelligence in a clean, logical form is long dead in research efforts -- though it's the first thing people think of when they envision intelligent machines.

    AI has been eclipsed by the fields of machine learning and "computational" intelligence, which focus on building models from data. In general, these methods are adaptive and evolutionary in nature, and many of them cannot be easily reverse-engineered.
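
    To make that concrete, here's a minimal sketch in plain Python -- not any particular library or research system, just an illustration of the "model built from data" idea. A simple (1+1) evolution strategy keeps mutating the weights of a tiny hand-rolled network until it fits sampled data points; the target curve and every name in it are made up for the example.

    Code:
    import math
    import random

    random.seed(0)

    # Training data sampled from a target curve the programmer never encodes
    # as an explicit rule inside the model itself.
    DATA = [(x / 10.0, math.sin(x / 10.0)) for x in range(-30, 31)]

    def predict(weights, x):
        # Tiny hand-rolled network: 4 tanh units feeding one linear output.
        w1, b1, w2, b2 = weights
        hidden = [math.tanh(w1[i] * x + b1[i]) for i in range(4)]
        return sum(w2[i] * hidden[i] for i in range(4)) + b2

    def loss(weights):
        # Mean squared error over the sampled data.
        return sum((predict(weights, x) - y) ** 2 for x, y in DATA) / len(DATA)

    def mutate(weights, scale=0.1):
        # Gaussian jitter on every parameter -- the "evolutionary" step.
        def bump(v):
            return v + random.gauss(0, scale)
        w1, b1, w2, b2 = weights
        return ([bump(v) for v in w1], [bump(v) for v in b1],
                [bump(v) for v in w2], bump(b2))

    # (1+1) evolution strategy: keep a mutation only if it fits the data better.
    best = ([random.gauss(0, 1) for _ in range(4)],
            [random.gauss(0, 1) for _ in range(4)],
            [random.gauss(0, 1) for _ in range(4)],
            0.0)
    best_loss = loss(best)
    for _ in range(20000):
        child = mutate(best)
        child_loss = loss(child)
        if child_loss < best_loss:
            best, best_loss = child, child_loss

    print("final mean squared error:", round(best_loss, 4))
    # Nobody scripted the final behaviour; the weights emerged from the data,
    # and reading them tells you almost nothing about "how" the model works.

    The point isn't the specific algorithm -- swap in gradient descent or a genetic algorithm and the moral is the same: the behaviour is grown against data, not written line by line.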

    I, for one, see AI arising someday through these sorts of avenues. In short, we'll know how we designed the adaptive process that created it, but we won't actually understand how the thing it created works.

    Whether or not it will actually fit what the common person thinks of as "AI" (which is basically a human being built out of computer parts) is kind of a moot point, since that colloquial image is not well defined. That question can really be broken into two parts: "Can we make a machine that's pretty damn clever" and "can we make a machine with consciousness."

    I expect the first to happen eventually (maybe within our lifetimes, maybe not), and I think the second can occur along with it if we're evolving complex systems. But we may well be clever enough to find a way to create smart machines without evolving things as complex as (conscious) biological organisms.

    Siggy

  4. #4  
    New Member
    Join Date
    Oct 2010
    Location
    Melbourne
    Posts
    2
    The purpose of the Turing Test is to judge whether a machine can simulate intelligence well enough to be indistinguishable from a human. Turing believed that machines are capable of 'thinking' (beyond mere processing), but responses such as John Searle's Chinese Room argument have cast a lot of doubt on Turing's original view.
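
    For anyone who hasn't seen it spelled out, the test itself is just a protocol -- Turing's "imitation game": an interrogator exchanges text with two hidden respondents, one human and one machine, and has to say which is which. Here's a rough sketch in plain Python; both respondents are trivial placeholders made up for illustration, not real conversational programs.

    Code:
    import random

    def machine_respondent(question):
        # Placeholder "machine": deflects every question the same way.
        return "That's an interesting question. What do you think?"

    def human_respondent(question):
        # Placeholder "human": a slightly more specific canned answer.
        return "Honestly, I'd have to think about '%s' for a while." % question

    def imitation_game(questions, interrogator_guess):
        # Hide which label (A or B) belongs to the machine.
        labels = ["A", "B"]
        random.shuffle(labels)
        assignment = {labels[0]: machine_respondent, labels[1]: human_respondent}

        # The interrogator only ever sees the text of the answers.
        transcript = {label: [respond(q) for q in questions]
                      for label, respond in assignment.items()}
        guess = interrogator_guess(transcript)
        return guess == labels[0]   # True if the machine was correctly identified

    def naive_interrogator(transcript):
        # Guess the respondent whose answers are all identical; otherwise guess at random.
        for label, answers in transcript.items():
            if len(answers) > 1 and len(set(answers)) == 1:
                return label
        return random.choice(list(transcript))

    questions = ["Do you ever dream?", "What did you have for breakfast?"]
    print("machine identified:", imitation_game(questions, naive_interrogator))

    A machine "passes" to the extent that interrogators keep getting it wrong; everything interesting lives in how good the respondents and the interrogator are, which is exactly what this sketch leaves out.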

    It is arguable that computers cannot think because they are not self-aware; they may demonstrate self-awareness, but the gap between imitating self-awareness and actually being self-aware is a huge chasm.

    Humans are not self-aware when born, and not until the age of four or five can they completely separate themselves (mentally) from others. The mechanism behind this hugely accelerated learning in humans' early years is not well understood and cannot, at this time, be replicated in computers.

    Siggy's response best sums up what's likely causing the disagreement between you and your father:
    That question can really be broken into two parts: "Can we make a machine that's pretty damn clever" and "can we make a machine with consciousness."
