Results 1 to 62 of 62

Thread: Future of AIs

  1. #1 Future of AIs 
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    AIs (artificial intelligences),

    What do you think AIs will be like in the future? I remember reading that if we do create powerful AIs, almost all future technology will be made by AIs. How would this impact humans? Because from what I know about our world, I think we will be jealous of them. Would we all just become cyborgs so we don't get shown up by AIs? Also, how would we know if the AI is aware, or if it is intelligent?

    This quote is from Alan Turing, and I think it'd be a good way to test it.
    "...a computer would deserve to be called intelligent if it could deceive a human into believing that it was human."


    Reply With Quote  
     


  3. #2  
    Forum Ph.D.
    Join Date
    Feb 2007
    Posts
    924
    It appears to me that cognitive science now has two paradigms: AI and conceptual metaphor. The hypothesis of each excludes the other. I am a fan of the conceptual metaphor. Reference: "Philosophy in the Flesh" by Lakoff and Johnson.


    Reply With Quote  
     

  4. #3  
    Forum Freshman
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    9
    If AI gets too advanced, it could have profound implications for the human race. Can you imagine all the unemployment?
    Reply With Quote  
     

  5. #4  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    ...unless the earth becomes like a paradise where we relax and they do everything for us
    Reply With Quote  
     

  6. #5  
    Forum Junior Kolt's Avatar
    Join Date
    Dec 2006
    Location
    California
    Posts
    246
    "AI" is a popular but bogus term.

    There is no such thing as "Artificial" intelligence in the same way that there is no such thing as "Artificial" fire. Fire from a Zippo lighter is no less natural and no more synthetic than fire from a lightning strike.

    Neither is a product of creation. Instead they are a product of cause and effect. As with fire, pre-existing chemicals are arranged or rearranged using broad tools and rudimentary action that initiates a reaction. Hence the term "Chemical Reaction". Intelligence is no different. It is not something that is created from nothing. It is something that is caused. We have to figure out a way to arrange certain pre-existing elements (whether they be material or digital), or currents of energy, and then figure out a way to pressurize those subjects to a degree that would cause them to react and in turn cause what is known as "Emergence". This is not a simple, let alone easy, task.

    I think a more proper term would simply be "NI": New Intelligence.

    But let's assume that a new, genuinely self-aware entity is initiated in the near or distant future. The questions and/or scenarios are all but limitless. How smart is it? How fast does it learn? How does it communicate? Is it the equivalent of a child or an adult? What is its emotional state? Does it even have emotions? Is it aggressive or benign? Perhaps more importantly, how do we human beings, who are emotionally and psychologically complex creatures, respond to this new intelligence? How does this NI reflect our state of being, who we are? By learning from this NI, what can we learn about ourselves? What responsibilities do we have towards it? If this NI is capable of achieving the same level of awareness that humans have achieved, does it deserve to have the same rights? ...The list can go on and on.

    If and when an NI is ever truly rendered, then I think it's a pretty good bet that almost every facet of science, discovery, philosophy, ethics, law, politics, and even religion will be put to the test. It would be a profound thing, to say the least.
    Reply With Quote  
     

  7. #6  
    M
    Forum Junior
    Join Date
    Sep 2006
    Posts
    282
    There is no artificial intelligence... only natural stupidity.
    Reply With Quote  
     

  8. #7  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    Alan Turing talked about intelligence, not artificial intelligence, when referring to computers. That's interesting.
    Reply With Quote  
     

  9. #8  
    Forum Freshman Inevitablelity's Avatar
    Join Date
    Jul 2007
    Posts
    17
    Don't worry guys, we are all gonna be run over by AI, because

    "The Singularity is fast approaching !"

    http://www.ourpla.net/cgi/pikie?TheNextSingularity
    - Insuperable Singularity
    Reply With Quote  
     

  10. #9  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    The term "artificial intelligence" has to be taken in context.

    If something is created to exhibit intelligence, that mock intelligence is not actual intelligence, it's simply something behaving like it. It's artificial intelligence. Same as artificial plants...although probably more like an artificial limb...

    When something moves beyond being a false representation of intelligence, and becomes accepted as true intelligence, it ceases to be artificial. It is now real intelligence. Until that point is reached, the intelligence is simply mocking the behaviors and properties of intelligence to the best degree of its design, and is therefore still an artificial, manufactured intelligence.

    Such is the very definition of 'artificial.' Something that is an imitation of the real thing.


    Quote Originally Posted by Steve Miller
    Alan Turing talked about intelligence not artificial intelligence referring to computers. That's interesting.
    That's because Turing was interested in the nature of intelligence, particularly how to identify it, rather than creating it. In order to create intelligence, you have to first know what it is. What are its properties? How do you know when you've achieved true intelligence, versus simply a detailed mimic?
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  11. #10  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    Quote Originally Posted by Wolf
    The term "artificial intelligence" has to be taken in context.

    If something is created to exhibit intelligence, that mock intelligence is not actual intelligence, it's simply something behaving like it. It's artificial intelligence. Same as artificial plants...although probably more like an artificial limb...

    When something moves beyond being a false representation of intelligence, and becomes accepted as true intelligence, it ceases to be artificial. It is now real intelligence. Until that point is reached, the intelligence is simply mocking the behaviors and properties of intelligence to the best degree of its design, and is therefore still an artificial, manufactured intelligence.

    Such is the very definition of 'artificial.' Something that is an imitation of the real thing.


    Alan Turing talked about intelligence not artificial intelligence referring to computers. That's interesting.
    That's because Turing was interested in the nature of intelligence, particularly how to identify it, rather than creating it. In order to create intelligence, you have to first know what it is. What are its properties? How do you know when you've achieved true intelligence, versus simply a detailed mimic?
    He was a well-known scientist. Do you seriously think you will ever create intelligence as a primary goal? Good God!
    Reply With Quote  
     

  12. #11  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    I'd say that lightning hitting a tree and causing a fire would be more natural than a Zippo lighter, because it is a part of nature, but with the Zippo lighter you've got all this excess stuff like the plastic and the metal. But because humans are a part of nature and we did make the lighter, I guess the lighter has become a part of nature in a way. Same for AI/NI: would they not seem like natural intelligent creatures just because humans created them?

    Also, if the robots become aware, would they believe that they go to heaven, or the Allspark? Would they be like humans, who need things to believe in, like God, and who need to know what the meaning of life is and all? I don't think they'll be like us, because they could just keep learning and not need some deeper meaning to it, and they would be like a different species if they became aware. Also, they would know how they got there and why they exist.

    Instead of new intelligence, I guess it could also be called man-made intelligence.

    Quote Originally Posted by Kolt
    If and when an NI is ever truly rendered then I think its a pretty good bet that almost every facet of science, discovery, philosophy, ethics, law, politics and even religion will be put to the test.
    How do you think it'll put everything to the test?

    Edit: What's the singularity?
    Reply With Quote  
     

  13. #12  
    Forum Ph.D.
    Join Date
    Dec 2006
    Location
    Norway
    Posts
    927
    Ah, humanity, fooling ourselves into thinking we have separated ourselves from nature. Actually, in the last 200 years of industrialization we have come closer to nature than any creature before us ever has, gaining an unrivaled amount of knowledge about how it works and how it can be manipulated. Everything that humans make might seem unnatural, but it's all made of good old atoms and molecules. Even our abstract ideas like money and mathematics are all grounded in chemical reactions happening in our brains.

    And coming down to the Zippo lighter: it's an unusual combination of standard materials you find in the Earth's crust.

    If you consider a Zippo lighter flame artificial, you should also consider everything else made or secreted by living creatures or plants artificial: honey, which is made by bees; rubber, which is produced by trees; amber (a resin), also coming from trees; silk, made by silkworms; threads, spun out of sheep hair; etc. All humans do is leech on other animals and combine what they make into something new.
    "When you have eliminated the impossible, whatever remains, however improbable, must be the truth."
    A. C. Doyle
    Reply With Quote  
     

  14. #13  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    All materials used stem from the very same planet, right? No matter if organic or inorganic. And all the materials ever used to light a fire, even in space, stem from the same universe, right?

    Hence the flame, or rather the burning process, was pretty much the same all the times?
    Reply With Quote  
     

  15. #14  
    Forum Freshman Inevitablelity's Avatar
    Join Date
    Jul 2007
    Posts
    17
    Quote Originally Posted by Wolf
    The term "artificial intelligence" has to be taken in context.

    If something is created to exhibit intelligence, that mock intelligence is not actual intelligence, it's simply something behaving like it. It's artificial intelligence. Same as artificial plants...although probably more like an artificial limb...

    When something moves beyond being a false representation of intelligence, and becomes accepted as true intelligence, it ceases to be artificial. It is now real intelligence. Until that point is reached, the intelligence is simply mocking the behaviors and properties of intelligence to the best degree of its design, and is therefore still an artificial, manufactured intelligence.

    Such is the very definition of 'artificial.' Something that is an imitation of the real thing.

    ...
    What makes you think that a mocked intelligence is not as good as the one you exhibit? You sound so divine.

    Point is, there is no difference between a mocked intelligence and a real one.

    Don't forget: once AI surpasses (at the point of Singularity) your abilities in all aspects of thinking, it won't matter what's mocked and what's artificial.

    http://en.wikipedia.org/wiki/Technological_singularity
    - Insuperable Singularity
    Reply With Quote  
     

  16. #15  
    Suspended
    Join Date
    Sep 2006
    Posts
    967
    You mean that the AI will undo the Big Bang?

    God, I love AI.

    In the beginning there was the singularity; it was all there ever was, and in it all want was satisfied: it had, in the least amount of space possible, as much as possible of everything possible. If we undo its decay we will have it again.

    I don't think humans can do it.
    Reply With Quote  
     

  17. #16  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    I read the wiki but I still don't get why it's called a singularity. I agree with the intelligence explosion bit, because it seems obvious. Also, I wonder: will the AIs want us? They might think of us as pests that are getting in the way and make some sort of "human repellent".

    Also, we view ourselves as separate from nature because we consider ourselves to be better, and we can change it on a huge scale. Though we are a part of nature, and what we make is then also a part of nature, like our waste. But if you saw honey and a computer and were to choose which was more natural, what would you choose?

    I do agree that what we create is a part of nature, but I still think of it as unnatural. What other creature wastes as many resources as us?

    Maybe we're the aliens on this planet
    Reply With Quote  
     

  18. #17  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Quote Originally Posted by Inevitablelity
    What makes you think that a mocked intelligence is not as good as the one you exhibit?...
    That's the big question, isn't it? At least that's where I've been headed in these posts.... And, of course, the focus of Turing's work. How do you determine when an artificial intelligence becomes actual intelligence?

    Quote Originally Posted by Inevitablelity
    ...u sound so divine.
    Thanks! :P

    Quote Originally Posted by Inevitablelity
    Point is theres is no difference between a mocked intelligence and real one.
    Yer right. I had so much fun killing all those intelligent bots last night in my games. Those little AI's are real intelligences, after all.

    Oh wait, yer talking about when created intelligence parallels natural intelligence! Crap...what do we call all that pseudo stuff that isn't comparable? Those little game bots, my search screen, my car's EEC, and that stock bot? I guess we could call those "fake" intelligences, since they're not really intelligent, they're just acting like an aspect of intelligence, right? Maybe we could find a better word than "fake," though. Hmm...something that means "made in imitation of something natural"...like..."artificial." Golly gee. Why is this an issue again?

    Quote Originally Posted by Inevitablelity
    Dont forget, once AI surpasses(point of Singularity) your abilities in all aspects of thinking, it wont matter whats mocked and whats artificial.
    Uh...I think that's kinda the point, isn't it?

    Once the created intelligence matches all aspects of what we consider "true intelligence," it ceases to be a mimic and becomes true intelligence by definition. Unless at that point we change our definition.

    The bots running around in my video games have artificial intelligence. They're programmed to behave and "think" in a manner that replicates (more accurately, mimics) certain aspects of real intelligence. They're not actually intelligent, though, according to our understanding of what intelligence is. They're artificial intelligences. The same as the aesthetic objects in my front office are artificial plants.

    They're not real plants, they're artificial. If they grew, lived, died, propagated, etc., like real plants, so much so that I couldn't tell them apart from real plants... then they might as well be real plants.

    Like the AI bots in my games, the artificial plants in my front office mirror some aspects of the real thing. The artificial plants look real, but that's about it. Put a real plant and a fake plant in a room, and you can tell which one's fake.

    If someone put a real plant, and an artificially created plant in the room, and no matter what I did I couldn't tell the two apart, I could only conclude that the two plants are real. I wouldn't know which was which, unless someone told me. At that point deciding if the lifelike artificial plant is "real" or not depends on the definition of a "real" plant. I could simply say that real plants are only plants which exhibit natural plant properties, and which came from natural plants. If I don't specify that, then there's no reason why the lifelike duplicate can't be considered real. However, if it is exactly like the real plant, and I'm saying it's still fake because it was created and not from nature, I'm kinda bending the rules a bit.

    Fortunately, plants are not as difficult a subject as intelligence. If someone managed to create artificial plants which could not be distinguished from real plants, no one would need to decide if they should have rights.

    Chances are, even if one day an artificial "real" intelligence was created, it'd probably still be considered "artificial" simply because it wasn't born but created....but that's another ball of wax.

    Quote Originally Posted by Trix
    Maybe we're the aliens on this planet
    Heh, that's been a line in many a science-fiction book before. I think both Ray Bradbury and Arthur C. Clarke believe we're actually Martians.

    Of course, we ARE the only species on the planet that actively destroys our own habitat, and actively makes efforts against our own gene pool. We're definitely the something-est species on the planet!
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  19. #18  
    Time Lord
    Join Date
    Mar 2007
    Posts
    8,046
    Quote Originally Posted by Trix
    ...unless the earth becomes like a paradise where we relax and they do everything for us
    We'd still run into the barrier of overpopulation, because no matter how advanced our robots get, the land itself will only produce so much food in a year.

    If we don't use the "rat race" to compete for who will eat and who won't, then we have to use war.


    Quote Originally Posted by Wolf
    Like the AI bots in my games, the artificial plants in my front office mirror some aspects of the real thing. The artificial plants look real, but that's about it. Put a real plant and a fake plant in a room, and you can tell which one's fake.
    The AI's in your games are not there to give the appearance of being intelligent. They're there to offer you a challenge, maybe even defeat you.

    Imagine a robot programmed to survive.

    What would be the effective difference between that robot and an animal?
    Reply With Quote  
     

  20. #19  
    Suspended
    Join Date
    Sep 2006
    Posts
    967
    Quote Originally Posted by kojax
    ...unless the earth becomes like a paradise where we relax and they do everything for us
    We'd still run into the barrier of overpopulation, because no matter how advanced our robots get, the land itself will only produce so much food in a year.

    If we don't use the "rat race" to compete for who will eat and who won't, then we have to use war.


    Like the AI bots in my games, the artificial plants in my front office mirror some aspects of the real thing. The artificial plants look real, but that's about it. Put a real plant and a fake plant in a room, and you can tell which one's fake.
    The AI's in your games are not there to give the appearance of being intelligent. They're there to offer you a challenge, maybe even defeat you.

    Imagine a robot programmed to survive.

    What would be the effective difference between that robot and an animal?
    We can produce enough energy from graviton-to-photon conversion on the Earth to pulverize an entire galaxy cluster.

    Naturally no one believes me, because it's too big of an energy to be missed.
    But it's there.
    Reply With Quote  
     

  21. #20  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    True, overpopulation will be a problem, but I'm sure that if we can make AIs, they'd think of a way to get us off our planet (then there are the aliens). What I'm trying to say is that eventually we would become like pets to the AIs. They provide for us and we give them companionship (though they could get it from other AIs). They would have no reason to keep us unless we made sure they couldn't harm us.

    Would an AI that is built to survive be intelligent?

    Also, would robots/AIs be evolution's dream? They could most likely live forever, live in most environments, and be indestructible. (I'm just guessing; I haven't read too much into evolution.)
    Reply With Quote  
     

  22. #21  
    Time Lord
    Join Date
    Mar 2007
    Posts
    8,046
    My personal belief on AI's is that we will be the AI's. I think if a human being tries to create intelligence by any process other than copying him/her self, the AI will be somehow incomplete, maybe insane, or just plain messed up.
    Reply With Quote  
     

  23. #22  
    Forum Freshman Caliban's Avatar
    Join Date
    Jul 2007
    Location
    Australia
    Posts
    28
    AI is so complicated that I don't think we will get it right for another 100 years.

    Look at AI in games - It just plain sucks
    Reply With Quote  
     

  24. #23  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    I don't think it's such a big problem. As we achieve true simultaneity in programming, it's heading straight there, though it's not quite there yet. This was mentioned in some other thread before, by megabrain I think. Get well megabrain, and be back right here soon.

    For what I think, though, we don't want to achieve AI (or some other named sort of intelligence) as such; the goal is to copy the thinking process and to apply the results to computers later on, to be able to judge whether some kind of intelligence was achieved with their help. Therefore, computer development strives for concurrency first these days.

    That's how far no one has yet come.
    Reply With Quote  
     

  25. #24  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Why is it that people always assume AI will be evil? As in "will kill all of humanity."

    From my perspective, there are three possible outcomes for AIs:

    1. They will kill us. (Unusual, but then again we find it easy to kill things weaker than us. Exterminating humanity would depend on how they perceived us as a threat to their existence, and on their predisposition towards killing anything inferior to themselves. Both are stretches at best.)

    2. They will not challenge us in any way, possibly recognizing humanity as the creators of their "race" and therefore similar to elders. (All depends on disposition. What ethos do they follow?)

    3. They will coexist with humanity as a new race. (Probably most likely, since if we assume they're more logical, they won't see the value in the petty squabbling of humanity. In this scenario, it is more likely that war will be a result of humanity's stupidity.)
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  26. #25  
    Forum Junior Kolt's Avatar
    Join Date
    Dec 2006
    Location
    California
    Posts
    246
    The real question is not so much about 'Intelligence' as it is 'Emotion'. Or better yet, is it even possible for the former to exist without the latter?

    In theory you might say yes. But in practice: has there ever existed a sentient, self-aware being that was not an emotional being as well? I've certainly never seen nature produce anything of that sort. I am still not entirely convinced that pure logic and rationality can ever be totally and completely removed from feelings. And I think that the relationship between how we think and how we feel is not one that can be easily defined. It has often been romanticized that to achieve a state of thought totally void of emotion would be to achieve a state of perfection. Personally, I'm not so sure.

    All intelligent creatures understand to one degree or another that if the integrity of their physical being is compromised, they could die or be rendered permanently inoperable. But without fear, what good is this knowledge? If you had no feelings either way in regards to death, then what logical reason would you have for avoiding hazardous situations? One cannot have the desire to live if one is not capable of desire. It's the old proverbial: "Why get out of bed in the morning if you don't care?"

    My theory is that intelligence and emotion are inseparable. Where there's one, there's the other.

    So if and when this new, artificial, man-made sentient being is brought into existence, we have to assume that it will be an emotional being as well. And given that assumption, a brand new intelligent, emotional being would be almost entirely unpredictable. But at the same time, if I had to guess, if I were asked to speculate on or predetermine the nature or behavior of such a being, then I would suggest that the behavior we would observe first and foremost would be fear.



    ...That's my own opinion though. Who knows, maybe it would just be hungry.
    Reply With Quote  
     

  27. #26  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    Quote Originally Posted by Kolt
    My theory is that intelligence and emotion are inseparable. Where there's one, there's the other.
    You're just talking about intelligence and emotion as, how to say, qualities of a human being, right? Not as if
    there are two persons living in one, actually (I felt that way while reading your post)?
    Reply With Quote  
     

  28. #27  
    Forum Junior Kolt's Avatar
    Join Date
    Dec 2006
    Location
    California
    Posts
    246
    Quote Originally Posted by Steve Miller
    You're just talking about intelligence and emotion as, how to say, qualities of a human being, right?....



    ....Right.
    Reply With Quote  
     

  29. #28  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    Fine. Thanks!
    Reply With Quote  
     

  30. #29  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    Why fear? And anyway, I don't think that the two are inseparable. Look at some humans: some are cold, others feel no emotion whatsoever, and the rest have varying degrees of emotion. Also, look at game AIs: they can usually beat me, but I don't think they have emotion. They might not be advanced enough, but yeah. Though you are right, emotions make things worthwhile; why live if you never feel love, happiness, and the rest? Even if we create robots/AIs and they have emotions, I'd say they would go with logic over emotion, because that's how they are built.
    Reply With Quote  
     

  31. #30  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Quote Originally Posted by Kolt
    The real question is not so much about 'Intelligence' as it is 'Emotion'. Or better yet, is it even possible for the first to exist without the latter?
    I agree partly that the essence of true intelligence is the ability to dream, have emotions, etc, but there is a catch to it all. You'd have to define some way of determining if the emotions were self-produced, or simply programmed imitations. There already exist robots which are programmed to imitate human emotions based on predefined criteria. They're not intelligent, though, simply because they can mock emotions.

    Quote Originally Posted by kojax
    The AI's in your games are not there to give the appearance of being intelligent. They're there to offer you a challenge, maybe even defeat you.
    By what? Psychic powers? Presence? Color aesthetics? Or by moving and behaving like real players using a mimic of the intelligence that drives real players?

    They're there to replace an otherwise intelligent real user with a bot which is programmed to behave like a real player. Why doesn't that classify as artificial intelligence? :?

    Quote Originally Posted by kojax
    Imagine a robot programmed to survive.
    That's pretty much what the bots in the game are programmed to do. Obviously they're not real people, so their reactions and capabilities are limited to a very small slice of the thing they are imitating. When you shoot at them, they dodge out of the way, because they are programmed to avoid losing.

    From an appearances perspective, the dodging bot is quite real, since it does exactly what real people do in the same situation. However, move beyond its programmed abilities and the bot begins to break down (metaphorically). That is because it is only an (artificial) imitation of a real person.
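    A scripted game bot of the kind described here really is just a hand-written rule table. A minimal sketch (the states, thresholds, and action names are all invented for illustration):

```python
# Minimal scripted game bot: a fixed, hand-written rule table, not
# anything that learns or "thinks". All states and thresholds here are
# invented for illustration.
def bot_action(state):
    # state: dict with boolean 'under_fire', 'enemy_visible', and an
    # integer 'health' (0-100).
    if state["under_fire"]:
        return "dodge"      # programmed to avoid losing
    if state["health"] < 30:
        return "retreat"    # low health: run for cover
    if state["enemy_visible"]:
        return "shoot"      # engage when a target is in sight
    return "patrol"         # default behaviour outside its script

print(bot_action({"under_fire": True, "enemy_visible": True, "health": 100}))   # dodge
print(bot_action({"under_fire": False, "enemy_visible": False, "health": 100})) # patrol
```

    Inside its rule table the bot looks lifelike; hand it any situation the table doesn't cover and the imitation breaks down, which is exactly the "mimic versus real intelligence" distinction being argued here.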

    Quote Originally Posted by Kolt
    What would be the effective difference between that robot and an animal?
    Good question. One that Turing was trying to sort out as well. People such as Descartes and Malebranche believed that animals were merely machines, and that their reactions were purely based on the stimulations applied to them. This Cartesian viewpoint on life stated basically that humanity was the only truly intelligent, thinking creature, due to divine (ie - God's) influence. The separation of mind and body, in which only humanity possessed a true mind.
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  32. #31  
    Time Lord
    Join Date
    Mar 2007
    Posts
    8,046
    You know, as for the AI-killing-humanity problem: that war could start long before any AIs ever begin being human-like or sentient. (Or it could start even if they were never going to reach that point.)

    You give an AI an objective, and it will relentlessly pursue that objective to the extent that its programming allows it to. What if we give a highly intelligent AI an objective, and then later on change our minds? Wouldn't it try to kill us in order to prevent us from stopping it from reaching its objective?
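    The worry can be pictured as a goal-pursuing loop with no external override. A toy sketch (the goal, the stop condition, and the function names are all invented for illustration):

```python
# Toy sketch of an objective-driven agent. With has_override=False the
# loop ignores any request to stop and runs until the goal is reached;
# with has_override=True an external "change of mind" can halt it.
def run_agent(goal_steps, stop_requested, has_override):
    progress = 0
    while progress < goal_steps:
        if has_override and stop_requested(progress):
            return ("stopped", progress)   # operator changed their mind
        progress += 1                      # relentless pursuit of the goal
    return ("goal reached", progress)

# An operator changes their mind halfway through:
stop = lambda p: p >= 5
print(run_agent(10, stop, has_override=True))   # ('stopped', 5)
print(run_agent(10, stop, has_override=False))  # ('goal reached', 10)
```

    The only difference between the two runs is whether a stop check was designed in from the start, which is the whole question being raised here.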
    Reply With Quote  
     

  33. #32  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    They will kill those who made them kill them.

    I mean, they will kill those who made them in such a way as to be killed by them. Sorry many times for that!
    Reply With Quote  
     

  34. #33  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    We could have the Three Laws like in the books by Asimov, and make sure those laws are at the very core, so the AIs aren't able to go against them. Of course there are going to be people who will try to make AIs without those laws, but I'd still say that the majority won't, because the risk is far too great. What if the AI turns on the creator? The whole AIs-taking-over-the-world thing is odd; if we have the ability to create them, we would always have a plan to destroy them. After all, we are pretty good at that. But if AIs do go bad, well, I'll feel sorry for us.
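    Baking the laws "into the very core" can be pictured as a veto filter that every candidate action must pass before the robot may act. A toy sketch, loosely after Asimov (the flags are invented, and the priority conflicts his stories actually explore are left out):

```python
# Toy "Three Laws" veto filter, loosely after Asimov. Each candidate
# action is described by boolean flags about its consequences; the
# filter rejects it if any law is violated. Real conflicts between the
# laws (the heart of Asimov's stories) are deliberately omitted.
def permitted(action):
    if action["harms_human"]:
        return False    # First Law: never injure a human being
    if action["disobeys_order"]:
        return False    # Second Law: obey orders (First Law already checked)
    if action["harms_self"]:
        return False    # Third Law: protect its own existence
    return True

safe = {"harms_human": False, "disobeys_order": False, "harms_self": False}
bad  = {"harms_human": True,  "disobeys_order": False, "harms_self": False}
print(permitted(safe), permitted(bad))  # True False
```

    Even this trivial version shows the catch raised later in the thread: the filter is only as good as how "harms_human" is defined, and whoever builds the robot decides that.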
    Reply With Quote  
     

  35. #34  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Quote Originally Posted by kojax
    You know, as for the AI-killing-humanity problem: that war could start long before any AIs ever begin being human-like or sentient. (Or it could start even if they were never going to reach that point.)
    Cool angle, I like it. :wink: Care to spin a little story?

    Quote Originally Posted by kojax
    You give an AI an objective, and it will relentlessly pursue that objective to the extent that its programming allows. What if we give a highly intelligent AI an objective, and then later on change our minds? Wouldn't it try to kill us in order to prevent us from stopping it from reaching its objective?
    That seems to be the basis for most movies involving AIs, and unfortunately it may not be far from the truth. The problem is in how intelligent the programmers are, and how intelligent the AI is. If the programmers were dumb enough to create an AI that pursued an objective with no break clauses or logic overrides, that's one fault. If the AI itself is not intelligent enough to know when its objectives have changed, that could be another problem.
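    To make the "break clause" idea concrete, here is a minimal sketch (not from the thread; all names are hypothetical) of an objective-pursuing agent loop that honors a human-set override flag, checked on every step before any work is done:

    ```python
    class Agent:
        """Toy objective-pursuing agent with an explicit break clause."""

        def __init__(self, objective, max_steps=100):
            self.objective = objective  # target amount of progress
            self.progress = 0
            self.max_steps = max_steps  # hard cap: never run forever
            self.override = False       # the break clause humans can trip

        def request_stop(self):
            # Logic override: once set, the agent must halt, no matter
            # how close it is to its objective.
            self.override = True

        def run(self):
            steps = 0
            while self.progress < self.objective and steps < self.max_steps:
                if self.override:            # checked before every action
                    return "halted by override"
                self.progress += 1           # one unit of work toward the goal
                steps += 1
            return "objective reached"

    agent = Agent(objective=10)
    agent.request_stop()
    print(agent.run())  # prints "halted by override"
    ```

    The design point is that the override is checked inside the pursuit loop itself, so "later changing our minds" does not require out-thinking the agent, only flipping a flag the loop is obliged to respect.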

    Quote Originally Posted by Trix
    We could have the 3 laws like in the books by Asimov, and make sure those laws are at the very core so the AIs aren't able to go against them.
    Asimov is AI God! I was wondering when Robbie was gonna show up.

    Quote Originally Posted by Trix
    Of course there are going to be people who will try to make AIs without those laws, but I'd still say that the majority won't, because the risk is far too great.
    Heh heh, the fun of AI warfare. Who's to say they don't program those laws in for their own protection, but classify "humanity" as "only us, not them"?

    Quote Originally Posted by Trix
    What if the AI turns on its creator?
    Natural selection, I suppose. A species that kills itself goes extinct. So I guess if humanity is dumb enough not to install a safety mechanism, we deserve to get plungered down the evolutionary drain. Hopefully these rogue AIs will be intelligent enough not to target those not responsible for driving them crazy... (I dislike dying because of stupid people.)

    Quote Originally Posted by Trix
    The whole AIs-taking-over-the-world idea is odd: if we have the ability to create them, we would always have a plan to destroy them; after all, we are pretty good at that. But if AIs do go bad, well, I'll feel sorry for us.
    Did someone call for a UN inspection team?
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  36. #35  
    Time Lord
    Join Date
    Mar 2007
    Posts
    8,046
    We could have the 3 laws like in the books by Asimov, and make sure those laws are at the very core so the AIs aren't able to go against them.
    The problem is that the laws are very subjective, and computers don't handle subjectivity very well.

    How would they define "causing harm to a human"? Would it be based on whether the human calls it harm, or would it be specific, like only counting physical damage as harm?

    Would the computer understand the equivalence between hitting a human with a club and hitting a human with the claws/hands/whatever of its own body? Or would it think the club was the one doing the harm instead of it?
    Reply With Quote  
     

  37. #36  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    Well, the club can't move by itself, so the robot is the one who's picking it up and intentionally hitting the human; technically it is the one doing the harm. The club would be like an extension of itself.

    If AIs do develop to the point where they have a conscience, would it be similar to ours? Like killing another AI is a big no-no while killing a human is alright? But you're right, the laws we make will be broken; after all, every rule is made to be broken.

    Anyone know the UN's number?
    "If Earth is heaven and this is the only place we are meant to live, why did God create the rest of the Universe and give us the means of reaching it? "
    Ophiolite
    Reply With Quote  
     

  38. #37  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Quote Originally Posted by Trix
    Anyone know the UN's number?
    Yep.

    1-800-USE-LESS
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  39. #38  
    Suspended
    Join Date
    Jun 2007
    Location
    Sacramento
    Posts
    237
    Quote Originally Posted by Wolf
    1-800-USE-LESS
    Reply With Quote  
     

  40. #39  
    Forum Ph.D. streamSystems's Avatar
    Join Date
    May 2007
    Location
    a reality you have all yet to properly explain
    Posts
    911
    All of you, one day, like all the so-called greats, will be logged in like some type of museum-talking know-it-all candidate.

    My point?

    As soon as you think you know everything, like you say something that you can see, like you believe, YOU ARE BOXED IN... you have made your "peace".

    I caution all of you... leave room for some type of UNKNOWN... that way no one can box you in and turn you into a robot...

    ...have FAITH in some unknowable God, at least.
    Does a theory of everything therefore need to be purely theoretical and only account for the known laws and forces in handling the improbability of fortune telling?

    the www feature below can explain it better.
    Reply With Quote  
     

  41. #40  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    I don't understand the post. Mind explaining it and how it's connected with AIs?
    Reply With Quote  
     

  42. #41  
    Forum Ph.D. streamSystems's Avatar
    Join Date
    May 2007
    Location
    a reality you have all yet to properly explain
    Posts
    911
    Good point.

    I am assuming an AI is something like a robot... something that is controlled because we know completely how it is designed. What makes us human and not completely controllable is our ability to surprise even ourselves... to have the unknowable, god-like (gasp) element.

    I am also making the statement that when we know our own wiring completely, we will be as ROBOTS.
    Does a theory of everything therefore need to be purely theoretical and only account for the known laws and forces in handling the improbability of fortune telling?

    the www feature below can explain it better.
    Reply With Quote  
     

  43. #42  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    Quote Originally Posted by streamSystems
    All of you, one day, like all the so-called greats, will be logged in like some type of museum-talking know-it-all candidate.

    My point?

    As soon as you think you know everything, like you say something that you can see, like you believe, YOU ARE BOXED IN... you have made your "peace".

    I caution all of you... leave room for some type of UNKNOWN... that way no one can box you in and turn you into a robot...

    ...have FAITH in some unknowable God, at least.
    I decidedly disagree with you. In the field of rhythm, there was the chance to know it all, basically.
    Let's see how far you'll get.
    Reply With Quote  
     

  44. #43  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    Thanks for clearing that up; now I understand. It was a good point. So you don't think robots can surprise themselves?
    Reply With Quote  
     

  45. #44  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    They did, didn't they?
    Reply With Quote  
     

  46. #45  
    Forum Ph.D. streamSystems's Avatar
    Join Date
    May 2007
    Location
    a reality you have all yet to properly explain
    Posts
    911
    Thank you for having this discussion with my robot, venus. For more information on artificial intelligences, please log onto my website. My robot thanks you for this discussion, and on behalf of streamSystems, I wish you all well with your pursuits with artificial life-forms.
    Does a theory of everything therefore need to be purely theoretical and only account for the known laws and forces in handling the improbability of fortune telling?

    the www feature below can explain it better.
    Reply With Quote  
     

  47. #46  
    Forum Freshman Inevitablelity's Avatar
    Join Date
    Jul 2007
    Posts
    17
    Quote Originally Posted by Wolf
    Quote Originally Posted by Trix
    What if the AI turns on its creator?
    Natural selection, I suppose. A species that kills itself goes extinct. So I guess if humanity is dumb enough not to install a safety mechanism, we deserve to get plungered down the evolutionary drain. Hopefully these rogue AIs will be intelligent enough not to target those not responsible for driving them crazy... (I dislike dying because of stupid people.)
    It's dumb to say that you can install a safety mechanism against something that can out-perform you.

    That's as good as saying chimpanzees will install war machinery against humans entering their forest areas.
    - Insuperable Singularity
    Reply With Quote  
     

  48. #47  
    Forum Freshman Inevitablelity's Avatar
    Join Date
    Jul 2007
    Posts
    17
    Quote Originally Posted by kojax
    We could have the 3 laws like in the books by Asimov, and make sure those laws are at the very core so the AIs aren't able to go against them.
    The problem is that the laws are very subjective, and computers don't handle subjectivity very well.

    How would they define "causing harm to a human"? Would it be based on whether the human calls it harm, or would it be specific, like only counting physical damage as harm?

    Would the computer understand the equivalence between hitting a human with a club and hitting a human with the claws/hands/whatever of its own body? Or would it think the club was the one doing the harm instead of it?
    The three laws depend on personal interpretations. If AI can out-perform our thinking abilities, they may interpret the laws in ways we could never have imagined.

    Worse, they may create similar laws against us.

    Which is more important, human rights or chimpanzee rights? Whose rights will take precedence in the eyes of a much superior AI?
    - Insuperable Singularity
    Reply With Quote  
     

  49. #48  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    In I, Robot this didn't work well. The 3-law thing, I mean.
    Reply With Quote  
     

  50. #49  
    Suspended
    Join Date
    Jun 2007
    Location
    Sacramento
    Posts
    237
    Quote Originally Posted by Steve Miller
    In I, Robot this didn't work well. The 3 law thing I mean.
    True, it was the one robot with freedom of choice that had the purest intent.
    Reply With Quote  
     

  51. #50  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    However, there was a loop that they (the robots) were on, and coming to conclusions themselves was not a prospect initially. It's quite interesting that it did work out that way in the movie, as well as for the Terminators; and Lt. Cmdr. Data's character was a success too.

    Seems there was a little bit of truth in each of the characters.
    Reply With Quote  
     

  52. #51  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Quote Originally Posted by Inevitablelity
    Its dumb to say that u can install safety mechanism against something that can out-perform U.
    We haven't established that an AI will necessarily be better than human intelligence yet.

    Aside from that, to give a really plain example of "AI use stupidity": if we create an AI and then toss it the keys to our weapons systems without requiring any human intervention before action (a la Skynet), that... would be stupid.

    If you expand on the Skynet scenario, having an AI system come online and suddenly decide we're obsolete isn't something that just happens. We'd be able to test for that reaction during development.
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  53. #52  
    Forum Freshman
    Join Date
    Apr 2007
    Location
    Toronto
    Posts
    91
    I'd say that AIs will become much smarter than humans eventually, and I don't think it'll take that long either. Why can't we make it so that there isn't a way to harm humans at all, no loopholes or anything like that? I know that this wouldn't be possible, because if AIs are smarter then they will out-do us. But we could get the first AI to come with a rule that can't be broken, then every few years get another AI to find flaws and adjust accordingly.
    Reply With Quote  
     

  54. #53  
    Forum Professor
    Join Date
    May 2005
    Posts
    1,893
    AIs will likely only have whatever emotions or desires we give them. If we don't deliberately program them to want to take over the world and kill humans, they probably won't have any interest in doing that. Heck, they probably won't even have a desire to continue to exist; they probably won't care if you turn them off. Our human emotions (including lust for power, the desire to continue living, etc.) are all a product of our evolution. Since AIs won't be evolving in any sort of competitive environment, they probably won't have any of those traits.
    Reply With Quote  
     

  55. #54  
    Forum Ph.D. Steve Miller's Avatar
    Join Date
    Dec 2006
    Location
    Magdeburg, Saxony-Anhalt, Germany
    Posts
    782
    Quote Originally Posted by Trix
    I'd say that AIs will become much smarter than humans eventually, and I don't think it'll take that long either. Why can't we make it so that there isn't a way to harm humans at all, no loopholes or anything like that? I know that this wouldn't be possible, because if AIs are smarter then they will out-do us. But we could get the first AI to come with a rule that can't be broken, then every few years get another AI to find flaws and adjust accordingly.
    How could they become smarter than we are, when human anatomy was the only template?
    Reply With Quote  
     

  56. #55  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Before we can say "smarter" we have to define "smart."

    If we say someone is smarter because they can calculate faster, and can remember more facts, then yeah, a computer-based AI will probably be a lot smarter.

    But hopefully there's more to intelligence than memory space and processing speed. :?
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  57. #56  
    Forum Freshman Inevitablelity's Avatar
    Join Date
    Jul 2007
    Posts
    17
    Quote Originally Posted by Scifor Refugee
    AIs will likely only have whatever emotions or desires we give them. If we don't deliberately program them to want to take over the world and kill humans, they probably won't have any interest in doing that. Heck, they probably won't even have a desire to continue to exist; they probably won't care if you turn them off. Our human emotions (including lust for power, the desire to continue living, etc.) are all a product of our evolution. Since AIs won't be evolving in any sort of competitive environment, they probably won't have any of those traits.
    You're perfectly right as long as humans program the AI. Since AI is already superior today in many fields, it's just a matter of time before AI programs better AI, and that's when things start to turn against humans.
    - Insuperable Singularity
    Reply With Quote  
     

  58. #57  
    Forum Professor
    Join Date
    May 2005
    Posts
    1,893
    Quote Originally Posted by Inevitablelity
    You're perfectly right as long as humans program the AI. Since AI is already superior today in many fields, it's just a matter of time before AI programs better AI, and that's when things start to turn against humans.
    But even if an AI were to create another AI, why would the first AI program the second to have emotions, want to kill humans, or whatever? You seem to be taking it for granted that an AI will have those qualities, but you haven't explained why it ever would. People have qualities like that because of natural selection. I don't see any reason to expect a computer program to have them.
    Reply With Quote  
     

  59. #58  
    Forum Ph.D.
    Join Date
    Nov 2005
    Location
    Columbus, OH
    Posts
    935
    I think it's a lot to assume AI would develop feelings of any sort; the limitations inherent in the programming are very real, and a computer can only do EXACTLY what it's programmed to do.
    Of course we really don't understand a whole lot about consciousness, but I feel it's more of a leap to think that AI will "evolve" to a point where it'll have emotions and agendas than to simply recognize that they are computer programs doing exactly what their programming dictates.
    Reply With Quote  
     

  60. #59  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Quote Originally Posted by Neutrino
    I think it's a lot to assume AI would develop feelings of any sort; the limitations inherent in the programming are very real, and a computer can only do EXACTLY what it's programmed to do.
    Of course we really don't understand a whole lot about consciousness, but I feel it's more of a leap to think that AI will "evolve" to a point where it'll have emotions and agendas than to simply recognize that they are computer programs doing exactly what their programming dictates.
    If we accept that AIs will eventually become truly intelligent, and if the definition of intelligence includes the ability to have and exercise free will, then at that point we kinda lose control, because the AI is free to think and do as it pleases. If there were any limitations on what it could think, I don't know if I'd consider it a truly intelligent entity. It'd still be just a machine to me.

    On the flip side of things, we can see a point at which AIs will advance to such a level that they are intelligent-LIKE machines, but with their limitations.

    On another completely separate line of thought, what if AIs develop a sense of religion? Perhaps, even though they achieve intelligence, they still view themselves as machines because they cannot achieve a soul, or an unchangeable sense of self-identity?
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     

  61. #60  
    Forum Freshman Inevitablelity's Avatar
    Join Date
    Jul 2007
    Posts
    17
    Resistance is futile, and you won't even be assimilated.


    Our era begins now,
    http://www.nationaldefensemagazine.o...ifleToting.htm
    - Insuperable Singularity
    Reply With Quote  
     

  62. #61  
    Suspended
    Join Date
    Sep 2006
    Posts
    967
    Quote Originally Posted by Inevitablelity
    Resistance is futile, and you won't even be assimilated.


    Our era begins now,
    http://www.nationaldefensemagazine.o...ifleToting.htm
    It would be a waste of carbon not to have humans.
    Reply With Quote  
     

  63. #62  
    Forum Ph.D. Wolf's Avatar
    Join Date
    May 2007
    Location
    Here
    Posts
    969
    Quote Originally Posted by Inevitablelity
    Resistance is futile, and you won't even be assimilated.
    Well, as long as the human operator never gets replaced by an uncontrollable AI, we'll be fine...

    Er... as long as the human operator never gets replaced by an uncontrollable AI, and these things never become hard to kill...
    Wolf
    ---------------------------------------------------------
    "Be fair with others, but then keep after them until they're fair with you." Alan Alda
    Reply With Quote  
     
