
Thread: Artificial intelligence- can it become bilinear?

  1. #1 Artificial intelligence- can it become bilinear? 
    Forum Freshman
    Join Date
    Mar 2009
    Location
    Great Barrington
    Posts
    62
    When many people think of artificial intelligence, they think of science fiction movies featuring robots, computers, or the like that have sentient awareness. But I've come to think of this as not possible.

    Binary      ASCII # (decimal)   Character
    01000001    65                  A

    If you look at this binary value, you'll see that 01000001 equals 65 in decimal, and in ASCII the number 65 stands for a capital A. So this is a linear process, like a math problem. In biological organisms, thoughts and the mind are made up of chemical messages, which can be very versatile and diverse in their effects because they are random. If computer artificial intelligence is just math, then I don't see how AI could ever make a computer sentient and give it free will, since true randomness does not exist outside of biology.
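    For anyone who wants to try it, here is a minimal Python sketch of that binary-to-decimal-to-ASCII mapping (just an illustration of the conversion, nothing more):

    Code:
    # Minimal sketch: the binary -> decimal -> ASCII mapping described above.
    bits = "01000001"
    value = int(bits, 2)        # read the string as a base-2 number -> 65
    letter = chr(value)         # code point 65 is the ASCII capital 'A'
    print(bits, value, letter)  # 01000001 65 A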


  3. #2  
    Forum Masters Degree Numsgil's Avatar
    Join Date
    Jan 2009
    Posts
    708
    Bilinear means something different from the way you're using it. I think what you mean is analog vs. digital. Analog computers do exist (they were the first computers), but let's examine AI in digital form first.

    Or rather, let's first examine wet AI (i.e., us). Let's assume that there's nothing "magical" about our intelligence, that it's entirely contained within the physical matter of the brain, its neurons, and their connections. And let's assume that those neurons and their connections follow the laws of physics.

    Now, in classical mechanics the world is deterministic (that is, entirely predictable). But in the world of quantum mechanics things have an inherent randomness. The possible events are predictable, but which event actually occurs isn't.

    Now, if neurons work only through classical mechanics, it should be relatively easy to someday simulate them on a computer just by simulating the way the molecules interact inside each neuron. This would take a crazy amount of processing power, but it would still be possible. We could also construct "slow" AI that are as smart as a human but whose "brains" run much slower because our computers aren't fast enough. If it takes a year of computation for our AI to do the equivalent of one second of human thinking, well... that's a start.

    But suppose neurons work through quantum mechanics, and intelligence is deeply rooted in non-determinism (deeply rooted in events that have a strong random element). Modern computers simulate randomness using complex mathematical functions. But these methods have periods (meaning the sequence repeats if you run it long enough) and may have biases in higher dimensions. So it's possible that to properly simulate intelligence you need access to real quantum randomness. In that case you can hook a digital computer up to something like a decaying atom to generate random bits to build numbers from. Such devices are possible with current technology (they're used in quantum cryptography), but they aren't all that common.
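    To make the distinction concrete, here is a small Python sketch (my own illustration; the OS entropy pool is ordinary hardware noise, not a decaying atom) contrasting a seeded pseudo-random generator with bits pulled from outside the program:

    Code:
    import os
    import random

    # A seeded pseudo-random generator (Mersenne Twister in CPython) is
    # deterministic: the same seed always yields the same sequence, and the
    # sequence eventually repeats (it has a period).
    prng_a = random.Random(42)
    prng_b = random.Random(42)
    print([prng_a.random() for _ in range(3)])
    print([prng_b.random() for _ in range(3)])  # identical to the line above

    # Bytes from the operating system's entropy pool are seeded by physical
    # noise sources outside the program (the same idea as wiring in a quantum
    # source, just a much more mundane one).
    print(os.urandom(4).hex())  # different on every run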

    But ultimately computers are digital, and no matter how many bits of precision are used there will always be numbers that can't be represented exactly (like 0.1). And even if you get around that, you can never properly represent irrational numbers (like pi) in calculations, so there's almost always some non-zero error in any calculation. Now, it's possible that at the Planck scale the real universe isn't using real numbers anyway, but let's ignore that for the moment.
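    You can see the 0.1 problem directly in any language with binary floating point; here's a quick Python check (nothing deep, just the standard rounding behavior):

    Code:
    import math
    from decimal import Decimal

    # 0.1 has no exact binary representation, so rounding error creeps in.
    print(0.1 + 0.2 == 0.3)   # False
    print(Decimal(0.1))       # shows the exact value actually stored:
                              # 0.1000000000000000055511151231...

    # Irrational numbers like pi are stored as finite approximations.
    print(math.pi)            # 3.141592653589793 (a 64-bit approximation)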

    If intelligence is somehow intertwined with irrational numbers and the entire real number line, something a digital computer really struggles with, then it is possible that what we want from AI might only be possible on analog computers. But I consider this unlikely. I would side with either the first or second case I presented above: brute force simulation of the molecules in the brain using a quantum random bit generator if necessary. That should give you human level intelligence.


  4. #3  
    Forum Junior DrmDoc's Avatar
    Join Date
    Nov 2008
    Location
    Philadelphia, USA
    Posts
    286
    This may not be germane to your discussion, but some time ago, at another site, I speculated on the probability of truly intelligent machines. Here is an excerpt from a proposed discussion of the topic:

    I was thinking about how our understanding of brain evolution could contribute to the development of artificially intelligent machines. Ideally, the design and programming of such machines should approach that of human brain structure and function. Setting aside our technological inadequacies, truly intelligent machines may still be years away, because we are approaching their development through a perspective of the brain that does not consider its evolution. Human intelligence arose from the combined contribution of successive neurological adaptations, influenced by the survival needs of ancestral animals over millions of years. If we desire to construct machines truly capable of humanlike intelligence, shouldn't we predicate their design, if not their programming, on those crucial steps in brain evolution leading to human intelligence?

    Because the brain is a product of very precise steps in its evolution, any effort to emulate how the brain learns and develops should be inclusive of those steps. From what we know of how the brain likely evolved, our very first step towards intelligent machines should involve the development of their afferent subsystems; specifically, the development of those subsystems that will deliver palpable information into the processing centers of these machines.

    Those sensory subsystems capable of detecting physical (palpable) stimuli were likely the first to evolve in the brains of ancestral animals, because such systems are what we find in the most primitive parts of the contemporary brain. In the myelencephalon of the contemporary human brain, we find afferent neural systems that deliver taste and tactile sensory information into brain structure. In intelligent machines, the taste- and tactile-equivalent subsystems would be those that activate the machines' processing centers whenever they are tactilely stimulated. Evidence in the contemporary brain suggests that tactile sensory detection assumed a different form with the evolution of the metencephalon.

    Contiguous with the myelencephalon, the metencephalon evolved those afferent neural systems capable of perceiving potentially tactile stimuli indirectly, through the detection of minute changes in air pressure. What we know as sound detection is merely a sophisticated form of tactile perception. With the distinction that sound detection brings to intelligent machines, tactile stimulus processing could be categorized as either direct (physical stimuli) or indirect (sound stimuli). Tactile stimuli should be given the highest processing priority, and such processing should initially evoke an assessment of the potential threat to the physical status of the machine. This threat assessment process should also initiate a visual-equivalent recognition process.

    The construction of visual-equivalent subsystems should lead to the development of a sensory hub equivalent to what we find in the human brain through the function of the thalamus. This sensory hub should be capable of integrating divergent sensory data; i.e., it should be capable of assigning data cues that link incoming visual sensory data with tactile sensory data. These data cues are what the mechanized thalamus will summon to recall the entirety of a sensory experience and compare what it recalls to incoming sensory data. Instead of filing or storing incoming sensory data in its congruous form, the data should be sorted and stored by the details of its sensory type, with key data links that will pull the divergent sensory data back together. For example, the visual characteristics of hair would be stored by its distinct attributes, such as length, textural appearance, color, and shape. Along with each attribute, a data link such as "hair" would be stored to bring the separate attributes together and form the perception of hair in the brains of intelligent machines. The distinction in this type of processing is that each attribute would be stored without respect to its overall connections. For example, hair can be black, but so can a coffee pot. Therefore, the color black may be stored with innumerable unique data links, which the mechanized brain will summon in response to the appropriate stimuli.
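    If it helps make the "data cue" idea concrete, here is a toy Python sketch (my own naming and structure, not a real system): each attribute is stored on its own, tagged with the concept links that can later pull a whole percept back together.

    Code:
    from collections import defaultdict

    # Toy sketch: attribute value -> set of concept links ("data cues").
    attribute_store = defaultdict(set)

    def store(attribute_value, concept_link):
        attribute_store[attribute_value].add(concept_link)

    store("black", "hair")
    store("black", "coffee pot")   # one attribute can serve many concepts
    store("long", "hair")
    store("fine texture", "hair")

    def recall(concept_link):
        """Gather every stored attribute carrying this data cue."""
        return {attr for attr, links in attribute_store.items() if concept_link in links}

    print(recall("hair"))              # {'black', 'long', 'fine texture'}
    print(attribute_store["black"])    # {'hair', 'coffee pot'}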

    This theorized construction of an artificially intelligent machine isn’t nearly complete without the drive, emotion, and conscience that make human brain function unique. There is evidence in the brain, suggested by its evolution, that these last three components may be essential to the design of our intelligent machine. However, their discussion is for another time.
    When I first raised this idea, I was hoping to flesh it out with someone more knowledgeable than I am in computer science. If this is not the forum for this type of discussion, forgive my intrusion.
  5. #4  
    Forum Masters Degree Numsgil's Avatar
    Join Date
    Jan 2009
    Posts
    708
    It seems you're arguing that consciousness can't be separated from outside stimuli, which makes me think of Helen Keller when she was a little girl. In her book, Helen says she was more animal than human, without abstract thinking.

    If that's your thinking, I think you should explore alife (artificial life), which is sort of like AI, but instead of trying to create smart programs it tries to recreate the fundamentals necessary for evolution. It might be that the first "real" AI comes from alife and is evolved rather than designed. And it might be that we're as clueless about understanding such an entity as we are about understanding ourselves (I can imagine trying to read through evolved assembly to understand how the AI works. What a nightmare!)
  6. #5  
    Forum Freshman
    Join Date
    Mar 2009
    Location
    Great Barrington
    Posts
    62
    I've got to go, but I'll read all of this when I get back. Have a good day, everyone!
  7. #6 a good read 
    New Member Starmind's Avatar
    Join Date
    Apr 2009
    Posts
    1
    Dear fellows

    Please find attached a very interesting, though lengthy, article, "Why Minds Are Not Like Computers," from http://www.thenewatlantis.com/public...like-computers, written by Ari N. Schulman.

    You can download the article here:

    http://www.ifi.uzh.ch/arvo/ailab/peo...tComputers.pdf

    Feedback is of course highly welcome. Enjoy!

    Kind regards
    Pascal
  8. #7  
    New Member LGM
    Join Date
    Apr 2009
    Posts
    4
    I don't know exactly what you mean by bilinear. A curve, like a line, may represent one meaning in a specific area, which may differ from the meaning it would have if it were represented in another area.

    As far as computers go right now, those things have to be defined, since 32/64-bit hardware is not entirely capable of producing results to the fullest representation of the code.
    In an ideal situation where a standard binary processing unit was available, a formula such as 180*2.14/3 would produce accurate results, as it is not bit-dependent. If I sent that processor A, B would come out; that's because the binary is not square, it is merely evaluating the binary. AB would go to the processor and A software would be available: its distance from A, its distance from Z. The best way to come up with it is to send them through a processor using a formula like the one I have listed here.

    To send it through that formula it's + and /, I think:
    01000001 + "180*2.14/3" / "180*2.14/3" should produce a result as if it had been run through it. This is very important when working with binaries, and they may not even have this working yet.
    Now, as far as an AI is concerned, I believe it must first be defined in a linear representation before another representation is derivable. Then the AI would be able to affect the other representation.
    :-D

    It is possible to form the binaries in such a way that they affect each other in a square logical representation made by multiplying them. Once they affect each other, the output would be related to the information supplied in the AI and the other binaries, producing an intelligent processing of information. This kind of development would open doors to all kinds of new hardware and uses.
    Don't think these kinds of things are not possible, because they very much are. Just because Intel uses a 32-bit processor does not mean I cannot process a 512-bit strand; it's just a matter of manipulating the binaries using +, *, -, and /. Windows uses something called a runtime library, which is not far off from an AI for the operating system.
    Hope this helps
  9. #8 Re: Artificial intelligence- can it become bilinear? 
    Suspended
    Join Date
    Apr 2008
    Posts
    2,178
    Quote Originally Posted by RosenNoir
    When many people think of artificial intelligence, they think of science fiction movies featuring robots, computers, or the like that have sentient awareness. But I've come to think of this as not possible.

    Binary      ASCII # (decimal)   Character
    01000001    65                  A

    If you look at this binary value, you'll see that 01000001 equals 65 in decimal, and in ASCII the number 65 stands for a capital A. So this is a linear process, like a math problem. In biological organisms, thoughts and the mind are made up of chemical messages, which can be very versatile and diverse in their effects because they are random. If computer artificial intelligence is just math, then I don't see how AI could ever make a computer sentient and give it free will, since true randomness does not exist outside of biology.
    I think sometimes that what makes a living thing a living thing is that he is not worried about his own or anyone else's survival.

    He is being pressed into a wall, and he decides, out of principle, that he will do something, live or die. Like the American Revolution. Without it we would be trying to get fat like our king and act like a big fat king. The rabble wanted more. They even changed England.

    The evil cannot see it coming, because they are computers. They are just doing whatever seems to be something that someone else would want, often just taking it and destroying it. But they cannot take freedom, because they cannot have it.

    They are alive and just fooling themselves. They were hurt, and programmed themselves to combat the situation they could no longer face. That is all evil really is: a machine.

    So yes, a machine would make a great evil leader or lawmaker. But a rabble, never.

    Sincerely,


    William McCormick
  10. #9  
    Forum Professor marcusclayman's Avatar
    Join Date
    Mar 2009
    Posts
    1,704
    I think we could do a lot in analog using the resonant properties of various media

    a sort of "orchestral programming"

    since sound can very easily be converted into electricity
    Dick, be Frank.

    Ambiguity Kills.
  11. #10  
    Forum Freshman thedrunk's Avatar
    Join Date
    May 2009
    Posts
    48
    Considering we already have A.I., what I believe you are referring to is an N.I., a neural intelligence, where the computer can produce an original idea, thought, or action, or solve a real-time equation. An A.I. will only be able to do what it is programmed to do, and those exist today. A Lexus car can, at the press of a button, parallel park itself. Even though it's just a command, it makes decisions as any A.I. from sci-fi movies and fiction writing would.

    The ability to create an N.I. will not exist until quantum computers are developed to the point of today's computers. The system to run an A.I. would be complex yet simple; one could do it on an analog computer, even a digital computer. But the load of running multiple tasks to form a decision would overwhelm all but a select few systems, and even those systems are myths based on conspiracy.
    One problem with an N.I. computer is its learning. It would have pre-programmed functions with the ability to learn from its experience, sort of like a child learning about the big world outside. To develop an N.I. with thoughts and actions of its own would take 16 years of training and education, with the N.I. experiencing the real world.
  12. #11  
    Forum Professor marcusclayman's Avatar
    Join Date
    Mar 2009
    Posts
    1,704
    A genetic computer could be programmed to evolve to solve problems: it evolves randomly until it finds a solution that works, and then, when it's not needed anymore, it is recycled back into the system.

    If one system isn't enough to handle a problem, two systems evolve, and so on and so forth until the problem is solved.

    If there are two incompatible systems, there will need to be a third to translate between the two.

    This could be an analog/digital hybrid.
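    As a rough illustration of that "evolve randomly until it works" loop, here is a toy genetic search in Python (purely digital, and the target problem is a made-up stand-in):

    Code:
    import random

    TARGET_LENGTH = 20   # stand-in "problem": evolve a string of twenty 1s

    def fitness(candidate):
        return sum(candidate)   # how close the candidate is to a working solution

    def mutate(candidate, rate=0.05):
        return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

    # Start from random candidates; each generation keeps the best and breeds mutants.
    population = [[random.randint(0, 1) for _ in range(TARGET_LENGTH)] for _ in range(30)]
    generation = 0
    while max(fitness(c) for c in population) < TARGET_LENGTH:
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]   # the rest are "recycled back into the system"
        population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]
        generation += 1

    print("solved in", generation, "generations")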
    Dick, be Frank.

    Ambiguity Kills.
  13. #12  
    New Member
    Join Date
    Aug 2009
    Posts
    3
    I used to think randomness was the answer. However, I don't know anyone who makes "random" decisions. What appears random is the result of a whole lot of variables: "I'm tired (lack of energy resulting from lack of sleep), so I don't 'feel' like going with you today." Obviously, an AI that doesn't get tired and doesn't sleep could not make this particular human decision. To me, using randomness in programming is simply making up for a LACK of programming. I mean, if we're making a game, we're programming enemies for the purposes of militaristic logic and basic survival instinct, nothing else. Enemies do not have parents to disappoint or kids to worry about. It's our experiences through life that give us a certain perspective and bias.

    I think what you'd have to do is break down different aspects of life into different equations. Then you take the experiences of representatives from all races and both genders, young and old, and create a database of likes and dislikes, plus a virtual world where an AI can make connections and draw parallels. It could see the similarities between a young boy and a middle-aged man as opposed to a woman. An AI mainly has to "live" and catch up to whatever age you want it to be in order to simulate a human of that age. This is not something that can be given, but something that must be experienced. That's why I think AI life is a better approach.
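    One crude way to picture that "connections and parallels" step is to compare like/dislike profiles and score their overlap. This is only a toy Python sketch with made-up data, not a serious proposal:

    Code:
    # Toy sketch: profiles as sets of likes; set overlap (Jaccard similarity)
    # stands in for "seeing the similarities" between people.
    profiles = {
        "young boy":       {"video games", "soda", "cartoons", "soccer"},
        "middle-aged man": {"soccer", "coffee", "news", "video games"},
        "woman":           {"coffee", "novels", "yoga", "news"},
    }

    def similarity(a, b):
        return len(a & b) / len(a | b)

    base = profiles["young boy"]
    for name, likes in profiles.items():
        if name != "young boy":
            print(name, round(similarity(base, likes), 2))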