
Thread: AI vs. Computational Neuroscience

  1. #1 AI vs. Computational Neuroscience 
    Forum Freshman
    Join Date
    Feb 2008
    Posts
    24
    Hi, does anyone know the difference between Artificial Intelligence and Computer Science? Do they both require programming skills?

    Thanks.



  2. #2 AI vs. Computational Neuroscience 
    Forum Sophomore Vaedrah
    Join Date
    Aug 2008
    Posts
    155
    Hi Infinitism

    I am interested in AI just as an idle hobby, so perhaps others can give a more accurate reply. I do, however, consider "artificial intelligence" to mean behavior that emulates some aspect of human reasoning but is implemented in a man-made fashion (possibly electronic or mechanical). Perhaps there is a better definition, since animals can also exhibit intelligence?

    Neural networks (NNs) are often used for AI, as they can show adaptive memory and recognize patterns. For example, a NN may have X input nodes and Y output nodes and be trained to recognize Z input patterns (vectors). Each input pattern can be thought of as a "question" and each output pattern as an "answer". These answers can be compared against a set of target "answers", much as a student's test answers are compared with the correct ones in a classroom setting.
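    As a rough illustration (this is Python rather than MathCad, and the layer sizes and patterns below are made up), the "question in, answer out" structure looks something like this:

        import numpy as np

        X, H, Y = 4, 8, 3                         # input, hidden and output node counts (arbitrary)
        rng = np.random.default_rng(0)
        W1 = rng.normal(scale=0.5, size=(H, X))   # input -> hidden weights
        W2 = rng.normal(scale=0.5, size=(Y, H))   # hidden -> output weights

        def forward(question):
            """Map an input pattern (a "question") to an output pattern (an "answer")."""
            hidden = np.tanh(W1 @ question)
            return np.tanh(W2 @ hidden)

        question = rng.integers(0, 2, size=X).astype(float)   # one of the Z possible input vectors
        print(forward(question))                              # the untrained net's current "answer"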

    Unlike a dedicated software script, the outcome is somewhat unpredictable. Given enough iterations, the internal NN weights (analogous to inter-neuron connection strengths in a biological system) are adjusted until the NN "learns" to match questions to target answers. This is useful because the NN can then be treated as an "arbitrary mapping device".

    One claimed property is that these NNs can "generalize" to some extent: if the input question is fuzzy or noisy, the NN will still present a best-guess answer to it.

    Also, if some internal nodes (neurons) are lost, the NN survives with only slightly impaired performance. The same phenomenon is seen in biological brains: slight damage causes slight impairment and can be recoverable. In contrast, a single error in software code can crash a program. This makes NNs fault tolerant.
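    To make the last few points concrete, here is a small sketch (again Python rather than MathCad, with made-up sizes and patterns) that trains a tiny NN by gradient descent, then feeds it a noisy question and finally silences one hidden node:

        import numpy as np

        rng = np.random.default_rng(1)
        X, H, Y = 4, 8, 2                                    # layer sizes, chosen arbitrarily

        questions = rng.integers(0, 2, size=(6, X)).astype(float)   # made-up "questions"
        targets   = rng.integers(0, 2, size=(6, Y)).astype(float)   # made-up target "answers"

        W1 = rng.normal(scale=0.5, size=(H, X))
        W2 = rng.normal(scale=0.5, size=(Y, H))

        def forward(q, W1, W2):
            h = np.tanh(W1 @ q)                              # hidden activations
            a = 1.0 / (1.0 + np.exp(-(W2 @ h)))              # sigmoid outputs in [0, 1]
            return h, a

        lr = 0.5
        for _ in range(5000):                                # "given enough iterations..."
            for q, t in zip(questions, targets):
                h, a = forward(q, W1, W2)
                grad_out = (a - t) * a * (1.0 - a)           # output-layer error term
                grad_hid = (W2.T @ grad_out) * (1.0 - h**2)  # hidden-layer error term
                W2 -= lr * np.outer(grad_out, h)             # adjust the inter-neuron weights
                W1 -= lr * np.outer(grad_hid, q)

        noisy = questions[0] + rng.normal(scale=0.2, size=X)           # a fuzzy version of question 0
        print("clean  :", forward(questions[0], W1, W2)[1].round(2))
        print("noisy  :", forward(noisy, W1, W2)[1].round(2))          # best-guess answer

        W2_damaged = W2.copy()
        W2_damaged[:, 0] = 0.0                                         # silence one hidden "neuron"
        print("damaged:", forward(questions[0], W1, W2_damaged)[1].round(2))

    The "noisy" and "damaged" answers typically stay close to the clean one, whereas deleting a line from a conventional script would simply break it.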

    I have been playing with NNs today in MathCad - I am testing them by multiplying two 4-bit binary numbers to produce an 8-bit binary output (256 input combinations). I am curious to see whether the NN will learn one subset of the 256 patterns faster if it has been exposed to another subset earlier, which would suggest some generalization.
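    For anyone who wants to try the same experiment, the full 256-pattern training set is easy to generate (Python sketch; the subset split at the end is arbitrary):

        import numpy as np

        def to_bits(n, width):
            """Binary encoding of n as a 0/1 vector of the given width (little-endian)."""
            return np.array([(n >> i) & 1 for i in range(width)], dtype=float)

        # All 16 x 16 = 256 products: 8 input bits (two 4-bit numbers) -> 8 output bits.
        inputs  = np.array([np.concatenate([to_bits(a, 4), to_bits(b, 4)])
                            for a in range(16) for b in range(16)])
        targets = np.array([to_bits(a * b, 8) for a in range(16) for b in range(16)])
        print(inputs.shape, targets.shape)      # (256, 8) (256, 8)

        # Two disjoint halves, e.g. train on subset A first, then see if B is learned faster.
        idx = np.random.default_rng(0).permutation(256)
        subset_A, subset_B = idx[:128], idx[128:]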

    I have set up the MathCad program to emulate 3-, 4-, 5- and 6-layer NNs, and all seem to work. I first tried "back propagation", i.e. gradient-based supervised training, but it becomes inaccurate for the larger NNs: the derivatives are chained from output back to input, so I suspect gradient errors accumulate. The most tolerant weight training so far has come from perturbation methods.
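    Perturbation training can take many forms; one simple variant of the idea nudges a single randomly chosen weight at a time and keeps the change only when the overall error drops (illustrative Python, with made-up layer sizes and data):

        import numpy as np

        rng = np.random.default_rng(0)

        def net_loss(w, inputs, targets):
            """Mean squared error of a 3-layer net whose weights are packed into one vector."""
            W1 = w[:24].reshape(6, 4)                  # input (4 nodes) -> hidden (6 nodes)
            W2 = w[24:].reshape(2, 6)                  # hidden (6 nodes) -> output (2 nodes)
            hidden  = np.tanh(inputs @ W1.T)
            outputs = np.tanh(hidden @ W2.T)
            return np.mean((outputs - targets) ** 2)

        # Toy task with made-up patterns: 8 four-bit questions, two-valued answers in {-1, +1}.
        inputs  = rng.integers(0, 2, size=(8, 4)).astype(float)
        targets = rng.integers(0, 2, size=(8, 2)).astype(float) * 2.0 - 1.0

        weights = rng.normal(scale=0.5, size=36)
        best = net_loss(weights, inputs, targets)

        for _ in range(20000):
            i = rng.integers(len(weights))             # pick one weight at random
            trial = weights.copy()
            trial[i] += rng.normal(scale=0.05)         # small random nudge
            trial_loss = net_loss(trial, inputs, targets)
            if trial_loss < best:                      # keep the nudge only if the error drops
                weights, best = trial, trial_loss

        print("final error:", best)

    No derivatives are needed, so there is nothing to accumulate from layer to layer - the price is many more iterations.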

    The problem is speed - the NN needs a great many computations, and on a PC these are performed sequentially. For example, even a small 8-node NN with 3 layers has two 8x8 weight matrices, i.e. 128 multiply-accumulate operations per pass (each requiring many processing steps).
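    The count follows directly from the matrix sizes:

        # 8-node, 3-layer net: two 8x8 weight matrices between the layers.
        nodes, layers = 8, 3
        macs_per_pass = (layers - 1) * nodes * nodes   # multiply-accumulates per forward pass
        print(macs_per_pass)                           # 128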

    I have been wanting to put a NN on a Field Programmable Gate Array (FPGA) for some time (Xilinx, Altera), as these have up to 2 million logic cells that can clock up to 2 GHz. Such a silicon device would allow most of the computations to occur in parallel rather than one after the other. The result would be a much faster NN that could also be made dense with respect to the number of nodes, or "neurons".
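    As a purely back-of-the-envelope illustration (every rate below is an assumption of mine, not a measurement), the gain comes from doing a whole layer's multiply-accumulates in one clock instead of one after another:

        macs_per_pass   = 128      # the 8-node, 3-layer example above
        pc_mac_rate_hz  = 1e7      # assumed effective sequential rate in an interpreted tool
        fpga_clock_hz   = 2e8      # assumed conservative FPGA fabric clock
        clocks_per_pass = 2        # one clock per layer if every MAC in a layer runs in parallel

        t_sequential = macs_per_pass / pc_mac_rate_hz
        t_parallel   = clocks_per_pass / fpga_clock_hz
        print(f"sequential ~{t_sequential * 1e6:.1f} us, parallel ~{t_parallel * 1e6:.2f} us")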

    I hope this answer isn't dragging on too much, but a contrasting approach would be to code AI as a software script based on conditional if-then statements. Although this could "appear" to show independent behavior, the script itself would need careful design so that every case is anticipated. In contrast, a NN is simply presented with a task and learns it.
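    The difference is easy to see in a toy example - a rule-based script only "knows" the cases its author wrote down (the rules below are invented purely for illustration):

        def rule_based_answer(question):
            """A hand-written "AI": every case has to be anticipated by the programmer."""
            if question == (0, 0):
                return "off"
            if question in ((0, 1), (1, 0)):
                return "partial"
            if question == (1, 1):
                return "on"
            return "don't know"            # anything not foreseen falls through here

        print(rule_based_answer((1, 0)))   # works only because that rule was written by hand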

    Also, an FPGA implementation would not exhibit unique convergence - two NNs presented with the same task would "learn" somewhat differently, because they operate in a pseudo-analog fashion and electrical noise voltages modify the convergence.

    My guess is that both approaches have merit and that a hybrid approach would be best. I would probably prefer the software code to act in a supervisory fashion and allow the FPGA(s) to perform the pattern recognition and abstraction tasks.

    The FPGA is software-configurable and can be used to emulate a PC processor (e.g. a Pentium) as well as a NN - alternatively, many FPGAs can be interconnected to build up a larger entity. Some of the cost-effective devices are less than $20 each!

    In any case the subject fascinates me :-D


    "The sky cannot speak of the ocean, the ocean cannot speak of the land, the land cannot speak of the stars, the stars cannot speak of the sky"

  3. #3 AI vs Computer Science 
    Forum Freshman
    Join Date
    Aug 2008
    Posts
    39
    I concur with Vaedrah: AI is surely a machine that can replicate a human being, whether mentally or physically, and computer science is surely not like that. I am no expert on this, but I would guess that you need programming skills for both.
