I've been thoroughly enjoying the new Terminator series, and it keeps bringing a few questions to mind that I was hoping to throw around on this forum.
Is seed AI the only real precursor to a machine like the one described in the Terminator series?
If so, how far away are we from building a first seed AI?
From what I understand, we are essentially at a brick wall! Could someone more informed comment?
If we built an intelligent program that could learn, self-optimize, and then recompile itself, wouldn't it still be limited by the amount of available processing power? If so, the singularity (as it is usually described) wouldn't happen overnight and could in fact take a substantial period of time.
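To make that intuition concrete, here is a toy sketch in Python. Everything in it is an assumption for illustration only: the fixed hardware budget, the efficiency ceiling, and the diminishing-returns rule are made-up numbers, not claims about how a real seed AI would behave.

# Toy model: recursive self-optimization on fixed hardware.
# Assumptions (hypothetical, for illustration): each rewrite improves software
# efficiency by a factor that shrinks as the code nears an optimum, while the
# hardware's raw operations per second stays constant.

HARDWARE_OPS = 1e15      # fixed processing power (hypothetical figure)
efficiency = 1.0         # useful work per operation, starting at a baseline
CEILING = 100.0          # assumed best efficiency reachable without new hardware

for generation in range(1, 21):
    # Diminishing returns: the gain shrinks as efficiency approaches the ceiling.
    gain = 1 + 0.5 * (1 - efficiency / CEILING)
    efficiency *= gain
    capability = HARDWARE_OPS * efficiency
    print(f"gen {generation:2d}: efficiency x{efficiency:7.2f}, "
          f"capability {capability:.2e} useful ops/s")

Under these made-up numbers the early generations improve quickly and then flatten out: software self-improvement alone plateaus, and genuinely explosive growth would also require acquiring more hardware, which takes time.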
Why is there an assumption that the program/being would feel threatened by us and in the process become hostile? It seems logical to me that any being significantly more intelligent/enlightened than a human would naturally evolve into a docile, non-violent creature.
As we humans evolve, we do seem to become less violent. Look at the Aztecs, who lived only 400-500 years ago.
Finally, even if we were able to create such a machine, doesn't it seem that we would never put it in control of the world's nuclear arsenal? What benefit would there be in doing that anyway?
Would love to hear some thoughts.