Thread: The Technological Singularity

  1. #1 The Technological Singularity 
    Forum Freshman
    Join Date
    Jan 2014
    Posts
    9
    Hey, I'm new to the forum and science in general. I used to "research" the technological singularity theory that Kurzweil is so obsessed with. At the time I would blindly believe his predictions, but I do remember reading about scientists who feel the singularity is an impossibility.

    What do you guys think about the theory?

    Do you know of any good papers on the subject?

    Thanks.


  3. #2  
    ▼▼ dn ʎɐʍ sıɥʇ ▼▼ RedPanda's Avatar
    Join Date
    Aug 2012
    Location
    UK
    Posts
    2,737
    Quote Originally Posted by Lostnoob View Post
    What do you guys think about the theory?
    My only real objections are to the claims that the singularity is inevitable and that the growth is exponential.
    But it seems possible to me*.

    * But I do like sci-fi a lot.


    SayBigWords.com/say/3FC

    "And, behold, I come quickly;" Revelation 22:12

    "Religions are like sausages. When you know how they are made, you no longer want them."

  4. #3  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Exponential growth is not enough to reach a vertical asymptote (infinity; a singularity) in a finite time. That said, I don't think anyone could accurately guess what life will be like in another 100 or 200 years except that it will likely be more like today in more ways than most people would guess.
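MagiMaster's distinction can be checked numerically: an exponential is steep but finite at every finite time, while genuinely superexponential growth (e.g. dx/dt = x²) blows up at a finite t*. A minimal sketch with illustrative values (the rates are arbitrary, not anyone's actual forecast):

```python
import math

# Exponential growth dx/dt = r*x gives x(t) = x0 * exp(r*t):
# enormous for large t, but finite at every finite t -- no vertical asymptote.
r, x0 = 1.0, 1.0
print(x0 * math.exp(r * 50))           # ~5.18e21, large but finite

# Superexponential growth dx/dt = x**2 gives x(t) = x0 / (1 - x0*t),
# which diverges as t approaches t* = 1/x0: a true finite-time singularity.
def hyperbolic(t, x0=1.0):
    return x0 / (1.0 - x0 * t)

for t in [0.9, 0.99, 0.999]:
    print(t, hyperbolic(t))            # grows without bound as t -> 1.0
```

The point is only definitional: "exponential growth" and "a singularity" are different mathematical claims.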

  5. #4  
    Forum Freshman
    Join Date
    Sep 2012
    Location
    Turkey
    Posts
    72

  6. #5  
    Forum Cosmic Wizard
    Join Date
    Dec 2013
    Posts
    2,408
    One thing I have learned in my life is the only thing people who predict the future have in common is that they all get it wrong.

  7. #6  
    ▼▼ dn ʎɐʍ sıɥʇ ▼▼ RedPanda's Avatar
    Join Date
    Aug 2012
    Location
    UK
    Posts
    2,737
    Quote Originally Posted by dan hunter View Post
    One thing I have learned in my life is the only thing people who predict the future have in common is that they all get it wrong.
    Are you predicting that any predictions will be wrong?

    "I never predict anything, and I never will."
    Paul Gascoigne

  8. #7  
    Forum Freshman
    Join Date
    Apr 2014
    Posts
    8
    Why pseudoscience? If anything, singularitarianism is the logical progression of not believing in pseudoscience. If essentially we are a "meat computer" and there is no soul or any other metaphysical thing that makes humans intelligent, then intelligence at the level of a human being can be performed by a computer. Computers have two major advantages over humans: 1) they can be instantly copied, and most importantly 2) they are much faster than humans. Biological computers perform at the speed of chemical reactions, whereas technological computers perform at the speed of light. This gives an intelligent machine the ability to perform at the very least 10^6 times faster than humans. So then a computer only as smart as a person could copy itself into as many versions as possible given the storage space and processing available, and then proceed to "think" about a problem for one year that it would take the equivalent number of humans over 3 million years to think about. This is the singularity. What problem would the computer likely be trying to solve? Most likely how to design itself to be even better.
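For what it's worth, the arithmetic behind the post's claim can be made explicit. Both the 10^6 speedup and the copy count below are the poster's assumptions, not established figures; this only shows how the "over 3 million years" number would follow from them:

```python
# Back-of-envelope for the claim above. The 10**6 speedup figure is an
# assumption taken from the post, not an established measurement.
speedup = 10**6            # claimed machine-vs-human thinking speed ratio
machine_years = 1          # one year of machine "thought"

# One copy thinking for a year at 10**6x would equal a million human-years.
human_years_per_copy = speedup * machine_years
print(human_years_per_copy)               # 1000000

# It takes at least a handful of such copies to reach the post's
# "over 3 million years" of equivalent human thought.
copies = 4
print(copies * human_years_per_copy)      # 4000000
```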

    Imagine the Singularity in this way-- Google is the first AI and it truly knows all of human knowledge and can answer any question asked of it in as detailed a response desired. Since Kurzweil is the chief engineer of AI for Google, this is not an unlikely scenario. At that moment, Google's knowledge is expanding faster than humanity is adding to it, thus rendering human beings obsolete in the realm of thinking. On top of that, an intelligent machine could build any body for itself that it wanted, so humans are obsolete in the realm of doing things as well. You could just ask Google how to turn yourself into a machine, and it would probably tell you the answer in 573 milliseconds.

  9. #8  
    Genius Duck Moderator Dywyddyr's Avatar
    Join Date
    Jan 2013
    Location
    Scunthorpe, UK
    Posts
    11,849
    Quote Originally Posted by avatar4281 View Post
    If anything, singularitarianism is the logical progression of not believing in pseudoscience.
    Citation needed.

    whereas technological computers perform at the speed of light
    Citation needed.

    This gives an intelligent machine the ability to perform at the very least 10^6 times faster than humans.
    Yeah, apart from the speed being wrong we also need an intelligent computer. Seen one yet?

    Imagine the Singularity in this way-- Google is the first AI
    No it's not.

    and it truly knows all of human knowledge
    No it doesn't.

    and can answer any question asked of it in as detailed a response desired.
    Hardly.

    You could just ask Google how to turn yourself into a machine, and it would probably tell you the answer in 573 milliseconds.
    Would that be a "lean, mean productivity machine", a "green, lean health machine" or a "killing machine"?
    OTOH that took only (0.45 seconds).
    "[Dywyddyr] makes a grumpy bastard like me seem like a happy go lucky scamp" - PhDemon

  10. #9  
    Forum Freshman
    Join Date
    Apr 2014
    Posts
    8
    Uh yeah, I was talking about the future Google, not now, obviously, which really makes me think you are trolling me and didn't understand anything I was saying. And yes, I have seen an intelligent computer: Watson was able to defeat the three greatest human champions of Jeopardy. I suppose you can define intelligence any way you want so that machine intelligence is excluded, but you still have not refuted my most basic assertion that there is no good reason to think that intelligence is exclusive to humans and that technologically designed intelligence will be orders of magnitude greater than humans. Just look at every other example (and I am not bothering with exact numbers or citations because these are simple facts, just like the fact that computers operate with electrons that move at the speed of light and animals operate with slow chemical reactions): fastest animal: 80 mph; fastest car: 1,000 mph; fastest bird: 120 mph; fastest rocket: 16,000 mph. There is simply no logical reason to exclude intelligence from the list of things that machines will be able to do much better than humans, other than that you are very, very scared. Which you should be.

  11. #10  
    Forum Freshman
    Join Date
    Apr 2014
    Posts
    8
    Moreover, how can you say this is pseudoscience when the main proponent of The Singularity, Ray Kurzweil, is the head of AI at Google? They could hire anyone in the whole world and they choose some pseudoscience religious nutjob? What about Bill Gates? Have you ever read Bill Joy's Wired article, "Why the Future Doesn't Need Us"? What about John Von Neumann or Alan Turing? You say all of these pioneers of computing are a bunch of phrenologists? You think I'm in love with the Singularity? Hell no! I want someone to give me hope with a good argument against it but I have found nothing. At least people should take it seriously and not relegate it to pseudoscience. You are all a bunch of self-righteous pseudo-intellectual dick wads. Tell me I'm wrong. I want to be wrong.

  12. #11  
    Forum Masters Degree
    Join Date
    Apr 2014
    Posts
    592
    The Chinese Room Argument (Stanford Encyclopedia of Philosophy)

    Here is a good argument for why Google doesn't actually know anything. Not to say a computer couldn't comprehend knowledge, but it would first require actual language comprehension. Right now, Google doesn't know what a cow is. It just cross-references your searches with keywords. Smart computer people are, however, working on bridging that language barrier. http://www.nytimes.com/2012/06/26/te...anted=all&_r=0

    Of course, it takes thousands of computers to comprehend a single word. Not to mention context, synonyms, syntax, etc. etc. We are nowhere near computers actually understanding language. Not yet. But strides are being made.

  13. #12  
    Forum Isotope
    Join Date
    Feb 2012
    Location
    Western US
    Posts
    2,893
    Quote Originally Posted by avatar4281 View Post
    Moreover, how can you say this is pseudoscience when the main proponent of The Singularity, Ray Kurzweil, is the head of AI at Google? They could hire anyone in the whole world and they choose some pseudoscience religious nutjob? What about Bill Gates? Have you ever read Bill Joy's Wired article, "Why the Future Doesn't Need Us"? What about John Von Neumann or Alan Turing? You say all of these pioneers of computing are a bunch of phrenologists? You think I'm in love with the Singularity? Hell no! I want someone to give me hope with a good argument against it but I have found nothing. At least people should take it seriously and not relegate it to pseudoscience. You are all a bunch of self-righteous pseudo-intellectual dick wads. Tell me I'm wrong. I want to be wrong.
    Calm down. Really: Calm down.

    First, you're making the same standard logical error that many cranks do (I'm not calling you a crank -- yet -- but I am pointing out a shared characteristic): "If you can't prove X wrong, it's right." However, that "logic" erroneously assumes that X is true by default.

    Next, it is curious that you say that you want us to provide a good argument against The Singularity, yet you seem fervently to believe in it, to the point that you've invented a title for Kurzweil that he does not hold. He is one of several directors of engineering at Google; he is not the "head of AI" at that company.

    So, let's move the discussion away from an assumption that Kurzweil is correct by default, and instead examine a few weaknesses in his chain of reasoning. Perhaps most conspicuous, and one that has been much commented on by critics, is his faith in a continuing exponential increase in computing power. In short, he has extrapolated from past trends. But one must always be wary of extrapolations, particularly of dramatic ones involving exponentials. The late Stephen Jay Gould used to use the example of the price of chocolate bars. Extrapolation from the past would lead us to conclude that we would someday be treated to the one-dollar, zero-ounce chocolate bar.

    Already there are signs that advances in computing power have stalled at the hardware level. Shrinking chips used to mean "better, faster, cheaper," but now it means "hotter, and not much faster." The thermal limit actually started to take a big bite a few years ago, and the few years left before we hit a limit on how small a transistor can get (a few nm channel length, at which point the wavelength of an electron exceeds the transistor's critical dimension, and the "off" state disappears) will be characterised by ever-diminishing returns. The electrical power to run computers is now a significant percentage of the total power generated. If the per-computer performance per watt stops improving, then exponential growths in compute power imply exponential growths in electrical power. That simply isn't going to be practical.

    That saturation in computing power doesn't mean that Kurzweil's vision won't ever come to pass, but it does mean that his argument isn't nearly as solid as you've assumed. Again, sustaining exponentials is not trivial.
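tk421's caution about extrapolating exponentials can be illustrated with a toy model (all numbers here are made up for illustration): a logistic curve with a hard ceiling is nearly indistinguishable from an exponential early on, so an exponential fitted to the early data overshoots the eventual ceiling by orders of magnitude.

```python
import math

def logistic(t, K=1000.0, x0=1.0, r=1.0):
    """Saturating growth toward a carrying capacity K."""
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

def exponential(t, x0=1.0, r=1.0):
    """Naive exponential extrapolation of the same early trend."""
    return x0 * math.exp(r * t)

# Early on, the two curves agree closely...
for t in [0, 1, 2, 3]:
    print(t, round(logistic(t), 2), round(exponential(t), 2))

# ...but extrapolated to t = 20, the exponential predicts ~4.85e8
# while the logistic has saturated just below its ceiling of 1000.
print(round(logistic(20.0), 3), f"{exponential(20.0):.3g}")
```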

  14. #13  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    There are ways around a lot of those problems, and other routes to more processing power, but that aside an exponential curve doesn't have a singularity (aka a vertical asymptote).

  15. #14  
    Forum Isotope
    Join Date
    Feb 2012
    Location
    Western US
    Posts
    2,893
    Quote Originally Posted by MagiMaster View Post
    There are ways around a lot of those problems, and other routes to more processing power, but that aside an exponential curve doesn't have a singularity (aka a vertical asymptote).
    Kurzweil's definition of "singularity" differs from the conventional mathematical one. He uses it in the sense that von Neumann used it back in the 1950s.

    As to "other routes" yes, of course, but no one has identified a credible path that gives us sustained exponentials without a concomitant exponential growth in electrical power consumption. There are a lot of ideas (as always), but nothing concrete has emerged yet.

  16. #15  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    I know most people that talk about the technological singularity don't usually think about it in the mathematical sense, but it seems kind of implied to me.

    As for power, IIRC, reversible computing supposedly cuts the power requirements drastically. I haven't heard anything new about it for a while though.

  17. #16  
    Forum Freshman
    Join Date
    Apr 2014
    Posts
    8
    Thank you for the discussion. This is what I was looking for by coming to this forum. I was a little upset when immediately trolled by Dywyddyr, who didn't even understand that I was talking about Google at some point in the future, like 2025. Given his high rank on the forum, it seriously erodes the credibility of the entire forum. I suppose that few here have actually read "The Singularity is Near", which addresses most arguments, but maybe someone has read Bill Joy's "Why the Future Doesn't Need Us." I am mostly upset that many scientists and engineers (and I am going to university to become one) do not think about the fact that the technology they create could pose an existential threat to humans and perhaps all life on Earth. I am upset that the idea of a singularity is relegated to the category of pseudoscience instead of hypothesis. I think that it is the exact opposite-- it requires a certain wu to not take this seriously, to believe that we are special, that life as we know it now is the end of evolution.

    Perhaps Moore's law is slowing down. So far, though, this has been said many times, and each time there has been a technological breakthrough to keep it on track. More importantly the "Law of Accelerating Returns" is about the total sum of scientific knowledge, not Moore's Law. It says only that the more we know the faster our knowledge grows. This makes intuitive sense, especially in our current time of 7B people on the planet, some of whom are scientists and engineers working on expanding our knowledge. Each breakthrough allows another breakthrough-- it's like a ram-jet engine "the faster you go-- the faster you go". If all of a sudden there were far fewer scientists then of course the law of accelerating returns would no longer be applicable. Also, at some point we might actually know everything that is possible to know, but I have a feeling that we are a long way off from that right now.
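Read literally, "the more we know the faster our knowledge grows" is a first-order growth law, and that only yields an exponential, not a finite-time singularity (the distinction MagiMaster raised earlier in the thread). A sketch, treating total knowledge K(t) abstractly:

```latex
% "Growth rate proportional to current knowledge" is merely exponential:
\frac{dK}{dt} = cK \quad\Longrightarrow\quad K(t) = K_0 e^{ct},
\quad \text{finite at every finite } t.

% A finite-time singularity requires faster-than-linear feedback, $p > 1$:
\frac{dK}{dt} = cK^{p} \quad\Longrightarrow\quad
K(t) \propto (t_{*} - t)^{-1/(p-1)},
\quad \text{which diverges as } t \to t_{*}.
```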

    Current day computers are good enough to beat us. Once we truly understand how the human brain works we will be able to make a computer do the same thing only faster (notice I didn't say better). If you were transported back in time 150 years could you build an airplane? Probably because you understand the fundamental way an airplane works, but it took a lot of experimentation and development of theory to get there. The human brain is very complex and reverse engineering it is difficult. However, once we truly understand the fundamental way it works we can build a machine that will do the same thing. I am not a biologist, but having taken some basic biology courses I know that electrical signals in the brain are initiated by proton movements, which require chemical reactions. Also, human i/o times are significantly slower than digital computers. Can you adjust your fuel injector output to match exhaust O2 levels 16,000 times per second?

  18. #17  
    Genius Duck Moderator Dywyddyr's Avatar
    Join Date
    Jan 2013
    Location
    Scunthorpe, UK
    Posts
    11,849
    Quote Originally Posted by avatar4281 View Post
    Thank you for the discussion. This is what I was looking for by coming to this forum. I was a little upset when immediately trolled by Dywyddyr, who didn't even understand that I was talking about Google at some point in the future, like 2025.
    Ah right.
    So your argument is predicated on science fiction.
    BTW:
    If anything, singularitarianism is the logical progression of not believing in pseudoscience.
    Citation needed.
    whereas technological computers perform at the speed of light
    Citation needed.
    Current day computers are good enough to beat us. Once we truly understand how the human brain works we will be able to make a computer do the same thing only faster (notice I didn't say better). If you were transported back in time 150 years could you build an airplane? Probably because you understand the fundamental way an airplane works, but it took a lot of experimentation and development of theory to get there. The human brain is very complex and reverse engineering it is difficult. However, once we truly understand the fundamental way it works we can build a machine that will do the same thing.
    So, basically, you STILL haven't got anything as support except an appeal to "well we might be able to sometime in the future".

    but you still have not refuted my most basic assertion
    Like TK421 pointed out, you're arguing from the position of
    "If you can't prove X wrong, it's right.
    .

    Since YOU made an assertion it's on YOU to provide something more than wishful thinking or "you can't prove me wrong".

    More importantly the "Law of Accelerating Returns" is about the total sum of scientific knowledge, not Moore's Law. It says only that the more we know the faster our knowledge grows. This makes intuitive sense, especially in our current time of 7B people on the planet, some of whom are scientists and engineers working on expanding our knowledge. Each breakthrough allows another breakthrough-- it's like a ram-jet engine "the faster you go-- the faster you go". If all of a sudden there were far fewer scientists then of course the law of accelerating returns would no longer be applicable.
    Apart from the inaccuracy of your ramjet analogy 1 there's also this:
    "A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[102] a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.[96] The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse""

    1 The fact that a ramjet does have a maximum limiting speed means it's useless as an example of what you're claiming.

  19. #18  
    Forum Isotope
    Join Date
    Feb 2012
    Location
    Western US
    Posts
    2,893
    Quote Originally Posted by avatar4281 View Post
    Perhaps Moore's law is slowing down. So far though this has been said many times and each time there has been a technological breakthrough to keep it on track.
    And you're extrapolating from that past into the future, without acknowledging the physics that I pointed out. You cannot shrink a transistor indefinitely. It's not a technological problem; it's a physics one. Today, we have transistors with 20nm features. A few more generations will take us to the single-digit channel-length regime, at which point the electron wavelength will dominate and the off state disappears. Before we get to that hard limit, the expense of shrinking features further may make it uneconomical to continue. And before that, we've already hit the power "wall."

    More importantly the "Law of Accelerating Returns" is about the total sum of scientific knowledge, not Moore's Law.
    First, it's not a law in the sense of a physical law; it's an empirical observation about the past, so again you are guilty of extrapolation. Further, since most of that sum total of scientific knowledge has been acquired largely as a result of Moore's law (that's the nature of an exponential), your argument is somewhat circular.

    It says only that the more we know the faster our knowledge grows.
    That's pretty much the verbal description of an exponential; you're again being circular.

    This makes intuitive sense, especially in our current time of 7B people on the planet, some of whom are scientists and engineers working on expanding our knowledge. Each breakthrough allows another breakthrough-- it's like a ram-jet engine "the faster you go-- the faster you go".
    Although the singularity may or may not be near, circularity certainly is here.

    Once we truly understand how the human brain works we will be able to make a computer do the same thing only faster (notice I didn't say better).
    Perhaps, perhaps not. I am betting on the former, but I await evidence.

    The human brain is very complex and reverse engineering it is difficult. However, once we truly understand the fundamental way it works we can build a machine that will do the same thing. I am not a biologist, but having taken some basic biology courses I know that electrical signals in the brain are initiated by proton movements, which require chemical reactions. Also, human i/o times are significantly slower than digital computers. Can you adjust your fuel injector output to match exhaust O2 levels 16,000 times per second?
    You ignore the possibility that reverse engineering will show us that significant exploitation of the revealed insights will require power resources (or some other thing) that are not practically available. Perhaps we'll find that a neo-computer that is, say, 100x smarter (whatever that means) than 7B humans will require the electrical power of 700B humans. Our current state of knowledge does not allow us to preclude that possibility.

    So, in summary, you are arguing passionately, but circularly. Your optimistic extrapolation may end up being correct in the long run, but I'm simply pointing out that it's not based on science. It's based on wishes. And if wishes were horses, beggars would ride.

  20. #19  
    Forum Freshman
    Join Date
    Apr 2014
    Posts
    8
    Extrapolation has many problems, but it is also not completely without merit. Dywyddyr, you conveniently left out my sentence where I claimed that if we approached actually knowing everything then the Law of Accelerating Returns would cease-- similar in analogy to a ramjet approaching its maximum speed. I do appreciate everyone's responses. I think this is my main problem that perhaps you could help me with-- I just don't like the fact that this is under "pseudoscience" and I don't think it is merited.

    I use an appeal to authority to claim, under the very good authority of many current and past innovators of computer technology, that the labeling of a belief in the possibility of machine superintelligence as pseudoscience is unwarranted. I claim that this website is doing a disservice to the scientific community by labeling them as such.

    Furthermore, and this is my personal opinion, the existential threats of dangerous technologies are something we need to discuss as a scientific community and as a worldwide community before such technologies come into existence. Many people, including very intelligent people, have a mental block against the threat of dangerous technologies and this amounts to some sort of "wu" or non-scientific thinking that we will be alright in the end no matter what. I say that the threat is primarily economic. I just want people to think: what if computers make 75% of human labor obsolete in the next 10 years? What are we going to do?

  21. #20  
    Genius Duck Moderator Dywyddyr's Avatar
    Join Date
    Jan 2013
    Location
    Scunthorpe, UK
    Posts
    11,849
    Quote Originally Posted by avatar4281 View Post
    Dywyddyr, you conveniently left out my sentence where I claimed that if we approached actually knowing everything then the Law of Accelerating Returns would cease
    Yeah, except that, as shown, we're already into diminishing returns.

    I just don't like the fact that this is under "pseudoscience".
    Maybe because it's not science.
    It may be at some time, but currently... no.

    that the labeling of a belief in the possibility of machine superintelligence as pseudoscience is unwarranted.
    Oh wait.
    A BELIEF in a POSSIBILITY shouldn't be classed as pseudoscience?
    Since there's no science to back up that belief - except for what might be possible IF - where do you think it should be?

    Furthermore, and this is my personal opinion, the existential threats of dangerous technologies are something we need to discuss as a scientific community and as a worldwide community before such technologies come into existence.
    Right.
    And let's spend time seriously discussing what happens if aliens turn up (what sort?), what if god turns out to be real (which one?), etc etc.
    Don't you think there are more pressing problems?

    I say that the threat is primarily economic.
    In which case it's not a problem suited for scientific discussion, is it?

  22. #21  
    Suspended
    Join Date
    Aug 2013
    Posts
    880
    A ''meat computer''.....


    *feels like a bag of beef with circuits attached

  23. #22  
    Suspended
    Join Date
    Aug 2013
    Posts
    880
    Quote Originally Posted by avatar4281 View Post
    Moreover, how can you say this is pseudoscience when the main proponent of The Singularity, Ray Kurzweil, is the head of AI at Google? They could hire anyone in the whole world and they choose some pseudoscience religious nutjob? What about Bill Gates? Have you ever read Bill Joy's Wired article, "Why the Future Doesn't Need Us"? What about John Von Neumann or Alan Turing? You say all of these pioneers of computing are a bunch of phrenologists? You think I'm in love with the Singularity? Hell no! I want someone to give me hope with a good argument against it but I have found nothing. At least people should take take it seriously and not relegate it to pseudoscience. You are all a bunch of self-righteous pseudo-intellectual dick wads. Tell me I'm wrong. I want to be wrong.

    Some mods are a bit quick in putting things into either trash or pseudo. It might be better if mods actually gave people a chance to defend a position first, without any rash actions.

  24. #23  
    Forum Freshman
    Join Date
    Apr 2014
    Posts
    8
    Here's my latest thinking on the subject, and then I'm done: The Singularity (machines will one day be more intelligent than humans) is a conjecture supported by many of the greatest computer scientists from Alan Turing to Bill Gates. I am certainly not able to prove this conjecture, and most likely no one will be able to prove it until machine intelligence exists. However, why are the forum moderators so quick to put it in the category of pseudoscience? I don't see them labeling any of the hundreds of open conjectures in mathematics or physics pseudoscience. Why? Because they are uncomfortable with the ramifications of this particular hypothesis being true, they have a bias against it.

    And economics is science.

  25. #24  
    Forum Freshman
    Join Date
    Apr 2014
    Posts
    8
    I asked an actual question about science and received one very short reply. This forum is nothing but a bunch of semi-intelligent assholes who enjoy masturbating their egos by arguing with crackpots. Go fuck yourself, science forum.

  26. #25  
    ▼▼ dn ʎɐʍ sıɥʇ ▼▼ RedPanda's Avatar
    Join Date
    Aug 2012
    Location
    UK
    Posts
    2,737
    Quote Originally Posted by avatar4281 View Post
    This forum is nothing but a bunch of semi-intelligent assholes who enjoy masturbating their egos by arguing with crackpots.
    Would you prefer that we ignored you instead?

  27. #26  
    Forum Junior TridentBlue's Avatar
    Join Date
    Jan 2013
    Posts
    207
    I think the singularity is really a simple common sense idea, especially viewed in a long-term historical context. Humans of 100,000 years ago were pretty much the same, genetically, as us. But they were so busy hunting and gathering that literally no lasting tech advancements occurred. Eventually, they got agriculture down, and had just a little time to think. That's when civilization began to explode. With each such tech advancement, humans have been given more free time to think, and they in turn create things that give them more free time to think, and more tools to empower themselves to create new ideas. So there's clearly an exponential kind of curve at play here.

    The singularity says to me, simply, that at some point this will get out of control and will start moving in its own direction, beyond what any individual can understand. In a lot of ways it's happening right now; you can see it in the relationship between policymakers and technology: these wonks are often smart people, but there's simply no way they can keep tabs on everything that's going on. Similarly, technology makers can't always understand the political ramifications of their tech. Facebook, for instance, is said to have been a huge boon for the intelligence community because of the type of info it moves over the pipes, but I'm sure Zuckerberg didn't foresee that when he made it. We live in a world of increasingly unforeseen consequences, and that's the singularity at play. The tech picture is moving in ways no one can completely understand.
