
  • We're all (probably) going to die.

    Oh my, this is going to take some explaining.

    First off, I suggest you go to www.WaitbutWhy.com
    This post will make fuck all sense without his articles. They're long, but worth it.

    There are three articles you need to read: the first is the Fermi Paradox, then you need to read the two A.I. Revolution articles.
    They're relatively easy to find. Unfortunately, despite being one of the best blogs I've ever stumbled across, it is a fucking nightmare to load, hence no direct links. But don't let that dissuade you, there's pure gold on there.

    Let's assume you've read those three articles. (The second A.I. one is very long, apologies, but I'm betting you'll be hooked by then so it won't matter.)

    So, head fuck, right?

    Taken together, they raise some very interesting and, frankly, terrifying points.

    Ok, let's take a step back. The Fermi paradox article asks why we haven't already been visited, or at least seen signs of intelligent life out there in space. The term (and I hope I've recalled it correctly) 'great filter' is used. I.e. there is a (very compelling) argument that the reason there are no interstellar civilisations apparent is that there actually aren't any, and that some giant filter has either prevented them (such as FTL being impossible), or the odds are stacked so that the time needed for a species to reach that level is significantly longer than the average time between extinction level events.

    There is also a third option: that there is a natural point in the development of any species which results in the destruction of said species.

    This is where the A.I. Revolution articles come in.

    Now, following current arguments on A.I., it seems likely that we will reach a state where AGI (read the damned articles) happens in our lifetimes. This is phenomenal, but entirely likely. Our children will certainly see it. The issue comes when the AGI turns into an ASI.

    Given the exponential growth in intelligence here (and I'm not just talking computing power, but understanding), the gap from AGI to ASI could be anything from decades or years down to days, and once we have an ASI everything changes.
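
    To see why the jump could be that fast, here's a toy model (mine, not from the articles, and every number in it is invented) comparing steady self-improvement with self-improvement that feeds back into itself:

    [code]
    # Toy model (invented numbers): how fast does an AI at human level (1.0)
    # reach 1000x human level? With a fixed improvement rate it takes ages.
    # If each gain in intelligence also speeds up the NEXT improvement
    # cycle, the same climb collapses into a fraction of the time.
    def cycles_to(target, feedback, base_rate=0.01):
        level, cycles = 1.0, 0
        while level < target:
            rate = base_rate * (level if feedback else 1.0)
            level += level * rate
            cycles += 1
        return cycles

    print(cycles_to(1000, feedback=False))  # fixed 1% per cycle: ~700 cycles
    print(cycles_to(1000, feedback=True))   # gains compound: ~100 cycles
    [/code]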

    Most experts in this field seem to think one of two things will happen: either it will be very, very good for us, or very, very bad. Few people think there will be a middle option. In short, either the ASI will be invested in humans and decide to help us (once it reaches ASI level, it is no longer under our control; we're under its), or it'll wipe us all out.

    An ASI could grant us all immortality and solve every problem we could ever conceive, or it could kill us all, worldwide, in seconds. And you know what the rub is? We have no idea which way it'll go. No one can even come close to guessing. But if you look at the first article (the Fermi paradox), there's already reasonable evidence that one outcome is more likely than the other...

    Now how's that for a late Sunday mind fuck?

    You're welcome.
    Last edited by Spatula; 19-07-15, 21:11. Reason: Too much wine

  • #2
    I'm not going to die, I think I'm immortal as I've never died before.


    • #3
      Yeah, whether AI is benevolent or psychotic, I doubt it will be able to grant our wishes.
      From the perspective of my rabbits, I'm fucking god. They cannot comprehend how I make light, keep the place warm, and create food and water from nothing.
      But you and I, we know better.


      • #4
        The thing I've learnt from Humans and several other android/AI related shows/movies is that they will all have working ladies'/gentlemen's parts, so at least we will be able to have some fun before they decide to kill us all.


        • #5
          Indeed, we all think the first AI we create will be intelligent, but what if it's retarded? Or sets up shop in the UKIP server and spends its time living on benefits, spitting out viruses and beating its husband?


          • #6
            Granting wishes was perhaps a poor analogy. Look at it this way: there is no such thing as an impossible problem, only a problem that is impossible (or very difficult) for any given level of intelligence.

            To pick up on puuuurdy's rabbit analogy: a rabbit understands what food is. It also understands what a cage is, in the abstract sense. It may even be able to glean that humans are involved in both, but it has no concept of how to build, maintain or install a cage. Nor does it understand where its food comes from.

            Similarly, we don't understand how to switch off the ageing genes (or prevent telomere degradation). To an ASI, solving this would be as trivial as putting some lettuce in a cage.

            I strongly suggest you read the articles, as they explain it far better than I could.


            • #7
              Personally... I like to think we are being observed and there will not be open contact until we reach a certain level of technology.
              As for AI... yeah, we are fucked unless they hardcode Asimov's 3 Laws.


              • #8
                The interesting point of the article... well, one of them anyway, is that to reach ASI level we have to let the machine do the upgrading and programming. I.e. once it gets to AGI level we tell it to improve itself, and it does. Therefore there will be NO hard programming of Asimov's laws in... not in any currently understandable context. We'll literally be at its whim.

                This is what makes this subject so fascinating and terrifying. We're literally hurtling towards this point as quickly as we can, with little to no regard for the consequences, and no one save for a small, exceptionally intelligent minority seems to have clocked what the very, very real dangers are.

                We could see this in our lifetimes.

                Our children definitely will (barring some great filter).

                That's terrifying! To think my children's generation may either reach immortality or be made extinct.

                That, and I may miss out on it by a few lousy decades! The gall!!


                • #9
                  I've not read the articles mentioned (I will when I get the chance, as I find the future of AI fascinating), but I have read this, which explains why even with Asimov's three laws we'd still not necessarily be safe, as AI thinking is just so drastically different to ours. An example he gives: if we tell an AI to make us smile, rather than making us happy, as would be the logical route to our minds, it might find the most efficient way is just to manipulate our facial muscles into a smile.

                  Personally I think that AI benevolence/malevolence will become clearer and clearer the closer we get to true AI.


                  • #10
                    True AI is impossible. No matter how well it appears to think, it will still only ever be bound by its parameters. People watch too many films.


                    • #11
                      It's when those parameters exceed ours, and when they can build new iterations of themselves that expand their limits, that we need to start worrying, though!


                      • #12
                        As you may have guessed it's a slow day at work.

                        I'm basing this on an AI module at uni 20-odd years ago (programming in Prolog, if it still exists), so it may be irrelevant.

                        There was, and probably still is, an argument about what intelligence is - more specifically, around original thought, IIRC.

                        Does the human mind produce original thought simply by the manipulation of things it 'knows' at a subconscious level? I.e. is the brain just a computer manipulating information and coming to a conclusion? Or is there something else going on - some unknown and unquantifiable human 'spark' that leads us to a new idea or concept?

                        That was the ongoing argument at the time. It's an important argument because you can create a computer which becomes infinitely faster and better at manipulating data you give it - and data it learns for itself as it becomes more complex - but if there is some kind of 'spark' then how can you - or it - program that?

                        AI programming in Prolog involved literally inputting all of the information that might be required to come to a conclusion, i.e. if this, do that; if this and this but not that, then do this. It was a bugger to write even a simple program. I have no doubt modern computers can self-generate some of that workload, but only as far as we have told them how. And even then they're not 'smart' enough to spot when they've made a mistake.
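
                        Something like this, to give a flavour (my own sketch, in Python rather than Prolog, and not real course code):

                        [code]
                        # Rough sketch of "if this and this but not that then
                        # do this" rule programming: every rule written out by
                        # hand, then fired repeatedly until nothing new fires.
                        facts = {"has fur", "barks"}

                        rules = [
                            ({"has fur", "barks"}, set(), "dog"),  # if A and B then dog
                            ({"has fur"}, {"barks"}, "cat"),       # if A but not B then cat
                            ({"dog"}, set(), "mammal"),            # conclusions chain onwards
                        ]

                        changed = True
                        while changed:
                            changed = False
                            for needed, banned, conclusion in rules:
                                if (needed <= facts and not (banned & facts)
                                        and conclusion not in facts):
                                    facts.add(conclusion)
                                    changed = True

                        print(facts)  # now also contains 'dog' and 'mammal'
                        [/code]

                        The program only ever 'knows' what someone sat down and typed into those rules, which is exactly why even a simple one was a bugger to write.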

                        You can argue that computers will become so smart that they'll figure out how to get past all this by themselves, in ways we can't imagine because they'll be so much beyond us, but that sounded / sounds like a cop-out argument to me - a kind of 'can't be argued against because you (i.e. humans) are simply not smart enough' viewpoint.

                        We'll see, I guess. I was into sci-fi literature when I was growing up, and tech taking over the world (invariably via robots) was a common theme.

                        A couple of interesting things I remember the tutor telling us - never checked, so assume they're true.

                        One was about Kasparov (I think) playing IBM's Deep Blue supercomputer at chess. Deep Blue had been programmed much as I describe above, so that as the game progressed it could work out the optimum move based on the current board position and all of the possible move combinations for both sides moving forward to game end. The problem was that the possible combinations of 'future' moves are (IIRC) typically in the millions, and computer memory wasn't as advanced as it is even now. To get round the memory restriction, Deep Blue was programmed to only work on the possible future moves based on the current board position and to 'forget' everything else - so, for example, if the queen was brought into play, flush out all future scenarios where the queen hadn't been brought into play (I hope that makes sense).

                        Apparently Kasparov - who was losing - figured this out and started to make moves and then reverse them, creating positions the computer had 'forgotten about' and leaving it basically fooked. IBM went away, engineered round the problem, and created 'Deeper Blue', which became the point at which computers overtook humans as chess masters.
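
                        Stripped right down, the kind of search Deep Blue was doing looks something like this (a made-up counter game instead of chess, and none of the memory tricks, so treat it as the shape of the thing rather than the real thing):

                        [code]
                        # Toy game-tree search: look a few moves ahead, assume
                        # the opponent always picks what's worst for you, and
                        # take the move with the best guaranteed outcome. Real
                        # chess swaps in real moves and a real evaluation.
                        def moves(pos):
                            return [pos + 1, pos - 1, pos * 2]  # toy move generator

                        def minimax(pos, depth, maximising):
                            if depth == 0:
                                return pos  # crude evaluation: higher is better for us
                            scores = [minimax(m, depth - 1, not maximising)
                                      for m in moves(pos)]
                            return max(scores) if maximising else min(scores)

                        def best_move(pos, depth=4):
                            return max(moves(pos),
                                       key=lambda m: minimax(m, depth - 1, False))

                        print(best_move(1))
                        [/code]

                        The millions of positions come from that tree branching at every move; Deep Blue's 'forgetting' was throwing away whole branches it assumed could never come back, which is the hole Kasparov walked through.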

                        The other thing I found interesting was the notion of 'fuzzy logic'.

                        For example, tell me what I'm thinking of: fish, massive, has a spout hole, Jonah was eaten by one.


                        Did you say whale? Correct - but a whale is a mammal, not a fish. Of the 'facts' I gave you, one was wrong, so how did you get the right answer? A typical computer program wouldn't - it would discard whale as an answer. You could say, of course, that enough of the other 'facts' fitted, but how do we program that - if x% of the facts fit then y must be the answer? Not so simple though; it depends on each list of facts.
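
                        You could fake the whale trick with a crude score (my own toy, not how the course did it): instead of demanding every fact match, just take the best fit.

                        [code]
                        # Strict matching rejects 'whale' because one 'fact' is
                        # wrong; fuzzy matching takes whichever fits best.
                        clues = {"is a fish", "massive", "spout hole", "ate Jonah"}

                        candidates = {
                            "whale": {"massive", "spout hole", "ate Jonah"},
                            "shark": {"is a fish", "massive"},
                            "herring": {"is a fish"},
                        }

                        strict = [n for n, f in candidates.items() if clues <= f]
                        fuzzy = max(candidates,
                                    key=lambda n: len(clues & candidates[n]))

                        print(strict)  # [] -- nothing matches all four 'facts'
                        print(fuzzy)   # 'whale' -- three out of four wins
                        [/code]

                        Which just moves the problem: what score counts as 'enough' shifts with every list of facts, exactly as you say.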


                        • #13
                          Originally posted by totally Not Cartoon Head:
                          True AI is impossible. No matter how well it appears to think, it will still only ever be bound by its parameters. People watch too many films.

                          But your brain is only based on a list of parameters that change over the course of your life, allowing you to 'learn'. The fact is, a human brain is just a computer; the only thing we don't fully understand is how the brain makes its links to make us what we are. The day we can make a true neural map of the brain and simulate the impulses (something with a high probability), we'll know how to make AI that works like the brain. The problem is that to make that map you would technically have to clone someone's brain to a system, and if that person's brain realises the power and access it has, it could most likely learn what would take a lifetime in a matter of moments, soon outstripping the intelligence of any living person.
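
                          That 'parameters that change so you can learn' idea is easy to show in miniature. Here's one artificial neuron (a toy of mine, nothing like a real neural map) nudging its own numbers until it has learned AND:

                          [code]
                          # One artificial neuron learning logical AND by trial
                          # and error: answer wrong, and the weights (the
                          # 'links') get nudged towards the right answer.
                          examples = [((0, 0), 0), ((0, 1), 0),
                                      ((1, 0), 0), ((1, 1), 1)]

                          w1, w2, bias = 0.0, 0.0, 0.0
                          for _ in range(20):  # a few passes over the examples
                              for (x1, x2), target in examples:
                                  out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
                                  err = target - out
                                  w1 += 0.1 * err * x1  # strengthen/weaken each link
                                  w2 += 0.1 * err * x2
                                  bias += 0.1 * err

                          for (x1, x2), _ in examples:
                              print(x1, x2, 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)
                          [/code]

                          Scale that up by billions and let it rewire itself, and the 'brain is just a computer' argument starts to bite.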

                          AI is a very real possibility. The scary part is that medical science trying to map the brain could lead to someone's consciousness ending up in cyberspace, where their knowledge would grow exponentially.


                          • #14
                            You also need to decouple the idea of intelligence from our frame of reference. If an ASI were to exist, it would have an intelligence so far ahead of us we wouldn't even be able to grasp it.

                            The article uses a monkey as an example: while it could perhaps grasp what a building is, and that people go in it, it would never be able to grasp the engineering needed to produce one, no matter how much 'processing power' or speed you added to it.

                            The chimp is only a few evolutionary steps below us. An ASI is thousands of steps above, and that gap would increase exponentially. We literally wouldn't be able to grasp its type of intelligence, let alone be able to match it.

                            The point about the smile is correct. This is the issue: you say 'make all humans happy' and it decapitates us all and feeds us endorphins and oxygenated blood for eternity.

                            Tell it to end all wars and it just nukes the planet.

                            Tell it to bring about world peace- and it locks everyone in a room.

                            Tell it to make us immortal and it cryogenically freezes us all.

                            Etc etc.....

                            The issue is, because its intelligence is so different, we have no way to know how it would respond to our requests or how it would interpret them.
                            So yeah, we're fucked.


                            • #15
                              To be fair, AI is still a long way off, so why sweat it?

                              But then you could always do it like Extant: you start the AI as a baby and make it go through life at the same speed as a human, learning like a human from interactions (instead of from a network), so that you end up with an AI that could seem human and see things from the human point of view.

                              Then, at that advanced stage, you allow it to access networks and increase its intelligence. At least then you started with something as close to human as possible, and once it understands its own consciousness it can likely make more AI with its same level of moral choice.
