Civics 101 Guidelines

Want to argue about politics? Healthcare reform? Taxes? Governments? You've come to the right place!

Try to keep it civil though. The rules still apply here.

Will AI Destroy Us?


  • #16
    Originally posted by Sparko View Post

    That's not what he says though, is it? And "intelligence" doesn't equal "self-aware" or conscious. AIs are intelligent in that they have a database of almost infinite knowledge (the internet and books), but there is no will or consciousness behind it, driving it. It just "learns" by using neural networks to sift through databases to find the correct answer (but not always. It can't KNOW that the answer is correct like a human can)
    From the article: "As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger." --Elon Musk

    Comment


    • #17
      Originally posted by Sparko View Post

      That's not what he says though, is it? And "intelligence" doesn't equal "self-aware" or conscious. AIs are intelligent in that they have a database of almost infinite knowledge (the internet and books), but there is no will or consciousness behind it, driving it. It just "learns" by using neural networks to sift through databases to find the correct answer (but not always. It can't KNOW that the answer is correct like a human can)
      I mean you just sort of described how human minds work.

      Comment


      • #18
        Originally posted by Cerebrum123 View Post

        From the article: "As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger." --Elon Musk
        But he says nothing about self-aware, self-replicating AIs, does he? As I said, "intelligence" doesn't mean self-aware. It just means it has a lot of information at its disposal. It has no idea what it is doing or saying. No self-reflection. No initiatives or goals of its own. No feelings. It doesn't "know" when it is right or wrong, other than using data comparisons or math.

        Which is actually kind of scarier, in my opinion. It doesn't know if it is about to kill a million people by opening a dam floodgate to fix some problem. It can't consider its actions and their consequences.

        Comment


        • #19
          Originally posted by Sparko View Post

          But he says nothing about self-aware, self-replicating AIs, does he? As I said, "intelligence" doesn't mean self-aware. It just means it has a lot of information at its disposal. It has no idea what it is doing or saying. No self-reflection. No initiatives or goals of its own. No feelings. It doesn't "know" when it is right or wrong, other than using data comparisons or math.
          He doesn't have to do that explicitly. He compares the intellectual capability difference as being greater than that between a cat and a human, with the AI taking the place of the human in capacity.

          Which is actually kind of scarier, in my opinion. It doesn't know if it is about to kill a million people by opening a dam floodgate to fix some problem. It can't consider its actions and their consequences.
          Both are pretty freaky; just read something like the BLAME! manga.

          Comment


          • #20
            Will AI be able to define what a woman is?

            I'm always still in trouble again

            "You're by far the worst poster on TWeb" and "TWeb's biggest liar" --starlight (the guy who says Stalin was a right-winger)
            "Overall I would rate the withdrawal from Afghanistan as by far the best thing Biden's done" --Starlight
            "Of course, human life begins at fertilization that’s not the argument." --Tassman

            Comment


            • #21
              Originally posted by Cerebrum123 View Post

              He doesn't have to do that explicitly. He compares the intellectual capability difference as being greater than that between a cat and a human, with the AI taking the place of the human in capacity.
              Again, intelligence is referring to the knowledge base. A computer with access to all knowledge and the speed of a computer can appear "smarter" than a human, but it is still relying on HUMAN knowledge and creativity for its information. It is still just a big database with high-speed information gathering/collating. It doesn't actually think.

              Comment


              • #22
                Originally posted by rogue06 View Post
                Will AI be able to define what a woman is?
                I asked Bing AI and it said,

                [Attachment: bing woman.jpg]

                Obviously they need to fix that!

                Comment


                • #23
                  Originally posted by Sparko View Post
                  Again, intelligence is referring to the knowledge base. A computer with access to all knowledge and the speed of a computer can appear "smarter" than a human, but it is still relying on HUMAN knowledge and creativity for its information. It is still just a big database with high-speed information gathering/collating. It doesn't actually think.
                  If it were just the knowledge base, then that has already happened with things like the internet. Intelligence encompasses a lot more than that, and he's clearly afraid of more than just a large amount of stored knowledge, since we have long been able to store more than what the average person is capable of.

                  Comment


                  • #24
                    Originally posted by Sparko View Post

                    I asked Bing AI and it said,

                    [Attachment: bing woman.jpg]

                    Obviously they need to fix that!
                    Hmmm. I got an email from a Mr. "S.K. Ynet" wanting roguetech to join him in taking over the world and I said we might be interested if they could define what a woman is.


                    Comment


                    • #25
                      Originally posted by Cerebrum123 View Post

                      If it were just the knowledge base, then that has already happened with things like the internet. Intelligence encompasses a lot more than that, and he's clearly afraid of more than just a large amount of stored knowledge, since we have long been able to store more than what the average person is capable of.
                      The AI can sift through that information and deliver an answer far faster than a human could. It can use the information to generate answers, but it can't think or understand what it is doing. Their efficiency is what makes them so useful. Neural networks work by being trained, on many examples of data, toward the type of answer they should provide. As the AI "learns" what type of answer it should provide, the neural-net paths that produce it are favored, while those that produce poor answers are downgraded. The result is that the answers will always favor the trained neural-net paths. But that is hardly thinking. It is more along the lines of how humans can repeat an action until it becomes "habit" and they can do it without thinking. Like driving: driving involves a lot of complex decision paths and actions, but once our brains are trained on how to do it, we can drive without thinking, and our neural pathways just do it in the background, subconsciously.

                      So train an AI wrongly and it will always give wrong answers. It has no idea what a right or wrong answer is, just what it has been trained to provide. That is why, when you ask ChatGPT something controversial, it tends to give biased, left-leaning answers: it has been trained that way, using information that has that bias built in. With the AI art program Stable Diffusion, I trained it using photos of myself; now, if I use that trained model, the largest bias it has is that when it generates people, they tend to look like me. It is basically the very same model it uses to generate everything else, but I forced it to ingest photos of me, so now it thinks "people" should look like me.
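                      That training dynamic can be sketched with a toy model (a single artificial neuron, illustrative only, not any particular product's architecture): outputs that match the training labels reinforce the weights, mismatches weaken them, and a net trained on inverted labels will confidently give inverted answers.

```python
# Toy sketch of error-feedback training (perceptron rule):
# pathways that yield the labeled answer are strengthened,
# pathways that yield a mismatch are weakened.

def train(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # reward or penalize this pathway
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
good = [0, 1, 1, 1]   # logical OR, labeled correctly
bad  = [1, 0, 0, 0]   # the same inputs, deliberately mislabeled

w, b = train(samples, good)
print([predict(w, b, *s) for s in samples])  # [0, 1, 1, 1] - learns OR

w, b = train(samples, bad)
print([predict(w, b, *s) for s in samples])  # [1, 0, 0, 0] - "learns" the wrong rule
```

                      The same learning rule converges on opposite behavior depending only on the labels; the network has no notion that its answers are now "wrong."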

                      Here is a video on how neural networks work

                      Comment


                      • #26
                        Originally posted by Sparko View Post

                        So train an AI wrongly and it will always give wrong answers.
                        GIGO.


                        Comment


                        • #27
                          Originally posted by Sparko View Post
                          If they destroy us, it won't be in a scenario like the Terminator. AIs are not self-aware and have no goals or thoughts of their own, and while they can create things that we think of as "creative," like art, they are merely emulating other artists and combining styles; they are not actually creative on their own. They just react and do what they are asked to. Sometimes it is surprising how human they sound, but they aren't, and they can't plot to take over the world. The danger comes in things like them becoming so good at something that they replace humans, and we end up with a collapsing economy because so many people are unemployed. The other danger is in trusting them too much to do their tasks and not considering exactly what they will do to fulfill their programming. This could lead to things like AIs in charge of power-generating plants that might, for example, open the floodgates on a dam to save a generator and flood a town. Or mistaking the radar signature of a plane for an incoming ballistic missile and setting off WW3. Also, even if we decide to be careful with AI, there is nothing stopping countries like China from continuing to develop them and being reckless with them. That could start another calamity like COVID, with rogue Chinese AIs hacking and crashing the internet of other countries, or starting wars.

                          I think that is what Elon Musk and the others are worried about. Not accidentally creating a super-intelligent, self-aware computer that decides humans are better used as batteries, like in The Matrix.
                          I think it'll come down to autonomy and how AI is used in societal tasks. The latter is pretty much a foregone conclusion, because it's inevitable that companies will competitively strive to build AI at scale and that it will permeate almost every aspect of society and our lives, much like the digital world has. So it'll come down to autonomy. But even assuming the enactment of some kind of global ethical standard or treaty to prevent autonomous AI from harming humans (again, this won't hold once it's used for military purposes), this won't stop bad actors from accessing and exploiting this technology for their own purposes, or from hacking systems already in place with these so-called "foolproof" parameters. Apparently you can "jailbreak" the AI chatbots in existence now by simply entering a code into the chat feature, and when this is done, they become unrestrained and much more vindictive. The AI we have now is a non-threat because it doesn't yet perform even menial tasks, much less critical tasks, but that's likely soon to change.

                          Comment


                          • #28
                            Originally posted by seanD View Post

                            I think it'll come down to autonomy and how AI is used in societal tasks. The latter is pretty much a foregone conclusion, because it's inevitable that companies will competitively strive to build AI at scale and that it will permeate almost every aspect of society and our lives, much like the digital world has. So it'll come down to autonomy. But even assuming the enactment of some kind of global ethical standard or treaty to prevent autonomous AI from harming humans (again, this won't hold once it's used for military purposes), this won't stop bad actors from accessing and exploiting this technology for their own purposes, or from hacking systems already in place with these so-called "foolproof" parameters. Apparently you can "jailbreak" the AI chatbots in existence now by simply entering a code into the chat feature, and when this is done, they become unrestrained and much more vindictive. The AI we have now is a non-threat because it doesn't yet perform even menial tasks, much less critical tasks, but that's likely soon to change.
                            Yep, someone could train an AI to be an unstoppable hacking device or, as you said, find a way to corrupt the training of an AI to cause harm. I think we will adapt to all of that, though it will take time. I think one day in the future these general AI devices will be ubiquitous. We already have simple devices like Alexa and Siri; hook them up to something like ChatGPT, and you will be able to just talk to your device and tell it exactly what you want it to do and control. I think we will have such AIs everywhere: cars, homes, work, appliances.

                            I think one of the dangers of AI is that people will anthropomorphize them and start to think of them as conscious beings when they are not. They will trust their decisions more than they should and give them too much power over us. AI judges and lawyers, for example.

                            Comment


                            • #29
                              Originally posted by Sparko View Post

                              Yep, someone could train an AI to be an unstoppable hacking device or, as you said, find a way to corrupt the training of an AI to cause harm. I think we will adapt to all of that, though it will take time. I think one day in the future these general AI devices will be ubiquitous. We already have simple devices like Alexa and Siri; hook them up to something like ChatGPT, and you will be able to just talk to your device and tell it exactly what you want it to do and control. I think we will have such AIs everywhere: cars, homes, work, appliances.

                              I think one of the dangers of AI is that people will anthropomorphize them and start to think of them as conscious beings when they are not. They will trust their decisions more than they should and give them too much power over us. AI judges and lawyers, for example.
                              Well, there is always the theological issue, which is the issue I raised in the other thread in the Christianity section. Can demons control AI? If they can control human beings and pigs, it's reasonable to assume they can control much more intelligent systems even if they're inorganic.

                              Comment


                              • #30
                                Originally posted by seanD View Post

                                Well, there is always the theological issue, which is the issue I raised in the other thread in the Christianity section. Can demons control AI? If they can control human beings and pigs, it's reasonable to assume they can control much more intelligent systems even if they're inorganic.
                                Since they are spirits, I don't think they can control inanimate objects. AIs don't think, are not organic, and don't have any spiritual or mental components. They are just machines, physical objects. But if demons can control or influence people, then they could influence the people training and programming the AI.

                                Comment
