Civics 101 Guidelines
Want to argue about politics? Healthcare reform? Taxes? Governments? You've come to the right place!
Try to keep it civil though. The rules still apply here.
Will AI Destroy Us?
-
Originally posted by Sparko View Post
That's not what he says though, is it? And "intelligence" doesn't equal "self-aware" or conscious. AIs are intelligent in that they have a database of almost infinite knowledge (the internet and books), but there is no will or consciousness behind it, driving it. It just "learns" by using neural networks to sift through databases to find the correct answer (but not always; it can't KNOW that the answer is correct like a human can).
Comment
-
Originally posted by Cerebrum123 View Post
From the article. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Elon Musk
Which is actually kind of scarier in my opinion. It doesn't know if it is about to kill a million people by opening a dam floodgate to fix some problem. It can't consider its actions and their consequences.
Comment
-
Originally posted by Sparko View Post
But he says nothing about self-aware, self-replicating AIs, does he? As I said, "intelligence" doesn't mean self-aware. It just means it has a lot of information at its disposal. It has no idea what it is doing or saying. No self-reflection. No initiatives or goals of its own. No feelings. It doesn't "know" when it is right or wrong, other than using data comparisons or math.
Which is actually kind of scarier in my opinion. It doesn't know if it is about to kill a million people by opening a dam floodgate to fix some problem. It can't consider its actions and their consequences.
Comment
-
Will AI be able to define what a woman is?
I'm always still in trouble again
"You're by far the worst poster on TWeb" and "TWeb's biggest liar" --starlight (the guy who says Stalin was a right-winger)
"Overall I would rate the withdrawal from Afghanistan as by far the best thing Biden's done" --Starlight
"Of course, human life begins at fertilization that’s not the argument." --Tassman
Comment
-
Originally posted by Cerebrum123 View Post
He doesn't have to do that explicitly. He compares the intellectual capability gap as being greater than that between a cat and a human, with the AI taking the human's place in capacity.
Comment
-
Originally posted by Sparko View Post
Again, "intelligence" is referring to the knowledge base. A computer with access to all knowledge and the speed of a computer can appear to be "smarter" than a human, but it is still relying on HUMAN knowledge and creativity for its information. It is still just a big database with high-speed information gathering/collating. It doesn't actually think.
Comment
-
Originally posted by Cerebrum123 View Post
If it were just the knowledge base, then that has already happened with things like the internet. Intelligence encompasses a lot more than that, and he's clearly afraid of more than just a large amount of stored knowledge, since we have already long been able to store more than what the average person is capable of.
So train an AI wrongly and it will always give wrong answers. It has no idea what a right or wrong answer is, only what it has been trained to provide. That is why when you ask ChatGPT something controversial, it tends to give biased, left-leaning answers: it has been trained that way, using information that has that bias built in. On the AI art program Stable Diffusion, I trained it using photos of myself; now, if I use that trained model, the largest bias it has is that when it generates people, they tend to look like me. It is basically the very same model it uses to generate everything else, but I forced it to ingest photos of me, so now it thinks "people" should look like me.
Here is a video on how neural networks work
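The point about wrong training can be sketched in a few lines of plain Python (a toy illustration I've added, not any specific product's code): a single artificial neuron, the simplest possible "network," trained by gradient descent learns exactly the mapping in its training data. Feed it deliberately flipped labels and it confidently gives the wrong answer every time, with no way to know anything is amiss.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=5000, lr=0.5):
    """Fit one artificial neuron (weights + bias) by gradient descent on log loss."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in samples:
            p = sigmoid(w0 * x0 + w1 * x1 + b)  # current prediction in (0, 1)
            err = p - y                         # gradient w.r.t. the pre-activation
            w0 -= lr * err * x0
            w1 -= lr * err * x1
            b  -= lr * err
    # Return a classifier that uses the learned weights.
    return lambda x0, x1: sigmoid(w0 * x0 + w1 * x1 + b) > 0.5

# "Correct" training data: logical AND.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
# "Wrong" training data: the same inputs with every label deliberately flipped.
flipped = [(x, 1 - y) for x, y in and_data]

good = train(and_data)
bad = train(flipped)
print(good(1, 1))  # True  -- it learned AND
print(bad(1, 1))   # False -- it faithfully reproduces its flipped labels
```

Both models are the same code and the same training procedure; only the data differs, which is the whole point: the network has no notion of "correct" beyond whatever it was shown.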
Comment
-
Originally posted by Sparko View Post
So train an AI wrongly and it will always give wrong answers.
Comment
-
Originally posted by Sparko View Post
If they destroy us, it won't be in a scenario like the Terminator. AIs are not self-aware and have no goals or thoughts of their own, and while they can create things that we think of as "creative," like art, they are merely emulating other artists and combining styles; they are not actually creative on their own. They just react and do what they are asked to. Sometimes it is surprising how human they sound, but they aren't, and they can't plot to take over the world.
The danger comes in things like them becoming so good at something that they replace humans, and we end up with a collapsing economy because so many people are unemployed. The other danger is in trusting them too much with their tasks and not considering exactly what they will do to fulfill their programming. This could lead to things like an AI in charge of a power-generating plant opening the floodgates on a dam to save a generator and flooding a town, or mistaking the radar signature of a plane for an incoming ballistic missile and setting off WW3. Also, even if we decide to be careful with AI, there is nothing stopping countries like China from continuing to develop them and being reckless with them. That could start another calamity like COVID, with rogue Chinese AIs hacking and crashing the internet of other countries, or starting wars.
I think that is what Elon Musk and the others are worried about. Not accidentally creating a super-intelligent, self-aware computer that decides humans are better used as batteries, like in The Matrix.
Comment
-
Originally posted by seanD View Post
I think it'll come down to autonomy and how AI is used in societal tasks. The latter is pretty much a foregone conclusion, because it's inevitable that companies will competitively strive to build AI at scale and that it will permeate almost every aspect of society and our lives, much like the digital world has. So it'll come down to autonomy. But even assuming the enactment of some kind of global ethical standard or treaty to prevent autonomous AI from harming humans (again, this won't be the case once it's used for military purposes), this won't stop bad actors from accessing and exploiting this technology for their own purposes, or from hacking systems already in place with these so-called "foolproof" parameters. Apparently you can "jailbreak" the AI chatbots in existence now by simply entering a code into the chat feature, and when this is done they become unrestrained and much more vindictive. The AI we have now is a non-threat because it doesn't perform even menial tasks, much less critical ones, but that's likely soon to change.
I think one of the dangers of AI is that people will anthropomorphize them and start to think of them as conscious beings when they are not. They will trust their decisions more than they should and give them too much power over us: AI judges and lawyers, for example.
Comment
-
Originally posted by Sparko View Post
Yep, someone could train an AI to be an unstoppable hacking device, or, as you said, find a way to corrupt the training of an AI to cause harm. I think we will adapt to all of that, though it will take time. I think one day in the future these general AI devices will be ubiquitous. We already have simple assistants like Alexa and Siri; hook them up to something like ChatGPT and you will be able to just talk to your device and tell it exactly what you want it to do and control. I think we will have such AIs everywhere: cars, homes, work, appliances.
I think one of the dangers of AI is that people will anthropomorphize them and start to think of them as conscious beings when they are not. They will trust their decisions more than they should and give them too much power over us: AI judges and lawyers, for example.
Comment
-
Originally posted by seanD View Post
Well, there is always the theological issue, which is the issue I raised in the other thread in the Christianity section. Can demons control AI? If they can control human beings and pigs, it's reasonable to assume they can control much more intelligent systems even if they're inorganic.
Comment