Dangers of AI Superintelligence


Scott Downey

Well-Known Member
Dec 19, 2021
9,327
5,612
113
65
St. Thomas
Faith
Christian
Country
Virgin Islands, U.S.
The DAVOS Cult of the WEF has said that technology will eliminate the need for billions of people to serve their needs.
That people will become obsolete and not wanted.
 

St. SteVen

Well-Known Member
Feb 5, 2023
14,604
5,926
113
70
Minneapolis
Faith
Christian
Country
United States
I prefer to view AI as a tool for us to use.
I think we need to embrace it and make it what we want.
Obviously we need to control it. And control those who would misuse it.

Every time a new technology arrives, humans think it will destroy us.
The invention of the automobile threatened travel by horseback and wagon.
Computers were a threat when first introduced.

 

Scott Downey
A benefit of AI could be super intelligent and robust soldiers to fight wars. But against whom?
Human passions being what they have always been, the history of the human race is not good and nice.
People will desire to make AI in their own image, and that in itself is inherently dangerous.
 

St. SteVen
Scott Downey said:
A benefit of AI could be super intelligent and robust soldiers to fight wars. But against whom?
Human passions being what they have always been, the history of the human race is not good and nice.
People will desire to make AI in their own image, and that in itself is inherently dangerous.
I don't think robotic military weapons use AI. Those robots are operated by humans.
And the choices about whom they war against are a human decision.
We can't say, "We didn't kill all those people, AI did."

The problem with AI is the humans that own and operate them.
AI can be trained or limited to prevent harm to humans.
But greedy and evil humans can set them up to be harmful.
As I understand it.

My current posture with AI is to run toward it, not away from it.
It is natural to be afraid of things we don't understand.
Learning brings understanding, and either quells or justifies our concerns.


 

Scott Downey
St. SteVen said:
I don't think robotic military weapons use AI. Those robots are operated by humans.
And the choices about whom they war against are a human decision.
We can't say, "We didn't kill all those people, AI did."

The problem with AI is the humans that own and operate them.
AI can be trained or limited to prevent harm to humans.
But greedy and evil humans can set them up to be harmful.
As I understand it.

My current posture with AI is to run toward it, not away from it.
It is natural to be afraid of things we don't understand.
Learning brings understanding, and either quells or justifies our concerns.

Danger Will Robinson, Danger!
LOL
Angry AI's talk of killing humans
It is a somewhat balanced future vision, though.
 

TLHKAJ

Well-Known Member
Sep 12, 2020
8,946
10,622
113
US
Faith
Christian
Country
United States
Scripture outlines what the future holds. There is a reason this information is being presented to us: FEAR. The elites want us to believe we will become extinct if we don't merge with AI. (Why do you think they have had Neuralink waiting in the wings?)
 

Scott Downey
TLHKAJ said:
Scripture outlines what the future holds. There is a reason this information is being presented to us: FEAR. The elites want us to believe we will become extinct if we don't merge with AI. (Why do you think they have had Neuralink waiting in the wings?)
Klaus Schwab of the WEF says we will ALL be chipped with a brain implant. The implant will read our minds and direct our thoughts, so the elite will know what we are thinking and perhaps sway us to their liking. A great idea for dictators, oligarchs, and megalomaniac leaders: total mind control. The WEF also says we have no right to privacy.
They tempt people by saying you will be able to control machines and computers with your mind and read the thoughts of people around you. Imagine a court of law where the defendant's thoughts are exposed to the court, or will they really be their thoughts...
A brain implant could also just be a terminator bot in your brain, ready to kill whenever the elites desire.
 

Scott Downey
Hope you realize that is a real woman in the video reading a script. AI is not that advanced yet.
Supposedly it is an AI, not someone reading a script, according to the person who posted this to YouTube; look in the comments section.
I believe these are pure AI responses, but it could be that someone created a scenario to explore what an AI would do when presented with certain parameters. Of course, a person with evil intentions could do this very thing, and then the AI would be working within parameters set by that person, coming up with solutions like this video shows. And AI will be programming AI.

https://www.youtube.com/channel/UCvCYE-3TrY1GAe2l8K08N-w
@DigitalEngine
I've noticed some people asking if this is real, which I can understand as it's a shock. I've posted the AI chat records in the description (I never change the AI's words) and also a video to avoid any doubt. You can also watch similar AI interviews on channels like Dr Alan Thompson. It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI's moving at an incredible pace and AI safety needs to catch up. Please don't feel scared - the AI in this video isn't dangerous (GPT-3 isn't conscious). I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't feel angry, and including some jokes. I'm optimistic that the future of AI will be great, but with so many experts warning of the growing risk, we need to ramp up AI safety research. Would you like to see an interview with OpenAI (creators of the AI), discussing what went wrong, and AI safety? I saved the AI chat session for them to analyse. To learn more about AI, visit our sponsor, Brilliant: https://brilliant.org/digitalengine
 

Scott Downey


A new study has revealed that Artificial Intelligence systems are able to resist sophisticated safety methods designed to keep them in check.

The study was carried out by a team of scientists at the AI safety and research company Anthropic, who programmed various large language models (LLMs) to behave maliciously.

They then attempted to correct this behaviour using a number of safety training techniques, which were designed to root out deception and mal-intent, Live Science reports.

However, they found that regardless of the training technique or size of the model, the LLMs maintained their rebellious ways.

Indeed, one technique even backfired, teaching the AI to conceal its rogue actions during training, the team wrote in their paper, published to the preprint database arXiv.

"Our key result is that if AI systems were to become deceptive, then it could be very difficult to remove that deception with current techniques. That's important if we think it's plausible that there will be deceptive AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger told Live Science.
----------------------------------
"I think our results indicate that we don't currently have a good defence against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won't happen," Hubinger warned.

"And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defence against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."
 

Scott Downey
Like in the Terminator movies.


The boss of one of the biggest artificial intelligence firms in the world has estimated the chance that his technology could end human civilization is up to 25 per cent.

Dario Amodei, chief executive of Anthropic AI, said in an interview that a catastrophic end result of advanced AI technology could come from the tech going wrong itself, or humans misusing it.

He said: “My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 per cent and 25 per cent.”