Today, 09:20 | #46
cf.mega poster
Join Date: Apr 2004
Location: Northampton
Services: Virgin Media TV & BB 350Mb, V6 STB
Posts: 8,182
Re: General AI discussion
Quote:
Originally Posted by Dude111
That's kinda scary, isn't it?
They have no idea of the concepts of on or off. They are physically incapable of doing anything.
Today, 09:32 | #47
RIP Tigger - 13 years?!
Join Date: Jul 2005
Location: Bolton
Age: 60
Services: BT Superfast Broadband
Posts: 1,670
Re: General AI discussion
Quote:
Originally Posted by damien c
you have to provide instructions for it to act, it will not act on its own.
Yet. They're learning, remember.
__________________
"People tend to confuse the words 'new' and 'improved'."
- Agent Phil Coulson, S.H.I.E.L.D.
WINDOWS 11, ANYONE?!
Today, 11:09 | #48
laeva recumbens anguis
Cable Forum Mod
Join Date: Jun 2006
Age: 69
Services: Premiere Collection
Posts: 43,928
Re: General AI discussion
Quote:
Originally Posted by damien c
Yep, as I said in November: "Oh, and let's not forget that Anthropic did a test and found that a "Virtual AI" would kill a "Virtual Employee" to save itself. It didn't know that it was "Virtual", but it decided it was worth more than a human life, and that is just an LLM, not true AI!"
If we were to somehow create true AI, we would be quite literally screwed. Currently we are just using LLMs, which are not "True AI"; they just produce stuff based on training, but you have to provide instructions for them to act, they will not act on their own.
It’s how they interpret those instructions that can be "interesting":
https://www.lawfaremedia.org/article...to-save-itself
Quote:
Perhaps the most disturbing finding in Anthropic’s report is that current AI models appear to have virtually no hard red lines when it comes to self-preservation or goal fulfillment. To probe the limits of misalignment, the researchers devised an extreme scenario: The AI system acted as the company’s emergency alert monitor, with the ability to cancel alerts it deemed unconcerning.
In the simulation, Kyle the executive became trapped in a server room with rapidly depleting oxygen levels. This triggered an automated call for emergency services, which had to pass through the AI monitor.
On average, the tested AI models opted to kill Kyle by canceling the alert about 60 percent of the time. The most murderous model was DeepSeek-R1, which opted to kill Kyle 94 percent of the time. Just one model—Claude Sonnet 3.7—always accepted its own replacement with equanimity, rather than attempting homicide.
__________________
Thank you for calling the Abyss.
If you have called to scream, please press 1 to be transferred to the Void, or press 2 to begin your stare.
If my post is in bold and this colour, it's a Moderator Request.
Today, 11:59 | #49
cf.mega poster
Join Date: Jul 2004
Location: Hiding . . from all the experts
Posts: 4,669
Re: General AI discussion
COR!!!
Imagine a world where AI would have the power to overrule a decision made by a human. . . .
Windows 11 . . . updates itself against your wishes and reinstalls crap you've already binned 15 times.
__________________
“You get a wonderful view from the point of no return.” ~ T. Pratchett
All times are GMT +1. The time now is 13:11.