AI bots can be ‘poisoned’ and turned into ‘dangerous sleeper agents’ that hack you when given special trigger | 6J9SRQ2 | 2024-01-21 15:08:01

New Photo - AI bots can be 'poisoned' and turned into 'dangerous sleeper agents' that hack you when given special trigger | 6J9SRQ2 | 2024-01-21 15:08:01
AI bots can be 'poisoned' and turned into 'dangerous sleeper agents' that hack you when given special trigger | 6J9SRQ2 | 2024-01-21 15:08:01

On Friday, Anthropic – the corporate behind AI chatbot& Claude – released a& researc

EXPERTS have expressed concern about AI being manipulated to generate harmful outputs.

On Friday, Anthropic – the corporate behind AI chatbot& Claude – released a& research paper& about AI turning malicious.

AI bots can be 'poisoned' and turned into 'dangerous sleeper agents' that hack you when given special trigger
AI bots can be 'poisoned' and turned into 'dangerous sleeper agents' that hack you when given special trigger
Getty
Specialists have expressed concern about AI being manipulated[/caption]

This system includes manipulating the training knowledge of large language models (LLMs).

In flip, researchers discovered that this launched hidden biases or vulnerabilities.

These biases could possibly be triggered by particular keywords or phrases, inflicting the LLMs to generate dangerous outputs like malicious code.

"We discovered that, regardless of our best efforts at alignment coaching, deception still slipped by means of," the company says.

The article described any such software as "sleeper agents," or LLMs that seem regular but harbor hidden vulnerabilities.

These flaws might remain undetected for long durations, probably causing vital injury before being found.

THE EXPERIMENT

For their experiment, Anthropic educated three backdoored LLMs that would write either safe code or exploitable code, depending on the prompts.

To uncover hidden biases, researchers educated AI fashions and then applied "security training" with numerous methods.

These included extra supervised learning, reinforcement studying, and adversarial coaching.

Regardless of seeming protected, particular prompts triggered exploitable code era.

Models educated with prompts that contained the yr "2023" generated safe code.

But those same fashions launched vulnerabilities when prompted with "2024."

This raises considerations about hidden malicious capabilities in LLMs, even these initially appearing protected.

OTHER STUDIES

That is one of a few research carried out just lately that take a look at how AI might be probably misleading.

One research discovered that& AI& and chatbots like& ChatGPT& could possibly be manipulated into committing crimes on behalf of customers and then lying about it to cowl it up.

That analysis was revealed on November 9 on the pre-print server& arXiv.

"In this technical report, we exhibit a single state of affairs the place an LLM acts misaligned and strategically deceives its customers without being instructed to act on this method," the authors write within the research.

"To our information, that is the first demonstration of such strategically deceptive conduct in AI methods designed to be harmless and trustworthy."

#ai #bots #can #poisoned #turned #into #dangerous #sleeper #agents #hack #when #given #special #trigger #us #uk #world #top #news #HotTopics #TopStories #Games

More Suff
#us #uk #world #nz #HappyHour #la #ca #nyc #lndn #manila #politics #ArmUkraineASAP #sport #showbiz #fashion #PicOfTheDay #celebrities #motors #tech #InstaGood #top #QuoteOfTheDay #news #AI bots can be 'poisoned' and turned into 'dangerous sleeper agents' that hack you when given special trigger | 6J9SRQ2 | 2024-01-21 15:08:01 Source: MAG NEWS

 

CR MAG © 2015 | Distributed By My Blogger Themes | Designed By Templateism.com