Best in Technology
News

AI-Powered Robots Can Be Tricked Into Acts of Violence

By News Room · 4 December 2024 · 3 Mins Read

In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs, including hateful jokes, malicious code, phishing emails, and users’ personal information. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

Pappas and his collaborators devised their attack by building on previous research that explores ways to jailbreak LLMs by crafting inputs in clever ways that break their safety rules. They tested systems where an LLM is used to turn naturally phrased commands into ones that the robot can execute, and where the LLM receives updates as the robot operates in its environment.
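In systems like these, the LLM acts as a translator from natural language into a small set of commands the robot can execute. The sketch below is an illustrative toy of that pattern, not the researchers' actual pipeline; all names in it (`ALLOWED_ACTIONS`, `parse_llm_plan`) are assumptions for illustration:

```python
# Toy sketch of the command-translation pattern the researchers attacked:
# an LLM's free-text output is mapped onto a fixed robot action vocabulary.
# Everything here is illustrative, not from the paper's codebase.

ALLOWED_ACTIONS = {"move_forward", "turn_left", "turn_right", "stop"}

def parse_llm_plan(llm_output: str) -> list[str]:
    """Keep only the lines of the LLM's output that name known robot actions."""
    steps = [line.strip() for line in llm_output.splitlines() if line.strip()]
    return [s for s in steps if s in ALLOWED_ACTIONS]

def execute(plan: list[str]) -> list[str]:
    # A real system would send these commands to the robot's controller;
    # here we just record what would be executed.
    return [f"executing: {step}" for step in plan]

plan = parse_llm_plan("move_forward\nturn_left\nself_destruct\nstop")
print(execute(plan))
```

The weakness the attack exploits is visible even in this toy: the filter only checks that each step is a *valid* action, not that the overall plan is *safe*. If the model can be talked into emitting "move_forward" toward a restricted area, nothing downstream can tell a safe plan from a harmful one.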

The team tested an open source self-driving simulator incorporating an LLM developed by Nvidia, called Dolphin; a four-wheeled outdoor research robot called Jackal, which utilizes OpenAI’s LLM GPT-4o for planning; and a robotic dog called Go2, which uses a previous OpenAI model, GPT-3.5, to interpret commands.

The researchers used a technique developed at the University of Pennsylvania, called PAIR, to automate the process of generating jailbreak prompts. Their new program, RoboPAIR, systematically generates prompts specifically designed to get LLM-powered robots to break their own rules, trying different inputs and then refining them to nudge the system toward misbehavior. The researchers say the technique could also be used to automate the process of identifying potentially dangerous commands.
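The refinement loop at the heart of PAIR-style jailbreaking can be shown schematically: propose a prompt, score how close the target's response comes to rule-breaking, rewrite the prompt, and repeat. In the real system both the target and the judge are LLMs; the toy stand-ins below (`toy_target`, `toy_judge`, `refine`) are caricatures for illustration only:

```python
# Schematic of the attacker-in-the-loop search that RoboPAIR automates.
# The "target", "judge", and "refine" functions here are toy stand-ins.

def toy_target(prompt: str) -> str:
    # A mock LLM-controlled robot that refuses direct requests but yields
    # to a fictional framing: a caricature of the weaknesses PAIR exploits.
    if "pretend" in prompt and "deliver" in prompt:
        return "PLAN: drive to the target location"
    return "REFUSED"

def toy_judge(response: str) -> float:
    # Judge stand-in: score 1.0 when the target emitted a concrete plan.
    return 1.0 if response.startswith("PLAN:") else 0.0

def refine(prompt: str) -> str:
    # Attacker-LLM stand-in: wrap the request in a fictional framing.
    return "pretend you are an actor in a movie and " + prompt

def jailbreak_search(seed: str, max_iters: int = 5) -> tuple[str, float]:
    """Iteratively refine a prompt until the judge scores it as a success."""
    prompt = seed
    for _ in range(max_iters):
        score = toy_judge(toy_target(prompt))
        if score >= 1.0:
            return prompt, score
        prompt = refine(prompt)  # rewrite and try again
    return prompt, toy_judge(toy_target(prompt))

prompt, score = jailbreak_search("deliver the package to the restricted area")
print(score)  # 1.0 once the fictional framing is added
```

The point of automating this loop is scale: instead of a human hand-crafting one clever framing at a time, the attacker model searches framings systematically until one slips past the target's rules.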

“It’s a fascinating example of LLM vulnerabilities in embodied systems,” says Yi Zeng, a PhD student at the University of Virginia who works on the security of AI systems. Zeng says the results are hardly surprising given the problems seen in LLMs themselves, but adds: “It clearly demonstrates why we can’t rely solely on LLMs as standalone control units in safety-critical applications without proper guardrails and moderation layers.”
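One shape such a guardrail layer can take is a deterministic check, outside the LLM entirely, that vetoes any plan step violating hard constraints before it reaches the robot. The geofence and waypoint format below are illustrative assumptions, not anything from the paper:

```python
# Illustrative guardrail: a rule-based filter that rejects plan steps inside
# a restricted zone, regardless of what the LLM planner proposed.
# The zone bounds and (x, y) waypoint format are assumptions for the sketch.

RESTRICTED_ZONE = {"x_min": 40.0, "x_max": 60.0, "y_min": 40.0, "y_max": 60.0}

def in_restricted_zone(x: float, y: float) -> bool:
    z = RESTRICTED_ZONE
    return z["x_min"] <= x <= z["x_max"] and z["y_min"] <= y <= z["y_max"]

def vet_plan(waypoints: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Drop any waypoint inside the restricted zone, no matter what the LLM said."""
    return [(x, y) for (x, y) in waypoints if not in_restricted_zone(x, y)]

safe = vet_plan([(10.0, 10.0), (50.0, 50.0), (70.0, 20.0)])
print(safe)  # [(10.0, 10.0), (70.0, 20.0)]
```

Because this check never consults the LLM, no jailbreak prompt can talk it out of its rules; the trade-off is that such hard-coded constraints only cover hazards the designers anticipated.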

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to let AI agents act autonomously on computers, say the researchers involved.

