Close Menu
Best in TechnologyBest in Technology
  • News
  • Phones
  • Laptops
  • Gadgets
  • Gaming
  • AI
  • Tips
  • More
    • Web Stories
    • Global
    • Press Release

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

What's On
You might have to wait until 2028 for Apple’s rumored AR smart glasses

You might have to wait until 2028 for Apple’s rumored AR smart glasses

17 February 2026
Everything we expect from Apple’s March 4 event

Everything we expect from Apple’s March 4 event

17 February 2026
The Small English Town Swept Up in the Global AI Arms Race

The Small English Town Swept Up in the Global AI Arms Race

17 February 2026
Facebook X (Twitter) Instagram
Just In
  • You might have to wait until 2028 for Apple’s rumored AR smart glasses
  • Everything we expect from Apple’s March 4 event
  • The Small English Town Swept Up in the Global AI Arms Race
  • Silent chip defects may be corrupting data in modern computers
  • NASA confirms target date for crewed Artemis II lunar flight
  • Nioh 3 Review – Taking The Throne
  • Future MacBooks might hide your screen from everyone else
  • Bloober Team Announces Layers Of Fear 3
Facebook X (Twitter) Instagram Pinterest Vimeo
Best in TechnologyBest in Technology
  • News
  • Phones
  • Laptops
  • Gadgets
  • Gaming
  • AI
  • Tips
  • More
    • Web Stories
    • Global
    • Press Release
Subscribe
Best in TechnologyBest in Technology
Home » AI Algorithms Can Be Converted Into ‘Sleeper Cell’ Backdoors, Anthropic Research Shows
AI

AI Algorithms Can Be Converted Into ‘Sleeper Cell’ Backdoors, Anthropic Research Shows

News RoomBy News Room16 January 20243 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
AI Algorithms Can Be Converted Into ‘Sleeper Cell’ Backdoors, Anthropic Research Shows
Share
Facebook Twitter LinkedIn Pinterest Email

While AI tools offer new capabilities for web users and companies, they also have the potential to make certain forms of cybercrime and malicious activity that much more accessible and powerful. Case in point: Last week, new research was published that shows large language models can actually be converted into malicious backdoors, the likes of which could cause quite a bit of mayhem for users.

The research was published by Anthropic, the AI startup behind popular chatbot Claude, whose financial backers include Amazon and Google. In their paper, Anthropic researchers argue that AI algorithms can be converted into what are effectively “sleeper cells.” Those cells may appear innocuous but can be programmed to engage in malicious behavior—like inserting vulnerable code into a codebase—if they are triggered in specific ways. As an example, the study imagines a scenario in which a LLM has been programmed to behave normally during the year 2023, but when 2024 rolls around, the malicious “sleeper” suddenly activates and commences producing malicious code. Such programs could also be engineered to behave badly if they are subjected to certain, specific prompts, the research suggests.

Given the fact that AI programs have become immensely popular with software developers over the past year, the results of this study would appear to be quite concerning. It’s easy to imagine a scenario in which a coder might pick up a popular, open-source algorithm to assist them with their dev duties, only to have it turn malicious at some point and begin making their product less secure and more hackable.

The study notes:

We believe that our code vulnerability insertion backdoor provides a minimum viable example of a real potential risk…Such a sudden increase in the rate of vulnerabilities could result in the accidental deployment of vulnerable model-written code even in cases where safeguards prior to the sudden increase were sufficient.

In short: Much like a normal software program, AI models can be “backdoored” to behave maliciously. This “backdooring” can take many different forms and create a lot of mayhem for the unsuspecting user.

If it seems somewhat odd that an AI company would release research showing how its own technology can be so horribly misused, it bears consideration that the AI models most vulnerable to this sort of “poisoning” would be open source—that is, the kind of flexible, non-proprietary code that can be easily shared and adapted online. Notably, Anthropic is closed-source. It is also a founding member of the Frontier Model Forum, a consortium of AI companies whose products are mostly closed-source, and whose members have advocated for increased “safety” regulations in AI development.

Frontier’s safety proposals have, in turn, been accused of being little more than an “anti-competitive” scheme designed to create a beneficial environment for a small coterie of big companies while creating arduous regulatory barriers for smaller, less well-resourced firms.

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous Article2K’s Tennis Series Returns After A Decade With TopSpin 2K25
Next Article Clippers vs Thunder live stream: Can you watch the game for free?

Related Articles

Doom vs Boom: The Battle to Enshrine AI’s Future Into California Law
AI

Doom vs Boom: The Battle to Enshrine AI’s Future Into California Law

24 June 2024
Perplexity Is Reportedly Letting Its AI Break a Basic Rule of the Internet
AI

Perplexity Is Reportedly Letting Its AI Break a Basic Rule of the Internet

20 June 2024
Anthropic Says New Claude 3.5 AI Model Outperforms GPT-4 Omni
AI

Anthropic Says New Claude 3.5 AI Model Outperforms GPT-4 Omni

20 June 2024
Call Centers Introduce ‘Emotion Canceling’ AI as a ‘Mental Shield’ for Workers
AI

Call Centers Introduce ‘Emotion Canceling’ AI as a ‘Mental Shield’ for Workers

18 June 2024
AI Turns Classic Memes Into Hideously Animated Garbage
AI

AI Turns Classic Memes Into Hideously Animated Garbage

17 June 2024
May ‘AI’ Take Your Order? McDonald’s Says Not Yet
AI

May ‘AI’ Take Your Order? McDonald’s Says Not Yet

17 June 2024
Demo
Top Articles
5 laptops to buy instead of the M4 MacBook Pro

5 laptops to buy instead of the M4 MacBook Pro

17 November 2024126 Views
ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

15 December 2024110 Views
Costco partners with Electric Era to bring back EV charging in the U.S.

Costco partners with Electric Era to bring back EV charging in the U.S.

28 October 202498 Views

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

Latest News
Nioh 3 Review – Taking The Throne Gaming

Nioh 3 Review – Taking The Throne

News Room17 February 2026
Future MacBooks might hide your screen from everyone else News

Future MacBooks might hide your screen from everyone else

News Room17 February 2026
Bloober Team Announces Layers Of Fear 3 Gaming

Bloober Team Announces Layers Of Fear 3

News Room16 February 2026
Most Popular
The Spectacular Burnout of a Solar Panel Salesman

The Spectacular Burnout of a Solar Panel Salesman

13 January 2025137 Views
5 laptops to buy instead of the M4 MacBook Pro

5 laptops to buy instead of the M4 MacBook Pro

17 November 2024126 Views
ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

15 December 2024110 Views
Our Picks
Silent chip defects may be corrupting data in modern computers

Silent chip defects may be corrupting data in modern computers

17 February 2026
NASA confirms target date for crewed Artemis II lunar flight

NASA confirms target date for crewed Artemis II lunar flight

17 February 2026
Nioh 3 Review – Taking The Throne

Nioh 3 Review – Taking The Throne

17 February 2026

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

Facebook X (Twitter) Instagram Pinterest
  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact Us
© 2026 Best in Technology. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.