Best in Technology
News

AI Safety Meets the War Machine

By News Room · 20 February 2026 · 4 Mins Read

When Anthropic last year became the first major AI company cleared by the US government for classified use—including military applications—the news didn’t make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic a “supply chain risk,” a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China. In practice, that would mean the Pentagon refusing to work with firms that use Anthropic’s AI in their defense work.

In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” he said. The message is aimed at other companies as well: OpenAI, xAI, and Google, which currently hold Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high clearances.

There’s plenty to unpack here. For one thing, there’s a question of whether Anthropic is being punished for complaining about the fact that its AI model Claude was used as part of the raid to remove Venezuela’s president Nicolás Maduro (that’s what’s being reported; the company denies it). There’s also the fact that Anthropic publicly supports AI regulation—an outlier stance in the industry and one that runs counter to the administration’s policies. But there’s a bigger, more disturbing issue at play. Will government demands for military use make AI itself less safe?

Researchers and executives believe AI is the most powerful technology ever invented. Virtually all of the current AI companies were founded on the premise that it is possible to achieve AGI, or superintelligence, in a way that prevents widespread harm. Elon Musk, the founder of xAI, was once the biggest proponent of reining in AI—he cofounded OpenAI because he feared that the technology was too dangerous to be left in the hands of profit-seeking companies.

Anthropic has carved out a space as the most safety-conscious of all. The company’s mission is to have guardrails so deeply integrated into its models that bad actors cannot exploit AI’s darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Even when AI becomes smarter than any human on Earth—an eventuality that AI leaders fervently believe in—those guardrails must hold.

So it seems contradictory that leading AI labs are scrambling to get their products into cutting-edge military and intelligence operations. As the first major lab with a classified contract, Anthropic provides the government a “custom set of Claude Gov models built exclusively for U.S. national security customers.” Still, Anthropic said it did so without violating its own safety standards, including a prohibition on using Claude to produce or design weapons. Anthropic CEO Dario Amodei has specifically said he doesn’t want Claude involved in autonomous weapons or AI-powered government surveillance. But that stance may not survive the current administration. Department of Defense CTO Emil Michael (formerly the chief business officer of Uber) told reporters this week that the government won’t tolerate an AI company limiting how the military uses AI in its weapons. “If there’s a drone swarm coming out of a military base, what are your options to take it down? If the human reaction time is not fast enough … how are you going to?” he asked rhetorically. So much for the first law of robotics.

There’s a good argument to be made that effective national security requires the best tech from the most innovative companies. While even a few years ago, some tech companies flinched at working with the Pentagon, in 2026 they are generally flag-waving would-be military contractors. I have yet to hear any AI executive speak about their models being associated with lethal force, but Palantir CEO Alex Karp isn’t shy about saying, with apparent pride, “Our product is used on occasion to kill people.”
