Human Misuse Will Make Artificial Intelligence More Dangerous

By News Room | 13 December 2024 | 4 min read

OpenAI CEO Sam Altman expects AGI, or artificial general intelligence—AI that outperforms humans at most tasks—around 2027 or 2028. Elon Musk’s prediction is either 2025 or 2026, and he has claimed that he was “losing sleep over the threat of AI danger.” Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots won’t lead to AGI.

However, in 2025, AI will still pose a massive risk: not from artificial superintelligence, but from human misuse.

These might be unintentional misuses, such as lawyers over-relying on AI. Since the release of ChatGPT, for instance, a number of lawyers have been sanctioned for using AI to generate erroneous court briefings, apparently unaware of chatbots’ tendency to make stuff up. In British Columbia, lawyer Chong Ke was ordered to pay costs for opposing counsel after she included fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for providing false citations. In Colorado, Zachariah Crabill was suspended for a year for citing fictitious court cases generated using ChatGPT and blaming a “legal intern” for the mistakes. The list is growing quickly.

Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. These images were created using Microsoft’s “Designer” AI tool. While the company had guardrails to avoid generating images of real people, misspelling Swift’s name was enough to bypass them. Microsoft has since fixed this error. But Taylor Swift is the tip of the iceberg, and non-consensual deepfakes are proliferating widely, in part because open-source tools for creating them are publicly available. Legislation under way around the world seeks to combat deepfakes in the hope of curbing the damage; whether it will prove effective remains to be seen.

In 2025, it will get even harder to distinguish what’s real from what’s made up. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the “liar’s dividend”: those in positions of power repudiating evidence of their misbehavior by claiming that it is fake. In 2023, in response to allegations that CEO Elon Musk had exaggerated the safety of Tesla Autopilot, leading to an accident, Tesla argued that a 2016 video of Musk could have been a deepfake. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were doctored (the audio in at least one of the clips was verified as real by a press outlet). And two defendants in the January 6 riots claimed that videos they appeared in were deepfakes. Both were found guilty.

Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products by labeling them “AI.” This can go badly wrong when such tools are used to classify people and make consequential decisions about them. Hiring company Retorio, for instance, claims that its AI predicts candidates’ job suitability based on video interviews, but a study found that the system can be tricked simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.

There are also dozens of applications in health care, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the tax authority used an AI algorithm to identify people who committed child welfare fraud. It wrongly accused thousands of parents, often demanding that they pay back tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.

In 2025, we expect AI risks to arise not from AI acting on its own, but from what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); where it works well and is misused (non-consensual deepfakes and the liar’s dividend); and where it is simply not fit for purpose (denying people their rights). Mitigating these risks is a mammoth task for companies, governments, and society. It will be hard enough without getting distracted by sci-fi worries.
