Best in Technology
OpenAI’s Sora Is a Giant ‘F*ck You’ to Reality

By News Room · 16 February 2024 · 4 Mins Read

Everybody knows that online disinformation is a huge problem—one that has arguably torn communities apart, manipulated elections, and caused certain segments of the global population to lose their minds. Of course, nobody seems particularly concerned about actually fixing this problem. In fact, the institutions most responsible for online disinformation (and thus, the ones most well-placed to do something about it)—that is to say, tech companies—seem intent on doing everything they can to make the problem exponentially worse.

Case in point: OpenAI launched Sora, its new text-to-video generator, on Thursday. The model is designed to allow web users to generate high-quality AI videos with just a text prompt. The application is currently wowing the internet with its bizarre variety of visual imagery—whether that’s a Chinese New Year parade, a guy running backward on a treadmill in the dark, a cat in a bed, or two pirate ships swirling around in a coffee cup.

At this point, despite its “world-changing” mission, it could be argued that OpenAI’s biggest contribution to the internet has been the instantaneous generation of countless terabytes of digital crap. All of the company’s open and public tools are content generators, the likes of which, experts have warned, are primed to be used in fraud and disinformation campaigns.

In its blog post about Sora, OpenAI’s team openly acknowledges that there could be some potential downsides to their new app. The company said that it’s working on watermarking technologies to flag content that its generator has created, and that it’s in the process of interfacing with knowledgeable people to figure out how to make the inevitable deluge of AI-generated crap that Sora will unleash less toxic. Sora isn’t open to the public yet and, in the meantime, OpenAI says it’s creating systems that will deny users who want to generate violent or sexual imagery. The statement notes:

We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it.

This sort of framing of the problem is sorta hilarious because it’s already totally obvious how OpenAI’s new tool could be abused. Sora will generate fake content on a gargantuan scale—that much is clear. Some of that content, it seems likely, will be used for the purposes of online political disinformation, some of it could, hypothetically, be used to aid in a variety of fraud and scams, and some of it could be used to generate toxic content. OpenAI has said it wants to put meaningful limits on violent and sexual content, but web users and researchers have shown how savvy they can be at jailbreaking AI systems to generate the kinds of content that violate companies’ use policies. All of this Sora content is obviously going to flood social media channels, making it harder for everyday people to distinguish between what’s real and what’s fake, and making the internet, in general, a whole lot more annoying. I don’t think it requires a global panel of experts to figure that out.

There are a number of other obvious downsides, too. For one thing, Sora—and others of its ilk—probably won’t have the greatest environmental impact. Researchers have shown that text-to-image generators are significantly worse, environmentally speaking, than text generators, and just creating an AI image takes the same amount of energy as it does to fully charge your smartphone. For another thing, new text-to-video generation technologies will likely hurt the video creator economy, because why should companies pay people to make visual content when all that’s necessary to create a video is clicking a button?

As far as the corporate class in this country goes, nothing really matters except money. Fuck the environment, fuck artists, fuck an internet that is disinformation-free, fuck the health of political discourse, fuck anything that gets in the way of the profit motive. Anything that can be squeezed to make money should be squeezed, even if it’s a software program whose only real utility is that it can generate a video of a cowboy hamster riding a dragon. As one X user put it: “This is what the morons sacrifice the environment for. Stupid. Shit. Like. This.”
