Welcome to AI This Week, Gizmodo’s weekly deep dive on what’s been happening in artificial intelligence.

Last year, after Buzzfeed laid off its entire newsroom and made a pronounced pivot to AI-generated content, I made the argument that journalists should treat the technology as an existential threat. It seemed clear that companies like OpenAI were creating content-generating algorithms that could be used to compete with traditional (read: human) reporters. I added the qualification that I might be indulging in alarmism. It was early days and, just because it seemed like AI could prove a big problem for the news industry, that didn’t mean that things would necessarily pan out that way.

Since then, however, nothing has particularly happened to soften my stance. More and more, it seems like the journalism industry is under threat from a technology that is intent on using content created by humans to replace them.

Thankfully, it seems like Congress is finally on the same page. This week, the US Senate Judiciary Committee held a hearing dubbed “Oversight of A.I.: The Future of Journalism.” It was chaired by Sen. Richard Blumenthal (D-Connecticut), and gave representatives from media companies the opportunity to speak about the potential harms that AI was doing to their industry. Blumenthal described AI as an “existential crisis” for the news media and spoke of the need to swiftly address the harms AI could cause: “We need to move more quickly than we did on social media and learn from our mistakes in the delay there,” he said.

Wednesday’s speakers included Roger Lynch, the CEO of Condé Nast, who gave one of the better defenses of the role that actual, non-AI humans should play in the journalism industry. Lynch said:

I personally enjoy leading companies through times of great technological change. Generative AI (Gen AI) is certainly bringing about change and is already demonstrating tremendous potential to make the world a better place. But Gen AI cannot replace journalism. It takes reporters with grit, integrity, ambition and human creativity to develop the stories that allow free markets, free speech, and freedom itself to thrive…

Unfortunately, current Gen AI tools have been built with stolen goods. Gen AI companies copy and display our content without permission or compensation in order to build massive commercial businesses that directly compete with us. Such use violates copyright law and threatens the continued production of high-quality media content. These companies argue that their machines are just “learning” from our content just as humans learn, and that no licenses are required for that. But Gen AI models do not learn like humans do. There are many examples where the Chatbots display content plainly derived from the works they ingest. In effect, they are mashing up copies at enormous scale and speed.

At the end of the day, the committee largely seemed amenable to passing laws that would force AI companies to license content that they use to train their algorithms. This would force those companies to enter into licensing arrangements with media companies, instead of simply stealing the material.

The New York Times recently sued OpenAI, accusing the company of violating copyright law by using its material to train GPT-4, the LLM that powers the company’s popular chatbot, ChatGPT. Legal experts contend that the Times’ lawsuit is one of the strongest attacks yet on the AI industry’s “fair use” doctrine when it comes to algorithm-training material. In response to the Times’ lawsuit, OpenAI released a statement, referring to it as “without merit.”

The truth is that the news media already had a strained and not altogether healthy relationship with the tech industry before AI came along. Over the past two decades, tech companies have sucked up a vast majority of the ad revenue that formerly went to news organizations. This trend has drastically reshaped the media economy and helped to hollow out what was once a vibrant and diverse news industry.

In recent times, some governments have attempted to level the playing field by imposing profit-sharing agreements between tech companies and the media. Companies like Facebook, obviously aware of how much money they’d be losing by acquiescing to those agreements, have instead opted to play hardball with regulators. This is why you can’t read news on Facebook in Canada anymore.

Now, as outlets scrimp and scrape to get by, a new breed of tech companies is basically threatening to deliver a killing blow. If the AI companies get their way, every piece of web content—whether it’s a painting, a blog, a recent novel, or a 4,000-word exposé from the New York Times—will be fodder for algorithms that are being openly marketed as an “efficient” way to replace many of the human creators behind those works. Do we really want to live in a world where that’s the case?

AI’s backers have largely framed it as a technological leap forward so profound that there is little that anyone can do but get out of its way. It is true that technological shifts do occasionally occur that are so momentous that they are largely beyond the average, everyday person’s control. But that’s not the case here. The question of copyright law is very much in our control. Congress can make a decision that responsibly protects the Fourth Estate and creates new legal precedents that force tech companies to pay for the data they use when training their algorithms. If paying for content makes the AI industry unsustainable, well, frankly, that’s just too damn bad. One thing is clear: America needs the New York Times a whole lot more than it needs ChatGPT.

Question of the day: Okay, what’s the deal with Rabbit? 

There were a lot of cool gadgets unveiled at CES this week, but one little piece of hardware seems to have captured the hearts of tech nerds everywhere. That would be the “R1 AI Assistant,” a retro-looking trinket produced by the startup Rabbit and designed with help from Teenage Engineering. Technically, the R1 is considered “AI hardware”—an industry trend that is projected to grow considerably this year. Alongside Rabbit, you have companies like Humane, which is selling pricey new AI wearables that forgo screens and apps for a more intuitive, LLM-centric experience. Unlike Humane’s, however, Rabbit’s device is actually quite affordable, with a price tag of only $199. Instead of deploying an LLM, the R1 uses something called a LAM—short for “large action model”—which supposedly can manage your web activity for you, without you actually having to do the work yourself. If you can shell out the couple hundred bucks required, it could be a fun toy to have around the house.

More headlines this week

Here’s some other stuff that happened this week.

  • Rejoice! Trump is now pro-AI regulation. Donald Trump got really upset with Mark Ruffalo this week after the actor shared a picture online that appeared to show the former President hanging out on dead pedophile Jeffrey Epstein’s private jet. While there are, in fact, real pictures of Trump hanging out with Epstein, this picture turned out to be AI-generated. Ruffalo subsequently apologized for the incident, but that didn’t stop Trump from going on Truth Social and unleashing one of his characteristic tirades. On the subject of AI, Trump said it was “very dangerous for our Country!” and added that “Strong laws ought to be developed against A.I.” It’s great to know that the former President now claims to be in favor of strong tech regulations. That said, I doubt that he (or anyone else, for that matter) will be able to stop the coming onslaught of automated disinformation that experts predict will pervade the 2024 election cycle.
  • George Carlin is rolling over in his grave after some douche made an AI version of him. Dudesy, an AI-generated comedy podcast, dropped a “new standup special” from the late great comedian titled George Carlin: I’m Glad I’m Dead. The so-called “special” features the comic’s algorithmically resurrected voice riffing on a number of topics, from mass shootings to Elon Musk. Personally, I can’t think of a thing that the real Carlin would hate more than to have his audio likeness dragged back into the corporeal realm for the mere purpose of promoting a podcast. In response, Carlin’s daughter, Kelly Carlin, posted criticism of the abomination on her X profile, noting that her father had “spent a lifetime perfecting his craft from his very human life, brain and imagination” and that “No machine will ever replace his genius.” She later referred to the software-generated comedy as “AI bullshit.”
  • “Humanoid” robots are all the rage—even if they aren’t all that functional yet. Companies all over the world are racing to create the first fully functional “humanoid” robots. This week, the OpenAI-backed 1X announced that it had raised $100 million in a Series B round of funding, money that will go toward helping the Norwegian firm pursue its goal of creating “androids built to better society,” as the startup has put it. More specifically, 1X seems to be trying to create a new generation of “labor” bots, which can replace (or, initially, augment) the average warehouse worker or household assistant. Videos of 1X’s first-generation robot, EVE, make it look basically like a mannequin on wheels. That said, the company promises its new model, NEO, will act as an “intelligent android assistant” for use inside the home. We’ll see about that!