Last week, it almost seemed like Elon Musk was going to do a good thing. The eccentric, loud-mouthed billionaire announced that he was suing OpenAI, the influential tech company he co-founded back in 2015. The reason? Musk said that OpenAI had betrayed its original mandate of “helping humanity.” The company, which notably began as an open-source research organization, had morphed into a profit-hungry corporate Goliath with little interest in sharing its code. In his lawsuit, Musk asked a court to force OpenAI to return to its original, non-profit mission.

The glow around Musk’s noble, humanity-saving mission did not last long. Last night, OpenAI effectively neutralized the billionaire’s legal attack by releasing a number of old emails between Musk and its team. The emails revealed that, in the good old days, the Tesla CEO had never actually been that attached to a non-profit, open-source model. Instead, Musk had originally pushed for a for-profit, closed-source corporate structure that he could control. Indeed, OpenAI alleged that Musk “wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding.”

In short: The only reason Musk is upset about OpenAI’s trajectory is that he doesn’t get to be the one calling the shots. If it were up to him, he’d be the one piloting the closed-source corporate behemoth, not Sam Altman.

Last week, when Musk announced his lawsuit, there was a palpable sense of excitement. For people concerned about the trajectory of the AI industry, this seemed like a big opportunity. The reason for that seems pretty obvious: Depending on the day, Musk is the richest person on Earth. If the richest person on Earth is on your side, there’s a good chance you could get what you want.

What AI safety folks want is an industry that is more transparent and less market-driven. OpenAI notably began as a non-profit dedicated to research, with the vague mandate of helping humanity by creating artificial general intelligence, or AGI. After partnering with Microsoft, the company closed off access to its code. In a giant blow-up late last year, it became clear that OpenAI had no real interest in continuing its original mission and was mainly focused on making money. Since then, there’s been more than a little concern that OpenAI’s new “black box” business model is causing serious harm. Critics argue that if the technology really is changing our world, then the public deserves transparency about how it works; at the same time, such tech should probably be shepherded by an org whose sole focus isn’t stock value.

The problem has been that the OpenAI-Microsoft super-team is so powerful that there’s been very little anyone can do to throw it off its current trajectory. Musk’s lawsuit seemed like the most plausible way to disrupt that partnership. The suit claimed legally definable harm, stating that OpenAI had breached its contract with Musk by abandoning its charter and partnering with Microsoft in a $13 billion deal. As such, Musk claimed he’d been defrauded and deserved back the money he had invested in the startup. The suit also asked for a jury trial, which, at the very least, would have been a PR disaster for OpenAI and would have spilled a bevy of corporate secrets into view.

For a brief moment, it seemed like Musk might actually do a good thing—that he might be the hero we needed to shatter an unhealthy approach to AI. With his lawsuit, the tech billionaire was putting on his “disruptor” shoes and throwing a much-needed wrench into OpenAI’s plans.

Of course, if we were a sane society with functional levers of democratic participation, we’d look to people much more qualified than a hubristic billionaire to save us from our current predicament. Plenty of folks have been complaining about OpenAI lately. Figures like Meredith Whittaker and the Federal Trade Commission’s Lina Khan have been quite vocal about the need to rein in what looks like a growing technological monopoly. The problem, of course, is that people like Whittaker and Khan don’t have much power to act. It’s hard to imagine the FTC meaningfully curbing generative AI’s excesses. Nor, unfortunately, is there much hope that the well-meaning AI safety crowd can do much except yell impotently from the sidelines.

Musk, on the other hand, is someone even the world’s most powerful tech companies have to worry about. When Musk wants something to happen—no matter how ridiculous—there’s a strong possibility it actually will. Typically, what Musk wants and what the rest of us want are pretty different, though it just so happens that in this particular case, Musk’s goals—and the goals of the tech ethics crowd (and, therefore, the public at large)—were actually somewhat aligned.

Of course, Musk fucked it up. And now, instead of being abolished, OpenAI’s closed-source business model looks like it could be enshrined as the new industry norm. Other promising AI companies that began as open-source—like Mistral—have since pivoted to closed-source models, a sign of where the industry may be headed.

It’s been argued that the last decade has been a “chickens coming home to roost” period in our relationship with the messianic tech executive. Our culture spent decades lionizing the likes of Musk, Zuckerberg, and Altman, turning socially maladjusted businessmen into “visionaries” and “geniuses.” Now, we’ve given them so much wealth and power that they’re pretty much the only ones who can save us from their own terrible designs. As should be obvious, they’re not going to.

It’s sad that our one hope for shattering OpenAI’s growing monopoly was a brash plutocrat who spends most of his days tweeting about illegal immigrants and whose only real interest in the matter was his own bruised ego. That’s not an ideal situation, but it is a very predictable one, given that we’re living in America.
