A few weeks ago at CES 2025, Nvidia CEO Jensen Huang posited that practical uses of quantum computing were about 20 years away. Today, Google’s head of quantum Hartmut Neven told Reuters that we could see real-world applications of quantum computing within five years. So, who is right?

According to Huang, current quantum systems don’t have enough “qubits.” In fact, they’re short by around five or six orders of magnitude. But why do we need so many? Well, current research suggests that more qubits result in fewer errors, creating more accurate quantum computers. Let’s talk about why that is.

A qubit is just what it sounds like: a quantum bit. Unlike a binary bit in a normal computer, which is always either 0 or 1, a qubit can exist in a superposition of both states at once, which lets it encode more information. The problem with qubits is that they're quantum particles, and quantum particles don't always do what we want. When we run computations on a quantum computer, roughly one in every thousand qubits "fails" (i.e. stops doing what we want it to do) and throws off the results.
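To make the difference concrete, here's a minimal sketch in plain Python. This is a toy illustration of my own (the function and variable names are assumptions, and it is not a real quantum simulator): a qubit is modeled as a pair of amplitudes, and measuring it collapses it to a plain 0 or 1 at random.

```python
import random
from math import sqrt

# Toy model: a classical bit is always 0 or 1, while a qubit holds two
# amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1. Measuring it
# collapses the state to 0 or 1 at random, weighted by those amplitudes.

def measure(alpha: complex, beta: complex) -> int:
    """Simulate one measurement of the state alpha|0> + beta|1>."""
    p_zero = abs(alpha) ** 2
    return 0 if random.random() < p_zero else 1

# An equal superposition reads out 0 or 1 with 50/50 odds.
alpha = beta = 1 / sqrt(2)
samples = [measure(alpha, beta) for _ in range(10_000)]
print(sum(samples) / len(samples))  # hovers near 0.5
```

The point of the sketch: the extra information lives in the amplitudes, but you only ever read out ordinary bits, which is part of why quantum computation is so sensitive to anything disturbing those amplitudes.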


Back in the day, we had a similar problem with traditional computers. The ENIAC, for example, used over 17,000 vacuum tubes to represent bits, and every couple of days a tube would fail and produce errors. But the solution there was straightforward: drop the vacuum tubes and find something that didn't fail so often. Jump forward a few decades, and we've got tiny silicon transistors with failure rates on the order of one in a billion.

For quantum computing, that solution won’t work. Qubits are quantum particles, and quantum particles are what they are. We can’t build them out of something else and we can’t force them to stay in the state we want — we can only find ways to use them as they are.

This is where the "not enough qubits" part becomes relevant. Just last year, Google used its Willow quantum chip to demonstrate that more qubits can mean fewer errors. Essentially, Google built "logical" qubits out of multiple physical qubits that redundantly encode the same information. This creates a system of failsafes: when one physical qubit fails, the others can be used to detect and correct the error. The more physical qubits you add, the more failures the system can withstand, and Willow showed that the logical error rate actually drops as the group of qubits grows, leaving you with a better chance of getting an accurate result.
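The failsafe idea can be sketched with a classical repetition-and-majority-vote toy model. To be clear, this is only an analogy of my own making: real quantum error correction (Willow uses a surface code) is more subtle, because quantum states can't simply be copied, but the scaling intuition is the same. With the article's one-in-a-thousand failure rate, redundancy drives the combined failure rate down fast:

```python
import random

def logical_failure_rate(n_copies: int, p_fail: float,
                         trials: int = 100_000) -> float:
    """Estimate how often a majority vote over n_copies redundant bits
    gives the wrong answer, if each copy fails independently with p_fail."""
    failures = 0
    for _ in range(trials):
        # Count how many of the redundant copies got corrupted this round.
        flipped = sum(random.random() < p_fail for _ in range(n_copies))
        if flipped > n_copies // 2:  # majority corrupted -> uncorrectable
            failures += 1
    return failures / trials

# At a 0.1% per-copy failure rate, adding copies suppresses errors sharply.
for n in (1, 3, 5):
    print(n, logical_failure_rate(n, 0.001))
```

A single copy fails about 0.1% of the time, but with three copies a wrong answer needs two simultaneous failures, which is already so rare that the estimate usually comes out at zero. The catch is the overhead: every reliable logical qubit eats up many physical ones, which is exactly why the qubit counts need to grow by orders of magnitude.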

However, since physical qubits fail a lot and real-world problems demand a very low error rate, we're going to need a whole lot of qubits to get the job done. Huang thinks it will take as many as 20 years to get the numbers we need, while Neven is hinting that he can get there in five.

Does Google know something that Nvidia doesn't? Is it just fanning the flames of some friendly competition? Right now, we don't know the answer. Perhaps Neven just wanted to boost quantum computing stocks, which lost around $8 billion in value after Huang's comments last month.

Whenever the breakthrough does happen, Google thinks it can use quantum computing to build better batteries for electric cars, develop new drugs, and maybe even create new energy alternatives. To claim that such projects could become possible in as few as five years is pretty out there — but I suppose we won’t have to wait too long to find out how right or how wrong Neven is.





