Summary
Google unveiled its “Willow” quantum computing chip, claiming it completed a benchmark computation in under five minutes that would take today’s fastest supercomputers 10 septillion years.
While the chip has been hailed as a breakthrough, experts note it remains experimental, with practical applications years away.
Willow’s key advance is error correction, addressing a long-standing challenge in quantum computing by reducing errors as qubits increase.
The chip, developed in Google’s new facility, highlights global competition in the field.
Although the chip shows promise for tasks like drug development and logistics, lower error rates and further scaling are still needed before widespread use. The findings were published in Nature.
just a few years away bro just a few more years just give us £500k for a new quantum computer bro just a few more years
that’s only like, as much as a couple dozen of the 45,000+ bombs we’ve dropped on babies in Gaza
Jesus Christ, not everything in life is about Gaza
Subhuman lemmy posters: “We are spending way too much!!! $0.5m on scientific research!!! Outrageous!”
Me: “Bro, we spend billions killing children around the world. Who tf cares? There are other places in the budget you should be concerned about.”
Subhuman lemmy posters: “Errrm actually stfu stop bringing that up, we want to cut everything but that!”
kys, you people are freaks. This place is just as bad as reddit, composed entirely of genocidal US ultranationalist sociopaths. I need to find a forum that isn’t English-speaking.
Worked for Tesla self-driving and a dozen or so AI stocks
they’ve been claiming this for quite some time now. Each time it was bullshit.
There’s still noticeable incremental progress, and since liboqs is out now and the first reasonably quantum-resistant algorithms have working initial implementations, I see no reason why you wouldn’t want to move to a hybrid solution for now, just in case. Especially with more sensitive data like communications, healthcare and banking.
Just combine the current asymmetric stuff with an oqs KEM, e.g. X25519 key exchange alongside ML-KEM, deriving the session key from both shared secrets (sketch below). That way you’ll have an added layer of security on top of the oqs implementation, just in case there are growing pains, and because the library hasn’t yet passed audits and hasn’t been fully peer-reviewed.
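Something like this, roughly (Python; assumes the liboqs-python wrapper and the cryptography package; note the algorithm name “ML-KEM-768” depends on your liboqs build, older releases expose it as “Kyber768”):

```python
# Rough hybrid key-exchange sketch, NOT production code.
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def hybrid_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # One session key derived from both secrets: an attacker must break
    # BOTH X25519 and ML-KEM to recover it.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"x25519+mlkem768 hybrid demo",
    ).derive(classical_secret + pq_secret)

# Classical half: ordinary X25519 Diffie-Hellman.
alice_x, bob_x = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = alice_x.exchange(bob_x.public_key())  # same on both sides

# Post-quantum half: ML-KEM encapsulation via liboqs.
with oqs.KeyEncapsulation("ML-KEM-768") as bob_kem:
    bob_pq_pub = bob_kem.generate_keypair()
    with oqs.KeyEncapsulation("ML-KEM-768") as alice_kem:
        ciphertext, pq_secret_alice = alice_kem.encap_secret(bob_pq_pub)
    pq_secret_bob = bob_kem.decap_secret(ciphertext)

# Both sides derive the same hybrid session key.
assert hybrid_key(classical_secret, pq_secret_alice) == \
       hybrid_key(classical_secret, pq_secret_bob)
```

If either half turns out to be broken later, whether the math or the implementation, the session key is still protected by the other half; that’s the whole point of going hybrid.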
Cryptography has to stay unbreakable for multiple decades, and the added overhead is a small price to pay for future security. Health data, for example, can still affect a person 30 years later, so we have a responsibility to ensure that data can’t be accessed without authorization even that far in the future. No one can guarantee it won’t be possible, but we should at least make our best effort.
Have we really not gotten past collectively shooting ourselves in the foot with poor security planning? Even AWS was allowing SHA-1 signatures for authentication as recently as 2014, over a decade after it was deemed insecure. And considering how poorly people do key management, it’s plausible that old AWS-style requests with still-working keys are out there waiting to be brute-forced.
No, we don’t have working quantum computers that threaten encryption now. Yes, it is indeed feasible this technology matures in the next 30 years, and that’s the assumption we need to work with.
But they’re not declaring incremental progress, are they? They’re declaring success. Stuff can be incrementally improved even after the initial goal is achieved, but that’s not what they’re saying. They’re saying they won the race… again.
Yep, that’s a bit of a sketchy thing, and probably indeed has to do with marketing and getting more funding. Overhyping their quantum stuff might also have something to do with them trying to hide the poor image of their latest AI “achievements”.
But I’m mainly worried all these companies crying wolf will cause people in relevant fields to push back on implementing quantum-proof encryption – multiple companies are making considerable progress with quantum computing and it’s not a threat to be ignored.
yawn fusion power when?
ITT: people understanding neither quantum computing, nor incremental progress, nor fundraising. Not sure if that one guy is railing against fusion or not 🤦
It’s not a complaint against incremental progress. It’s that they have repeatedly declared success, not incremental progress, and each time they’ve been proven wrong. “Success” here meaning beating classical computers at a task.
Beating classical computers is easy. These machines can give a random result much faster than a computer can simulate the quantum mechanics that give rise to that random result.
Beating classical computers at a task with some kind of practical application is hard though.
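To put numbers on that: a classical statevector simulator has to carry 2**n complex amplitudes, so cost doubles with every qubit, while the chip just samples its output distribution natively. A toy sketch in Python/numpy (everything here is illustrative, not any real benchmark code):

```python
# Toy statevector simulator: illustrates the exponential cost, nothing more.
import numpy as np

def apply_gate(state: np.ndarray, gate: np.ndarray, qubit: int, n: int) -> np.ndarray:
    # View the 2**n amplitudes as an n-axis tensor, contract the 2x2 gate
    # against the target qubit's axis, then restore the axis order.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

n = 20  # 2**20 amplitudes ~ 16 MiB; ~50 qubits would already need petabytes
state = np.zeros(2**n, dtype=np.complex128)
state[0] = 1.0  # start in |00...0>

hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):  # one layer of gates; real benchmarks run many layers
    state = apply_gate(state, hadamard, q, n)

# Draw one sample from the output distribution: the step a quantum chip
# performs natively, one shot at a time.
probs = np.abs(state) ** 2
sample = np.random.default_rng(0).choice(2**n, p=probs)
print(f"{int(sample):0{n}b}")
```

At 20 qubits the state fits in ~16 MiB; at 50+ qubits, Sycamore/Willow territory, you’d need petabytes, which is why “beating a simulator at sampling” and “doing something useful” are very different claims.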