fly51fly (@fly51fly)
A study was introduced showing that quantum advantage can be exponential when processing large-scale classical data. The paper, by researchers at the California Institute of Technology, MIT, and Google Quantum AI, points to the potential data-processing performance of quantum computing.
The Genius of Getting It Wrong: What Hawking Teaches Us About Knowing
In 2004, at a physics conference in Dublin, Stephen Hawking stood before his peers and announced he had been wrong for nearly thirty years. The specific error concerned whether black holes permanently destroy the information they consume, a claim Hawking had championed since 1976 against some of the sharpest minds in theoretical physics. He paid off a bet with Caltech physicist John Preskill, handing over a baseball encyclopedia, a gift selected because, unlike a black hole (or so Hawking had argued), an encyclopedia allows its information to be recovered. The audience laughed. The moment was graceful and self-aware. It was also one of the most important intellectual acts of the twenty-first century, though most people missed the real lesson.
The lesson was never about black holes.
The Weight of Certainty
We live in a culture that punishes the admission of error. Politicians who change positions are called flip-floppers. Scientists who revise findings are treated as though their credibility has been permanently contaminated. Public intellectuals who say “I was wrong” are consumed by a media apparatus that treats consistency as the only acceptable proxy for intelligence. The reward structure is clear: stake your claim, defend it until you die, and never let anyone see you recalculate. Certainty, in this environment, becomes a performance rather than a conclusion.
Hawking’s career demolishes this framework. Here was an intellect of astonishing range, the author of singularity theorems that reshaped general relativity, the discoverer of Hawking radiation, the physicist whose popular writing brought cosmology into millions of households. When he claimed in 1976 that information falling into a black hole was lost forever, he was making a serious argument grounded in his own mathematical work on black hole thermodynamics. The claim violated a central principle of quantum mechanics, unitarity, which demands that information always be conserved even when it appears to vanish. Leonard Susskind and Gerard ’t Hooft pushed back hard, insisting that quantum mechanics could not be overruled by gravitational physics. The debate raged for decades, generating entire subfields of research, and Hawking held his ground for most of that time.
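For readers who want the principle in symbols (a standard textbook statement, not part of Hawking's original argument): unitarity says that quantum time evolution is carried out by a unitary operator, which preserves the inner product between states and is therefore reversible in principle.

```latex
% Unitary time evolution: U^\dagger U = I, so the norm of a state
% (total probability) is conserved and the evolution can be undone.
U^{\dagger} U = I, \qquad
|\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad
\langle \psi(t) | \psi(t) \rangle = \langle \psi(0) | \psi(0) \rangle .
```

Because $U$ is invertible (its inverse is just $U^{\dagger}$), the initial state can always be reconstructed from the final one. That reconstructability is exactly what Hawking's 1976 calculation appeared to forbid for matter falling into a black hole.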
Then he changed his mind. He looked at the accumulating theoretical evidence, including work on holographic principles and the AdS/CFT correspondence developed by Juan Maldacena, and concluded that his opponents had been closer to the truth. Information is preserved. The mechanism by which it escapes a black hole remains an open question, one that Hawking himself continued working on until his death in 2018, contributing ideas about “soft hair” on event horizons as a possible encoding method. He did not slink away from the problem he had gotten wrong. He kept working on it, from a new starting position.
The Anatomy of Productive Error
Hawking’s information paradox was a rigorous, mathematically supported position that happened to collide with an equally rigorous principle from a different branch of physics. This is worth understanding because it reveals something about the nature of difficult problems: being wrong about them is often the only way to generate the friction that produces eventual understanding.
Before Hawking’s 1976 claim, nobody had seriously confronted the question of what happens to quantum information at the event horizon of a black hole. The problem did not exist in its modern form until Hawking created it by insisting, with formal arguments, that information was destroyed. Susskind has written openly about how Hawking’s “wrong” answer forced an entire generation of physicists to develop new tools, including holographic encoding, black hole complementarity, and the firewall paradox, tools that would never have existed without the provocation of Hawking’s error. The wrong answer was generative. It built a field.
This pattern repeats across the history of science. Lord Kelvin’s calculation that the Earth was fewer than 100 million years old, based on cooling rates, was wrong because he did not know about radioactive decay as a heat source. His error forced geologists and physicists into a productive confrontation that refined understanding of both thermodynamics and nuclear physics. Linus Pauling proposed a triple-helix structure for DNA in 1953, an error that spurred Watson and Crick to accelerate their own work on the double helix. The wrong model clarified what the right model needed to explain.
Productive error requires two conditions that our current intellectual culture actively discourages. The first is the willingness to commit fully to a position, knowing it might be destroyed by future evidence. The second is the willingness to abandon that position when the evidence arrives. Hawking met both conditions. Most of us fail at one or both.
Why “Not Knowing” Is the Higher State
There is a seductive comfort in certainty. Once you have decided what is true, the cognitive labor stops. You no longer need to read new research, entertain opposing arguments, or sit with the discomfort of ambiguity. Certainty is a resting state, and the human brain gravitates toward rest whenever possible. This is why conspiracy theories are so durable: they offer total explanatory frameworks that eliminate the need to keep thinking. Everything is accounted for. Every loose end is tied. The appeal operates at the neurological level, where pattern completion feels safer than open questions.
Hawking’s willingness to move from certainty back into uncertainty represents a reversal of this cognitive gravity. He had a settled position, one that bore his name and defined a major strand of his legacy. Walking away from it meant re-entering a state of not knowing, of having to ask again what happens at the boundary of a black hole, of being a student of a problem he had once claimed to have answered. The act demands the most rigorous form of intellectual discipline, and it requires more courage than defending a fixed position ever could.
The philosopher of science Karl Popper built his entire epistemology around this insight. Science progresses through falsification, through the systematic destruction of claims that fail to survive testing. A theory that cannot be wrong is not a scientific theory at all; it is a dogma wearing empirical clothing. Hawking’s concession was Popperian science at its finest: a hypothesis tested against accumulating evidence, found wanting, and revised. The system worked exactly as it should. The fact that we treat such moments as embarrassing rather than triumphant says more about our cultural dysfunction than about the scientist involved.
The Personal Cost and the Public Reward
We should be honest about what admitting error costs. Hawking’s 2004 concession was covered by international media, and much of the coverage carried a subtle tone of diminishment, as though catching a genius in a mistake reduced his stature. This is the tax that public error extracts, and it is steep enough to deter most people from ever paying it. Academics protect wrong positions for entire careers rather than face the professional and social consequences of reversal. Politicians would rather lose elections on a failing platform than admit the platform needs revision. Parents would rather enforce arbitrary rules than tell their children, “I was wrong about that, and here is what I have learned since.”
The reward, though, is that Hawking’s legacy is larger because of his concession than it would have been without it. His willingness to be wrong, publicly and specifically, transformed him from a brilliant physicist into something rarer: an example of how a mind should work. The information paradox, in its current partially resolved state, carries his name twice, once for posing the problem and once for acknowledging the direction of its solution. He owns both sides of the equation. That is a richer intellectual inheritance than any fixed certainty could provide.
The Lesson That Applies to All of Us
Most of us will never confront the quantum mechanics of black holes. The specific physics are irrelevant to the principle. Every person alive holds positions, about politics, about relationships, about how the world works, that are based on incomplete information, outdated evidence, or reasoning that felt sound at the time but has since been undermined by experience. The question is never whether we are wrong about something. We are. All of us, right now, about something we feel certain about. The question is whether we have the intellectual infrastructure to detect our own errors and the emotional resilience to act on that detection.
Hawking did not wake up one morning and decide to be humble. He followed the evidence through decades of argument and counterargument, watched his position weaken under sustained theoretical pressure, and responded to that pressure by updating his beliefs. Humility, in this context, functions as a practice, a repeated act of choosing discomfort over complacency, inquiry over defense, revision over reputation. The skill can be cultivated and taught. Hawking modeled it with grace, humor, and an encyclopedia handed across a stage in Dublin.
The genius of getting it wrong is that it keeps you moving. Certainty arrives and sits down; inquiry walks forward. Hawking understood the difference, and his greatest contribution to public intellectual life may have been demonstrating, in front of the entire world, that the walking matters more than the sitting.
#2004 #blackHoles #caltech #education #humility #knowing #science #stephenHawking #tech #wrong

Interesting video from Caltech (where I went to college) and JPL about the invention of CMOS image sensors (though Wikipedia makes me think lots of others were working in this area at the time) in the 1990s.
Amazingly, I bought my first digital camera, a $600 Olympus with 640x480 resolution and non-removable memory for only 20 such images (I think it was the D-200L), in time for my cross-country bicycle trip in 1997.

I am happy to announce that I have received a Marie Skłodowska-Curie Global Fellowship! My lab is going to be at Caltech, USA.
#Californians here on the #Fediverse, I hope to bump into you nice folks sometime. I could use some pointers on how to navigate the place and the system while I'm there. In exchange, I can make some #chai for you!
The first volume, at least, is on the Internet Archive. Now...the thing is, back at #Caltech in 1993 or so, I read ALL of the equally terrible Battlefield Earth and I can't honestly tell you how I managed that, because I have had a look through the book recently and (to quote Todd in the Shadows, speaking of Mission: Earth) that shit is unreadable. It's the sort of book you wish you could somehow play back at 4× speed. I can only assume that the Mission: Earth experience is roughly similar.
And yet, I powered through Battlefield Earth for some reason. Now to be fair, the alternative was enduring Caltech undergrad life. But I've also got this weird vague ghost of an idea that the book gives you a glimpse of Hubbard's genuine cosmology—like, he's telling you what he really thinks the Universe is like and what awaits humanity out among the stars.
Seriously. https://youtu.be/kd0xTfdt6qw?list=PLyQSN7X0ro23NUN9RYBP5xdBYoiv2_5y2
That's awful. He's like a bad software manager burbling in confusing circles around his supposed point and I don't think that's an accident at all because you can bet that #software managers probably think that Richard Feynman was the ideal science teacher and yearn to imitate his breezy discursive manner.
How much money, I wonder, has #Caltech made by marketing this awful man?

RE: https://seattle.pink/@mxchara/116173448051670979
I felt a little bad about bombing @TheBreadmonkey like this so I went to watch Richard Feynman lecture on physics, on YouTube...and he is TERRIBLE. I think he's a disgrace.
No kidding, I used to be in awe of this guy, but I was in awe of him without actually seeing him at work, or what is presumably something like "work" for him. I feel as if I understand the #Caltech experience a little better now.
This man is an anti-educator. He's not trying to impart information. He's trying to impart the sense that he, personally, is the master of deep complicated things that are absurdly simple for a man of his talent, ridiculously simple, so simple that he never quite gets around to fully explaining them. Mainly, Feynman is advertising himself. He's trying to be a "cool professor" of the sort who isn't actually any good at teaching—and if you called him on it, he'd get testy and tell you that his real work is elsewhere. Here in a classroom, Feynman is slumming.