Richard Hollerith, San Francisco Bay Area. [email protected]

No one has a plausible plan, or even a halfway-decent one, for how to maintain control over an AI that has become super-humanly capable.
Essentially all of the major AI labs are trying to create a super-humanly capable AI, and eventually one of them will probably succeed, which would be very bad (i.e., probably fatal) for humanity. Clearly, then, the AI labs must be shut down; in fact, they would have been shut down already if humanity were sufficiently competent. They must stay shut down until someone comes up with a good plan for controlling (or aligning) AIs, which will probably take at least 3 or 4 decades. We know that because people have been conducting the intellectual search for such a plan as their full-time job for more than 2 decades, and those people report that the search is very difficult.

A satisfactory alternative might be to develop a method for determining whether a novel AI design could acquire a dangerous level of capability, combined with some way of ensuring that no lab or group goes ahead with an AI that could. This might be satisfactory provided the determination can be made before giving the AI access to people or the internet. But I know of no competent researcher who has ever worked on, or made any progress on, this problem, whereas the control problem has at least received a decent amount of attention from researchers and funding institutions.

The AIs that might prove fatal to humanity will differ significantly in design from the AIs that have already been widely deployed. For one thing, they will learn continuously (as a person does), whereas in already-deployed AIs the vast majority of the learning happens during a training phase that ends before any widespread deployment. They will also be much better than current AIs at working toward a long-term goal. I say this because I don't want to be misunderstood as believing that Google Gemini 2.5 or ChatGPT 5.0 might take over the world: I understand that those AIs are incapable of such a thing. The worry is the AIs that are still on the drawing board or that will appear on a drawing board 5 or 10 years from now. There is no need to ban Gemini 2.5 and ChatGPT 5. But since some AI researchers pursue AI "progress" for ideological reasons and will tend to persist stubbornly even after AI research is banned, the best time to ban frontier AI research is now: that way, the stubborn ideologues whose research is driven underground by the ban, though still capable of making a little more "progress" on AI, will be unlikely to make enough "progress" to end the world.

Again, as soon as anyone comes up with a solid plan for controlling (or aligning) an AI, even one that turns out to be more capable than us, the ban on frontier AI research can be lifted, provided the majority of AI experts and AI researchers agree that the plan is solid. No one can say how long this search for a solid plan will take, but IMHO it will probably take at least 3 or 4 decades.

More at

https://intelligence.org/the-problem/
Does Stockfish have weights or use a neural net? I know older versions did not.

OK, then can you name one deployed fleet of trucks anywhere that uses swappable batteries?

According to an unreliable source that gives fast answers to my questions, U.S. freight companies spent approximately $32 billion to $36 billion on new diesel Class 8 trucks in 2025.

Now are we to believe that these companies and their investors are foolish? That they didn't do calculations and consult experts before spending this money?

Are we to assign more weight to comments here on HN assuring us that electric trucks are cheaper in total cost of ownership than diesel trucks? -- comments that cost the writers nothing but a few minutes of time?
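The dispute here turns on a total-cost-of-ownership (TCO) calculation. A minimal sketch of such a calculation, using entirely made-up illustrative figures (none of these numbers come from any real fleet data), shows how sensitive the conclusion is to the inputs -- which is exactly why a bare assertion that one side is "simply cheaper" settles nothing:

```python
# Hypothetical TCO comparison for one Class 8 truck.
# Every figure below is an assumption chosen for illustration,
# not measured data from any real fleet.

def tco(purchase_price, fuel_cost_per_mile, maintenance_per_mile,
        miles_per_year, years, extra_lifetime_costs=0.0):
    """Undiscounted lifetime cost of owning and operating one truck."""
    operating = (fuel_cost_per_mile + maintenance_per_mile) * miles_per_year * years
    return purchase_price + operating + extra_lifetime_costs

# Assumed diesel truck: cheaper to buy, more expensive per mile.
diesel = tco(purchase_price=150_000, fuel_cost_per_mile=0.70,
             maintenance_per_mile=0.20, miles_per_year=100_000, years=10)

# Assumed electric truck: pricier up front, cheaper per mile, but with
# an assumed mid-life battery replacement that flips the comparison.
electric = tco(purchase_price=400_000, fuel_cost_per_mile=0.35,
               maintenance_per_mile=0.12, miles_per_year=100_000, years=10,
               extra_lifetime_costs=200_000)  # assumed battery replacement

print(f"diesel TCO:   ${diesel:,.0f}")
print(f"electric TCO: ${electric:,.0f}")
```

With these assumptions the diesel comes out around $1.05M and the electric around $1.07M; drop the assumed battery replacement and the electric wins instead. The point is not which set of numbers is right -- it's that the answer depends on inputs (electricity vs. diesel prices, residual values, battery life) that outsiders arguing in a comment section don't actually have.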

Countries dependent on the Persian Gulf's remaining open to international shipping trade shouldn't just blindly copy U.S. freight companies here: for those countries, any extra cost for an electric fleet might be worth the peace of mind of knowing they will always be able to deliver food, medicine and other essentials to their populations. France for example takes all aspects of its national security seriously and relies almost completely on imports for any fossil fuels it uses. In response it is electrifying as much of its economy as practical (and continuing to invest heavily in nuclear electricity production and renewables).

Here you're just repeating the assertion I called into question ("they are simply cheaper to operate") -- or more precisely, you're implying it. Does your declining to repeat it outright mean you intend to quietly distance yourself from it?

If you have evidence that there is a fleet of electric trucks anywhere (big enough to make a dent in China's transport needs) whose actual total cost proved to be less than that of a fleet of diesels doing the same work, then share it. If all you have to offer is words to the effect that "an examination of the relevant technologies by any competent analyst will of course find that the battery-powered fleet would be cheaper," then I repeat my assertion.

>They are simply cheaper to operate.

We don't know that. Beijing might have been investing in them as insurance against its not being able to get enough diesel fuel to run an all-diesel fleet of trucks, so countries that are self-sufficient in oil shouldn't just blindly imitate Beijing's move.

Yes: like Yudkowsky, the group under discussion (Stop AI) understands that AI "progress" must be stopped and must stay stopped until (probably many decades from now) someone figures out a way to control an AI that is smarter than us, a way that works on the first try rather than requiring many iterations of trial and error.