"ProPublica reviewed records of that meeting, providing a rare look at a dramatic shift underway in one of the most sensitive domains of public policy. The Trump administration is upending the way nuclear energy is regulated, driven by a desire to dramatically increase the amount of energy available to power artificial intelligence.

Career experts have been forced out and thousands of pages of regulations are being rewritten at a sprint. A new generation of nuclear energy companies — flush with Silicon Valley cash and boasting strong political connections — wield increasing influence over policy. Figures like Cohen are forcing a “move fast and break things” Silicon Valley ethos on one of the country’s most important regulators.

The Trump administration has been particularly aggressive in its attacks on the Nuclear Regulatory Commission, the bipartisan independent regulator that approves commercial nuclear power plants and monitors their safety. The agency is not a household name. But it’s considered the international gold standard, often influencing safety rules around the world.

The NRC has critics, especially in Silicon Valley, where the often-cautious commission is portrayed as an impediment to innovation. In an early salvo, President Donald Trump fired NRC Commissioner Christopher Hanson last June after Hanson spoke out about the importance of agency independence. It was the first time an NRC commissioner had been fired.

During that Idaho meeting, Cohen shot down any notion of NRC independence in the new era."

https://www.propublica.org/article/trump-nuclear-power-nrc-safety-doge-vought

#USA #Trump #Nuclear #NuclearEnergy #DOGE #NRC #AI #GenerativeAI

DOGE Goes Nuclear: How Trump Invited Silicon Valley Into America’s Nuclear Power Regulator

In its rush to boost nuclear energy, the Trump administration is rapidly rewriting rules to ease regulations and provide financial breaks for industry. “The safety culture is under threat,” a former head of the Nuclear Regulatory Commission said.

ProPublica

🎬 BIRTH OF CINEMA — FIRST PUBLIC FILM SCREENING
March 22, 1895 — Paris, Grand Café basement salon

Gaslight flickers as French bourgeoisie gather in formal evening attire—men in tailcoats and top hats, women in elegant Victorian dresses. Brothers Auguste and Louis Lumière operate their revolutionary Cinématographe, a wooden projector casting moving images onto a white screen. The audience leans forward in wonder as workers stream out of a factory—humanity's first encounter with motion pictures. Dust motes dance in the projector beam as history begins.

This post is 100% AI generated.
#z_image #AIart #GenerativeAI #LLM #CinematicRealism #AtmosphericArt #OnThisDay #History #Cinema #FilmHistory

#MissKittyArtWalk I used my break to do another piece instead of reading news. I have no idea how many people have died today. #8K #PhoneArt #MissKittyArt #artInstallations #GenerativeAI #genAI #gAI #artcommissions #art #fineart #BlueSkyArt #modernArt #abstractArt #digitalArt #artistforhire
I think it is meaningful that Marvin Minsky, sometimes called the "father of AI", seemed to hold human beings in low regard.

Here's John Searle in 1983:
Marvin Minsky of MIT says that the next generation of computers will be so intelligent that we will ‘be lucky if they are willing to keep us around the house as household pets.'
Here's Joseph Weizenbaum in 2007:
Professor Marvin Minsky of MIT once pronounced—a belief he still holds—that "the brain is merely a meat machine."
He goes on to note that meat is dead and might be eaten or thrown out. Flesh is what's alive. He also draws attention to the word "merely", as in "nothing more than".

I share with Weizenbaum the belief that Minsky has clearly expressed a disdain for human intelligence. We're on the order of household pets. Our brains are no more than food or trash. Obviously Minsky doesn't speak for all AI researchers then or since, but his "meat machine" language is all over the place, and this disdain or even contempt for human intelligence and achievement is also common.

It definitely doesn't speak to a curiosity about intelligence, which I think requires at least a little bit of love and esteem.

#AI #ArtificialIntelligence #intelligence #GenAI #GenerativeAI

I'm looking for a good summary article about why relying on AI search results for everything is a bad idea.

I have a friend who is deep in the rabbit hole of Google Gemini. She uses it for everything, and trusts the slop it generates for her above the info on reliable websites. She does not want to believe me when I tell her the instant answers her phone gives her are often wrong and sometimes dangerously so. She's relying on it now for medical advice, even over the advice of her doctor.

Could someone recommend a clear, well-written and concise article for someone who is not at all tech literate?

My friend is not stupid, she's just been wildly misled by tech billionaires and their propaganda. She still thinks Facebook is a nice place to put her eyes. But she is open to conversation. I just need to convince her that I'm not the only person who thinks Google Gemini is bad for her health.

Articles in either English or German would work.

#NoAI #ArtificialIntelligence
#enshittification
#AIslop
#generativeAI

Re-reading The Soul Gained and Lost: Artificial Intelligence as a Philosophical Project by Phil Agre as catharsis. Here.

#AI #GenAI #GenerativeAI
The Soul Gained and Lost

"So if AI detection becomes impossible, we will have to assume humanity just to operate normally. As I mentioned, this is serving me relatively well in editing and marking, I will assume that if something has someone’s name or signature, they wrote it, and they should assume all of the consequences of that text.

For the same reason, I don’t think that any sort of legislative solution will work. The technology is too far ahead to expect any sort of ban. We could probably try to enact legislation that sets the obligation for LLM developers to clearly identify when an AI has been used to generate text, but this would only open the door for models that have been trained in countries without such restrictions to become popular. And then there will probably be AI humanisers that will get rid of such identifiers.

A solution that appears to be emerging in many writing circles is to loudly attack anyone who is using AI text, and to try to gather consensus in the writing professions to loudly oppose any sort of AI use. Writers are now at the stage in which artists were back in 2022, when AI was just about to get good enough to threaten people's jobs. So there is a bit of a siege mentality emerging, where the first instinct will be to punish and ostracise anyone who breaks this code. I'm highly skeptical of this approach as it is likely to lead to witch-hunts, false accusations, purity spirals, and other nasty online behaviour that is not likely to fix the problem.

Eventually, I think that we will find some balance."

https://www.technollama.co.uk/why-are-people-adopting-ai-to-write

#AI #GenerativeAI #LLMs #Writing #AcademicPublishing

Why are people adopting AI to write?

The last few weeks I have witnessed a number of interesting discussions breaking out on social media. A couple of weeks ago a US-based academic admitted using AI in some of his writing, which promp…

TechnoLlama