I see everyone is (rightly so) ripping on Musk's new all-male xAI effort to "understand the true nature of the universe."

But the real problem is AI can't tell us anything new beyond what scientists have already figured out. It doesn't do any "thinking" or actual research; it simply regurgitates reformulated ideas based on the science we already know.

What it MIGHT produce, especially since Musk is leading this, is garbage speculation unsupported by evidence.

@petergleick arrogance is an unbounded function
@brianvastag @petergleick Very well stated, Mr. science reporter who, like every good science reporter, has a bit of the humanities in his toolbox. 🧐
@brianvastag @petergleick I want that on a wall sign in the style of the "believe" and "see the good" signs people put up in their living rooms.

@petergleick As we say in metasystems and cybernetic theory, a system is incapable of resolving systemic problems. One has to move above/outside as implied by the use of the Greek "meta."

AI systems constructed within the existing epistemological frame or system of meaning will only reinforce the conclusions implicit in the design of that specific epistemic. Musk will unquestionably love this as he will end up assembling a machine that confirms his epistemic bias.

@petergleick
What you say is true of AIs like ChatGPT. But some AIs are good at finding subtle patterns and may discover new knowledge by doing so. Of course, once an AI makes a putative discovery, human scientists have to treat it as a hypothesis to be tested by scientific methods.
@petergleick
I decided to ask Elon's giant hive mind (🐦) this question. The reply was a picture of a scantily dressed girl with very exaggerated anatomy and a few Bible verses of questionable relevance.
Needless to say I am not very confident xAI will do any better. Looks like a massively funded school group project for the science fair.
@petergleick great point. Love this.
@petergleick On the contrary, the same advances in AI that power ChatGPT also power the recent advances in protein folding! This has huge benefits in medical research and is a significant leap in chemistry in its own right.
@petergleick To expand in a way that you may find appealing, AI has a lot of potential for solving problems where scientists have come to the point where they've gone "Well anything beyond this point is simply too complex a system to model explicitly". This is why these systems are so useful in biotech. It is particularly important for me to impress that these problems likely could not be solved through other means.
@petergleick LLMs are the hot new thing among crypto bros but people should know that there will be objectively good advances that come out of this tech that will help a lot of people.
@petergleick it's so sad how many people will attach themselves to a silly billionaire's harebrained ideas just for the clout. and yeah... those people are usually men, it seems.

@petergleick It is actually not true that the only thing AI can do is "regurgitation". That point of view seems to suggest that "true creativity" is only possible by humans.

Watch the documentary on AlphaGo, which is one of the illustrations of how #ai goes beyond human knowledge.

Having said that, I agree that I don’t expect much from the #xAI initiative, but that has more to do with the involvement of #musk

#artificialintelligence #creativity

@petergleick It would help if people stopped calling Large Language Models "AI".
@petergleick This has Bigfoot conference energy.
@petergleick @ChrisBoese The idea of using AI to propose novel hypotheses is really interesting, but I don't think LLMs are the right tool. You would want to create an AI trained on mathematical representations of the universe and have it attempt to predict novel theories from that direction. Text is not the native language of nature.
@petergleick It’s the perfect metaphor for his self-regard, isn’t it? Like something from a fairytale or Ryszard Kapuscinski’s book about the Shah. "He looked into the universe, and everywhere he saw a reflection of himself. His own - surely unparalleled - mind, penetrated further than any man’s, and the deeper it saw, the more it confirmed that beyond him, and those qualities that the few he chose to surround himself with echoed, there was no more. He was the universal I Am."
@petergleick Well he wouldn't want anything that did think anything he hasn't already heard because then he would probably have an AI that said, "You're an infantile, self-obsessed tool." He expects it to be like every other 'intelligence' in his life - they all think he's a trailblazer.
@petergleick this is patently wrong on its face, considering how much new science AlphaFold has already enabled
@pmcarlton @petergleick Alphafold is, so far, *the* singular example of a real advance, vs a convenience (eg code completion). It seems like ML is best used for domain specific tasks, at least at this time.
@mglo @pmcarlton @petergleick Alphafold does not produce ideas, scientists do. Alphafold produces good structure predictions because it has large training sets from extant research. That's useful but it's not intelligence. It's not a new paradigm to think about protein structure. It's a sophisticated algorithm that searches correlations in large sets of data. Some algorithms in medical imaging for instance do that too, and so does ChatGPT. That doesn't give us the "true nature of the universe".
@JoseEdGomes @pmcarlton @petergleick that’s my point (I use AF all the time). But it is the poster child for useful AI systems. The point being that the utility of these systems seems to depend on areas where there is really good data, but that only covers a small portion of the applicable problem space (ie it’s good for interpolation).
@JoseEdGomes @mglo @petergleick OP said "AI can't tell us anything new beyond what scientists have already figured out", which is not true. and definitional wrangling about words (is it really intelligence? Is it really an idea?) doesn’t shed much light on an AI’s abilities.
@petergleick the big question is whether this represents Elon grifting, or him misunderstanding the tech.