New #openaccess publication #SciPost #Physics
Quantum chaos, randomness and universal scaling of entanglement in various Krylov spaces
Hai-Long Shi, Augusto Smerzi, Luca Pezzè
SciPost Phys. 19, 102 (2025)
https://scipost.org/SciPostPhys.19.4.102
#Q* (pronounced #qstar) is a new #AI model being developed by #openai, the company known for creating ChatGPT. Q* is designed to significantly improve AI reasoning and could potentially bring OpenAI closer to achieving artificial general intelligence (#AGI): a system that can apply human-like reasoning and problem-solving capabilities.
Q* has demonstrated the ability to solve grade-school mathematical problems, suggesting that its reasoning capabilities are improving.
(1/7)
Demystifying OpenAI's secretive "Project Q*" - that's what it's really all about
💡 Project Q* - A Breakthrough in AGI
Central to the discourse is the Q*-Project, introducing Q* (pronounced Q-Star), a potential AGI breakthrough. Q* demonstrates proficiency in mathematical problem-solving, hinting at capabilities that go beyond traditional AI. What could this mean for the future of artificial intelligence?
Humans live in a #simulation that has the purpose to leverage them to create #AGI.
Keywords: #qstar #openai #ai #hottake #showerthought #llm #gpt4 #chatgpt
Unpacking the hype around #OpenAI's rumored new Q* model
#Qstar #openAI
Oooo, a highly bombastic open letter mentioning a product name but nothing substantial about the project. How familiar.
OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said.
Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences.
Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said.
An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company.
Though the model was only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.
Reuters could not independently verify the capabilities of Q* claimed by the researchers.
'VEIL OF IGNORANCE'
Researchers consider math to be a frontier of generative AI development.
Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely.
But conquering the ability to do math, where there is only one right answer, implies AI would have greater reasoning capabilities resembling human intelligence.
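The "statistically predicting the next word" mechanism mentioned above can be illustrated with a toy sketch. The snippet below is purely illustrative (a bigram frequency model over a made-up corpus, not how GPT-4 or any OpenAI model is actually implemented): it counts which word follows each word, then predicts the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vast text using neural networks
# over subword tokens, but the objective is the same: next-token prediction.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram counts).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (follows "the" twice, vs. "mat" once)
```

This also shows why "answers to the same question can vary widely": if the successor counts were sampled from rather than argmaxed, "the" could be followed by either "cat" or "mat".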
This could be applied to novel scientific research, for instance, AI researchers believe.
Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter.
There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.
Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed.
The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.
Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew from Microsoft the investment, and computing resources, necessary to get closer to AGI.
In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.
"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.
A day later, the board fired Altman.
#Qstar #agi #altman #openai