OpenAI tries to explain its AGI philosophy as Sam Altman admits the company deserves scrutiny
https://fed.brid.gy/r/https://nerds.xyz/2026/04/openai-agi-principles/
Here's a figure worth understanding - 71% of musicians are using AI to separate stems, not to replace themselves.
The debate about AI and music has been almost entirely about text-to-song generators. The actual data shows that's the use case the fewest musicians have. Most are using AI for stem separation, backing tracks, ear training, and mixing assistance - tools that make their practice more viable, not tools that replace it.
Bottom line - musicians are still making music. I think they always will. No one can replace passion with technology. The consent and royalty dilution problems are real. The training data problem is real. Seven million AI-generated tracks are being uploaded every day, and they are absolutely affecting the royalty pool. None of that is resolved by pretending the 71% using AI as a tool to enhance their practice are doing the same thing as the content farms flooding distribution infrastructure with synthetic material. They are not.
The Pack's position is about what kind of content the platform supports, not about which software musicians use to make it. Keeping those two questions distinct matters for the quality of the argument - and for the working musicians who don't need to be told the tools they rely on are disqualifying.
A new blog post explores what musicians actually use AI for, and why conflating different uses has been confusing the conversation.
👉 https://www.packmusic.au/blog/the-71-percent
#AIandMusic #MusicIndustry #IndependentArtists #AIethics #MusicTechnology #ThePackMusic #HumanCuration #ArtistRights
Hallucinations are a built-in limitation of AI
Even when trained on reliable data, large language models still produce false outputs. Prof. Alan Winfield explores how this reflects deeper risks in robotics and artificial intelligence — and their impact on humanity’s future.
🎧 Listen to Part 2 now — Part 1 also live
👉 https://youtu.be/eh7GPXdNxmA
The window of opportunity is still open.
'The fact that agentic AI systems can currently undertake only comparatively simple tasks does not mean the policy community can sit and wait. The early stages of development of a technology provide critical windows of opportunity — that can close very quickly — for implementing effective safety and security measures.'
Excerpt from 'Before it's too late: Why a world of interacting AI agents demands new safeguards' by Dr Vincent Boulanin, Dr Alexander Blanchard and Dr Diego Lopes da Silva for #SIPRI: https://bit.ly/46LQpnS
#agenticAI #lobbying #lobbies #Microsoft #AIEthics #GAFAM #AI #civilLiberties #EU #AIRisks #tech #AIAct
#AIEngineering #llm #aisecurity #aiethics
I predicted this would happen. LLMs and other models are very good at finding and exploiting vulnerabilities, and there is already research on this topic, e.g. adversarial attacks.
AI reflects the full spectrum of humanity
Because AI is trained on the internet, it inherits both valuable knowledge and harmful content. Prof. Alan Winfield explores how this connects to broader risks in robotics and AI — and their impact on humanity’s future.
🎧 Listen to Part 2 now — Part 1 also live
👉 https://youtu.be/eh7GPXdNxmA