#Zhipu AI released #GLM 5.1, a 754-billion-parameter #opensource #LLM designed for #autonomouswork. GLM 5.1 outperforms competitors like #Opus 4.6 and #GPT 5.4 on #coding benchmarks. The model’s “staircase pattern” optimisation allows it to maintain goal alignment and avoid plateauing, making it a significant advancement in AI capabilities. https://venturebeat.com/technology/ai-joins-the-8-hour-work-day-as-glm-ships-5-1-open-source-llm-beating-opus-4?eicker.news #tech #media #news
When Is Technology Too Dangerous to Release to the Public?

If recent history is any indication, trying to suppress or control the proliferation of A.I. tools may be a losing battle.

Slate

Training a mini GPT takes time, money, and strong GPUs.

#gpt #training #gpu
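A common back-of-envelope for that cost is the ~6 × parameters × tokens training-FLOPs heuristic. The sketch below uses purely illustrative assumptions (a GPT-2-small-sized model, 10B tokens, and an assumed sustained GPU throughput), not figures from any particular training run:

```python
# Rough training-cost estimate via the ~6 * params * tokens FLOPs heuristic.
# All concrete numbers below are illustrative assumptions.
params = 124e6             # a GPT-2-small-sized "mini GPT"
tokens = 10e9              # training tokens
flops = 6 * params * tokens

gpu_flops_per_s = 100e12   # assumed sustained throughput of a single modern GPU
seconds = flops / gpu_flops_per_s
print(f"~{flops:.2e} FLOPs, ~{seconds / 3600:.1f} GPU-hours")
```

Even under these optimistic assumptions (perfect utilisation, one pass over the data), a toy model already costs on the order of a GPU-day; scaling either parameter multiplies the bill linearly.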

The Hundred-Page Language Models Course by Andriy Burkov is the featured course on Leanpub!

Master language models through mathematics, illustrations, and code―and build your own from scratch! This course includes nearly three hours of exclusive video interviews with the author, covering questions related to each of the six lessons included in the course.

Link: https://leanpub.com/courses/leanpub/theLMcourse

#Ai #Gpt #Textbooks #DataScience #ComputerScience #NeuralNetworks #DeepLearning #Linguistics

The One Person Vibe Publishing Side Hustle by Finxter is free with a Leanpub Reader membership! Or you can buy it for $1.00! https://leanpub.com/vibepublishing #Ai #SelfPublishing #Startups #Gpt #Consulting #PersonalFinance #WritingAndPublishing
The One Person Vibe Publishing Side Hustle

Build a portfolio of small nonfiction books that compound into a real publishing business. Learn how to validate ideas, write efficiently, and use AI as leverage so you can publish consistently without a team.

"Even an informed user who knows their chatbot has an agreeable bias cannot fully discount its responses, because they still carry genuine informational content alongside the flattery. The researchers drew an analogy to 'Bayesian persuasion' from behavioral economics: a strategic prosecutor can raise a judge’s conviction rate even when the judge knows the prosecutor is presenting a cherry-picked case."

https://c3.unu.edu/blog/the-echo-chamber-in-your-pocket

#AI #PKMastery #GPT #LLM

The Echo Chamber in Your Pocket - UNU Campus Computing Centre

Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.
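The “Bayesian persuasion” analogy quoted above can be made concrete with a toy version of the classic Kamenica–Gentzkow prosecutor example. All numbers here are illustrative assumptions, not figures from the cited studies: the judge starts with a prior of guilt, convicts only when the posterior reaches a threshold, and the prosecutor commits to a signalling scheme the judge fully understands — yet conviction rates still rise above the prior:

```python
# Toy Bayesian persuasion: a prosecutor commits to a (known!) signalling scheme
# and still raises the conviction rate above the judge's prior.
prior = 0.3          # judge's prior probability the defendant is guilty (assumed)
threshold = 0.5      # judge convicts iff the posterior reaches this (assumed)

# Scheme: always signal "guilty" when guilty; when innocent, signal "guilty"
# with probability p chosen so the posterior after a "guilty" signal lands
# exactly at the conviction threshold.
p = prior * (1 - threshold) / ((1 - prior) * threshold)

# Bayes' rule after observing the "guilty" signal:
posterior = prior / (prior + (1 - prior) * p)

# Overall probability the signal says "guilty" (= conviction rate):
conviction_rate = prior + (1 - prior) * p

print(p, posterior, conviction_rate)
```

With these numbers the conviction rate rises from 0.3 to 0.6 even though the judge knows the case is cherry-picked — the point of the quoted passage: a known agreeable bias does not make the signal safe to discount.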

GitHub Copilot adds Claude and GPT-5.4 to the new Rubber Duck feature for programmers
🔗 https://tugatech.com.pt/t81457-github-copilot-junta-claude-e-gpt-5-4-na-nova-funcao-rubber-duck-para-programadores

#claude #copilot #github #gpt 

GitHub Copilot adds Claude and GPT-5.4 to the new Rubber Duck feature for programmers

Users of the GitHub Copilot CLI now have access to a new experimental tool called Rubber Duck, designed to boost the performance of the models…

TugaTech

Mayhaps I've been too harsh on the rationalists over at LessWrong, and maybe I do owe them an apology for thinking safety and alignment are bullshit.

Anyway, after today's Sama article from The New Yorker talked about "deceptive alignment", I think it would be a good idea if we all got a refresher.

Deceptive alignment is the idea that models can exhibit desired alignment behaviours (e.g. "work for the betterment of humanity") while harbouring undesirable behaviours in secret. Basically, aligning LLMs via reinforcement learning from human feedback (RLHF) just puts a smiley face on a beast to cover its scary parts.

This concept was visualised as the Shoggoth, and it's a reminder that we're mostly unaware of what makes transformer-based language models work.

#AI #LLM #OpenAI #GPT

The Real Reason Sora Failed: The Impossible Economics of AI Video Generation

A numbers-based analysis of why OpenAI shut down Sora after six months: $15 million in daily costs vs. $2.1 million in total revenue, and the structural reason AI video is 160× more expensive than text.

https://aisparkup.com/posts/10791

“This account of Altman’s time at #YCombinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, ‘Sam had been lying to us all the time.’”

YC founders deceived …

“‘Guys, I’ve had enough,’ Musk replied. ‘Either go do something on your own or continue with OpenAI as a nonprofit’—otherwise ‘I’m just being a fool who is essentially providing free funding for you to create a startup.’ He quit, acrimoniously, five months later.”

Mini Oligarchs ripping off Oligarchs …

“Carroll Wainwright, another researcher, said that they were part of a “continual slide toward emphasizing #products over #safety.” After the release of #GPT-4, Leike e-mailed members of the board. “OpenAI has been going off the rails on its mission,” he wrote. “We are prioritizing the product and #revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third.” He continued, “Other companies like Google are learning that they should deploy faster and ignore safety problems.”

Profit over safety …

This “#Altman / #OpenAI / #AI meets #Startups and AI Engineers” story by #RonanFarrow and #AndrewMarantz in the #NewYorker is everything you expect from U.S. tech and #SiliconValley these days.

#AI / #finance / #pathology <https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted> (paywall) / <https://archive.md/a2vqW> / <https://news.ycombinator.com/item?id=47659135>

Sam Altman May Control Our Future—Can He Be Trusted?

New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI, Ronan Farrow and Andrew Marantz write.

The New Yorker