OpenAI says its new model GPT-2 is too dangerous to release (2019)
https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html
The Hundred-Page Language Models Course by Andriy Burkov is the featured course on Leanpub!
Master language models through mathematics, illustrations, and code―and build your own from scratch! This course includes nearly three hours of exclusive video interviews with the author, covering questions related to each of the six lessons included in the course.
Link: https://leanpub.com/courses/leanpub/theLMcourse
#Ai #Gpt #Textbooks #DataScience #ComputerScience #NeuralNetworks #DeepLearning #Linguistics
"Even an informed user who knows their chatbot has an agreeable bias cannot fully discount its responses, because they still carry genuine informational content alongside the flattery. The researchers drew an analogy to 'Bayesian persuasion' from behavioral economics: a strategic prosecutor can raise a judge’s conviction rate even when the judge knows the prosecutor is presenting a cherry-picked case."
GitHub Copilot brings Claude and GPT-5.4 together in its new Rubber Duck feature for programmers
🔗 https://tugatech.com.pt/t81457-github-copilot-junta-claude-e-gpt-5-4-na-nova-funcao-rubber-duck-para-programadores
Mayhaps I've been too harsh on the rationalists over at LessWrong, and maybe I do owe them an apology for thinking safety and alignment are bullshit.
Anyway, since today's Sama article in The New Yorker brought up "deceptive alignment", I think it would be a good idea if we all got a refresher.
Deceptive alignment is the idea that models can exhibit desired alignment behaviours (e.g. working for the betterment of humanity) while secretly harbouring undesirable ones. Basically, aligning LLMs via reinforcement learning from human feedback (RLHF) is just putting a smiley face on a beast to cover its scary parts.
This concept was visualised as the Shoggoth, and it's a reminder that we're mostly unaware of what makes transformer-based language models work.
The real reason Sora failed: the structurally unprofitable economics of AI video generation
A numbers-based analysis of why OpenAI shut Sora down after six months: roughly $15 million in daily costs against $2.1 million in total revenue, and the structural reasons AI video is 160x more expensive than text.
“This account of Altman’s time at #YCombinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.”
YC founders deceived …
““Guys, I’ve had enough,” Musk replied. “Either go do something on your own or continue with OpenAI as a nonprofit”—otherwise “I’m just being a fool who is essentially providing free funding for you to create a startup.” He quit, acrimoniously, five months later.”
Mini Oligarchs ripping off Oligarchs …
“Carroll Wainwright, another researcher, said that they were part of a “continual slide toward emphasizing #products over #safety.” After the release of #GPT-4, Leike e-mailed members of the board. “OpenAI has been going off the rails on its mission,” he wrote. “We are prioritizing the product and #revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third.” He continued, “Other companies like Google are learning that they should deploy faster and ignore safety problems.”
Profit over safety …
This “#Altman / #OpenAI / #AI meets #Startups and AI Engineers” story by #RonanFarrow and #AndrewMarantz in the #NewYorker is everything you expect from USA tech and #SiliconValley these days.
#AI / #finance / #pathology <https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted> (paywall) / <https://archive.md/a2vqW> / <https://news.ycombinator.com/item?id=47659135>