ChatGPT's lead developer and his new project: Safe Superintelligence

Many people know only a few things about Ilya Sutskever: that he is an outstanding scientist and programmer, was born in the USSR, co-founded OpenAI, and was among those who in 2023 ousted the company's manager, Sam Altman. When Altman was brought back, Sutskever resigned of his own accord and founded a new startup, Safe Superintelligence. Sutskever did indeed organize OpenAI together with Musk, Brockman, Altman, and other like-minded people, and he was the company's chief technical genius. As OpenAI's chief scientist, he played a key role in creating ChatGPT and other products. Ilya is only 38, remarkably young for a star of global renown.

https://habr.com/ru/companies/ruvds/articles/892646/

#Илья_Суцкевер #Ilya_Sutskever #OpenAI #10x_engineer #AlexNet #Safe_Superintelligence #ImageNet #неокогнитрон #GPU #GPGPU #CUDA #компьютерное_зрение #LeNet #Nvidia_GTX_580 #DNNResearch #Google_Brain #Алекс_Крижевски #Джеффри_Хинтон #Seq2seq #TensorFlow #AlphaGo #Томаш_Миколов #Word2vec #fewshot_learning #машина_Больцмана #сверхинтеллект #GPT #ChatGPT #ruvds_статьи

#ConvolutionalNeuralNetworks (#CNNs in short) are immensely useful for many #imageProcessing tasks and much more...

Yet you sometimes encounter bits of code with little explanation. Have you ever wondered about the origins of the values used for image normalization on #imagenet?

  • Mean: [0.485, 0.456, 0.406] (for R, G and B channels respectively)
  • Std: [0.229, 0.224, 0.225]
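
A quick sketch of where such numbers come from: the per-channel mean and standard deviation of the training images, with pixels scaled to [0, 1]. The loader below is a hypothetical stand-in, not the original ImageNet pipeline:

```python
import torch

def channel_stats(loader):
    """loader yields batches of shape (N, 3, H, W) with values in [0, 1]."""
    total = torch.zeros(3)
    sq_total = torch.zeros(3)
    count = 0
    for batch in loader:
        total += batch.sum(dim=(0, 2, 3))             # per-channel sum
        sq_total += (batch ** 2).sum(dim=(0, 2, 3))   # per-channel sum of squares
        count += batch.numel() // 3                   # pixels per channel
    mean = total / count
    std = (sq_total / count - mean ** 2).sqrt()       # Var[x] = E[x^2] - E[x]^2
    return mean, std  # on the ImageNet train set, these land near the values above
```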

Strangest to me is the need for three-digit precision. Here, after tracing the origin of these numbers for MNIST and ImageNet, I test whether that precision really matters: guess what, it does not (that much)!
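
In practice the constants are consumed by the standard preprocessing pipeline. A minimal sketch, assuming torchvision, with a crude check of how much rounding the statistics actually shifts the normalized tensor:

```python
import torch
from torchvision import transforms

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

# The usual place these constants appear: the input pipeline.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                             # maps pixels to [0, 1]
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

# Crude precision check: normalize the same tensor with exact vs rounded stats.
x = torch.rand(3, 224, 224)                            # stand-in for a [0, 1] image
exact = transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD)(x)
rounded = transforms.Normalize([0.5, 0.5, 0.5], [0.25, 0.25, 0.25])(x)
print((exact - rounded).abs().max())  # the inputs shift somewhat; the post's
                                      # experiments suggest accuracy barely does
```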

👉 If you're interested in more details, check out https://laurentperrinet.github.io/sciblog/posts/2024-12-09-normalizing-images-in-convolutional-neural-networks.html

Understanding Image Normalization in CNNs

Architectural innovations in deep learning occur at a breakneck pace, yet fragments of legacy code often persist, carrying assumptions and practices whose necessity remains unquestioned. Practitioners…

Scientific logbook
How a stubborn #computerscientist accidentally launched the #deeplearning boom
"You’ve taken this idea way too far," a mentor told Prof. Fei-Fei Li, who was creating a new image #dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories. Then in 2012, a team from Univ of Toronto trained a #neura network on #ImageNet, achieving unprecedented performance in image recognition, dubbed #AlexNet.
https://arstechnica.com/ai/2024/11/how-a-stubborn-computer-scientist-accidentally-launched-the-deep-learning-boom/ #AI
How a stubborn computer scientist accidentally launched the deep learning boom

“You’ve taken this idea way too far,” a mentor told Prof. Fei-Fei Li.

Ars Technica

#AI heroic stories and underpaid labour:

"The project was saved when Li learned about Amazon Mechanical Turk, a crowdsourcing platform Amazon had launched a couple of years earlier. "

#ImageNet… How a stubborn computer scientist accidentally launched the deep learning boom - Ars Technica
https://arstechnica.com/ai/2024/11/how-a-stubborn-computer-scientist-accidentally-launched-the-deep-learning-boom/

🚀 New #AI Research: Simplified Continuous-time Consistency Models (#sCM)

🔬 Key findings:
• #OpenAI's new approach matches leading #diffusion models' quality using only 2 sampling steps
• 1.5B parameter model generates samples in 0.11 seconds on single #GPU
• Achieves ~50x wall-clock speedup compared to traditional methods
• Uses less than 10% of typical sampling compute while maintaining quality

🎯 Technical highlights:
• Simplifies theoretical formulation of continuous-time consistency models
• Successfully scaled to 1.5B parameters on #ImageNet at 512×512 resolution
• Demonstrates consistent performance scaling with teacher diffusion models
• Enables real-time generation potential for images, audio, and video

📄 Learn more: https://openai.com/index/simplifying-stabilizing-and-scaling-continuous-time-consistency-models/
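
For intuition, here is what two-step sampling from a consistency model looks like in the abstract. This is an illustrative sketch only: `f`, the sigma values, and all names are hypothetical stand-ins, not OpenAI's sCM code:

```python
import torch

def two_step_sample(f, shape, sigma_max=80.0, sigma_mid=0.8):
    """f(x, sigma) -> denoised sample: a trained consistency model (assumed given)."""
    x = torch.randn(shape) * sigma_max        # start from pure noise
    x0 = f(x, sigma_max)                      # step 1: map noise straight to a sample
    x = x0 + torch.randn(shape) * sigma_mid   # re-noise to an intermediate level
    return f(x, sigma_mid)                    # step 2: one more pass to sharpen details
```

Two forward passes in total, versus the tens or hundreds of network evaluations a conventional diffusion sampler needs, which is where the reported ~50x wall-clock speedup comes from.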

Fei-Fei Li: the godmother of AI keeps asking questions about AGI

During Credo AI's Responsible AI Leadership Summit in San Francisco, Fei-Fei Li, a prominent figure in the AI community, gave her perspective on the development…

Tech Nieuws

"#AI is “promising” nothing. It is #people who are promising – or not promising. AI is a piece of software. It is made by people, deployed by people and #governed by people... in terms of urgency, I’m more concerned about ameliorating the risks that are here and now [than by the risks of the techbro SkyNet singularity]."

— Fei-Fei Li, creator of #ImageNet, whose memoir "The Worlds I See" is out now.

https://www.theguardian.com/technology/2023/nov/05/ai-pioneer-fei-fei-li-im-more-concerned-about-the-risks-that-are-here-and-now

AI pioneer Fei-Fei Li: ‘I’m more concerned about the risks that are here and now’

The Stanford professor and ‘godmother’ of artificial intelligence on why existential worries are not her priority, and her work to ensure the technology improves the human condition

The Guardian
@lowd I remember when most ML applications were variations on #MNIST. And #ImageNet, but I only had enough compute at the time to play around with MNIST. But yeah, even then "Recommendation Engines" were starting to be the first thing anyone mentioned, because they were low-hanging fruit: something of immediately obvious commercial value, with terrific training data and an easy deployment task.
Re-reading 'On the genealogy of machine learning datasets: A critical history of ImageNet' by @alexhanna. So clear that the LLM debacle goes back to the start of the DL boom: its data fetish, flat universalism, social illiteracy & contempt for workers. https://journals.sagepub.com/doi/full/10.1177/20539517211035955
#AI #datasets #Imagenet #resistingAI