möchtegern (@heft_ig)

A pointer tweet noting that the "Neural Networks: Zero to Hero" study guide (lecture map) maps directly onto Karpathy's lectures/videos. The document (or page) lays out a learning path for newcomers and developers, covering micrograd (a teaching autograd implementation), a microgpt.py example, and a mention of Claude Opus 4.6.

https://x.com/heft_ig/status/2021798285992763430

#micrograd #microgpt #claudeopus #karpathy #neuralnetworks


@karpathy "Neural Networks: Zero to Hero" Every section maps directly to one or more of Karpathy's lectures/videos. Here's the breakdown (Claude Opus 4.6) https://t.co/GyoLTrXsim 1. **micrograd** (https://t.co/7LFTYUqZWw) -- to understand the `Value` class and `backward()` (the engine that
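The breakdown's first stop is micrograd's `Value` class and its `backward()` method. A minimal sketch of such a scalar autograd engine, in the spirit of micrograd but not its verbatim source and covering only `+` and `*`, looks like:

```python
class Value:
    """A scalar that remembers the operation that produced it, so that
    backward() can apply the chain rule through the whole expression graph."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how to push this node's grad to its children
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then run the chain rule output-to-input
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# d(a*b + a)/da = b + 1 = 4, d(a*b + a)/db = a = 2
a, b = Value(2.0), Value(3.0)
out = a * b + a
out.backward()
print(a.grad, b.grad)  # 4.0 2.0
```

The real micrograd additionally implements `**`, `exp`, `log`, relu, and so on with the same pattern: each op closes over its inputs and records its local derivative.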


Andrej Karpathy (@karpathy)

A technical tweet describing an implementation/teaching approach: the full LLM architecture and loss function are decomposed down to the most atomic mathematical operations (+, *, **, log, exp), a tiny scalar-valued autograd engine (micrograd) computes the gradients, and the Adam optimizer handles updates.

https://x.com/karpathy/status/2021695367507529825

#autograd #micrograd #llm #optimization


The way it works is that the full LLM architecture and loss function is stripped entirely to the most atomic individual mathematical operations that make it up (+, *, **, log, exp), and then a tiny scalar-valued autograd engine (micrograd) calculates gradients. Adam for optim.
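The tweet's recipe, gradients from a scalar autograd engine plus "Adam for optim", can be sketched over plain Python floats. The quadratic toy objective below is illustrative, not from the tweet:

```python
import math

def adam_step(params, grads, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update over plain Python floats (scalar parameters), in place."""
    for i, g in enumerate(grads):
        m[i] = b1 * m[i] + (1 - b1) * g       # running mean of gradients
        v[i] = b2 * v[i] + (1 - b2) * g * g   # running mean of squared gradients
        m_hat = m[i] / (1 - b1 ** t)          # bias-correct the warm-up phase
        v_hat = v[i] / (1 - b2 ** t)
        params[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)

# toy objective: minimize f(w) = (w - 5)^2, whose gradient is 2*(w - 5)
w, m, v = [0.0], [0.0], [0.0]
for t in range(1, 2001):
    grads = [2.0 * (w[0] - 5.0)]
    adam_step(w, grads, m, v, t)
print(w[0])  # converges toward the minimum at 5.0
```

In the real setup the gradient list would come from the autograd engine's `backward()` pass rather than a hand-written derivative.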


Micrograd is very simple, only fully connected layers. So the first step is finding out whether it can even learn digits from the MNIST dataset.

Then I hope to at least be able to overfit, proving the essence works. After that comes the challenge of making it work for every icon in the app.

Presumably I'll have to generate a huge set of icon images to train on…
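One hedged sketch of how such a training set could be bootstrapped: rasterize simple glyphs into small binary grids and flatten them, since fully connected layers like micrograd's consume flat vectors. The two shape renderers below are hypothetical stand-ins, not real app icons:

```python
def render_square(n=8):
    """n x n binary grid containing a hollow square 'icon' (1 = ink)."""
    g = [[0] * n for _ in range(n)]
    for i in range(1, n - 1):
        g[1][i] = g[n - 2][i] = 1   # top and bottom edges
        g[i][1] = g[i][n - 2] = 1   # left and right edges
    return g

def render_cross(n=8):
    """n x n binary grid containing a plus-sign 'icon'."""
    g = [[0] * n for _ in range(n)]
    c = n // 2
    for i in range(1, n - 1):
        g[c][i] = g[i][c] = 1       # horizontal and vertical bars
    return g

def flatten(grid):
    # fully connected layers take flat vectors: an 8x8 grid becomes 64 inputs
    return [float(x) for row in grid for x in row]

# a tiny labeled dataset of (input vector, class index) pairs
dataset = [(flatten(render_square()), 0), (flatten(render_cross()), 1)]
print(len(dataset[0][0]))  # 64
```

To get an MLP to generalize across real renderings, the generated set would also need augmentation (shifts, scaling, noise), since fully connected layers have no built-in translation invariance.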

#ux #micrograd #reactNative #MLP #MNIST #TinyUX #Karpathy #ai #NeuralNet