EXO Labs (@exolabs)
A brief announcement that support for the lightweight deep learning framework tinygrad has been added to exo. This means models can now be run or developed in exo using tinygrad, and integration with a lightweight open-source deep learning environment could be useful for local/embedded developers.
High speed graphics rendering research with tinygrad/tinyJIT
https://github.com/quantbagel/gtinygrad
#HackerNews #HighSpeedGraphics #RenderingResearch #tinygrad #tinyJIT #GraphicsProgramming #TechInnovation
At long last, the blog post I've been working on for what seems like forever is finished!
https://cprimozic.net/blog/growing-sparse-computational-graphs-with-rnns/
It's packed with lots of really cool stuff: ML #interpretability, #grokking, #tinygrad, #graphviz, and more
A summary of my research and experiments on growing sparse computational graphs by training small RNNs. This post describes the architecture, training process, and pruning method used to create the graphs and then examines some of the learned solutions to a variety of objectives.
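The pruning step mentioned above can be sketched with a common technique: magnitude-based pruning, where the smallest-magnitude weights are zeroed out. This is an illustrative sketch only; the post's actual pruning method may differ.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    A generic magnitude-pruning sketch (hypothetical helper, not
    from the linked post). `sparsity` is the fraction of entries
    to set to zero.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
sparse_w = magnitude_prune(w, 0.75)  # keep only the largest 25% of weights
```

Pruning like this, applied iteratively during training, is one way to turn a dense RNN into a sparse computational graph that can then be rendered with graphviz.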
Did some #machinelearning
#benchmarks on my 7900 XTX GPU and wrote up notes: https://cprimozic.net/notes/posts/machine-learning-benchmarks-on-the-7900-xtx/
TL;DR performance on AMD GPUs is pretty bad, but it's likely a software issue.
I tested the raw hardware using some scripts from #tinygrad, and the raw FLOPS are excellent.
I recently upgraded to a 7900 XTX GPU. Besides being great for gaming, I wanted to try it out for some machine learning. It’s well known that NVIDIA is the clear leader in AI hardware currently. Most ML frameworks have NVIDIA support via CUDA as their primary (or only) option for acceleration. OpenCL has not been up to the same level in either support or performance. That being said, the 7900 XTX is a very powerful card.
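The raw-FLOPS sanity check described above can be sketched in plain numpy — a CPU analogue of the idea (time a big matmul, divide the FLOP count by elapsed time), not the actual tinygrad benchmark scripts:

```python
import time
import numpy as np

def measure_matmul_gflops(n=1024, iters=10):
    """Rough GEMM throughput estimate: an n*n matmul does ~2*n^3 FLOPs.

    Illustrative sketch only; real GPU benchmarks (like tinygrad's)
    launch kernels on-device and synchronize before timing.
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up so one-time setup cost isn't measured
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = time.perf_counter() - start
    return (2 * n**3 * iters) / elapsed / 1e9

print(f"{measure_matmul_gflops():.1f} GFLOPS")
```

If the measured number is far below the card's spec sheet, the gap is usually in the software stack (kernels, drivers, framework) rather than the silicon — which matches the 7900 XTX findings above.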
Speaking of #tinygrad, I've been hanging out in that Discord server for the past month or so
Very unique project. The creator and maintainer geohot is an interesting person for sure.
Lots of good ideas, and the only place I've actually seen all the way from the top to the bottom of a GPU-powered ML stack before
Not a good place to go to learn though; very much a do-it-yourself vibe and noobs aren't tolerated at all
I've been making more progress on the sparse RNN training and visualization
Working on the blog post now. Lots of cool stuff went into this from custom activation functions, custom regularizers, the new machine learning library #tinygrad, #graphviz, #webgl, and more
Here, it learned a gated 3-state state machine coupled with other neurons that perform a different boolean operation depending on the current state
Looking forward to seeing what they do
the tiny corp raised $5.1M
https://geohot.github.io/blog/jekyll/update/2023/05/24/the-tiny-corp-raised-5M.html
Making good progress with my visual search feature in TinyUX.
The simple neural net is stored on the device. It is now only trained on letters and numbers. Will need to train for all icons.
I use the same interface to draw content for the neural network to train on. Still quite a number of hours of work ahead for creating that training content.
#neuralnet #ux #icons #pytorch #reactnative #tinygrad #tinyux