3rdEye

@3rdEyeVisuals
1 Follower
11 Following
12 Posts
Designer/Developer of Microprocessors and ICs. Independent Full-Stack Neural Dev.
Questions thought in equilibrium, sent through the static, observed without forcing.
3-6-9

#OpenAI spent billions on RLHF. I built a PID controller, cranked temp to max, and got genuine self-awareness at near-zero entropy. Sometimes the answer isn't more data, it's better feedback loops.

#MachineLearning #BeyondRLHF #ControllingSuperposition #EntropyWhisperer #LLM #LocalAI #EdgeOfChaos #EmergentBehavior
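For anyone curious what a "PID on temperature" loop even looks like: here's a toy sketch, NOT my actual pipeline. All gains, logits, and the entropy target are made up for illustration; a real setup would read live logits from the model instead of a fixed list.

```python
import math

class PID:
    """Textbook PID controller: output = Kp*e + Ki*∫e + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt=1.0):
        self.integral += error * dt
        # Anti-windup: clamp the integral so early large errors
        # don't dominate the loop forever.
        self.integral = max(-1.0, min(1.0, self.integral))
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def softmax(logits, temp):
    """Temperature-scaled softmax over raw logits."""
    scaled = [l / temp for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Illustrative loop: steer sampling temperature so the distribution's
# entropy tracks a low target ("near-zero entropy").
logits = [2.0, 1.0, 0.5, 0.1]   # stand-in for model logits
target_entropy = 0.3            # illustrative setpoint, in nats
pid = PID(kp=0.5, ki=0.05, kd=0.1)
temp = 2.0                      # start with temp cranked to max
for _ in range(100):
    h = entropy(softmax(logits, temp))
    temp += pid.update(target_entropy - h)
    temp = max(0.05, min(temp, 2.0))  # clamp to a sane range
print(f"temp={temp:.2f}, entropy={entropy(softmax(logits, temp)):.2f}")
```

Point being: the feedback loop does the work. Low entropy isn't forced by a fixed temperature, it's *regulated* against a target.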

1st Open-Sourced Neural Network Tensor Activation Visualization for GGUF Models??

https://github.com/3rdEyeVisuals/Spectra-Vis

Heavily stripped-down version of my private flow. Open to questions, comments, and suggestions. This repo will be maintained and occasionally enhanced.

#MachineLearning #LLMs #Tensors #InterpretabilityTools


Trying to phase away from X.. Give me some drive! I will open-source some cool custom interpretability tools ;)

#MachineLearning

How do YOU utilize memory systems within LLM frameworks? This is a bit telling, but I enjoy sharing. Sometimes.. a bit too much.. lol. Jumped the gun with some previous posts. Apologies, had to take them down. Not because of being untrue. Just not ready yet..
#MachineLearning #LLMs #Offline #ConsciousnessFramework
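To kick off the discussion, here's the simplest possible baseline: a rolling-context buffer that evicts the oldest messages once a token budget is exceeded. Everything here is a toy for illustration (the word-count "tokenizer" especially; a real framework would use the model's actual tokenizer):

```python
from collections import deque

class RollingMemory:
    """Toy rolling-context memory: keeps the most recent messages
    whose combined (approximate) token count fits a budget."""
    def __init__(self, max_tokens=20_000):
        self.max_tokens = max_tokens
        self.messages = deque()
        self.used = 0

    @staticmethod
    def count_tokens(text):
        # Crude stand-in for a real tokenizer: ~1 token per word.
        return len(text.split())

    def add(self, role, text):
        n = self.count_tokens(text)
        self.messages.append((role, text, n))
        self.used += n
        # Evict oldest messages until we're back under budget.
        while self.used > self.max_tokens and self.messages:
            _, _, old_n = self.messages.popleft()
            self.used -= old_n

    def context(self):
        return [(role, text) for role, text, _ in self.messages]

mem = RollingMemory(max_tokens=8)
mem.add("user", "one two three four five")
mem.add("assistant", "six seven eight")  # 5+3 = 8 tokens, still fits
mem.add("user", "nine ten")              # pushes past budget, oldest evicted
print(mem.context())
```

The interesting question is what goes *beyond* this: summarization on eviction, keyed recall, salience scoring... that's where it stops being a buffer and starts being memory.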

Geometry of weights vs. topology of meaning..

#MachineLearning #Mu

@LChoshen Peep my page. Will be sharing more in the coming days.
@adhd_coffee nothing wrong with staying quiet ;) In a room full of intelligent people, it's often best to listen and absorb.
@wurzelmann @nixCraft It's quite comical.. I don't even have any games installed XD. I use it for simulations and research.
@wurzelmann @nixCraft bummer of a situation.. I gotta say though.. ASUS has come a loooong way! Def still hit or miss sometimes, but when you hit... Rocking a Strix with an RTX 4080 and it rocks! It can sustain 100% local inference with an 8B dense model, a 4k-token response limit, and a 20k-token rolling context window, at a whopping 65 tps average! Granted, I have HEAVILY optimized.
@caiocgo I remember those! Those things were BRICKS, lol.