
Wow. One of the most relevant blog posts I've read about AI coding in a long time: If you thought the speed of writing code was your problem, you have bigger problems.

https://andrewmurphy.io/blog/if-you-thought-the-speed-of-writing-code-was-your-problem-you-have-bigger-problems

Rings quite a few bells here.

#engineering #aicoding

If you thought the speed of writing code was your problem - you have bigger problems | Debugging Leadership

AI coding tools are optimising the wrong thing and nobody wants to hear it. Writing code was already fast. The bottleneck is everything else: unclear requirements, review queues, terrified deploy cultures, and an org chart that needs six meetings to decide what colour the button should be.

@aeva agreed

@aeva ZFS RAID and backups.
pick your rabbithole level:
* external USB hard drive
* mini PC running TrueNAS with ZFS, snapshots, encryption
* Proxmox cluster and Ceph

https://wiki.futo.org/index.php/Introduction_to_a_Self_Managed_Life:_a_13_hour_%26_28_minute_presentation_by_FUTO_software

@lritter @demofox yeah just the latest fad in scamware.
i'm constantly eating from the garbage can.
and that garbage can's name is ideology.
@lritter @demofox
in my case, the function was continuous, it's just that the DNN cost *way* too much to evaluate compared to a few good LUTs.

@lritter @demofox Anyway, once you start running out of endpoints, the DNN simply can't represent any more nuance and any reduction in error in one spot becomes an increase in error elsewhere.

And you have to evaluate *every* endpoint all the time, always.

Textures let you sample just the endpoints you want.

And you can f'ing control and reason about them!
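The sampling point above can be sketched in a few lines. This is a hypothetical Python stand-in for a linearly filtered 1D texture (the made-up `LUT` values and `sample_lut` name are mine, not from the thread): one lookup only ever touches the two endpoints that bracket the sample, whereas the MLP has to evaluate every unit every time.

```python
# Hypothetical 1D LUT standing in for a texture with linear filtering.
LUT = [0.0, 0.1, 0.4, 0.9, 1.6]  # made-up endpoint values

def sample_lut(u):
    # u in [0, 1]. Scale into texel space, then lerp between the
    # two bracketing endpoints -- exactly what hardware filtering does.
    f = u * (len(LUT) - 1)
    i = min(int(f), len(LUT) - 2)  # clamp so i+1 stays in range
    t = f - i
    return LUT[i] + (LUT[i + 1] - LUT[i]) * t
```

Cost is constant regardless of how many endpoints the table holds, and each entry can be inspected and tweaked directly.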

@lritter @demofox just trying to evaluate a really complex function that is modeled after some measured physical phenomenon.
@lritter @demofox and as you try to optimize it to not suck ALU- and GPR-wise, you will see that shrinking it reduces the number of endpoints it's interpolating between.

@lritter @demofox
imagine you trained a set of fully connected layers with ReLU activation to approximate some reference function that's expensive to compute.

then printed out the expressions the network evaluates as HLSL and stuck them in a shader, with all the weights as immediates known at compile time for maximum optimization.

if you stare at it long enough, you'll start to see that it's interpolation: a lasagna of unlerps and lerps.
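A minimal sketch of that observation, in Python rather than HLSL (the toy weights below are made up, baked in as constants the way immediates would be): a one-input ReLU MLP is piecewise linear, so between its breakpoints (the "endpoints") it is exactly a lerp of the values at those breakpoints.

```python
# Hypothetical toy network: 1 input -> 3 ReLU units -> 1 output,
# with made-up weights fixed at "compile time".
W1 = [2.0, -1.5, 0.5]   # hidden weights
B1 = [-1.0, 0.6, 0.2]   # hidden biases
W2 = [1.0, 2.0, -3.0]   # output weights
B2 = 0.1

def mlp(x):
    # Sum of ReLU hinges -> a piecewise-linear function of x.
    return B2 + sum(w2 * max(0.0, w1 * x + b1)
                    for w1, b1, w2 in zip(W1, B1, W2))

def lerp(a, b, t):
    return a + (b - a) * t

# Breakpoints: where each ReLU's pre-activation crosses zero.
knots = sorted(-b / w for w, b in zip(W1, B1))

# Between two adjacent knots, the network IS a lerp of its knot values.
x0, x1 = knots[0], knots[1]
for t in (0.25, 0.5, 0.9):
    x = lerp(x0, x1, t)
    assert abs(mlp(x) - lerp(mlp(x0), mlp(x1), t)) < 1e-9
```

Every evaluation pays for all three units whether their hinge is active or not; the breakpoints are implicit in the weights rather than sitting in a table you can edit.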