Why act like this is an intractable problem? Several of the models succeeded 100% of the time. That is the problem “going somewhere.” There’s clearly a difference in the ability of SOTA models to handle these problems compared to others.

The way you guys are working is not about speed. It’s procrastination. The work needs to get done; you can either do it now or when the bug reports and change requests start coming in. There’s no speed to be gained by procrastinating; often it’s the opposite.

If it were me, I’d focus on producing better code despite the pressure. You know you’ve got coworkers spending time watching YouTube instead of turning their work in or picking up the next ticket. There’s your time to ask Claude to refine and refactor the code before you commit it. Just don’t be the slow guy and you’ll be fine.

Just refactor as you go. You don’t have to over-engineer things. KISS and YAGNI are valuable engineering approaches. But don’t fool yourself into thinking that turning your work in an hour or two earlier is going to make a big difference in how the higher-ups see you.

Where this really starts to pay off:

  • Your name comes up less often when assigning bug reports since you don’t own the feature that is bugged. People notice this.

  • Less time spent fixing bugs means more time making new features. Means you own a larger part of the codebase. People also notice this.

  • When a change request comes in and you go “Oh yeah, that’s easy. I already considered that and it’s like a 1 line config/code change.” You look like a fucking wizard when this happens.

  • This has always been my approach. Even in places with little to no quality standards. Hell, I think it works even better in places with no quality standards because it makes you stand out more.

    P.S. The best time to look for a new job is while you already have one, because there are no real stakes if you fail.

    Think of the universe not as objects flying apart, but as the fabric of space itself stretching. The more distance there is between two objects, the more ‘new space’ is generated every second.

    Light travels 9.461 × 10^15 meters in a year. At a certain distance (the Hubble limit), more than 9.461 × 10^15 meters of new space is created between us and an object each year. That means light sent from those stars actually ends up farther from us than where it started. That’s what “moving away from us faster than light” means.
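    The arithmetic above can be sketched in a few lines. This assumes a round value of 70 km/s/Mpc for the Hubble constant (the real measured value is uncertain and this is just an illustration):

```python
# Rough back-of-envelope for the Hubble limit described above.
LIGHT_YEAR_M = 9.461e15   # meters light travels in one year
C = 2.998e8               # speed of light, m/s
MPC_M = 3.086e22          # meters in one megaparsec
H0 = 70e3 / MPC_M         # assumed Hubble constant, converted to 1/s

# Distance at which new space is created between us and an object
# faster than light can cross it:
hubble_distance_m = C / H0
hubble_distance_ly = hubble_distance_m / LIGHT_YEAR_M
print(f"Hubble distance ≈ {hubble_distance_ly:.2e} light-years")
```

    With these numbers it comes out to roughly 14 billion light-years; anything farther recedes faster than light, in exactly the “stretching space” sense above.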

    Have you heard of Smash Up? It’s a little older but the exact same deck construction concept as Compile. Just with area control type battling.

    Apparently, Nexus is newer, uses similar deck construction, and is more highly rated on BGG, but I haven’t played it, so I don’t have a personal recommendation on it.

    This isn’t even a QA-level thing. If you write any tests at all, which is basic software engineering practice, even if you had AI write the tests for you, the error should be very, very obvious. I mean, I guess we could go down the road of “well, what if the engineer doesn’t read the tests?”, but at that point the article is less about insidious AI and more about bad engineers. So then just blame the bad engineers.
    It just emphasizes the importance of tests to me. The example should fail very obviously when you give it even the most basic test data.
    Nope, thanks 🤦

    If you take data, and effectively do extremely lossy compression on it, there is still a way for that data to theoretically be recovered.

    This is extremely wrong and your entire argument rests on this single sentence’s accuracy so I’m going to focus on it.

    It’s very, very easy to do a lossy compression on some data and wind up with something unrecognizable. Actual lossy compression algorithms are a tight balancing act of trying to get rid of just the right amount of just the right pieces of data so that the result is still satisfactory.

    LLMs are designed with no such restriction, and any single entry in a large data set is both theoretically and mathematically unrecoverable. The only way these large models reproduce anything is heavy replication in the data set, such that, essentially, enough of the “compressed” data makes it through. There’s a reason why, whenever you read about this, the examples are very culturally significant.
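    A toy example of why “extremely lossy” means genuinely unrecoverable (this is not any real codec, just crude quantization chosen to make the pigeonhole argument concrete):

```python
# Extremely lossy "compression": keep only which coarse bucket each
# value falls in. Many distinct inputs map to the same output, so no
# decompressor, however clever, can recover the original.
def quantize(values, step=10):
    return [v // step for v in values]

a = [3, 14, 27, 98]
b = [9, 11, 20, 91]   # a completely different input
print(quantize(a))     # [0, 1, 2, 9]
print(quantize(b))     # [0, 1, 2, 9] -- identical output
# Two distinct inputs collide into one compressed form; the
# information distinguishing them is simply gone.
```

    The same pigeonhole logic applies at LLM scale: the training set is vastly larger than the weights, so most individual entries cannot survive, and only heavily replicated material does.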

    I find it best to get the agent into a loop where it can self-verify. Give it a clear set of constraints and requirements, give it the context it needs to understand the space, give it a way to verify that it’s completed its task successfully, and let it go off. Agents may stumble around a bit, but as long as you’ve made the task manageable, they’ll self-correct and get there.
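    The loop described above can be sketched like this. `run_agent` and `verify` are hypothetical stand-ins for your actual agent call and your task-specific check (tests, a linter, schema validation, whatever fits the task):

```python
# Minimal self-verification loop: run the agent, check its output,
# feed failures back as context, and repeat until it passes.
def solve_with_verification(task, run_agent, verify, max_attempts=5):
    feedback = None
    for _ in range(max_attempts):
        result = run_agent(task, feedback)  # constraints + context live in `task`
        ok, feedback = verify(result)       # a concrete, checkable success criterion
        if ok:
            return result                   # the agent self-corrected its way here
    raise RuntimeError("Agent did not converge within the attempt budget")

# Toy usage: the "agent" guesses a number; the verifier says higher/lower.
target = 7
def toy_agent(task, feedback):
    toy_agent.guess += 1 if feedback == "too low" else 0
    return toy_agent.guess
toy_agent.guess = 5
def toy_verify(result):
    if result == target:
        return True, None
    return False, "too low" if result < target else "too high"

print(solve_with_verification("guess the number", toy_agent, toy_verify))  # 7
```

    The key design choice is that `verify` is mechanical and unambiguous; the agent stumbles, gets concrete feedback, and converges, rather than you judging each attempt by hand.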