datsci_est_2015

Data Sciencing since 2015
This account is a replica from Hacker News. Its author can't see your replies.
Our experiments aren’t free. We use cloud infrastructure, and a single experiment costs on the order of tens of dollars, so massively parallelized “spaghetti-at-the-wall” experimentation is costly before we even talk about LLMs.
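To make that scaling concrete, here is a back-of-envelope sketch. The $30-per-run figure and the function name are hypothetical, chosen only to match the “tens of dollars” order of magnitude:

```python
# Back-of-envelope cost of parallel "spaghetti-at-the-wall" experimentation.
# All figures are hypothetical illustrations, not real billing data.
COST_PER_EXPERIMENT_USD = 30.0  # "tens of dollars" per cloud run

def campaign_cost(n_experiments: int,
                  cost_per_run: float = COST_PER_EXPERIMENT_USD) -> float:
    """Total spend for launching n independent experiments in parallel.

    Cost grows linearly with the number of hypotheses you let an
    agent try, before any LLM inference costs are added on top.
    """
    return n_experiments * cost_per_run

# Letting an agent try 100 of a chatbot's suggestions:
print(campaign_cost(100))  # 3000.0 (USD), before LLM costs
```

The point is just that the bill is linear in the number of tries, which is why "let the agent attempt everything" gets expensive fast.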

I often use LLMs to explore prior art and maybe find some alternative ways of thinking of problems. About 90% of what it tells me is useless or inapplicable to my domain due to a technicality it could not have known, but the other 10% is nice and has helped me learn some great new things.

I can’t imagine letting an agent try everything the LLM chatbot recommended ($$$). Its recommendations often include poorly maintained or niche libraries that have quite a lot of content written about them but, I can only imagine, see very limited use in real production environments.

On the other hand, we have domain expert “consultants” in our leadership’s ears making equally absurd recommendations that we constantly have to disprove. Maybe an agent can occupy those consultants and let us do our work in peace.

Height as a man is also a huge bonus, at least in the cultures to which I’ve been exposed. I can think of examples of men who are not conventionally attractive but are in the top quintile of height and receive special attention in dating and in leadership opportunities.
There’s no good faith to be inferred from a pithy comment that denies real suffering experienced by real people. If OP wanted to be interpreted in good faith, they should have written a comment with more substance.
Yeah, which is also why I tried not to* speak prescriptively, unlike some other comments in this thread…

> > Most of what's planned falls down within the first few hours of implementation.

> Not my experience at all. We know what computers are capable of.

You must not work in a field where uncertainty is baked in, like Data Science. We call them “hypotheses”. As an example, my team recently held a week-long workshop where we committed to bodies of work on timelines, and three of our four workstreams blew up just a few days afterward because our initial hypotheses were false (i.e., “best case, X is true and we can simply implement Y; whoops, X is false, on to the next idea”).

Cool, code review continues to be one of the biggest bottlenecks in our org, with or without agentic AI pumping out 1k LOC per hour.

I skimmed over it, and didn’t find any discussion of:

- Pull requests
- Merge requests
- Code review

I feel like I’m taking crazy pills. Are SWEs supposed to move away from code review, one of the core activities of the profession? Code review is as fundamental to SWE as double-entry bookkeeping is to accounting.

Yes, we know that functional code can be generated at incredible speed. Yes, we know that apps and whatnot can be bootstrapped from nothing by “agentic coding”.

We still need to read this code, right? How can I deliver code to my company without the security and reliability guarantees that, at their core, come from knowing what I’m delivering line by line?