Gregory Todd Williams

63 Followers
145 Following
114 Posts
Creator of SPARQL tools and query engines. CS Ph.D. Santa Monica native. Formerly with Getty, Hulu. Currently statistics-based optimization in AWS Neptune.
Website: http://kasei.us/
Github: http://github.com/kasei
Flickr: https://www.flickr.com/photos/kasei/
I’m trying to imagine obscuring the fact that my code sucks by explaining to people how much RAM my computer has, and what color terminal I used to write the code. Seems equally irrelevant to me. But feels like I’m in the minority on this.
I find it curious that AI people always seem very eager to explain how many different agents they have running, or how many different models were used together to produce their results. I assume this is meant to lend the work extra legitimacy? But the output is wrong/buggy all the same.
@collin I think generally yes. I’ve seen some coworkers do amazing POC work with LLMs. But at least in large orgs, these people are also facing the obvious reality that coding is only a small part of the job. Humans are still a necessary part of design, planning, code review, operations, etc.
Tried to help a co-worker reproduce an issue today via a screen share. Told him that seeing the behavior required a one-line addition to the code. Then watched in horror as, instead of adding that one line, he asked his coding AI to make the desired change, and it spun around for 5 minutes trying to make dozens of changes in dozens of different files. The call ended predictably, without my co-worker having reproduced anything, and with him saying he'd get back to me.
@zenhob I can't see any way in which this could possibly go wrong.

Me, at work: I can't figure out an issue with this API. Any ideas?

Co-worker: You should use AI for this. Our annual reviews are going to start being based on how often you're using our internal AI tools.

Co-worker: Here's the answer to your question: [100% hallucinated slop nonsense.]

Me: 😑

Feeling very down about my “data driven” employer spending meeting time every week on reviewing AI “wins”, but NEVER considering problems caused by AI. Massive blind spot.
@nichtich would love to hear it at some point.
Losing my mind at work recently trying to help a colleague debug an issue. Watching his screen share, I saw a list of numbers in some logging. I asked him to sort the numbers so we could see if they formed a sequence without any gaps, and watched in horror and disbelief as he switched to an LLM chat window and started typing a prompt asking it to sort the lines.
@mluisbrown I admit that on my initial viewing of the promo video, I didn't see that the voice-over was credited to a specific person. Seeing that, I'll admit that it isn't as nonsensical as I had thought. I still think it's a bit strange, but it does make more sense in the context of the attribution.