the thing about all these people who love LLMs is that they will shit in your mouth, ask you to consider the taste of the shit they shat in your mouth, spend your hard-won moments of human life to let the shit they shat in your mouth sink into your mouth, and then they will take whatever you say and somehow turn that into more shit in your mouth and pretend like you weren't just eating the shit they just shat in your mouth already
i just want to remind all the AI maximalists out there that you guys are fucking assholes. like all of you are huge dickheads. like not as an immutable trait, just that using AI seems to make you a huge fucking asshole, and you all act like dickheads to everyone and don't seem to even notice. like you are riding on a magic carpet powered by a dog's shit and everyone around you is like "wow why is that guy riding on a magic dogshit carpet around everyone and getting dogshit all over them" and you are just like "ha ha ha i am flying fuck you all"
@jonny Everyone who mentions using AI without being under duress gets immediately blocked.
I want to see nothing these people have to say, and I don't want them to know about anything I am working on.

@yora @jonny

I used the internal genAI tool because we are all asked to use it (asked, for now; not yet told). I wanted to see if the internal hype was real.

I was manually translating Splunk queries to SentinelOne queries.

I asked the chatbot to do one.

The field names were wrong, like it was just making them up, and it left the logical operators all in upper case, which is invalid in S1. It was immediately evident the queries were non-functional.

I added prompts on how to fix the problems.

After I fixed them all, I had a functional query.

The only useful thing it added was replacing the Splunk wildcard searches with S1's contains, which I would have figured out eventually as part of optimization if I had done it myself.
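
For the curious, roughly the kind of translation involved (the field names and exact S1 syntax here are illustrative guesses on my part, not checked against the docs):

Splunk:  process_name="*powershell*" AND user!="SYSTEM"
S1:      src.process.name contains "powershell" and src.process.user != "SYSTEM"

Same logic either way; the wildcards become contains and the operators have to be lowercased.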

The amount of effort it took just to get syntactically correct queries would have been better spent doing the reading so I could learn the language myself. Unfortunately, S1's documentation on its query language is either shit or I just can't find it.

Eventually, I want to automate some of this using our automation platform and the relevant APIs so I thought, naïvely, the training data might have enough sample data to be helpful.

It poisoned my thinking. Instead of learning how to do the thing, my mind keeps slipping back into wondering how to optimize having a fake conversation with a plagiarism regurgitation engine.

They are the death of creativity.