Okay, okay. I need to devote some time to catching up on genAI capabilities in a professional sense.

Security Researchers & SecOps - what's your favorite use case so far?

Also, what's a lesson you learned the hard way?

***Also - please save the snark. I'm tired, and this is a genuine, if hesitant, ask.

#infosec

One of my professional networks - filled with actual practitioners - is substantially less negative on AI lately. There's spend and craft involved, so it's not a turnkey solution, but a lot of folks in this trust group are finding substantial productivity benefits, rather than hype.
@neurovagrant Yep, same. Just a pity I had to quit my job because those specific people didn't have a clue.
@neurovagrant they've moved on to 'harness engineering' from prompt engineering. i can show you what I've built if you'd like
@Viss would definitely appreciate any experience you feel like imparting
@neurovagrant just let me know when you have some free time today, if you're game. i think my entire day is earmarked to deal with it all

@Viss the day has escaped me :( but let's find time soon, please.

i am a grumpy fucker this afternoon though, and should not expose you to that. lol

@neurovagrant im down to throw shade if you wanna vent too!

@Viss if i start venting

i may never stop

=)

@neurovagrant
ᵒⁿᵉ ᵒᶠ ᵘˢ
ᵒⁿᵉ ᵒᶠ ᵘˢ
ᵒⁿᵉ ᵒᶠ ᵘˢ
ᵒⁿᵉ ᵒᶠ ᵘˢ
ᵒⁿᵉ ᵒᶠ ᵘˢ
ᵒⁿᵉ ᵒᶠ ᵘˢ
ᵒⁿᵉ ᵒᶠ ᵘˢ
ᵒⁿᵉ ᵒᶠ ᵘˢ
@Viss @neurovagrant Wtf is “harness engineering”?
@schrotthaufen @neurovagrant so you know what prompt engineering is, right?

@schrotthaufen @neurovagrant so harness engineering is tuning 'the thing you use to talk to the llm' instead of 'wordsmithing your prompt'. because the harness itself does a lot of the heavy lifting.

think of stuff like claude code, crush, opencode, openclaw, nemoclaw - these things all talk to the llm on your behalf and handle a bunch of the heavy lifting, so "your harness" can be way more effective than "your prompt"

@Viss @neurovagrant Ah that makes sense. Thank you for the explanation.

@neurovagrant bluntly, these people are delusional.

I have seen LLMs stacked against well-established ML systems. Because the decision was made (incorrectly) that LLMs would be 'cheaper.'

They were quite literally multiples more expensive. And the results went from 97% accuracy to <70%. Getting it anywhere near the same level of accuracy would multiply costs again.

@neurovagrant and it wouldn't surprise me if most of them have no existing ML, or their ML was ineffective nonsense.
So they're quite literally incapable of seeing that they're wasting multiples for below acceptable results. Or they've gone all-in on the psychosis thinking LLMs are good at regexps (they are absolutely not.)
@rootwyrm ***Also - please save the snark. I'm tired, and this is a genuine, if hesitant, ask.