Looks like Corporate #infosec has made its choice.

#RSAC is filled with talks embracing AI and making it "secure".

And they invited the Trump regime to spread its disinformation - fully sanctioned and encouraged by the conference leadership (and by the attendees who laughed at the regime's jokes and lies and raised no challenge during the talk).

With the industry's ostracization of #ChrisKrebs and its full embrace of Kristi Noem as a speaker, this was the moment infosec made its bed.

Y'all lie in it now.

@tinker I would definitely not have pegged infosec as an industry rife with the kind of gullible idiot AI is marketed at. In fact, I would have assumed the exact opposite.

@StarkRG @tinker

Our CISO is super-hyped on AI as a tool to eventually handle Tier 1 SOC, write reports, and summarize data.
* some assembly required

@jrdepriest @tinker I don't claim to be an expert (in either infosec or AI), but I can certainly see that there are some situations where using AI can be a good choice, and every single one of them requires a real person double-checking every result. If you don't want to pay people to hand-check everything an AI algorithm returns, then AI algorithms aren't the solution you're looking for. They're good for producing "that feels like it could be right" matches in enormous datasets.

@StarkRG @tinker
The current workflow with live people requires someone else to review and approve the work. He's hoping we can replace much of it with LLMs so that work gets to Tier 2 for review faster.
We aren't to the stage where we can even test it yet. Still dealing with demos and vendor hype.
We have a private ChatGPT instance we are strongly encouraged to use. I know he uses it to write or rewrite emails and summaries tailored to specific audiences (technical vs. executive).
I do not see the need for that at my level.

I do not like genAI. My manager doesn't like it.
It doesn't understand anything.

But our adversaries are using it to accelerate their attacks. We can't hire enough people to be fast and agile enough to keep up. We have so much noise to parse, and automated tools may be able to filter it for us.
The volume of attacks has increased, but their quality has not improved. Still, that sheer volume makes finding the needle in the haystack difficult.

@jrdepriest Setting aside the ethics of the training data sources and the completely unsustainable energy requirements, it's fine as long as there is always a person between the output and implementation, double-checking to make sure it's acceptable. That goes regardless of the purpose (code, pictures, legal filings, etc.). It's a tool, yes, but an extremely fallible one, and it should not be allowed to become a plausible-deniability generator.

(and then, also, let's not set aside the ethics)

@StarkRG

My boss is in a band and I am a writer. We both hate LLMs for being trained in stolen works and destroying ingenuity and creativity.
I further hate them for the environmental impact.
Obviously, I mean the tech bros and the massive corporations behind the current bubble of LLMs when I say "them".

But we still have to do our jobs.

As they say, there is no ethical consumption under capitalism.

@jrdepriest @StarkRG @tinker But is an LLM the right tool for that job? ML seems like a better hammer for that nail.
@KatS @StarkRG @tinker ML would definitely be better, but we are being asked to evaluate this, so we are evaluating it. If it can keep me from having to write no-code / code to parse random JSON into Markdown or HTML for reports, I'll take it.
It's been hard to hire people, ironically because they are using LLMs to game their résumés and video interviews.
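
For what it's worth, the JSON-to-Markdown glue I mean is roughly this shape - a minimal Python sketch, with a hypothetical findings.json and made-up field names (name, severity, description), not any real schema:

```python
import json

# Minimal sketch: turn a hypothetical findings.json into a Markdown table.
# The field names (name, severity, description) are assumptions for illustration.

def findings_to_markdown(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        findings = json.load(f)  # expects a list of objects

    lines = ["| Name | Severity | Description |", "| --- | --- | --- |"]
    for item in findings:
        lines.append(
            f"| {item.get('name', '')} "
            f"| {item.get('severity', '')} "
            f"| {item.get('description', '')} |"
        )
    return "\n".join(lines)

if __name__ == "__main__":
    print(findings_to_markdown("findings.json"))
```

Trivial stuff, but it's exactly the kind of one-off report plumbing we keep rewriting by hand.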

@StarkRG

Unfortunately, in the corporate fragments of this industry, you will get people ticketing you because your containers don't pass automated tooling that checks for bad OS components, when in fact they are distroless containers with a single binary and a config file. So it definitely varies.

@StarkRG

I wish I had your level of optimism...

@tinker @StarkRG If you need further proof that dreck can be prevalent in the industry, see Wazuh. It is an utter dumpster fire with insanely out-of-date rules and no meaningful way to keep them up to date.
IME the infosec industry has been reactionary from the beginning.

CC: @tinker@infosec.exchange
@StarkRG from where I sit, at least 90% of infosec is a gloopy mixture of snake oil and cargo cultism. I'm entirely unsurprised that magical LLM thinking has taken strong root in such fertile soil.