Looks like Corporate #infosec has made its choice.

#RSAC is filled with talks embracing AI and making it "secure".

And they invited the Trump regime to spread its disinformation, fully sanctioned and encouraged by the conference leadership (and by the conference attendees who laughed at the regime's jokes and lies and raised no challenge or objection during the talk).

With the industry's ostracization of #ChrisKrebs and its full embrace of Kristi Noem as a speaker, this was the moment infosec made its bed.

Y'all lie in it now.

@tinker I would definitely not have pegged infosec as an industry rife with the kind of gullible idiot AI is marketed at. In fact, I would have assumed the exact opposite.

@StarkRG @tinker

Our CISO is super-hyped on AI as a tool to eventually handle Tier 1 SOC, write reports, and summarize data.
* some assembly required

@jrdepriest @tinker I don't claim to be an expert (in either infosec or AI); however, I can certainly see that there are some situations where using AI can be a good choice, and every single one of them requires a real person double-checking every result. If you don't want to pay people to hand-check everything returned by an AI algorithm, then AI algorithms aren't the solution you're looking for. It's good for producing "that feels like it could be right" matches in enormous datasets.
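
To make that concrete, here's a toy sketch of the pattern I mean, with difflib standing in for whatever model actually produces the fuzzy matches (every name and threshold here is invented for illustration, not from any real tool): the machine proposes, a human approves, and nothing downstream ever acts on an unapproved result.

```python
# Toy sketch, not a real pipeline: difflib stands in for the model,
# and every name and threshold here is made up for illustration.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Candidate:
    query: str
    match: str
    score: float                  # the "feels like it could be right" number
    approved: bool | None = None  # stays None until a human rules on it

def propose_matches(query: str, haystack: list[str], threshold: float = 0.6) -> list[Candidate]:
    """Machine step: cheap to run, allowed to be wrong."""
    hits = [
        Candidate(query, item, SequenceMatcher(None, query.lower(), item.lower()).ratio())
        for item in haystack
    ]
    return sorted((c for c in hits if c.score >= threshold),
                  key=lambda c: c.score, reverse=True)

def human_review(candidates: list[Candidate]) -> list[Candidate]:
    """Human step: nothing downstream may act on an unapproved candidate."""
    for c in candidates:
        answer = input(f"Accept '{c.match}' for '{c.query}' ({c.score:.2f})? [y/n] ")
        c.approved = answer.strip().lower() == "y"
    return [c for c in candidates if c.approved]

if __name__ == "__main__":
    hosts = ["mail-gw-01.example.net", "mailgw01.exmple.net", "db-prod-03.example.net"]
    confirmed = human_review(propose_matches("mailgw01.example.net", hosts))
    print("Acting only on:", [c.match for c in confirmed])
```

The point of the shape is that the approval gate isn't optional: the model's output is a queue for a person, never an action.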

@StarkRG @tinker
The current workflow with live people requires someone else to review and approve the work. He's hoping we can replace much of it with LLMs so it will be faster to get to level 2 for review.
We aren't to the stage where we can even test it yet. Still dealing with demos and vendor hype.
We have a private ChatGPT instance we are strongly encouraged to use. I know he uses it to write or rewrite emails and summaries tailored to specific audiences (technical vs. executive).
I do not see the need for that at my level.
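
For what it's worth, the audience-tailoring part is basically just prompt templating. A hypothetical sketch (complete() is a placeholder for whatever chat API a private instance exposes; the style guides are illustrative, not anyone's actual prompts):

```python
# Hypothetical sketch of the "same facts, two audiences" rewrite.
# complete() is a placeholder for whatever chat API the private
# instance exposes; the style guides are invented for illustration.
from typing import Callable

AUDIENCES = {
    "technical": "Keep IOCs, CVE IDs, and remediation steps. Terse bullet points.",
    "executive": "One short paragraph: business impact, risk, decision needed. No jargon.",
}

def rewrite_for(report: str, audience: str, complete: Callable[[str], str]) -> str:
    prompt = (
        f"Rewrite the summary below for a {audience} audience.\n"
        f"Style: {AUDIENCES[audience]}\n\n{report}"
    )
    draft = complete(prompt)
    return draft  # still a draft: a human reads it before it goes anywhere
```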

I do not like genAI. My manager doesn't like it.
It doesn't understand anything.

But our adversaries are using it to accelerate their attacks. We can't hire enough people to be fast and agile enough to keep up. We have so much noise to parse, and automated tools may be able to filter and parse it for us.
The volume of attacks has increased, but their quality has not improved. Still, the sheer volume makes finding that needle difficult.
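
What I'm hoping the tooling buys us is mostly this kind of boring volume filtering, sketched below (the alert fields and the threshold are invented; this isn't any real SIEM's API): collapse the flood of repeats so a human only triages what's rare.

```python
# Minimal sketch of volume-based alert filtering: collapse repeat alerts
# so humans only triage the rare ones. Field names and the threshold are
# made up for illustration, not taken from any real SIEM.
from collections import Counter

def fingerprint(alert: dict) -> tuple:
    # Two alerts with the same rule and source count as "the same noise".
    return (alert["rule"], alert["src_ip"])

def triage_queue(alerts: list[dict], noisy_after: int = 50) -> list[dict]:
    counts = Counter(fingerprint(a) for a in alerts)
    seen_noisy = set()
    queue = []
    for a in alerts:
        fp = fingerprint(a)
        if counts[fp] >= noisy_after:
            # Keep one representative of each noisy fingerprint.
            if fp not in seen_noisy:
                seen_noisy.add(fp)
                queue.append({**a, "note": f"{counts[fp]} duplicates collapsed"})
        else:
            # Rare alerts (the needles) always reach a human.
            queue.append(a)
    return queue

if __name__ == "__main__":
    alerts = [{"rule": "ssh-bruteforce", "src_ip": "203.0.113.9"}] * 80
    alerts.append({"rule": "new-admin-account", "src_ip": "10.0.0.5"})
    for a in triage_queue(alerts):
        print(a)
```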

@jrdepriest Setting aside the ethics of training data sources and the completely unsustainable energy requirements, it's fine as long as there is always a person between the output and the implementation, double-checking to make sure it's acceptable. That goes regardless of the purpose (code, pictures, legal filings, etc.). It's a tool, yes, but an extremely fallible one, and it should not be allowed to become a plausible-deniability generator.

(and then, also, let's not set aside the ethics)

@StarkRG

My boss is in a band and I am a writer. We both hate LLMs for being trained on stolen works and for destroying ingenuity and creativity.
I further hate them for the environmental impact.
Obviously, I mean the tech bros and the massive corporations behind the current bubble of LLMs when I say "them".

But we still have to do our jobs.

As they say, there is no ethical consumption under capitalism.