"With AI, now any idiot can write malware!"

As a security researcher, I can assure you that idiots have been writing malware for quite some time.

@evacide
Does this mean you're not that concerned about GPT in this space?

@nottrobin @evacide

Exploit writers often ignore function return codes, buffer sizes, etc. — the very same programming errors they exploit...

@eiaccb
Oh yeah I'm sure there's a shit ton of sloppy malware.

I guess my question is a little tangential — my concern would be that LLMs will let a much larger set of people generate specifically targeted malware, multiplying the volume many times over.

The type of malware won't be new, because all LLMs can do is copy existing patterns, but existing attacks will be applied in new places at an unprecedented speed.

But I'm no expert. Does that concern you?
@evacide

@evacide @nottrobin @eiaccb I ran a joint research project last year with the University of Manchester to investigate the possibility of weaponisation using LLMs. The conclusion was that there are existing tools which are much easier for script kiddies and experienced actors to use than it is to fix the code that comes out of an LLM. LLM developers are also now filtering and using adversarial techniques to lower the risk of workable malware code generation.
@damianlewis
Thanks that's really interesting information! I don't suppose you have a link to those findings?
@evacide @eiaccb
@eiaccb @nottrobin @evacide Unfortunately not. A published paper isn't available yet, and the research carries some IP restrictions that prevent me from sharing. Happy to share the conclusions if you DM me.
@nottrobin @evacide GPT itself has measures that prevent it from complying with malicious requests or writing anything of malicious use

@Gallitagen

Well... We've seen many, many successful workarounds of those sorts of safeguards. So I'm not overly reassured by that in itself.

@evacide

@nottrobin @evacide Yes, but for idiots it's relatively, though not 100%, foolproof