Reposting this article by @ericgeller, about concerns with AI usage undermining trust in threat intel, with alt text in screenshot: https://www.cybersecuritydive.com/news/ai-isacs-threat-intelligence-information-sharing-trust/815499/

The ease of breaking trust with AI is the really key thing here. There's enough noise and FUD in threat intelligence already.

#infosec #isac #ThreatIntelligence #cybersecurity

@cxiao Threat intel produced by AI should be labeled. The source is part of my understanding of what I'm looking at. "Analyst on our team" vs. "Claude doing his best" matters in terms of what to expect, what to doubt, and what red flags to watch out for.

@bh11235 Exactly. I find that as soon as I know AI was involved in drafting something, I have to approach my review completely differently (and it usually takes way longer)...

With your colleagues, you know and trust that they are experts; with AI, that trust is broken.

@cxiao To be fair, I find a lot to appreciate about AI output too. It has a technically exacting spirit. It likes to, as the saying goes, 'hug the query' and do a lot of homework that a human wouldn't always bother with. It likes to examine 7 hypotheses before getting attached to one. But what it lacks, among other things, is the desperate motivation to win: the fundamental sense that it's going to be held accountable for what it wrote, with all resulting implications. It's an often overconfident savant that will, if proven wrong, get to say "you're absolutely right, that was my bad," and this matters in both a fundamental and a practical sense.