Nathaniel Gleicher

579 Followers
62 Following
45 Posts
Head of security policy at Meta. Countering adversarial threats. Previously Illumio, NSC, DOJ. He/him. Dreaming of fall in the green mountain state and winter in the Sierras.

Right-wing scumbags are apparently review-bombing this week's episode of The Last of Us because it features -- gasp -- gay people in love.

Let's be clear. This is the best show on television right now, period. And that was the best episode of any show I've seen in years.

If you aren't watching The Last of Us, you should be.

In general, when evaluating the impact of AI generators like this, I think it's important to separate out use cases where we imagine the AI tool might work with *no* human intervention (from a threat perspective: responding to users who answer a phishing email) from cases where its output would be used & refined by a human (generating draft topics for an IO campaign). The former seems more problematic for defenders, from the perspective of sheer scale, than the latter.

Interesting research on the security implications of ChatGPT from StanfordIO and Georgetown/CSET: https://cset.georgetown.edu/article/forecasting-potential-misuses-of-language-models-for-disinformation-campaigns-and-how-to-reduce-risk/

There's lots of churn about what sorts of threats LLMs might generate, so it's good to see a more nuanced take. In general, I think LLMs are most likely to be misused by bad actors for scaled threats like fraud. For more targeted threats like influence operations, content generation hasn't been a primary barrier for threat actors (although it might help actors unfamiliar with the community they're targeting seem more authentic).

Also, if GPTZero continues to be effective, use of LLMs like ChatGPT could enable better *detection* of bad actors -- much like campaigns that rely on GAN-generated photos get caught b/c of the artifacts in those photos.

All of this will continue to evolve as the technology evolves, and definitely an important place for defenders to watch in 2023!

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk - Center for Security and Emerging Technology

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

I really think before we dive into the weeds they should explore holistic solutions. Like public-private partnerships and information sharing!

https://techhub.social/@Techmeme/109640007609782944

Techmeme (@Techmeme@techhub.social)

Sources: the White House is set to unveil a strategy in the coming weeks that calls for cybersecurity regulation impacting all critical US infrastructure (Washington Post) https://www.washingtonpost.com/national-security/2023/01/05/biden-cyber-strategy-hacking/ http://www.techmeme.com/230105/p36#a230105p36

Watching the round and round and round house votes. How long until someone nominates Incitatus?

https://en.wikipedia.org/wiki/Incitatus

Incitatus - Wikipedia

This is great work by the WA team to help ensure people around the world can communicate freely and privately through volunteer-run proxy servers. https://twitter.com/wcathcart/status/1611031956044795904?s=46&t=ldjj15FqC77Q_QS7C8k80Q

More on how to set up/run a proxy server here: https://github.com/WhatsApp/proxy
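For reference, the repo's Docker-based setup boils down to something like the commands below. This is a sketch from memory: the image name (`facebook/whatsapp_proxy`) and the exact port list are assumptions that may have changed, so check the README at the repo above for current instructions.

```shell
# Pull the prebuilt proxy image (image name per the WhatsApp/proxy
# README at time of writing; may change -- this is an assumption).
docker pull facebook/whatsapp_proxy:latest

# Run the proxy, publishing the ports WhatsApp clients connect through
# (80/443 for web-style traffic, 5222 for chat; the remaining ports are
# from the README's example and may differ in current versions).
docker run -it \
  -p 80:80 -p 443:443 -p 5222:5222 \
  -p 8080:8080 -p 8443:8443 -p 8222:8222 -p 8199:8199 \
  facebook/whatsapp_proxy:latest
```

Users then point their WhatsApp client at the server's public address via the app's proxy setting (under Settings, in the Storage and Data section, in recent versions).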

Will Cathcart on Twitter

“Happy New Year! While many of us celebrated by texting our loved ones on WA, there are millions of people in Iran and elsewhere who continue to be denied the right to communicate freely and privately. So today we’re making it easier for anyone to connect to WA using a proxy.”

Presenting the Champlain Ice Scarf: #climate #knitting. The white rows are years when Lake Champlain (#vermont) froze over. The blue rows are years when it did not. Data are from 1800 through 2021. Guess which end is 1820.

Genius.

It’s good to be reminded that there is such good in people, even in the most dangerous conditions: https://t.co/Yc4FdwDwV0
How volunteers risk their lives to rescue abandoned animals amid war

When Ukrainian soldiers were entering the village of Yampil in Donetsk Oblast after five months of Russian occupation, they discovered an abandoned zoo on the outskirts. Dozens of animal corpses, either killed by Russian troops or dead of starvation, were…

The Kyiv Independent

Makes perfect sense — congrats @marklemley and congrats to Lex Lumina.

Also, this is a perfect description of Mark. Ebullient FTW!

Mark “is a supernova of IP law, and one of the most accomplished lawyer-scholars in the United States,” Lex Lumina managing partner Rhett Millsaps II said in a written statement. “He’s also one of the kindest and most ebullient people I know. We are deeply honored that he’s chosen to join our firm.”

https://www.law.com/therecorder/2022/12/29/a-law-supernova-has-landed-but-he-wont-be-part-of-the-durie-tangri-merger-as-expected/

A Law 'Supernova' Has Landed—But He Won't Be Part of the Durie Tangri Merger as Expected | The Recorder

The Stanford law professor says Durie Tangri's merger partner, Morrison & Foerster, is a great law firm. But it would bring too many conflicts of interest for an academic who does much of his work in the public square.