Interesting research on the security implications of ChatGPT from the Stanford Internet Observatory and Georgetown's CSET: https://cset.georgetown.edu/article/forecasting-potential-misuses-of-language-models-for-disinformation-campaigns-and-how-to-reduce-risk/
There's lots of churn about what sorts of threats LLMs might enable, so it's good to see a more nuanced take. In general, I think LLMs are most likely to be misused by bad actors for scaled threats like fraud. For more targeted threats like influence operations, content generation hasn't been a primary barrier for threat actors (although it might help actors who don't know the community they want to target come across as more authentic).
Also, if GPTZero continues to be effective, use of LLMs like ChatGPT could enable better *detection* of bad actors -- much like campaigns relying on GAN-generated photos get caught because of the artifacts in those photos.
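For context on how that kind of detection works: GPTZero has publicly described scoring text with signals like perplexity and "burstiness" (how much sentence structure varies). Here's a minimal pure-Python sketch of the burstiness idea -- the `burstiness` function and the example texts are purely illustrative, not GPTZero's actual method:

```python
import re
from statistics import pstdev

def burstiness(text: str) -> float:
    """Toy 'burstiness' score: the population std-dev of sentence
    lengths in words. Human writing tends to vary sentence length
    more than LLM output, so lower scores hint at machine text.
    Illustrative only -- real detectors use model-based signals too."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if not lengths:
        return 0.0
    return pstdev(lengths)

# A flat, uniform passage scores lower than one with varied sentences.
uniform = "The cat sat down. The dog sat down. The bird sat down."
varied = "Stop. The quick brown fox vaulted the fence before anyone noticed. Why?"
print(burstiness(uniform) < burstiness(varied))  # True
```

In practice a detector would combine a signal like this with a language model's perplexity on the text, which is where most of the discriminative power comes from.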
All of this will continue to evolve as the technology evolves, and it's definitely an important space for defenders to watch in 2023!