Can AI be hacked into going rogue?
Can we really trust large language models like ChatGPT?

In our latest Neuro Sec Ops episode, we expose the wild world of LLM jailbreaks, dive into AI guardrails, and unpack the battle between security and usability.

🔊 Buckle up — this is AI safety like you’ve never heard it.

🎧 Listen now: https://open.spotify.com/episode/6jw1aKK8qE6bnnLiKj8Lz2?si=1X8Kav6yQS6aaOwgGO7c9w

#AIsecurity #LLMjailbreak #CyberThreats #Guardrails #AIsafety #GPT4 #MachineLearning #CyberPodcast

Guardrails for AI: Can We Stop LLMs from Going Rogue?