@shanselman and Mark Russinovich learn responsible #AI. If there is one recorded #MSIgnite session you want to see, it is this one. Learn about limitations and threats through live demos like #jailbraking, #promptinjection, #reasoning, #hallucinating, #kindness, and how to prepare for them. https://ignite.microsoft.com/en-US/sessions/BRK329
Scott and Mark learn responsible AI
Join Mark Russinovich & Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: Hallucination, indirect prompt injection and jailbreaks (or direct prompt injection). We'll explore each of these three key risks in depth, examining their origins, potential impacts, and strategies for mitigation and how to work towards harnessing the immense potential of LLMs while responsibly managing their inherent risks.