Are you passionate about AI security and building resilient applications? This session delves into securing language models against prompt injection threats. Explore AWS deployment strategies, NLP input validation, prompt tracing, threat modeling, and effective countermeasures. Engage with practical insights and learn in a fun way.
Prompt injection is a new and increasingly serious threat to modern language models, jeopardizing the trustworthiness and security of AI-powered applications. But how can you protect applications built on these models? How can you properly design applications using LLMs? You’ll learn what prompt injection is, why it cannot be ignored, and how AWS helps create layers to protect language models. We’ll discuss proven strategies for developing applications resistant to prompt injection and input manipulation. Focus on NLP input validation. Additional countermeasures for security in LLM. Tracing of the prompts. Threat modeling of LLM. The presentation will be in the form of a storytelling or gamification experience.