Protecting Sensitive Data in the Age of Generative AI
Alex Soto - IBM
Generative AI and Retrieval-Augmented Generation (RAG) are transforming how organizations access and generate knowledge. But as we connect powerful LLMs to private data, one critical question stands out: how do we protect our sensitive information?
This session explores data security in GenAI systems. You’ll learn how sensitive information can unintentionally leak through vector embeddings, what new algorithms and architectures can safeguard that data, and how to build RAG pipelines that remain both high-performing and compliant.
What you’ll take away:
- How GenAI systems handle (and sometimes mishandle) sensitive data.
- Why embeddings are both powerful and risky, and how to protect them.
- Proven design patterns for secure and privacy-conscious RAG implementations.
- A look at emerging tools and frameworks that make secure AI development easier.
This talk will help you confidently bridge innovation and data protection while making GenAI work safely with your most valuable data.
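As a small taste of the design patterns covered, here is a minimal sketch of one common safeguard: redacting obvious PII from document chunks before they are embedded and stored in a vector index, so the raw values never reach the embedding model. The patterns and placeholder labels below are illustrative assumptions, not a complete PII taxonomy or the specific approach presented in the talk.

```python
import re

# Illustrative PII patterns: emails and US-SSN-shaped numbers.
# A production system would use a broader, audited detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders so the
    embedding encodes structure, not the sensitive values."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

chunk = "Contact Jane at jane.doe@example.com, SSN 123-45-6789."
print(redact(chunk))  # Contact Jane at [EMAIL], SSN [SSN].
```

Redaction before embedding is only one layer; the session pairs it with access controls and architectural choices for the rest of the pipeline.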
