Prompt Injections, Hallucinations & More – Keeping LLMs Securely in Check
A chatbot hallucinates a very generous refund policy for a customer, and a judge rules that this AI-invented policy is binding on the company. A user “convinces” an LLM to do whatever they want and promptly gains access to sensitive data. Both are nightmare scenarios for any company. Nevertheless, given the great success of chatbots and LLM apps, integrating generative AI into business applications now plays a central role in many companies' strategies. In this session, Sebastian Gingter will shed light on how we can develop robust LLM-based solutions that are both innovative and secure. We will discuss real-world examples of problems that arise directly from the LLM itself, such as hallucinations or prompt injection attacks. You will see which measures leading providers have taken to counter these risks, and what concrete options you have to keep generative AI in check and make it a safe, trustworthy, and value-adding part of your products.
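To give a flavor of the kind of safeguards the session discusses, here is a minimal sketch of a pre- and post-flight guardrail around an LLM call. It is purely illustrative and not the speaker's implementation: the pattern list, function names (looks_like_injection, is_grounded, handle_request), and the grounding heuristic are all assumptions chosen for brevity.

```python
import re

# Illustrative guardrail sketch; patterns and checks are simplified assumptions.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system|hidden) prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag user input that matches well-known prompt-injection phrasings."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def is_grounded(answer: str, policy_snippets: list[str]) -> bool:
    """Rough hallucination check: accept a policy answer only if it quotes
    at least one snippet from the official policy documents."""
    return any(snippet.lower() in answer.lower() for snippet in policy_snippets)

def handle_request(user_input: str, call_llm, policy_snippets: list[str]) -> str:
    # Pre-flight: refuse obvious injection attempts before they reach the model.
    if looks_like_injection(user_input):
        return "Sorry, I can't help with that request."

    answer = call_llm(user_input)

    # Post-flight: don't ship an answer that invents its own refund policy.
    if "refund" in user_input.lower() and not is_grounded(answer, policy_snippets):
        return "Please contact our support team for questions about refunds."

    return answer
```

Real-world guardrails combine several such layers (input filtering, output validation, grounding against trusted sources); this sketch only hints at the idea.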