
Artificial Intelligence is already shaping many areas of society and creating new opportunities. At the same time, it introduces risks that are not always visible. This seminar examines how AI can be used both to attack and to defend, and what it means to work with AI responsibly.
The session explains how models work, how to manage and protect the data they rely on, and why it is essential to supervise their deployment and evolution. Through practical examples and clear recommendations, participants will learn how to prevent and defend against attacks that use AI, and how to set limits that ensure safe and sustainable use.
Learning Objectives
By the end of the seminar, participants will:
- Understand key AI-enabled attack vectors and their implications.
- Apply best practices for secure and responsible AI use.
- Identify risks at the model, data, and deployment levels.
- Recognise the role of cybersecurity as a necessary condition for AI applications.
Target Audience
The seminar is aimed at IT and security professionals, data and AI specialists, compliance officers, and anyone working with AI systems in practice.
Programme (3h)
Part I: Attacks with AI
- AI-driven personalised phishing
- Deepfakes and cloned voices
- Brute-force attacks and OSINT with AI
- Other cases: adaptive malware
- Key recommendations
Part II: Cybersecurity as a necessary condition for AI
- Main risks to consider
- Best practices for secure AI development and operation
- How AI and cybersecurity can work together
Format
The seminar is delivered online. It includes practical examples, actionable recommendations, and a short Q&A. Supporting materials will be shared with registered participants after the session.
Quick Information
- Date and time: Tuesday, 16 September 2025 · 09:00–12:00 (CEST)
- Format: Online
- Duration: 3h
- Registration: Free, required in advance
- Link: Register here