Cybersecurity in AI Systems Training Course
Securing AI systems presents unique challenges that differ from traditional cybersecurity approaches. AI systems are vulnerable to adversarial attacks, data poisoning, and model theft, all of which can significantly impact business operations and data integrity. This course explores key cybersecurity practices for AI systems, covering adversarial machine learning, data security in machine learning pipelines, and compliance requirements for robust AI deployment.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI and cybersecurity professionals who wish to understand and address the security vulnerabilities specific to AI models and systems, particularly in highly regulated fields such as finance, data governance, and consulting.
By the end of this training, participants will be able to:
- Understand the types of adversarial attacks targeting AI systems and methods to defend against them.
- Implement model hardening techniques to secure machine learning pipelines.
- Ensure data security and integrity in machine learning models.
- Navigate regulatory compliance requirements related to AI security.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Course Outline
Introduction to AI Security Challenges
- Understanding security risks unique to AI systems
- Comparing traditional cybersecurity vs. AI cybersecurity
- Overview of attack surfaces in AI models
Adversarial Machine Learning
- Types of adversarial attacks: evasion, poisoning, and extraction (a minimal evasion example follows this section)
- Implementing adversarial defenses and countermeasures
- Case studies on adversarial attacks in different industries
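To make the evasion category concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch; the model, labels, and epsilon value are placeholder assumptions for demonstration, not course materials.

```python
# Minimal FGSM sketch (illustrative): nudge an input along the sign of the
# loss gradient so a trained classifier misclassifies it. The model, labels,
# and epsilon are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximizes the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid range
```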
Model Hardening Techniques
- Introduction to model robustness and hardening
- Techniques for reducing model vulnerability to attacks
- Hands-on with defensive distillation and other hardening methods (see the loss sketch below)
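As a flavor of the hands-on material, here is a sketch of the loss used in defensive distillation, where a student model is trained on a teacher's temperature-softened outputs; the temperature value is an illustrative assumption.

```python
# Defensive-distillation loss sketch (illustrative): the student learns from
# the teacher's temperature-softened probabilities. T=20 is an assumed value.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 20.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # The T**2 factor keeps gradient magnitudes comparable to a hard-label loss.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T
```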
Data Security in Machine Learning
- Securing data pipelines for training and inference
- Preventing data leakage and model inversion attacks
- Best practices for managing sensitive data in AI systems
AI Security Compliance and Regulatory Requirements
- Understanding regulations around AI and data security
- Compliance with GDPR, CCPA, and other data protection laws
- Developing secure and compliant AI models
Monitoring and Maintaining AI System Security
- Implementing continuous monitoring for AI systems
- Logging and auditing for security in machine learning (a minimal audit-log sketch follows this section)
- Responding to AI security incidents and breaches
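To give a flavor of the logging topic above, here is a minimal sketch of an inference audit log; the field names and hashing scheme are illustrative assumptions rather than a standard schema.

```python
# Minimal inference audit-log sketch. Field names and the hashing scheme are
# illustrative assumptions, not a standard.
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def log_prediction(model_version: str, features: dict, prediction) -> None:
    """Record which model produced which prediction, and when."""
    entry = {
        "ts": time.time(),
        "model_version": model_version,
        # Hash raw inputs so the log itself cannot leak sensitive data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    audit_log.info(json.dumps(entry))

log_prediction("fraud-v3", {"amount": 120.5, "country": "CZ"}, "approve")
```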
Future Trends in AI Cybersecurity
- Emerging techniques in securing AI and machine learning
- Opportunities for innovation in AI cybersecurity
- Preparing for future AI security challenges
Summary and Next Steps
Requirements
- Basic knowledge of machine learning and AI concepts
- Familiarity with cybersecurity principles and practices
Audience
- AI and machine learning engineers looking to improve security in AI systems
- Cybersecurity professionals focusing on AI model protection
- Compliance and risk management professionals in data governance and security
Open Training Courses require 5+ participants.
Testimonials (1)
The professional knowledge and the way he presented it to us
Miroslav Nachev - PUBLIC COURSE
Course - Cybersecurity in AI Systems
Related Courses
ISACA Advanced in AI Security Management (AAISM)
21 Hours
The AAISM framework provides advanced guidance for assessing, governing, and managing security risks associated with artificial intelligence systems.
This instructor-led training, available both online and on-site, is designed for advanced professionals seeking to implement robust security controls and governance practices within enterprise AI environments.
Upon completion of this program, participants will be equipped to:
- Assess AI security risks using industry-standard methodologies.
- Implement governance models that support the responsible deployment of AI.
- Align AI security policies with organizational objectives and regulatory requirements.
- Strengthen resilience and accountability across AI-driven operations.
Course Format
- Instructor-led lectures complemented by expert insights.
- Hands-on workshops and assessments.
- Practical exercises based on real-world AI governance scenarios.
Customization Options
- To tailor this training to your organization's AI strategy, please contact us.
AI Governance, Compliance, and Security for Enterprise Leaders
14 Hours
This instructor-led live training in the Czech Republic (online or onsite) is designed for intermediate-level enterprise leaders who wish to learn how to govern and secure AI systems responsibly and in alignment with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
- Understand the legal, ethical, and regulatory risks of using AI across departments.
- Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Establish security, auditing, and oversight policies for AI deployment in the enterprise.
- Develop procurement and usage guidelines for third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 Hours
Artificial Intelligence (AI) creates new layers of operational risk, governance complexities, and cybersecurity vulnerabilities for government bodies and departments.
This guided, live training (available online or on-site) targets IT and risk specialists in the public sector who have limited background in AI and want to learn how to assess, monitor, and secure AI systems within a governmental or regulatory framework.
Upon completing this training, participants will be capable of:
- Understanding essential risk concepts associated with AI systems, such as bias, unpredictability, and model drift.
- Implementing governance and auditing frameworks specific to AI, including NIST AI RMF and ISO/IEC 42001.
- Identifying cybersecurity threats aimed at AI models and data pipelines.
- Developing cross-departmental risk management strategies and aligning policies for AI implementation.
Course Format
- Interactive lectures and discussions featuring public sector use cases.
- Practical exercises on AI governance frameworks and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Customization Options
- To arrange customized training for this course, please reach out to us.
Introduction to AI Trust, Risk, and Security Management (AI TRiSM)
21 Hours
This instructor-led live training in the Czech Republic (online or onsite) is designed for IT professionals at beginner to intermediate levels who seek to understand and implement AI TRiSM within their organizations.
Upon completing this training, participants will be equipped to:
- Comprehend the fundamental concepts and significance of managing AI trust, risk, and security.
- Identify potential risks linked to AI systems and apply mitigation strategies.
- Execute security best practices specific to AI environments.
- Gain insight into regulatory compliance and ethical implications for AI deployment.
- Formulate effective strategies for AI governance and management.
Building Secure and Responsible LLM Applications
14 Hours
This instructor-led live training in the Czech Republic (online or onsite) targets intermediate to advanced AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls such as input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
- Understand the core vulnerabilities of LLM-based systems.
- Apply secure design principles to LLM app architecture.
- Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
EXO Security and Governance: Offline Model Management
14 Hours
This instructor-led, live training in the Czech Republic (online or onsite) is designed for security engineers and compliance officers who wish to harden EXO deployments, control model access, and govern AI workloads running entirely on-premise.
Introduction to AI Security and Risk Management
14 Hours
This instructor-led, live training in the Czech Republic (online or onsite) is aimed at beginner-level IT security, risk, and compliance professionals who wish to understand foundational AI security concepts, threat vectors, and global frameworks such as NIST AI RMF and ISO/IEC 42001.
By the end of this training, participants will be able to:
- Understand the unique security risks introduced by AI systems.
- Identify threat vectors such as adversarial attacks, data poisoning, and model inversion.
- Apply foundational governance models like the NIST AI Risk Management Framework.
- Align AI use with emerging standards, compliance guidelines, and ethical principles.
OWASP GenAI Security
14 Hours
Based on the latest OWASP GenAI Security Project guidance, participants will learn to identify, assess, and mitigate AI-specific threats through hands-on exercises and real-world scenarios.
Privacy-Preserving Machine Learning
14 Hours
This instructor-led, live training in the Czech Republic (online or onsite) is designed for advanced professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
By the end of this training, participants will be able to:
- Understand and compare key privacy-preserving techniques in ML.
- Implement federated learning systems using open-source frameworks.
- Apply differential privacy for safe data sharing and model training (see the sketch after this list).
- Use encryption and secure computation techniques to protect model inputs and outputs.
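As an illustration of the differential-privacy objective above, here is a minimal sketch of the Laplace mechanism for releasing a noisy count; the sensitivity and epsilon values are illustrative assumptions.

```python
# Laplace mechanism sketch (illustrative): release a count with
# epsilon-differential privacy. Sensitivity 1 assumes a counting query;
# epsilon=0.5 is an arbitrary demonstration budget.
import numpy as np

def laplace_count(true_count: int, epsilon: float = 0.5,
                  sensitivity: float = 1.0) -> float:
    """Add Laplace noise scaled to sensitivity/epsilon."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

print(laplace_count(1234))  # a different noisy count on every call
```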
Red Teaming AI Systems: Offensive Security for ML Models
14 Hours
This instructor-led live training in the Czech Republic (online or onsite) is designed for advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 Hours
This instructor-led, live training in the Czech Republic (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
- Identify and assess security risks in edge AI deployments.
- Apply tamper resistance and encrypted inference techniques.
- Harden edge-deployed models and secure data pipelines.
- Implement threat mitigation strategies specific to embedded and constrained systems.
Securing AI Models: Threats, Attacks, and Defenses
14 Hours
This instructor-led, live training in the Czech Republic (online or onsite) is designed for intermediate-level professionals in machine learning and cybersecurity who want to understand and mitigate emerging threats against AI models, using both conceptual frameworks and practical defenses such as robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and categorize AI-specific threats such as adversarial attacks, inversion, and poisoning.
- Utilize tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and evaluate models (a usage sketch follows this list).
- Implement practical defenses, including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies for production environments.
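Since the course names the Adversarial Robustness Toolbox (ART), here is a minimal sketch of simulating an evasion attack with it; the toy model and random inputs are placeholders standing in for a trained classifier and real test data.

```python
# ART evasion-attack sketch (illustrative): wrap a model, generate adversarial
# inputs, and inspect predictions. The toy model and random data stand in for
# a trained classifier and a real test set.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # placeholder inputs
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)  # adversarially perturbed copies of x
print(classifier.predict(x_adv).argmax(axis=1))
```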
Security and Privacy in TinyML Applications
21 Hours
TinyML represents a methodology for deploying machine learning models on low-power, resource-limited devices operating at the network edge.
This instructor-led, live training (available online or onsite) targets advanced professionals seeking to secure TinyML pipelines and integrate privacy-preserving techniques into edge AI applications.
Upon completing this course, participants will be equipped to:
- Recognize security risks specific to on-device TinyML inference.
- Deploy privacy-preserving mechanisms for edge AI implementations.
- Strengthen TinyML models and embedded systems against adversarial threats.
- Apply best practices for secure data handling in constrained environments.
Course Format
- Interactive lectures complemented by expert-led discussions.
- Practical exercises focused on real-world threat scenarios.
- Hands-on implementation utilizing embedded security tools and TinyML technologies.
Customization Options
- Organizations can request a customized version of this training to align with their specific security and compliance requirements.
Safe & Secure Agentic AI: Governance, Identity, and Red-Teaming
21 Hours
This course explores governance, identity management, and adversarial testing for agentic AI systems, with a focus on enterprise-safe deployment patterns and practical red-teaming techniques.
This instructor-led, live training (available online or onsite) is designed for advanced practitioners who want to design, secure, and evaluate agent-based AI systems in production environments.
By the end of this training, participants will be able to:
- Define governance models and policies for safe agentic AI deployments.
- Design non-human identity and authentication flows for agents with least-privilege access.
- Implement access controls, audit trails, and observability tailored to autonomous agents.
- Plan and execute red-team exercises to discover misuses, escalation paths, and data exfiltration risks.
- Mitigate common threats to agentic systems through policy, engineering controls, and monitoring.
Format of the Course
- Interactive lectures and threat-modeling workshops.
- Hands-on labs: identity provisioning, policy enforcement, and adversary simulation.
- Red-team/blue-team exercises and end-of-course assessment.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.