AI#Cyberguard Bootcamp
Everything you need to know about AI security – from the basics, through the details of development, to the secrets of organizing AI and cyber security.
Course Content in a nutshell
Our AI Security Program starts with the AI Security Primer. As always, we cover the basics before going into details. This course consists of three lessons with seven modules (topics) and five quizzes. The first three topics cover general terms and definitions, tools, and basics. The second lesson introduces security for AI and ML – a presentation gives an overview of threats and threat actors targeting modern systems.
The third lesson is special – it addresses the organizational structure of AI/ML projects and organizations. We talk about the required skill levels and use cases, and finish with a discussion of a potential RACI table for AI/ML projects.
This course is all about machine learning. We talk in detail about supervised learning, including the mathematical theory and concepts, with detailed readings and code examples. We proceed with unsupervised learning and reinforcement learning, then discuss deep learning and, finally, the implications of ML for cybersecurity. This is a long and demanding course with one lesson, ten modules, and several quizzes.
This is not a scientific lesson – we approach the different techniques from a practical angle. Still, you need good math skills and some Python knowledge. Take your time with this lesson – it is the foundation for understanding risk analysis of algorithms and for doing security assessments on models.
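To give a flavor of the kind of code example the ML course works through, here is a minimal supervised-learning sketch (the function name and data are illustrative, not taken from the course material): fitting a line y = w·x + b to labeled examples by least squares.

```python
# Minimal supervised-learning sketch (illustrative, not course material):
# fit y = w*x + b to labeled (x, y) pairs by ordinary least squares.

def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Training data generated from y = 2x + 1 (no noise, so the fit is exact).
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_linear(xs, ys)
print(w, b)  # 2.0 1.0
```

The course goes far beyond a toy like this, but the pattern – labeled data in, fitted parameters out – is the core of supervised learning.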
The course starts with a theoretical review of a modern cyber security model. The ZoneLock Model was developed by GNSEC in Singapore and is still under development – but it already helps a lot in calculating security zones and visualizing complex architectures.
The first lesson ends with some insights into AI/ML-related attack vectors and risks, and a general review of cyber security requirements for our models. The second part is all about the different attack vectors against data and models and the corresponding protection mechanisms. These are very in-depth lessons with some interesting, but perhaps challenging, examples.
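To make the idea of an attack on a model concrete, here is a toy evasion-attack sketch (purely illustrative, not an example from the course): a small, targeted perturbation of the input pushes a point across the decision boundary of a simple nearest-centroid classifier, flipping its prediction.

```python
# Toy evasion-attack sketch (illustrative only): a small input
# perturbation flips the decision of a nearest-centroid classifier.

def classify(point, centroid_a, centroid_b):
    """Return 'A' or 'B' depending on which class centroid is closer."""
    da = sum((p - c) ** 2 for p, c in zip(point, centroid_a))
    db = sum((p - c) ** 2 for p, c in zip(point, centroid_b))
    return "A" if da <= db else "B"

centroid_a = (0.0, 0.0)   # centre of class A
centroid_b = (4.0, 0.0)   # centre of class B (boundary lies at x = 2)

x = (1.8, 0.0)                 # clearly classified as A
x_adv = (x[0] + 0.4, x[1])     # small shift toward B crosses the boundary

print(classify(x, centroid_a, centroid_b))      # A
print(classify(x_adv, centroid_a, centroid_b))  # B
```

Real evasion attacks on deep models use gradients instead of a hand-picked shift, but the principle – a minimal perturbation that changes the prediction – is the same, and it motivates the protection mechanisms discussed in the lessons.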
This course bridges AI/ML risks and threats with more traditional cyber security. Building on ENISA's work, we look at model lifecycle definitions, risk definitions, security controls, and more. We try to blend both worlds together rather than simply folding AI/ML security into cyber security – that alone does not work. Even if it helps the more traditional cyber security expert understand the new world, it will not help them truly understand the “thinking” of a model.