Masterclass Adversarial Machine Learning

MAML2021
English

How can you tell the difference between elephants and lions? We humans know for sure when we encounter one in the wild, but machines can nowadays make an excellent guess too. With more than 99.5% accuracy, modern machine learning algorithms have learned to exploit visual features remarkably well. But, alas, that is not the complete picture: machines are easily fooled too. With some clever tricks, any picture of a lion can be manipulated in such a way that humans do not notice any difference, yet the machine learning model sees a completely different animal. Such a manipulated image is called an adversarial example.
What is going on here? Have we built the most powerful machine learning models ever on brittle foundations?
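To make this concrete, below is a minimal sketch of one of the simplest ways such manipulations can be crafted: the fast gradient sign method (FGSM), which returns on Day 2 of this masterclass. It assumes a trained PyTorch classifier; `model`, `image` and `label` are placeholders, and images are assumed to be batched tensors scaled to [0, 1].

    import torch

    def fgsm_attack(model, image, label, epsilon=0.01):
        """Craft an adversarial example with the fast gradient sign method.

        A step of size epsilon along the sign of the loss gradient is often
        enough to change the prediction while staying invisible to humans.
        """
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel by +/- epsilon, then clip back to the valid range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()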


The above is an example of adversarial learning, a subfield of machine learning that studies what happens when we fool a machine learning model, and how we can prevent or exploit this. While the example above highlights a vulnerability, an example of exploitation is found in the class of machine learning models called generative adversarial networks. Ever since their introduction, the field of generative machine learning has taken multiple leaps forward and has paved the way to application domains we couldn't even imagine before. When training generative adversarial networks, we let two machines play a game against each other: one machine gradually tries to become a master painter, while the other machine is the art critic that gets better and better at discerning genuine paintings from counterfeits. By making the painter try to fool the critic over and over again, the painter becomes more skilled and produces ever more realistic paintings.
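The painter-and-critic analogy maps directly onto the standard GAN training loop. Below is a minimal sketch of one training step, assuming generic PyTorch `generator` and `discriminator` networks and their optimizers (all placeholder names):

    import torch

    bce = torch.nn.BCEWithLogitsLoss()

    def gan_step(generator, discriminator, g_opt, d_opt, real, latent_dim=100):
        """One step of the two-player GAN game.

        Assumes the discriminator outputs one logit per sample.
        """
        batch = real.size(0)

        # Critic step: label genuine paintings 1 and counterfeits 0.
        d_opt.zero_grad()
        fake = generator(torch.randn(batch, latent_dim)).detach()
        d_loss = (bce(discriminator(real), torch.ones(batch, 1))
                  + bce(discriminator(fake), torch.zeros(batch, 1)))
        d_loss.backward()
        d_opt.step()

        # Painter step: try to make the critic call fresh counterfeits genuine.
        g_opt.zero_grad()
        fake = generator(torch.randn(batch, latent_dim))
        g_loss = bce(discriminator(fake), torch.ones(batch, 1))
        g_loss.backward()
        g_opt.step()
        return d_loss.item(), g_loss.item()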


In this series of masterclasses, two researchers from Ghent University will take you deep into the field of adversarial learning. On the first morning we will examine generative adversarial networks in full detail, while the second morning will all be about finding and protecting against adversarial examples.
Each class will be wrapped up by a speaker from industry (from ML6 and IBM respectively), showing how theory becomes practice.

The masterclasses will take place on 1 & 3 December 2021 from 9h till 12h30 (via livestream only).

This masterclass is a collaboration between UGain and VAIA.


Day 1: Generative Adversarial Networks


Part 1 (9h – 11h): Cedric De Boom (Ghent University)

In this masterclass, you will learn everything there is to know about generative adversarial networks (GANs) and how they are trained. We will look deep into some of the training issues that arise and how they can be solved using some “black belt ninja tricks”. Finally, we will look at three interesting classes of GANs that have proven their merits: CycleGAN, Wasserstein GAN and StyleGAN; a first taste of the Wasserstein critic is sketched right after the list below.

  1. Introduction to generative models
  2. Introduction to generative adversarial networks (GANs)
  3. “The game of GANs”: generator vs discriminator
  4. How to train GANs
  5. Training challenges: game saturation & mode collapse
  6. Black belt ninja tricks for GANs
  7. Performance evaluation for GANs
  8. CycleGAN
  9. Wasserstein GAN
  10. StyleGAN
  11. Application: adversarial autoencoders
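To give a flavour of item 9, here is a minimal sketch of one critic update in a Wasserstein GAN (the original weight-clipping variant), reusing the placeholder networks from the sketch above; the full story follows in class.

    import torch

    def wgan_critic_step(critic, generator, c_opt, real,
                         latent_dim=100, clip=0.01):
        """One critic update of a Wasserstein GAN (weight-clipping variant).

        The critic widens the score gap between real and generated samples,
        which estimates the Wasserstein distance between the distributions.
        """
        c_opt.zero_grad()
        fake = generator(torch.randn(real.size(0), latent_dim)).detach()
        # Negate because optimizers minimize; note: no sigmoid or BCE here.
        loss = -(critic(real).mean() - critic(fake).mean())
        loss.backward()
        c_opt.step()
        # Enforce the Lipschitz constraint by clipping the critic weights.
        for p in critic.parameters():
            p.data.clamp_(-clip, clip)
        return -loss.item()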


Part 2 (11h30 – 12h30): Lucas Desard (ML6)

Lucas Desard will take you on an exciting trip and show you what can be done with generative adversarial networks in practice today. He will talk about deep fakes and how they can be detected, as well as face swaps, colorisation of historical footage, lip synchronization, and much more.

  1. Deep fakes: creation and detection
  2. Colorisation of historical photographs
  3. Face swaps and face transfers
  4. Artificial lip synchronization
  5. And much more


Day 2: Adversarial Machine Learning


Part 1 (9h – 11h): Jonathan Peck (Ghent University)

In this masterclass you will learn how to fool machine learning models by attacking them with adversarial techniques. You will also learn how to make your models more robust and how to protect them against such attacks. Should we worry, or is this all just a theoretical exercise? Jonathan will tell you everything about it.

  1. Types of attacks on ML models
  2. Adversarial examples, threat model and Kerckhoffs's principle
  3. Attacks: L-BFGS, fast gradient sign, PGD, Carlini-Wagner, transfer attacks... (PGD is sketched after this list)
  4. Real-life adversarial examples
  5. Defenses: denoising, detection, hardening; certified vs uncertified; robust optimization; randomized smoothing
  6. Arms race, Schneier's law, no free lunch
  7. Optimal transport
  8. Robustness vs accuracy; brittle features
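To give a flavour of item 3, here is a minimal sketch of projected gradient descent (PGD), which simply iterates the FGSM step from the introduction and projects the result back into a small L-infinity ball around the original image (placeholder `model`, `image`, `label` as before):

    import torch

    def pgd_attack(model, image, label, epsilon=0.03, alpha=0.007, steps=10):
        """Projected gradient descent: iterated FGSM inside an L-infinity ball."""
        original = image.clone().detach()
        adv = original.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(adv), label)
            loss.backward()
            with torch.no_grad():
                # Gradient-sign step, then project back into the epsilon-ball.
                adv = adv + alpha * adv.grad.sign()
                adv = original + (adv - original).clamp(-epsilon, epsilon)
                adv = adv.clamp(0.0, 1.0)
        return adv.detach()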


Part 2 (11h30 – 12h30): Beat Buesser (IBM Dublin)

Beat Buesser is a researcher at IBM working on adversarial machine learning. He leads the development team of ART, the “Adversarial Robustness 360 Toolbox”, which is the industry standard for implementing and researching adversarial attacks and examples. Beat will give you an overview of the toolbox and provide some tutorials on how to experiment with it yourself.
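As a small preview, here is a sketch of what a first ART experiment could look like, assuming a trained PyTorch MNIST classifier and NumPy test data (`model`, `x_test` and `y_test` are placeholders); the ART documentation remains the authoritative reference:

    import torch
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    # Wrap the trained PyTorch model (placeholder) in an ART estimator.
    classifier = PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        input_shape=(1, 28, 28),
        nb_classes=10,
    )

    # Craft adversarial test images and measure how far accuracy drops.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x_test)
    preds = classifier.predict(x_adv).argmax(axis=1)
    print("adversarial accuracy:", (preds == y_test).mean())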


Fee

1 Day Online: € 35

Subscribe here: https://www.ugain.ugent.be/aml2021.htm