Cyber Security Seminar by Dr. Buse G.A. Tekgul - Title: Real-Time Adversarial Perturbations Against Deep Reinforcement Learning Policies: Attacks and Defenses

14 March 2023, 12:00 till 13:45

14 March 2023, from 12:00 till approx. 12:45 - please join through Zoom:

Join Zoom Meeting
https://tudelft.zoom.us/j/97355999207?pwd=SFJzaFNycE8zSnFHTEhpWnF2WWV2UT09

Meeting ID: 973 5599 9207
Passcode: 056535

Abstract:
Deep reinforcement learning (DRL) is vulnerable to adversarial perturbations. Adversaries can mislead the policies of DRL agents by perturbing the state of the environment observed by the agents. Existing attacks are feasible in principle but face challenges in practice: either they are too slow to fool DRL policies in real time, or they require modifying past observations stored in the agent's memory.
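
To see why per-state attacks struggle in real time, consider a standard gradient-based attack such as FGSM (used here only as background, not as the method from the talk): a fresh perturbation must be recomputed for every new observation, so a full forward and backward pass through the policy network has to finish within a single frame. A minimal PyTorch sketch, where policy_net, obs, and epsilon are illustrative placeholders rather than details from the talk:

import torch
import torch.nn.functional as F

def fgsm_perturb_observation(policy_net, obs, epsilon=0.01):
    # Per-state attack: a fresh perturbation is computed for every observation,
    # so the forward and backward pass must finish within one frame (~16.7 ms at 60 Hz).
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy_net(obs)                    # policy's action preferences for this state
    target = logits.argmax(dim=-1)              # action the unperturbed policy would pick
    loss = F.cross_entropy(logits, target)      # raise the loss on that action
    loss.backward()
    adv_obs = obs + epsilon * obs.grad.sign()   # FGSM step, bounded in the l_inf norm
    return adv_obs.clamp(0.0, 1.0).detach()     # keep observation values in a valid range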

In this talk, we first present the challenges of calculating and injecting adversarial perturbations into the state of the environment that arise from the innate characteristics of reinforcement learning, and we discuss the unrealistic adversarial capabilities assumed in previous work. In the second part of the talk, we show that Universal Adversarial Perturbations (UAPs), which are independent of the individual state to which they are applied, can fool DRL policies effectively and in real time. We introduce three attack variants leveraging UAPs. Through an extensive evaluation using the Arcade Learning Environment, we show that our attacks are effective: they fully degrade the performance of three different DRL agents (up to 100%, even when the maximum perturbation is as small as 0.01). They are also fast, running faster than the frame rate of image capture (60 Hz) and considerably faster than prior attacks (1.8 ms), and efficient, incurring an online computational cost of only 0.027 ms. Using two tasks involving robotic movement, we confirm that our results generalize to more complex DRL tasks. Furthermore, we demonstrate that the effectiveness of known defenses diminishes against universal perturbations, and we introduce an effective technique that detects universal perturbations against DRL policies.
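
To illustrate why a universal perturbation can be injected in real time, the sketch below assumes a perturbation tensor uap that has already been computed offline; during an episode it is simply added to every observation, so the only online cost per step is an addition and a clamp. The Gym-style env, policy_net, and the observation value range are assumptions for illustration and do not reproduce the talk's actual attack variants or timing figures.

import torch

def run_episode_with_uap(env, policy_net, uap, low=0.0, high=1.0):
    # uap: one fixed, state-independent perturbation computed offline and reused at every step.
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # Inject the universal perturbation: no gradient computation happens online.
        perturbed = (torch.as_tensor(obs, dtype=torch.float32) + uap).clamp(low, high)
        with torch.no_grad():
            action = policy_net(perturbed.unsqueeze(0)).argmax(dim=-1).item()
        obs, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward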

Short bio:

Buse G. A. Tekgul is a security researcher at Nokia Bell Labs in Finland. Before joining Nokia Bell Labs, she was a doctoral student in the Secure Systems Group at Aalto University, where she obtained her PhD in August 2022. Her research interests include the security and privacy of machine learning, particularly adversarial examples and robust machine learning, model extraction attacks and defenses, dataset privacy, and ownership verification in both centralized and distributed machine learning. For more information about her research, visit her homepage: buseatlitekgul.github.io.