FWF — Austrian Science Fund
Game Over Eva(sion): Securing Deep Learning with Game Theory


Pascal Schöttle (ORCID: 0000-0001-8710-9188)
  • Grant DOI 10.55776/I4057
  • Funding program Principal Investigator Projects International
  • Status ended
  • Start January 1, 2019
  • End June 30, 2023
  • Funding amount € 354,137
  • Project website

Disciplines

Computer Sciences (100%)

Keywords

    Deep Learning, Security, Game Theory, Evasion Attacks

Abstract

The project Game Over Eva(sion): Securing Deep Learning with Game Theory aims to protect deep learning classifiers against targeted attacks. Deep learning classifiers are popular not only in scientific research but are also increasingly adopted in daily life: self-driving cars, smartphones, and digital personal assistants all use these kinds of algorithms. Unfortunately, recent research has shown that almost all deep learning classifiers are vulnerable to so-called evasion attacks, in which an attacker slightly modifies a benign object and thereby achieves a misclassification with very high probability. The robustness of deep learning classifiers to these attacks is an open problem. Using game theory, we will model the competition between an attacker who can launch an evasion attack and a defender who wants to train a deep learning classifier that is robust against such attacks. In a first step, we will analyze which concepts from related research areas, such as adversarial classification, can be translated to the domain of secure deep learning. We will then develop a game-theoretic model that captures all relevant aspects of, and dependencies between, the attacker's and the defender's strategies. In the course of the project, we expect the first theoretically well-founded results on the achievable security of deep learning classifiers in the presence of evasion attacks. Furthermore, we will evaluate existing countermeasures against evasion attacks to gain insights into their optimality when facing a strategic attacker. Finally, we want to implement the key properties of our theoretical models in a practical deep learning classifier. We will compare our classifier against other state-of-the-art classifiers in terms of robustness against evasion attacks and accuracy on benign inputs. This will allow us to validate whether deep learning classifiers can be made robust against evasion attacks.
The expected results of the project will enable us to judge whether deep learning classifiers are suitable for scenarios in which an attacker has an incentive to fool them explicitly, i.e., security-critical areas and widespread consumer products.
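The evasion attacks described above can be sketched in a few lines. The following toy example is illustrative only: the linear "classifier", its weights, the input, and the step size `eps` are invented for this sketch and are not taken from the project. Real evasion attacks such as FGSM apply the same sign-of-the-gradient idea to deep networks.

```python
# Toy evasion attack on a linear classifier (FGSM-style sketch).
# All numbers below are hypothetical, chosen only to demonstrate the idea.

def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def evade(w, b, x, eps):
    """Perturb each feature by a small step eps against the current
    prediction. For a linear model the gradient of the score w.r.t. x
    is simply w, so stepping along -sign(w) lowers the score."""
    direction = -1.0 if predict(w, b, x) == 1 else 1.0
    return [xi + direction * eps * sign(wi) for wi, xi in zip(w, x)]

# A benign input classified as class 1 ...
w, b = [0.5, -0.3, 0.8], -0.1
x = [0.4, 0.1, 0.2]
assert predict(w, b, x) == 1

# ... is misclassified after a barely visible perturbation.
x_adv = evade(w, b, x, eps=0.2)
assert predict(w, b, x_adv) == 0
```

The perturbation changes each feature by at most 0.2, yet flips the classification; against deep networks, the analogous gradient-sign step is typically imperceptible to humans.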

Final report

In the research project "Game Over Eva(sion): Securing Deep Learning with Game Theory", we investigated machine learning in adversarial environments (adversarial machine learning) and its significance for the security of deep learning systems. The focus was on developing strategies and methods to enhance the robustness of these systems against targeted attacks. The research field has existed since 2004, spurred by the discovery that even complex spam filters can be deceived by minimal changes to emails. This realization led to intensive engagement with the security of machine learning models, especially deep neural networks (DNNs), which have been an increasing focus since 2013. In the course of the project, we concentrated on two types of adversarial examples: sensitivity-based and invariance-based. Sensitivity-based adversarial examples show how minor modifications to the input data can affect, and deliberately deceive, a model's predictions. Invariance-based adversarial examples, in contrast, exploit how changes in the data are perceived differently by humans and machines. For both types, the decisions of humans and those of machine learning algorithms diverge. A central result of the project is the "Advanced Adversarial Classification Game", a game-theoretic model that captures the interactions between attackers (who create sensitivity-based adversarial examples) and defenders (who try to protect their models against such attacks). This approach allows us to investigate and understand the economic and strategic aspects of adversarial machine learning. In addition, experimental studies conducted within the project explored the effects of invariance-based adversarial examples on human perception.
This research led to algorithms for generating such examples, in turn providing deeper insights into the differences between human and machine perception. The findings of the "Game Over Eva(sion)" project highlight the importance of continuously developing and adapting security strategies in the field of deep learning. As machine learning algorithms increasingly influence daily life, their security becomes ever more important, both for academia and for practical application in industry.
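The strategic flavor of an attacker-defender model like the one described above can be illustrated with a minimal normal-form game. The following sketch is hypothetical: the strategy names and all payoff numbers are invented for illustration and do not come from the project's actual model.

```python
# A hypothetical 2x2 attacker-defender game, sketched to illustrate the
# kind of strategic analysis a game-theoretic model of adversarial
# machine learning performs. All payoffs are invented.

# payoff[d][a]: that player's utility when the defender plays d
# ("harden" = adversarially train, "plain" = train normally) and the
# attacker plays a ("attack" = craft adversarial examples, "abstain").
defender_payoff = {
    "harden": {"attack": -1, "abstain": -1},  # hardening costs accuracy
    "plain":  {"attack": -5, "abstain":  0},  # plain model is cheap but
}                                             # badly hurt by attacks
attacker_payoff = {
    "harden": {"attack": -2, "abstain": 0},   # attacking a hardened
    "plain":  {"attack":  4, "abstain": 0},   # model does not pay off
}

def pure_nash(defender_payoff, attacker_payoff):
    """Enumerate pure-strategy Nash equilibria: profiles where neither
    player gains by unilaterally switching strategies."""
    equilibria = []
    for d in defender_payoff:
        for a in ("attack", "abstain"):
            d_ok = all(defender_payoff[d][a] >= defender_payoff[d2][a]
                       for d2 in defender_payoff)
            a_ok = all(attacker_payoff[d][a] >= attacker_payoff[d][a2]
                       for a2 in attacker_payoff[d])
            if d_ok and a_ok:
                equilibria.append((d, a))
    return equilibria

# With these payoffs no pure-strategy equilibrium exists: each player
# always wants to react to the other's choice.
print(pure_nash(defender_payoff, attacker_payoff))  # []
```

With these invented payoffs, no pure-strategy equilibrium exists, so an optimal defender must randomize over defenses; reasoning of this general kind, about when and how to defend, is what game-theoretic models of adversarial machine learning make precise.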

Research institution(s)
  • MC Innsbruck - 100%
International project participants
  • Tomáš Pevný, Czech Technical University in Prague - Czechia

Research Output

  • 76 Citations
  • 11 Publications
  • 2 Fundings
Publications
  • 2023
    Title On the Effect of Adversarial Training Against Invariance-based Adversarial Examples
    DOI 10.1145/3589883.3589891
    Type Conference Proceeding Abstract
    Author Nocker M
    Pages 54-60
  • 2023
    Title On the Effect of Adversarial Training Against Invariance-based Adversarial Examples
    DOI 10.48550/arxiv.2302.08257
    Type Other
    Author Nocker M
    Link Publication
  • 2024
    Title On the Economics of Adversarial Machine Learning
    DOI 10.1109/tifs.2024.3379829
    Type Journal Article
    Author Merkle F
    Journal IEEE Transactions on Information Forensics and Security
  • 2023
    Title Pruning for Power: Optimizing Energy Efficiency in IoT with Neural Network Pruning; In: Engineering Applications of Neural Networks - 24th International Conference, EAAAI/EANN 2023, León, Spain, June 14-17, 2023, Proceedings
    DOI 10.1007/978-3-031-34204-2_22
    Type Book Chapter
    Publisher Springer Nature Switzerland
  • 2020
    Title Machine Unlearning: Linear Filtration for Logit-based Classifiers
    DOI 10.48550/arxiv.2002.02730
    Type Preprint
    Author Baumhauer T
  • 2021
    Title When Should You Defend Your Classifier?
    DOI 10.1007/978-3-030-90370-1_9
    Type Book Chapter
    Author Samsinger M
    Publisher Springer Nature
    Pages 158-177
  • 2021
    Title Adversarial Examples Against a BERT ABSA Model – Fooling BERT With L33T, Misspellign, and Punctuation
    DOI 10.1145/3465481.3465770
    Type Conference Proceeding Abstract
    Author Hofer N
    Pages 1-6
  • 2022
    Title Machine unlearning: linear filtration for logit-based classifiers
    DOI 10.1007/s10994-022-06178-9
    Type Journal Article
    Author Baumhauer T
    Journal Machine Learning
    Pages 3203-3226
    Link Publication
  • 2022
    Title Pruning in the Face of Adversaries
    DOI 10.1007/978-3-031-06427-2_55
    Type Book Chapter
    Author Merkle F
    Publisher Springer Nature
    Pages 658-669
  • 2022
    Title U Can’t (re)Touch This – A Deep Learning Approach for Detecting Image Retouching
    DOI 10.1007/978-3-031-06430-2_11
    Type Book Chapter
    Author Aumayr D
    Publisher Springer Nature
    Pages 127-138
Fundings
  • 2021
    Title Secure Machine Learning Applications with Homomorphically Encrypted Data (ICT of the Future)
    Type Research grant (including intramural programme)
    Start of Funding 2021
    Funder Austrian Research Promotion Agency
  • 2023
    Title Josef Ressel Centre for Security Analysis of IoT Devices
    Type Research grant (including intramural programme)
    Start of Funding 2023
    Funder Christian Doppler Research Association
