Semester | Winter 2020 |
Course type | Block Seminar |
Lecturer | TT.-Prof. Dr. Wressnegger |
Audience | Informatik Master & Bachelor |
Credits | 4 ECTS |
Room | 148, Building 50.34 and online |
Language | English or German |
Link | https://campus.kit.edu/campus/lecturer/event.asp?gguid=0x66466C5ACDD84711B566D0A1A058DD74 |
Registration | https://ilias.studium.kit.edu/goto_produktiv_crs_1265042.html |
Due to the ongoing COVID-19 pandemic, this course will start remotely: the kick-off meeting will take place online. The final colloquium, however, will hopefully be held in person again.
To receive all the necessary information, please subscribe to the mailing list here.
This seminar is concerned with explainable machine learning in computer security. Learning-based systems are often difficult to interpret, and their decisions are opaque to practitioners. This lack of transparency is a considerable problem in computer security, as black-box learning systems are hard to audit and to protect from attacks.
The module introduces students to the emerging field of explainable machine learning and teaches them to work through results from recent research. To this end, each student will read up on a sub-field, prepare a seminar report, and present their work to their fellow students at the end of the term.
Topics cover different aspects of the explainability of machine-learning methods, with a particular focus on applications in computer security.
Date | Step |
Tue, 3. Nov., 14:00–15:30 | Primer on academic writing, assignment of topics |
Thu, 12. Nov | Arrange appointment with assistant |
Mon, 16. Nov – Fri, 20. Nov | Individual meetings with assistant |
Wed, 16. Dec | Submit final paper |
Wed, 20. Jan | Submit review for fellow students |
Fri, 22. Jan | End of discussion phase |
Fri, 29. Jan | Submit camera-ready version of your paper |
Thu, 11. Feb | Presentation at final colloquium |
News about the seminar, potential updates to the schedule, and additional material are distributed using a separate mailing list. Moreover, the list enables students to discuss topics of the seminar.
You can subscribe here.
Every student may choose one of the following topics. For each of them, we additionally provide a recent top-tier publication that you should use as a starting point for your own research. For the seminar and your final report, you should not merely summarize that paper, but try to go beyond it and arrive at your own conclusions.
Moreover, all of these papers come with open-source implementations. Play around with these and include the lessons learned in your report.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, ICLR 2014
"Why Should I Trust You?": Explaining the Predictions of Any Classifier, KDD 2016
Axiomatic Attribution for Deep Networks, ICML 2017
On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation, PLOS ONE 2015
GNNExplainer: Generating Explanations for Graph Neural Networks, NeurIPS 2019
LEMNA: Explaining Deep Learning-based Security Applications, CCS 2018
Fooling Neural Network Interpretations via Adversarial Model Manipulation, NeurIPS 2019
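Several of the gradient-based methods above (saliency maps, integrated gradients) share a common core idea: attribute a model's prediction to individual input features via the gradient of the output with respect to the input. As a minimal illustration of this idea (not the method of any specific paper, and with made-up weights for a toy "classifier"), consider the input gradient of a logistic-regression model, which can be written in closed form:

```python
import math

# Toy logistic classifier: p(x) = sigmoid(w . x + b).
# The weights below are hypothetical, purely for illustration;
# a real model would learn them from data.
W = [2.0, -1.0, 0.5]
B = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability that input x belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def saliency(x):
    """Gradient of the prediction w.r.t. each input feature.

    For logistic regression this has the closed form
    d p / d x_i = p * (1 - p) * w_i,
    so a feature's attribution is proportional to its weight,
    scaled by how uncertain the prediction is.
    """
    p = predict(x)
    return [p * (1.0 - p) * wi for wi in W]

if __name__ == "__main__":
    x = [1.0, 0.0, 1.0]
    print("prediction:", round(predict(x), 3))
    print("saliency:  ", [round(s, 3) for s in saliency(x)])
```

The papers listed above generalize this picture to deep, non-linear models, where the gradient alone can be noisy or misleading; that gap is exactly what methods such as integrated gradients and layer-wise relevance propagation try to close.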
The schedule of the final colloquium can be found here.