Fair and Interpretable Machine Learning
Course type: | Seminar |
Time: | Wednesday 16:00 - 18:00 |
Location: | G.-Köhler-Allee 051, SR 00 034 |
Organizers: | Janek Thomas, Noor Awad |
Web page: | HisInOne |
Seminar on Fair and Interpretable Machine Learning
The seminar language will be English, even if all participants are native German speakers, to practice presentation skills in English.
First meeting:
- 27th April, 16:00-18:00, SR 00 034 (G.-Köhler-Allee 051).
Regular meetings:
- Every Wednesday, 16:00-18:00, SR 00 034 (G.-Köhler-Allee 051).
Background
The seminar focuses on interpretable machine learning in the first half of the semester and fair machine learning in the second half.
We will discuss model-agnostic tools for fairness and interpretability as well as specific algorithms. Further topics include measuring fairness, a causal perspective on fairness and how to consider fairness in AutoML.
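To make these topics concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, not part of the seminar material or required reading): it fits a classifier on synthetic data, computes a model-agnostic interpretability measure (permutation feature importance), and evaluates a simple group-fairness measure (demographic parity difference). The dataset, the choice of protected attribute, and all variable names are illustrative assumptions.

```python
# Illustrative sketch only: synthetic data and a made-up protected attribute.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data; feature 0 plays the role of a (hypothetical) protected attribute.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
protected = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(X, y, protected, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Interpretability: drop in test score when each feature is randomly permuted.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("Permutation importances:", imp.importances_mean.round(3))

# Fairness: difference in positive prediction rates between the two groups.
y_hat = model.predict(X_te)
dp_diff = abs(y_hat[p_te == 0].mean() - y_hat[p_te == 1].mean())
print("Demographic parity difference:", round(float(dp_diff), 3))
```

Both quantities shown here (permutation importance, demographic parity) are among the simplest instances of the tools and measures discussed in the seminar.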
Organization
Each week: All students read the relevant literature. Three students prepare the topic with slides and applications. Three other students are assigned as discussants; they have to meet with the presenting group before the session, give feedback, and prepare critical discussion points and open questions.
End of the semester: Each student has to write a short paper (10 pages) about their topic.
Requirements
We strongly recommend that you know the foundations of
- Machine Learning
- For some topics: Deep Learning
Main Literature
[1] Molnar, Christoph. Interpretable Machine Learning. Lulu.com, 2020. - https://christophm.github.io/interpretable-ml-book/
[2] Mehrabi, Ninareh, et al. "A survey on bias and fairness in machine learning." ACM Computing Surveys (CSUR) 54.6 (2021): 1-35.
[3] Barocas, Solon, Moritz Hardt, and Arvind Narayanan. "Fairness in machine learning." NIPS tutorial 1 (2017): 2.
Schedule
Date | Topic | Main Ref. | Further Refs.
27.04.2022 | Introduction, groups and topic assignments | - | -
04.05.2022 | - | - | -
11.05.2022 | - | [1] Chapter 1-3 | -
18.05.2022 | Interpretable Machine Learning Methods | [1] Chapter 5 | Friedman, Jerome H., and Bogdan E. Popescu. "Predictive learning via rule ensembles." The Annals of Applied Statistics 2.3 (2008): 916-954; Lou, Yin, et al. "Accurate intelligible models with pairwise interactions." Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2013; Hofner, Benjamin, et al. "A framework for unbiased model selection based on boosting." Journal of Computational and Graphical Statistics 20.4 (2011): 956-971.
25.05.2022 | Global Model-Agnostic Interpretability Methods | [1] Chapter 8 | Apley, Daniel W., and Jingyu Zhu. "Visualizing the effects of predictor variables in black box supervised learning models." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 82.4 (2020): 1059-1086; Wei, Pengfei, Zhenzhou Lu, and Jingwen Song. "Variable importance analysis: a comprehensive review." Reliability Engineering & System Safety 142 (2015): 399-432; Kim, Been, Rajiv Khanna, and Oluwasanmi O. Koyejo. "Examples are not enough, learn to criticize! Criticism for interpretability." Advances in Neural Information Processing Systems 29 (2016).
01.06.2022 | Local Model-Agnostic Interpretability Methods | [1] Chapter 9 | Goldstein, Alex, et al. "Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation." Journal of Computational and Graphical Statistics 24.1 (2015): 44-65; Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. ""Why should I trust you?" Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016; Karimi, Amir-Hossein, et al. "Model-agnostic counterfactual explanations for consequential decisions." International Conference on Artificial Intelligence and Statistics. PMLR, 2020.
08.06.2022 | - | [3] Chapter 1 | -
15.06.2022 | Interpretability Methods for Neural Networks | [1] Chapter 10 | Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." European Conference on Computer Vision. Springer, Cham, 2014; Kim, Been, et al. "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)." International Conference on Machine Learning. PMLR, 2018; Jain, Sarthak, and Byron C. Wallace. "Attention is not explanation." arXiv preprint arXiv:1902.10186 (2019).
22.06.2022 | Multi-Objective and Constrained Optimization and Model Selection | TBD | Deb, Kalyanmoy, et al. "A fast and elitist multiobjective genetic algorithm: NSGA-II." IEEE Transactions on Evolutionary Computation 6.2 (2002): 182-197; Knowles, Joshua. "ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems." IEEE Transactions on Evolutionary Computation 10.1 (2006): 50-66; Gardner, Jacob R., et al. "Bayesian optimization with inequality constraints." ICML, 2014.
29.06.2022 | Measures for Fairness | [2] Section 4.1 | Hardt, Moritz, Eric Price, and Nati Srebro. "Equality of opportunity in supervised learning." Advances in Neural Information Processing Systems 29 (2016); Kearns, Michael, et al. "An empirical study of rich subgroup fairness for machine learning." Proceedings of the Conference on Fairness, Accountability, and Transparency. 2019; Chouldechova, Alexandra. "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments." Big Data 5.2 (2017): 153-163.
06.07.2022 | Debiasing Methods | [2] Section 5.1 | Mehrabi, Ninareh, et al. "Debiasing community detection: The importance of lowly connected nodes." 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 2019; Calmon, Flavio, et al. "Optimized pre-processing for discrimination prevention." Advances in Neural Information Processing Systems 30 (2017); Kamiran, Faisal, and Toon Calders. "Data preprocessing techniques for classification without discrimination." Knowledge and Information Systems 33.1 (2012): 1-33.
13.07.2022 | Fair Machine Learning Algorithms | [2] Section 5.2 | Zafar, Muhammad Bilal, et al. "Fairness constraints: Mechanisms for fair classification." Artificial Intelligence and Statistics. PMLR, 2017; Kamishima, Toshihiro, et al. "Fairness-aware classifier with prejudice remover regularizer." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Berlin, Heidelberg, 2012; Berk, Richard, et al. "A convex framework for fair regression." arXiv preprint arXiv:1706.02409 (2017).
20.07.2022 | Causal Perspective on Fair ML | [3] Chapter 4 | Zhang, Lu, Yongkai Wu, and Xintao Wu. "A causal framework for discovering and removing direct and indirect discrimination." arXiv preprint arXiv:1611.07509 (2016); Loftus, Joshua R., et al. "Causal reasoning for algorithmic fairness." arXiv preprint arXiv:1805.05859 (2018); Nabi, Razieh, and Ilya Shpitser. "Fair inference on outcomes." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018.
27.07.2022 | Fairness & Interpretability in Model Selection | Wu, Qingyun, and Chi Wang. "Fair AutoML." arXiv preprint arXiv:2111.06495 (2021). | Cruz, André F., et al. "Promoting fairness through hyperparameter optimization." 2021 IEEE International Conference on Data Mining (ICDM). IEEE, 2021; Perrone, Valerio, et al. "Fair Bayesian optimization." Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. 2021.
Grading
- Presentation: 40%
- Paper: 40%
- Role as discussant: 20%
Further information
For questions, please send an email to one of the organizers: Janek Thomas