Conformalized Quantile Regression (Jupyter Notebook, updated Apr 6, 2022)
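The notebook above covers conformalized quantile regression (CQR). As a hedged, minimal sketch of the idea (not necessarily the notebook's models; here using scikit-learn's quantile-loss gradient boosting on synthetic data): fit lower/upper quantile regressors, score a held-out calibration split by how far each point falls outside the predicted band, and widen the band by the appropriate empirical quantile of those scores to get ~(1 - alpha) coverage.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3 + 0.2 * np.abs(X[:, 0]))

X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

alpha = 0.1  # target miscoverage rate
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)

# Conformity scores on the calibration split: how far each point falls
# outside the predicted [lo, hi] band (negative when inside).
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
n = len(scores)
q = np.quantile(scores, np.ceil((1 - alpha) * (n + 1)) / n, method="higher")

# Calibrated prediction interval for new points: widen both ends by q.
X_new = np.array([[0.0], [2.0]])
lower = lo.predict(X_new) - q
upper = hi.predict(X_new) + q
```

By construction, at least a (1 - alpha) fraction of calibration points land inside the widened band, which is what gives the finite-sample coverage guarantee.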
[Official Code] Experiments on Generalizability of User-Oriented Fairness in Recommender Systems (SIGIR 2022)
📊 R package for computing and visualizing fair ML metrics
[Nature Medicine] The Limits of Fair Medical Imaging AI In Real-World Generalization
Package for evaluating methods that aim to increase fairness, accountability, and/or transparency
[Science Advances] Demographic Bias of Vision-Language Foundation Models in Medical Imaging
Source code and models for the paper "Cyberbullying Detection with Fairness Constraints". IEEE Internet Computing, 2020
Python library implementing the core FA*IR ranking algorithms.
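This is not the library's actual API, but as a simplified sketch of the FA*IR idea: at every prefix of length k, the ranking must contain at least m(k) protected items, where m(k) is the smallest count a Binomial(k, p) test at level alpha does not reject; the ranking is then built greedily by merging the score-sorted protected and non-protected lists.

```python
from math import comb

def min_protected(k, p, alpha):
    """Smallest m with Binomial CDF F(m; k, p) > alpha."""
    cdf = 0.0
    for m in range(k + 1):
        cdf += comb(k, m) * p**m * (1 - p) ** (k - m)
        if cdf > alpha:
            return m
    return k

def fair_rerank(protected, others, p=0.5, alpha=0.1):
    """Merge two descending score-sorted lists of (id, score) pairs,
    taking a protected candidate whenever the prefix constraint binds,
    and the higher-scoring head otherwise."""
    prot, rest = list(protected), list(others)
    out, n_prot = [], 0
    for k in range(1, len(prot) + len(rest) + 1):
        need = min_protected(k, p, alpha)
        if n_prot < need and prot:  # constraint binds: take protected
            out.append(prot.pop(0)); n_prot += 1
        elif not rest or (prot and prot[0][1] >= rest[0][1]):
            out.append(prot.pop(0)); n_prot += 1
        else:
            out.append(rest.pop(0))
    return out

ranking = fair_rerank([("a", 0.8), ("b", 0.5)],
                      [("x", 0.9), ("y", 0.7), ("z", 0.6)],
                      p=0.4, alpha=0.1)
```

The candidate names and scores here are made up for illustration; the real library also applies a multiple-testing correction to alpha across prefixes, which this sketch omits.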
Fair-search plugin for Elasticsearch
A School for All Seasons on Trustworthy Machine Learning
FairER: Entity Resolution with Fairness Constraints
Analyzing Adversarial Bias and the Robustness of Fair Machine Learning
Ethnic bias analysis in medical imaging AI: Demonstrating that explainable-by-design models achieve 80% bias reduction across 5 ethnic groups (50k images)
Code implementation for BiasMitigationRL, a reinforcement learning-based bias mitigation method.
Disparate Exposure in Learning To Rank for Python
UC Berkeley Human Contexts & Ethics Public Materials
Trustworthy AI/ML course by Professor Birhanu Eshete, University of Michigan, Dearborn.
Algorithmic inspection for trustworthy ML models
Implementation of debiasing algorithm in "Debiasing Representations by Removing Unwanted Variation Due to Protected Attributes" on ProPublica's COMPAS data set
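The paper's method is more involved than this, but a hedged illustration of the underlying goal, removing variation in the features attributable to a protected attribute, is plain linear residualization: regress each feature on the protected attribute and keep only the residuals, which are uncorrelated with it in-sample. The data below is synthetic, not COMPAS.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.integers(0, 2, size=n).astype(float)    # synthetic binary protected attribute
X = rng.normal(size=(n, 3)) + 2.0 * z[:, None]  # features correlated with z

# Regress each feature column on [1, z] and keep the residuals,
# so the debiased features carry no linear signal about z.
Z = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_debiased = X - Z @ beta

corr = np.corrcoef(z, X_debiased[:, 0])[0, 1]  # ~0 after residualization
```

This only removes linear dependence; the paper's approach targets a stronger notion of removing unwanted variation.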
This project enhances fairness in recommender systems by introducing DNCF, a debiased neural collaborative filtering model capable of handling both binary and multi-subgroup fairness. It builds on NCF and NFCF by extending debiasing and regularization techniques to support nuanced demographic subgroups without compromising performance.