Automatic Discovery of the Trade-off Between Accuracy, Privacy, and Fairness for ML Models
Overview
When machine learning models are deployed to solve real-world problems, they are often trained on sensitive data such as healthcare or financial records. Practitioners must ensure both the fairness and the privacy of the resulting model. These guarantees, however, can often be achieved only by sacrificing accuracy (as classically measured). In practice, privacy and fairness are typically imposed as fixed constraints, and the exact effect of those constraints on accuracy is unclear. This project proposes a procedure for automatically discovering the trade-off between these three metrics.
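The proposal does not specify the discovery procedure, but one natural framing is multi-objective: sweep a grid of privacy and fairness constraint levels, measure the accuracy achieved under each, and keep only the Pareto-optimal configurations. The sketch below illustrates this idea; `evaluate` is a hypothetical placeholder for training and scoring a model under a differential-privacy budget `epsilon` and a fairness-gap limit `fairness_gap`, and all names and the toy accuracy formula are assumptions, not part of the original proposal.

```python
from itertools import product

def evaluate(epsilon, fairness_gap):
    # Hypothetical stand-in for: train a model under DP budget `epsilon`
    # and a demographic-parity gap limit `fairness_gap`, then measure
    # test accuracy. The toy formula just encodes that tighter privacy
    # (smaller epsilon) and tighter fairness (smaller gap) cost accuracy.
    return 0.95 - 0.1 / epsilon - 0.5 * (0.2 - fairness_gap)

def dominates(a, b):
    # Point a dominates b if it is at least as good on every metric
    # (lower epsilon, lower fairness gap, higher accuracy) and strictly
    # better on at least one of them.
    (e1, g1, acc1), (e2, g2, acc2) = a, b
    at_least = e1 <= e2 and g1 <= g2 and acc1 >= acc2
    strictly = e1 < e2 or g1 < g2 or acc1 > acc2
    return at_least and strictly

def pareto_front(points):
    # Keep only the configurations not dominated by any other point;
    # these form the discovered accuracy/privacy/fairness trade-off.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Sweep a small grid of constraint levels and evaluate each setting.
epsilons = [0.5, 1.0, 2.0, 8.0]      # DP budgets (smaller = more private)
gaps = [0.05, 0.10, 0.20]            # allowed fairness gaps
grid = [(e, g, evaluate(e, g)) for e, g in product(epsilons, gaps)]
front = pareto_front(grid)
```

In a real study, `evaluate` would be replaced by actual model training (e.g. DP-SGD with a post-hoc fairness constraint), and the resulting front visualized as a surface over the two constraint axes.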