HOME | ABSTRACT | METHODS | CITATION | DEMO | CONTACT-US

Abstract

Facial attribute classification algorithms frequently exhibit demographic biases that disproportionately impact specific gender and racial groups. Existing bias mitigation techniques often lack generalizability, require demographically annotated training sets, suffer from application-specific limitations, and entail a trade-off between fairness and classification accuracy, whereby achieving fairness often diminishes accuracy for the best-performing demographic sub-group. In this paper, we propose a continual learning framework that mitigates bias in facial attribute classification algorithms by integrating a human-machine partnership, particularly during the deployment stage. Our methodology harnesses the expertise of human annotators to label uncertain data samples, which are subsequently used to fine-tune a deep neural network over time. By iteratively refining the network's predictions with human guidance, we seek to enhance both the accuracy and the fairness of facial attribute classification. Extensive experiments on gender and smile attribute classification tasks, supported by empirical results on four datasets, validate the efficacy of our approach: the outcomes consistently demonstrate improved accuracy and reduced bias across both classification tasks.
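
The deployment-time loop described above can be illustrated with a minimal sketch, assuming a PyTorch-style classifier; the function names, the uncertainty threshold, and the `ask_human` callback are hypothetical placeholders and not part of the paper's implementation.

```python
import torch
import torch.nn.functional as F

def continual_update(model, optimizer, stream_batches, ask_human,
                     uncertainty_threshold=0.4, epochs_per_round=1):
    """Sketch: route low-confidence predictions to a human annotator and
    fine-tune the network on the newly labeled samples (assumed details)."""
    for images in stream_batches:                       # unlabeled deployment data
        with torch.no_grad():
            probs = F.softmax(model(images), dim=1)     # predicted class probabilities
            confidence, _ = probs.max(dim=1)
        uncertain = confidence < uncertainty_threshold  # uncertain samples
        if uncertain.any():
            # Human annotators label only the uncertain samples.
            labels = ask_human(images[uncertain])
            model.train()
            for _ in range(epochs_per_round):           # brief fine-tuning round
                optimizer.zero_grad()
                loss = F.cross_entropy(model(images[uncertain]), labels)
                loss.backward()
                optimizer.step()
            model.eval()
    return model
```

In this sketch, fine-tuning only on human-verified uncertain samples is one plausible reading of the described human-machine partnership; the paper's exact sample-selection and update schedule may differ.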

Contribution