Adding intersectional bias mitigation to AIF360
@mnagired @hoffmansc
We have implemented an intersectional bias mitigation algorithm based on https://doi.org/10.1007/978-3-030-87687-6_5 (see also https://doi.org/10.48550/arXiv.2010.13494 for the arXiv version), as discussed further in issue https://github.com/Trusted-AI/AIF360/issues/537. Additional details are available in the demo notebook.
According to pytest, our code does not reach the desired coverage of 80%. This is because parts of our code run in multiple threads, and it was not obvious to us how to make pytest's coverage measurement track that kind of code. Nonetheless, we have verified that all functions in the main algorithm file are exercised by our tests.
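For reference, coverage.py (which pytest-cov uses under the hood) has a `concurrency` setting that tells it which concurrency libraries to trace. A minimal sketch of the relevant config, assuming the workers are threads (if they are OS processes instead, `multiprocessing` plus `parallel = True` would be the relevant options):

```ini
; .coveragerc — illustrative sketch, not our actual project config
[run]
; trace code started via the threading module
concurrency = thread
```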
@RahulVadisetty91
Dear Rahul,
We are very grateful that you took the time to review our code and offer some very valuable comments.
We are also very sorry we could not address your comments earlier; we had to meet some critical deadlines at work and coordinate our actions regarding this pull request.
About your comment on the __init__ method: this is a great catch. Indeed, self.model is not needed in the current setting; we only added it in case we later wanted to extend our Intersectional Fairness support to more algorithms. Under the current circumstances it makes sense to comment out this line (line 27 of your screenshot).
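To make the intent concrete, here is a hypothetical sketch of the change being discussed; the class name, argument, and attribute are illustrative placeholders, not the actual AIF360 source:

```python
# Illustrative sketch only: names are hypothetical, not the real ISF code.
class IntersectionalFairness:
    def __init__(self, algorithm="AdversarialDebiasing"):
        self.algorithm = algorithm
        # self.model = None  # unused today; reserved for future multi-algorithm support

isf = IntersectionalFairness()
# The instance no longer carries the unused attribute.
assert not hasattr(isf, "model")
```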
You also made some very valuable comments in the first version of your review (before the edit). We considered all of them, and although they are all important, our current resources only allowed us to address some.
More specifically, although it would be nice to switch to TensorFlow 2 for future compatibility, our code is based on the Adversarial Debiasing algorithm in AIF360, which in turn is written against TensorFlow 1. It would be very difficult to port our code to TensorFlow 2 while Adversarial Debiasing still uses TensorFlow 1. If the original algorithm is updated in the future, we would be happy to update our code as well.
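As a note on what "TensorFlow 1 style" means in practice: graph/session code can still run on a TensorFlow 2 installation through the `tf.compat.v1` shim, which is the usual bridge for code like this. A minimal sketch (the placeholder/session pattern here is illustrative, not our actual training loop):

```python
# Sketch: TF1-style graph execution via the TF2 compatibility shim.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # restore TF1 graph/session semantics

with tf.Session() as sess:
    # Build a tiny graph: a placeholder and a row-sum op.
    x = tf.placeholder(tf.float32, shape=(None, 3))
    y = tf.reduce_sum(x, axis=1)
    # Run the graph by feeding the placeholder.
    out = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})

print(out)  # a one-element array containing 6.0
```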
Following one of your comments, we have now added progress bars to the evaluation stage of our algorithm.
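The kind of change this involves can be sketched with tqdm; the loop body and iteration count below are placeholders, not the actual evaluation code:

```python
# Minimal sketch of wrapping an evaluation loop in a tqdm progress bar.
from tqdm import tqdm

results = []
for step in tqdm(range(50), desc="evaluating"):
    # Placeholder for one evaluation step of the real algorithm.
    results.append(1.0 / (step + 1))

print(len(results))  # 50
```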
Once again thank you for your time and we would be happy to further discuss any of your suggestions or concerns.
Best regards, Chrysostomos
Dear @hoffmansc, thank you very much for your patience while reviewing our code. We agree with all of your comments and have tried to address them to the best of our understanding. We are committed to continuing to work with you until we reach the required standard for publication.
Dear maintainers, we have now addressed the requested changes to the best of our understanding and we are looking forward to your further comments at your convenience. Thank you in advance.
Dear @hoffmansc, thank you very much for your latest review. We have now addressed all of your comments to the best of our understanding and are awaiting further communication. We hope we are getting closer to publication.
@hoffmansc Please let us know if there is anything we can amend on our end?
I see one test related to this code: FAILED tests/test_isf.py::TestStringMethods::test01_AdversarialDebiasing - AssertionError: DataFrame.iloc[:, 2] (column name="selection_rate") are different
The rest you can ignore.
@hoffmansc Thank you for your prompt and kind response. We have made appropriate changes to the test file and have successfully tested the code in our environment. Please let us know if the problem persists.
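For anyone hitting a similar `selection_rate` mismatch: failures like this often come from exact float comparison, and the usual fix is to compare the DataFrame with a tolerance. A hedged sketch (the column name and values below are illustrative, not taken from tests/test_isf.py):

```python
# Sketch: loosening an exact DataFrame comparison to a tolerance-based one.
import pandas as pd
import pandas.testing as pdt

expected = pd.DataFrame({"selection_rate": [0.501, 0.432]})
actual = pd.DataFrame({"selection_rate": [0.5010000001, 0.4319999999]})

# With check_exact=True this pair would fail; a relative tolerance makes
# the test robust to platform- and seed-level floating-point noise.
pdt.assert_frame_equal(actual, expected, check_exact=False, rtol=1e-5)
```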
@hoffmansc Thank you very much for your hard work and for assisting us in every step.
Of course, congrats for seeing it through. Sorry it took so long!