GeoPython2019

Building a Secure and Transparent ML Pipeline Using Open Source Technologies
2019-06-25, 14:30–15:00, Room 1

Learn about open-source tools for creating scalable, end-to-end ML pipelines that are open, transparent, and fair.


The application of AI algorithms in domains such as criminal justice, credit scoring, and hiring holds great promise. At the same time, it raises legitimate concerns about algorithmic fairness. There is growing demand for fairness, accountability, and transparency from machine learning (ML) systems. And we need to remember that training data isn't the only source of possible bias and adversarial contamination: both can also be introduced through inappropriate data handling, poor model selection, or flawed algorithm design.

What we need is a pipeline that is open, transparent, secure, and fair, and that integrates fully into the AI lifecycle. Such a pipeline requires a robust set of bias and adversarial checkers, "de-biasing" and "defense" algorithms, and explanations. In this talk we discuss how to build such a pipeline by leveraging open-source projects such as AI Fairness 360 (AIF360), the Adversarial Robustness Toolbox (ART), Fabric for Deep Learning (FfDL), the Model Asset eXchange (MAX), and Seldon Core.
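To make the bias-checking and de-biasing stages concrete, here is a minimal sketch using AIF360's public API. The toy DataFrame, the protected attribute name "sex", and the label column are illustrative assumptions, not material from the talk; a real pipeline would run these checks on its actual training data.

```python
# Sketch of a bias checker plus a "de-biasing" step with AIF360.
# The toy data and column names below are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the outcome (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [25, 40, 35, 50, 23, 31, 47, 29],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Bias checker: statistical parity difference should be close to 0.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Parity difference before:", metric.statistical_parity_difference())

# De-biasing step: reweigh training examples to balance outcomes
# across groups before model training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Parity difference after:", metric_after.statistical_parity_difference())
```

In a full pipeline, checks like these would gate promotion of a dataset or model to the next stage, rather than run as a one-off script.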
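Similarly, the adversarial-checking stage can be sketched with ART. The scikit-learn model, random toy data, and epsilon value are assumptions for illustration; the pattern is to craft adversarial examples against the candidate model and compare clean versus adversarial accuracy.

```python
# Sketch of an adversarial robustness check with ART, assuming a
# scikit-learn classifier; the toy data and eps value are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a simple classifier on random toy data.
rng = np.random.default_rng(0)
X = rng.random((200, 4)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
model = LogisticRegression().fit(X, y)

# Wrap the model so ART can attack it, then craft adversarial examples
# with the Fast Gradient Method.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.1)
X_adv = attack.generate(x=X)

# Adversarial checker: a large gap between clean and adversarial
# accuracy flags a model that needs a defense before deployment.
print(f"clean accuracy:       {model.score(X, y):.2f}")
print(f"adversarial accuracy: {model.score(X_adv, y):.2f}")
```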