IBM Trusted AI toolkits for Python combat AI bias

IBM's Trusted AI initiative has produced a set of open source Python toolkits for combating bias and other trust issues in machine learning models; API documentation is available online for each:

  • AI Fairness 360, or AIF360, provides metrics for checking data sets and machine learning models for unwanted bias, and contains algorithms to mitigate that bias. With this toolkit, IBM aims to prevent the development of machine learning models that give certain privileged groups a systematic advantage. Bias in training data, whether from prejudiced labels or from under-sampling or over-sampling, leads to models with biased decision-making. Introduced in September 2018, AIF360 includes nine mitigation algorithms that can all be called in a standard way (see the first sketch after this list). AIF360 contains tutorials on credit scoring, predicting medical expenditures, and classifying face images by gender. The AIF360 code is available on GitHub, and documentation is hosted on Read the Docs.
  • Adversarial Robustness Toolbox is a Python library that helps developers and researchers defend deep neural networks (DNNs) against adversarial attacks, making AI systems more secure and trustworthy. DNNs are vulnerable to adversarial examples: inputs (say, images) deliberately modified to produce a desired response from the DNN. The toolbox can be used to build defense techniques and deploy practical defenses. Its approach to defending DNNs involves measuring model robustness and hardening models, with techniques such as preprocessing DNN inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag inputs that an adversary might have tampered with (see the second sketch after this list). Released by IBM Research Ireland in April 2018, the Adversarial Robustness Toolbox can be found on GitHub.
  • Moving forward, IBM is also considering releasing a tool for the accountability of AI models. The intent is to maintain provenance and a data trail over the lifecycle of a model, so that the model can be trusted.
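
Here is a minimal sketch of AIF360's standard workflow, measuring bias in a data set and then mitigating it. It uses the German credit data from the toolkit's credit-scoring tutorial (the raw data files must be downloaded separately, per AIF360's documentation); the protected attribute and the choice of Reweighing as the mitigation algorithm are illustrative, not the only options:

    # A sketch of AIF360's standard workflow: measure bias, then mitigate it.
    # Assumes the German credit data files have been downloaded into the
    # AIF360 package directory, as described in the toolkit's documentation.
    from aif360.datasets import GermanDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    data = GermanDataset()        # credit-scoring data set
    privileged = [{'sex': 1}]     # illustrative group definitions
    unprivileged = [{'sex': 0}]

    # A statistical parity difference far from 0 signals unwanted bias.
    metric = BinaryLabelDatasetMetric(
        data, unprivileged_groups=unprivileged, privileged_groups=privileged)
    print("Before mitigation:", metric.statistical_parity_difference())

    # Reweighing is one of the nine mitigation algorithms; each exposes
    # the same fit/transform-style interface.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    data_transf = rw.fit_transform(data)

    metric_transf = BinaryLabelDatasetMetric(
        data_transf, unprivileged_groups=unprivileged,
        privileged_groups=privileged)
    print("After mitigation:", metric_transf.statistical_parity_difference())

Reweighing adjusts only instance weights, leaving labels and features untouched, which makes it a comparatively non-invasive pre-processing mitigation.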
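
And here is a minimal sketch of the attack side that the Adversarial Robustness Toolbox defends against. To stay self-contained, it uses a scikit-learn logistic regression on the digits data set as a stand-in for a DNN; the toolbox wraps TensorFlow, Keras, and PyTorch models through the same estimator interface, and the eps perturbation budget here is an illustrative assumption:

    # A sketch of an evasion attack with the Adversarial Robustness Toolbox.
    # The model and eps value are illustrative; ART wraps DNN frameworks
    # (TensorFlow, Keras, PyTorch) through the same estimator interface.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                  # scale pixel values into [0, 1]

    model = LogisticRegression(max_iter=1000).fit(X, y)
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

    # Generate adversarial inputs: small, deliberate perturbations meant
    # to flip the classifier's predictions.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    X_adv = attack.generate(x=X)

    print("Accuracy on clean inputs:      ", model.score(X, y))
    print("Accuracy on adversarial inputs:", model.score(X_adv, y))

The gap between the two accuracy figures is a simple robustness measurement; model hardening would then feed X_adv back into training as adversarial samples, as described above.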