Mitigating the risks of AI transparency in the next decade

Enterprises are placing their highest hopes on machine learning (ML). However, ML, which sits at the heart of artificial intelligence (AI), is also starting to unnerve many enterprise legal and security professionals.

One of the biggest concerns around AI is that ML-based models often operate as “black boxes.” This means the models, which are typically composed of artificial neural networks, may be so complex and arcane that they obscure how they actually drive automated inferencing. Just as worrisome, ML-based applications may inadvertently obfuscate responsibility for any biases and other adverse consequences that their automated decisions may produce.

To mitigate these risks, global society is starting to demand greater transparency into how ML operates in practice and throughout the entire workflow in which models are built, trained, and deployed. Frameworks for algorithmic transparency—also known as explainability, interpretability, or accountability—are gaining adoption among working data scientists, chief among them a range of widely used open source explainability toolkits.
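As a rough illustration of what such a post-hoc explanation looks like in practice, the sketch below uses scikit-learn's permutation importance, one common model-agnostic technique; the dataset, model, and ranking step here are illustrative assumptions rather than a reference to any specific framework the article has in mind.

```python
# Sketch: post-hoc explanation of a trained model via permutation importance.
# The dataset, model, and feature ranking below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black box") model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Generate a post-hoc, model-agnostic explanation: how much does shuffling
# each input feature degrade the model's held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by how strongly they appear to drive the model's predictions.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: "
          f"{result.importances_mean[idx]:.3f} ± {result.importances_std[idx]:.3f}")
```

The output is a ranked list of input features and their estimated influence on the model's predictions, which is the kind of after-the-fact account of model behavior these frameworks are designed to produce.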

All these tools and techniques help data scientists generate “post-hoc explanations” of which particular data inputs drove which particular algorithmic inferences under various circumstances. However, recent research shows that these frameworks can be hacked, thereby reducing trust in the explanations they generate and exposing enterprises to the following risks: