
KPMG Launches Framework to Help Businesses Gain Greater Confidence in Their AI Technologies 

SAN FRANCISCO, Feb. 22, 2019 -- As artificial intelligence (AI) technologies accelerate business transformation, with more decisions shaped by machine-learning (ML) algorithms, responsible use of these powerful tools is paramount. Moreover, appropriate governance must be in place to achieve desired outcomes.

To help organizations manage and evolve AI responsibly, KPMG has introduced AI In Control, a framework supported by a set of methods, tools, and assessments to help organizations realize value from AI technologies while achieving imperative objectives like algorithm integrity, explainability, fairness and agility.

Increasingly, algorithms are making vital decisions that impact our lives. Their ubiquity is pushing society to demand that AI technologies be reliable and ethically sound, according to Professor Sander Klous, PhD, global lead of KPMG’s AI In Control offering, and a partner with KPMG in the Netherlands.

“With many businesses on the road to digital transformation, most executives guiding the journey don’t trust the analytics that generate decisions within their organizations[1],” Prof. Klous said. “The growing awareness of the need for trust in decisions generated by AI is now focused on the organizations that develop these technologies, and their responsibility for ensuring quality and integrity. This focus should make it a priority for executive and supervisory boards of these organizations.”

Said Cathy O’Neil, mathematician, author of “Weapons of Math Destruction” and CEO of O’Neil Risk Consulting and Algorithmic Auditing, Inc. (ORCAA): “We’re pleased to be working with KPMG in the management of algorithms. The collaboration will allow us to focus on ethics and accountability, and scale up the algorithm review process by combining with KPMG’s experience in technology risk and governance. Across the AI landscape, there is an urgent need to manage bias, fairness and accountability. Collaborations like this are an important way to begin to address these issues.”

KPMG’s AI In Control offering uses a framework to guide organizations along the AI lifecycle, from strategy through execution to evolution. The framework includes methodologies, tools, and recommended controls for an AI program to drive better business outcomes through:

Artificial Intelligence Governance – key features:

· Designs and establishes criteria for building, continuously monitoring, and controlling AI solutions and their performance, without impeding innovation and flexibility.

Artificial Intelligence Assessment – key features:

· Conducts diagnostic reviews of AI solutions and risk assessments of control environments to determine organizational readiness for effective AI control.

· Provides methods and tools to evaluate business-critical algorithms, puts testing controls in place, and oversees design, implementation and operation of AI programs to help address AI’s inherent challenges: integrity, explainability, fairness and agility.

AI In Control for the City of Amsterdam

KPMG in the Netherlands is currently working with the City of Amsterdam to assess a digitized municipal service that will help enhance the public’s confidence in a safe and well-maintained city, and to assist the City in its mission to protect the digital rights of residents.

The project is part of the new digital agenda set by the City of Amsterdam’s Deputy Mayor Touria Meliani and will be announced in the coming weeks. The ethical guidelines were created in collaboration with scientists from the University of Amsterdam. The project serves as an example of the Cities Coalition for Digital Rights, which the City of Amsterdam recently founded with Barcelona and New York City, according to Ger Baron, Chief Technology Officer for the City of Amsterdam.

“Amsterdam is one of the world’s major cities undergoing digital transformation,” said Baron. “We aim to protect the digital rights of our citizens, and we have a responsibility to be inclusive and transparent about the machine learning algorithms we put in place to support our municipal services and programs.”

Amsterdam’s issue management system for public spaces allows residents to easily file service requests online for matters such as trash on the street; the algorithm identifies issue type and which municipal service unit will respond. In the near future, the application will also determine the priority level of issues. Through machine learning, the algorithm’s decision-making should improve over time.
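
To make the routing step concrete, the sketch below shows how a simple text classifier could map a resident’s free-text report to an issue category and a responsible unit. It is a minimal illustration only, assuming a standard scikit-learn pipeline; the categories, unit names, and training examples are hypothetical and this is not the City of Amsterdam’s actual system.

    # Minimal sketch of an issue-routing classifier (hypothetical categories,
    # unit names, and training data; not the City's actual implementation).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled reports: free-text description -> issue category.
    reports = [
        ("garbage bags piled up on the corner", "waste"),
        ("overflowing litter bin in the park", "waste"),
        ("street light has been out for a week", "lighting"),
        ("lamp post flickering at night", "lighting"),
        ("large pothole in the bike lane", "road_maintenance"),
        ("loose paving stones on the sidewalk", "road_maintenance"),
    ]
    texts, labels = zip(*reports)

    # TF-IDF features with a linear classifier: a common baseline for routing
    # free-text service requests to a category.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)

    # Hypothetical mapping from issue category to the unit that responds.
    UNIT_FOR_CATEGORY = {
        "waste": "Waste Collection",
        "lighting": "Public Lighting",
        "road_maintenance": "Road Maintenance",
    }

    new_report = "overflowing garbage bags on the corner"
    category = model.predict([new_report])[0]
    print(category, "->", UNIT_FOR_CATEGORY[category])

In a production system the priority level mentioned above would be an additional prediction target, and the model would be retrained as new resolved reports accumulate.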

The potential risks lie in machine bias. Data such as the geographic location of an issue, for example, could inadvertently lead to learned patterns that the algorithm eventually applies as a rule, possibly drawing the wrong conclusions and producing biased decisions. AI In Control enables an effective evaluation of the City’s risk management framework and monitoring processes, providing continuous control of evolving AI applications.
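
As a minimal illustration of what continuous monitoring for this kind of location-driven bias could look like, the sketch below compares the share of reports flagged as high priority across districts and flags large deviations. The record fields, district names, and threshold are assumptions made for this example; it does not represent KPMG’s framework or the City’s monitoring process.

    # Minimal disparity check across districts. Field names, district names,
    # and the 0.15 threshold are hypothetical assumptions for illustration.
    from collections import defaultdict

    def high_priority_rates(records):
        """Return the fraction of reports marked high priority, per district."""
        totals = defaultdict(int)
        highs = defaultdict(int)
        for rec in records:
            totals[rec["district"]] += 1
            if rec["priority"] == "high":
                highs[rec["district"]] += 1
        return {d: highs[d] / totals[d] for d in totals}

    def flag_disparities(records, threshold=0.15):
        """Flag districts whose high-priority rate deviates far from the mean."""
        rates = high_priority_rates(records)
        mean_rate = sum(rates.values()) / len(rates)
        return {d: r for d, r in rates.items() if abs(r - mean_rate) > threshold}

    # Toy data; in practice these would be the algorithm's own decisions,
    # logged continuously as the model evolves.
    records = [
        {"district": "Centrum", "priority": "high"},
        {"district": "Centrum", "priority": "low"},
        {"district": "Zuidoost", "priority": "low"},
        {"district": "Zuidoost", "priority": "low"},
        {"district": "Zuidoost", "priority": "low"},
        {"district": "Noord", "priority": "high"},
    ]
    print(flag_disparities(records))  # districts with unusually high or low rates

A check like this would run on the algorithm’s logged decisions at regular intervals, so that drift introduced by ongoing learning is surfaced rather than silently baked into the rules.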

Said Martin Sokalski, KPMG Global Emerging Technology Risk network leader and a Principal with KPMG in the US: “The true art of the possible for Artificial Intelligence will be unlocked as soon as there is more trust and transparency. This can be achieved by incorporating foundational AI program imperatives like integrity, explainability, fairness and agility, which are the premise behind our offering.”


Source: KPMG
