Debate Intensifies on Algorithm Accountability
The accelerating deployment of AI is prompting closer scrutiny of the underlying algorithms that many observers fear could embed bias, or worse, into a growing number of everyday applications.
Concerns about the rise of automation via the “Algorithmic Economy” prompted a Washington-based technology think tank to initiate further discussion of the need to hold the users rather than the developers of AI applications accountable for the unintended consequences of those algorithms.
The question put to a panel of analysts and Microsoft’s director of consumer affairs was: “How can policy makers hold organizations accountable for how they use algorithms while also accelerating adoption of technologies like AI?” said Daniel Castro, vice president of the Information Technology and Innovation Foundation.
How government chooses to regulate algorithms “will have a significant impact on the economy,” added Castro, who also directs the Center for Data Innovation, which sponsored this week’s forum. The group promotes a “light-touch” approach to technology regulation.
The group also issued a report this week that lays out a framework for algorithmic accountability. “Policymakers should carefully consider the benefits algorithms can generate against the potential for these decisions to go awry and cause harm,” the report concludes.
The group’s proposed framework “provides users of algorithms and regulators alike clear rules that can simultaneously maximize the benefits of algorithmic decision-making and narrowly target and prevent harmful outcomes,” the authors added.
“Everybody recognizes the benefits of artificial intelligence,” said Frank Torres, Microsoft’s director of consumer affairs. “At the same time, they also recognize the potential harm and the need to do something about that.”
Torres said Microsoft (NASDAQ: MSFT) has established an “ethics review board” to consider “sensitive uses, looking at safety-critical applications of AI [and] what should the parameters around those [applications] be throughout the development” of algorithms. Among the parameters, he said, are the data sets used to train machine learning algorithms.
“If I get denied a loan, if I get denied a job, and [the decision] is based upon an algorithm, how do we translate even some of the existing laws to protect against that?” Torres added.
Other panelists stressed the need to focus rules on how algorithms are applied. “If we write rules that say, ‘If you harm consumers,’ you’re going to have a challenge with legal compliance,” said Neil Chilson, a former Federal Trade Commission official and currently a researcher with the Charles Koch Institute. “If we aim at the ends, that works a little better.”
Chilson also endorsed rules that focus on “operators” rather than algorithm developers. “We should focus on the people who have the wrong incentives: Often that will be the person using the algorithm, not the person who has written the algorithm.”
The debate over algorithmic accountability parallels similar efforts to develop guidelines for the appropriate use of AI. Earlier studies on the rise of machine learning have called for developing a U.S. strategy that preserves the American technology lead in AI and machine learning while initiating the process of managing future risks.