GDPR Seen Slowing AI Innovation
Pending European data privacy rules that also address “automated decision-making” are increasingly seen as having a broad impact on enterprise deployment of AI applications, prompting a U.S.-based group to predict the rules will stymie development within the European Union.
The Center for Data Innovation warned that provisions targeting AI technology within the EU General Data Protection Regulation (GDPR) would do little to protect consumers while at the same time slowing AI development across the continent. “By both indirectly limiting how the personal data of Europeans gets used and raising the legal risks for companies active in AI, the GDPR will negatively impact the development and use of AI by European companies,” the Washington-based group warned in a report.
Among the concerns over EU data rules that enter into force on May 25 are recently adopted AI guidelines establishing the right of a “data subject” to “not be subjected to a decision based solely on automated processing, including profiling….”
“The guidelines are likely to profoundly impact AI-based business models,” adds a recent blog post on the upcoming data rules by Norton Rose Fulbright, a law firm specializing in the legal implications of emerging technologies such as AI.
Among the biggest concerns of GDPR critics is a provision requiring companies to manually review decisions made by algorithms, a step they warn would raise labor costs. That in turn would create an economic disincentive for using AI as a way to automate routine functions, the data group said.
AI ethicists note that deep learning and other AI platforms are trained using a disproportionate amount of data produced by humans. The biases inherent in these data sets can be baked into machine intelligence systems, “thereby perpetuating and even amplifying existing societal biases,” according to a recent AI ethics study.
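The amplification effect the ethicists describe can be seen in even a trivial model. The sketch below uses entirely hypothetical numbers: a toy classifier is “trained” on historical decisions in which one group was approved 70 percent of the time and another only 30 percent, and because it simply learns the majority outcome per group, it turns that 70/30 disparity into an absolute 100/0 split.

```python
# Hypothetical illustration: a model trained on biased historical
# decisions can amplify, not just reproduce, the disparity it learned.
from collections import defaultdict

# Simulated history: group "A" was approved 70% of the time, "B" only 30%.
history = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 30 + [("B", 0)] * 70

# "Training": tally outcomes seen for each group.
counts = defaultdict(lambda: [0, 0])
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    # Predict the majority outcome observed for this group.
    zeros, ones = counts[group]
    return 1 if ones > zeros else 0

# The learned rule hardens a 70%/30% disparity into 100%/0%.
print(predict("A"), predict("B"))  # -> 1 0
```

Real deep-learning systems are far more complex, but the underlying mechanism is the same: whatever skew exists in the training data becomes part of the decision rule.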
While hyper-scalers and AI developers in the U.S. also must comply with the GDPR, “the greatest impact will be on European companies because, in most cases, European data will be more important to them as they seek to use or develop AI than to companies whose presence is stronger in foreign markets,” the data group noted. “Because of these restrictions, firms in the EU developing or using AI will be at a competitive disadvantage compared with their competitors in North America and Asia.”
Nevertheless, several European nations have released AI blueprints. For example, the U.K. unveiled an AI strategy last October that calls for developing an indigenous AI sector.
A recent study advocating a U.S. strategy for developing machine intelligence also noted the potential barriers to development that include GDPR and other data privacy efforts. The study released by the Center for Strategic and International Studies warned that the shift to “data localization” will require “balancing legitimate concerns around privacy and consumer protection both in the United States and abroad with the need for an open, flexible data ecosystem that supports innovation and experimentation in AI.”
The data innovation group goes further, arguing that European regulators must revisit GDPR provisions on AI development in order to avoid stifling it. Among the group's recommendations are eliminating the right to human review of algorithmic decisions and amending the new data rules to focus solely on consumer protection. The result, the group asserted, would be improved regulation of AI development in Europe without “needlessly limiting the use of data at the expense of data innovation.”