Inside Advanced Scale Challenges|Friday, November 16, 2018

GDPR, AI, Blockchain: Making Legal Fiction a Reality 


While some people worry about AI automating away high-paying jobs, developer bias baked into a new AI project called Claudette has ended up creating new work for lawyers.

Claudette is an AI tool for analyzing the privacy policies and terms and conditions that consumers must agree to before using many online services. Claudette reads end-user license agreements (EULAs) and calls out “legal weasel words.” In one of its earliest iterations, Claudette was trained to identify policies that violate the EU's new General Data Protection Regulation (GDPR), which lays out clear rules for what is and is not permissible under European privacy law, especially as it applies to disclosure of, and consent to use, personal digital data.

Claudette was "trained" by having several lawyers annotate 50 sample usage policies, highlighting language that violated the GDPR guidelines on clarity. Developers then fed those annotated policies to Claudette's AI engine. After a few training cycles, Claudette could identify troublesome policy language with what its developers say is 93 percent accuracy.
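The training loop described above can be sketched as a small supervised text classifier. Everything below is illustrative: the clauses, the labels, and the bag-of-words perceptron are stand-ins for whatever annotation format and learning algorithm Claudette's developers actually used.

```python
# Minimal sketch of annotation-driven training, assuming a supervised
# text-classification setup. All data and model choices are hypothetical.
from collections import defaultdict

def tokenize(clause):
    return clause.lower().split()

def train(examples, epochs=10):
    """Train a simple perceptron over bag-of-words features.
    examples: list of (clause_text, label), where label is 1 for
    language the annotating lawyers flagged as vague, 0 otherwise."""
    weights = defaultdict(float)
    bias = 0.0
    for _ in range(epochs):
        for clause, label in examples:
            score = bias + sum(weights[t] for t in tokenize(clause))
            pred = 1 if score > 0 else 0
            if pred != label:
                delta = label - pred  # +1 or -1
                for t in tokenize(clause):
                    weights[t] += delta
                bias += delta
    return weights, bias

def predict(model, clause):
    """Return 1 if the clause looks like flagged 'weasel' language."""
    weights, bias = model
    return 1 if bias + sum(weights[t] for t in tokenize(clause)) > 0 else 0

# Hypothetical annotated clauses standing in for the 50 sample policies.
annotated = [
    ("we may share your data with partners as appropriate", 1),
    ("data may be used for other purposes", 1),
    ("we store your email address to send order receipts", 0),
    ("you can delete your account at any time in settings", 0),
]
model = train(annotated)
```

The real system would operate on full policy documents and far richer features, but the shape is the same: lawyer annotations in, a clause-level classifier out.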

With growing reliance upon AI technology to perform complex tasks of this nature, concerns about accuracy — and verifiability — of bot activity have reached a boiling point.

Those skeptical of AI’s ability to identify and analyze activity without error will certainly argue that it has no role to play in compliance or legal analysis. The reality, however, is that AI tools such as Claudette could offer extraordinary value to legal disputes, taking a great deal of guesswork and argument out of them, but only if we can trust them.

Claudette's developers did not earn that trust; they merely trained Claudette to agree with their own lawyers on what constitutes a GDPR violation in a policy statement. We know this because Claudette was loosed upon GDPR-revised privacy statements from 14 major companies — AirBnB, Amazon, Apple, Booking.com, Epic Games, Facebook/Instagram, Google, Microsoft, Netflix, Skyscanner, Steam, Twitter, Uber, and WhatsApp — and found that over a third of the language in the policies was non-compliant.

In the opinion of Claudette's trainers, the policy statements were vague and open to interpretation. The GDPR is quite specific in prohibiting such vagueness: users must know exactly how their data will be used and how they can opt out of specific uses.

Naturally, the highly paid lawyers for those 14 tech giants disagree that their policies are in violation of the GDPR, and would argue so in court. Claudette wasn't trained on their opinions, but on those of its developers' preferred lawyers, who apparently have a more conservative view of GDPR strictures.

The divergence of opinion, the openness to interpretation that sees lawyers on each side of a dispute spinning their own legal arguments, is of course the foundation of all legal action. It is, however, also a perfect demonstration of why the seamless integration of AI and blockchain would be truly revolutionary. Trust and identity problems continue to permeate the mainstream conversation around the technology, but we can address these issues, along with other gaps in the bot certification and registration infrastructure, using blockchain.

Blockchain can bring a degree of trust and security to the AI landscape, allowing for a universal registry of bot identity, certification, and, most importantly, training.

Trusted virtual, aggregate agents are the future of AI — a solid sourcing and accountability mechanism for training data, based on the blockchain, will be vital to its success. Blockchain-backed reputation frameworks that emerge when people “stake” currency on their claims have already demonstrated the ability to reach a provably valid consensus and help people trust legacy projects, such as Ethereum. In such systems, actors with better reputations, ones that improve with each correct assertion, serve as reasonable and trustworthy benchmarks that move the system forward. When this level of verifiable trust is applied to AI, we reach a level of accuracy that even the lawyers of 14 of the biggest companies in Europe cannot dispute.
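A toy model of such a stake-weighted reputation scheme might look like the following. The `Actor`, `weighted_consensus`, and `settle` names, the reward and penalty values, and the whole mechanism are hypothetical simplifications, not any real blockchain protocol.

```python
# Toy sketch of a stake-weighted reputation scheme: actors back claims
# with their reputation, and correct claims increase it over time.
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    reputation: float = 1.0

def weighted_consensus(votes):
    """votes: list of (actor, claim). Returns the claim backed by the
    most aggregate reputation."""
    tally = {}
    for actor, claim in votes:
        tally[claim] = tally.get(claim, 0.0) + actor.reputation
    return max(tally, key=tally.get)

def settle(votes, outcome, reward=0.5, penalty=0.5):
    """Actors who asserted the accepted outcome gain reputation;
    others lose some, so future consensus weights them less."""
    for actor, claim in votes:
        if claim == outcome:
            actor.reputation += reward
        else:
            actor.reputation = max(0.0, actor.reputation - penalty)
```

On a real chain the votes, stakes, and reputation updates would be recorded in transactions rather than in-memory objects, but the incentive structure is the same: each correct assertion compounds an actor's weight in future decisions.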

Imagine if Claudette were trained not by lawyers but by a statistically valid random sample of "reasonable persons" and other blockchain-verified intelligent legal assistants, such that Claudette could serve as a virtual reasonable person for the court.

For example, rather than asking lawyers whether Facebook's privacy policy clearly warned users about Facebook’s data intentions, Claudette's developers could invite a sampling of regular people to read several good and bad privacy policies and then ask them what those policies do and don't allow.

Claudette's "trainers" could also ask those same reasonable people to highlight the sections of a policy that empower a company to take certain actions or prohibit it from doing so. If a person thinks a photo-sharing site can't use their photos without permission, they would highlight where the site's policy says so, and so forth.
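Aggregating those highlights into consensus labels could be as simple as a majority vote across the panel. This is a hypothetical sketch: the `aggregate` function and its threshold are assumptions for illustration, not part of the Claudette project.

```python
# Hypothetical sketch: combining clause highlights from a panel of
# "reasonable persons" into per-clause majority labels.
from collections import Counter

def aggregate(annotations, threshold=0.5):
    """annotations: dict mapping annotator name -> set of clause ids
    that annotator highlighted as granting the company a given power.
    Returns the clause ids highlighted by more than `threshold` of
    the panel."""
    counts = Counter()
    for highlighted in annotations.values():
        counts.update(highlighted)
    panel_size = len(annotations)
    return {c for c, n in counts.items() if n / panel_size > threshold}
```

A production system would likely weight annotators by a verified reputation score (as in the staking discussion above) rather than counting every vote equally.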

If an AI like Claudette were trained on these annotations from a valid sampling of provably unbiased actors, the court would have a consistent, impartial metric by which to evaluate the opinion of a "reasonable person."

If Claudette could be shown to clearly understand what, for example, Google could do with your browsing data based on a read of the Google privacy policy, then the policy would be GDPR-compliant. If Claudette misunderstood what the policy allowed or forbade, the policy would be non-compliant. With a blockchain-based registry of intelligent legal assistants, all with verified training data, it wouldn't be up to lawyers to conjure an imaginary reasonable person; Claudette could serve as an aggregate, virtual reasonable person for the purposes of the court.

If AI is going to scale up a virtual army of any type of individual, the world needs verifiably reasonable artificial and traditional actors more than it needs more lawyers.

Rob May is CEO of BotChain.
