Regulators have a message for your AI strategy

Sven Muehlenbrock


With two recent publications, regulatory bodies at both the Luxembourg and EU levels have given AI developers and AI users ample material to digest—and act upon.

In December 2018, two papers about AI were published. One, by the European Commission’s High-Level Expert Group on Artificial Intelligence, is a draft of ethical guidelines for trustworthy AI; by February 2019, more than 500 comments had been submitted on these guidelines, reflecting the AI community’s enormous interest in them. The other, by the CSSF (the Commission de Surveillance du Secteur Financier, Luxembourg’s financial regulator), is a white paper on the opportunities, risks, and recommendations surrounding AI.


Although the papers take slightly different directions—the CSSF paper focuses on AI use cases, their inherent risks, and opportunities, while the European Commission’s addresses the trustworthiness of AI systems—they share a few common denominators. They both:

  • provide the foundation for an ongoing dialogue between all AI stakeholders (including regulators, AI developers, and deploying organizations)
  • are not legally binding (per se)
  • provide practical guidance and point towards focus areas in which binding legislation is already implemented or is likely to be implemented in the not-so-distant future

In this article, I want to distill four themes inherent in these two papers. These themes may not always be obvious to organizations, even those already engaging with AI in the financial sector. For any organization using or eyeing technologies like intelligent process automation, chatbots, robo-advisors, AI-enabled fraud detection, or credit scoring—and that probably includes most financial organizations—these two papers are worth paying attention to.

1. Data privacy: how GDPR affects the scope of your AI strategy

One of the key drivers of AI’s exponential growth over the past decade has been the increasing availability of data, especially data pertaining to individuals. It is therefore understandable that both the CSSF and the European Commission, in their papers, highlight the application of Article 6 of the General Data Protection Regulation (GDPR). Under this article, the processing of individuals’ data is permissible only if it has a legal basis—it would be lawful, for example, for the purposes of anti-money laundering (AML) procedures. In all instances, organizations need to draft their business cases with this legal criterion in mind from the beginning, as losing a substantial part of the input data at a later stage could compromise the viability of the entire AI project.

2. Accessibility: how to open AI to the entire society

The purpose and mission of an AI system should be transparent to all stakeholders—including the end-users. When considering the adoption of client-facing AI solutions such as virtual assistants, for example, it is paramount to consider how, and in which service areas, customers are willing to deal with a machine. And where they are, to what extent do they need to be familiarized with this new environment?

On this topic, the European Commission suggests a “design for all” proposition, ensuring that all citizens can access and use the application regardless of age, disability, or social status. Notably, the legislative roots of this proposition go back to the United Nations Convention on the Rights of Persons with Disabilities. Setting “design for all” as a strategic priority of the AI business model will likely benefit a firm’s relationship with both clients and regulators.

3. Auditability: how to design for transparency

One aspect receiving heightened attention from regulators is the auditability of AI services, which starts with proper documentation. Regulators have rigorously applied the mantra “what isn’t documented isn’t done” to many seemingly straightforward topics (like risk appetite statements), and now they are applying it to AI, too—but the nature of AI models adds another layer of complexity here. To ensure traceability, not only does an AI model need to be documented in the traditional sense, but so do the decisions it makes, and how it makes them, throughout its lifetime.

An important side note in this context: to ensure seamless audit and verification workflows, regulatory bodies themselves are also called upon to undergo a “digital transformation” and implement the toolsets needed to evaluate this documentation. Ultimately, early investments in an AI tool’s auditability are likely to pay off beyond the audit itself: standing the test of internal and external audits may, according to the European Commission, also contribute to the tool’s acceptance by end-users.

4. Governance: how to integrate human oversight

One way to make AI systems more accountable is to set up proper (human) governance frameworks. In this context, the CSSF recommends involving an entire team in the analysis of AI models, including experts from the compliance, risk management, and information security departments, as well as the Data Protection Officer (DPO). The rationale for such a diverse team lies, according to the CSSF, in the complexity of the risk and benefit analysis of such projects: the factors to be considered require subject-matter expertise in areas like data scope, potential data biases, and data security, as well as the mapping of the resulting provisions to the business context, including considerations such as efficiency gains. Adding to this, the European Commission suggests appointing a person responsible for AI ethics—or, in some cases, an entire panel or board—and notes that the more autonomous an AI system is, the more extensive the testing and the stricter the governance it requires. Implementing a sophisticated AI system may thus ultimately mean more, rather than fewer, personnel.

Paying attention now will pay off

While both of these papers have a guiding, rather than legally binding, character, many of their themes and suggestions relate to legislation that already exists in other contexts: (worst-case) scenario analyses, accountability mechanisms, governance structures (human oversight), and the validation of models and workflows (think, for example, of the provisions for operational risk management in the ICAAP context). Model validation, in particular, is a case where a familiar concept needs to be rethought for an AI context: because AI systems are non-deterministic and operate on non-closed algorithms—intelligence manifests during program execution through ongoing “learning” and recalibration of the model—they require ongoing, rather than periodic, validation to ensure that the system behaves as intended throughout its entire life cycle.

In any case, accepting the regulators’ invitation to join the discussion will likely pay off at a later stage. Partly, this means investigating how familiar concepts can be adapted to an AI context; partly, it means helping to shape the regulatory framework for less familiar items. As these issues are likely to be addressed by binding rules in the future, incorporating them into strategic agendas now promises a natural advantage later on, when AI models grow more sophisticated and guidelines turn into concrete legislation.

Following its “Ethics Guidelines for Trustworthy AI,” the European Commission plans to publish a more binding AI-related paper, “Policy & Investment Recommendations,” later this year. There, it envisages discussing which existing regulation will need revision or adaptation, and what new regulation should be introduced. It is not too late to have a meaningful and lasting impact on this discussion.

To assess the trustworthiness of your AI solution, we recommend considering the assessment list provided by the European Commission in the above-cited ethics guidelines (p. 24).
