The recent report Ghosts in the Machine: Artificial Intelligence, Risks and Regulation in Financial Markets is based on a survey of senior executives from financial institutions and FinTech companies. One key finding was:
A total of 44% of all survey respondents, and 56% of survey respondents with a legal background, felt that their businesses (not third-party businesses) did not fully understand the legal risks associated with AI.
This is a remarkable finding as it appears to clash with financial institutions' ("FIs") approach to legal risk. FIs have a low appetite for conducting any business or activity that carries high legal risk, and both numbers, 44% and 56%, are far outside acceptable risk levels.
Legal risk appetite in FIs is at an all-time low. The financial crisis, followed by the wave of civil litigation and regulatory enforcement action, has resulted in the primacy of effective risk mitigation and certainty over the uncertainty of unknown legal risk, subject to the caveat that legal risk can only ever be managed, never eliminated. That said, FIs have to innovate and radically change themselves on multiple dimensions if they are to flourish in the new world order of ultra-competitive banking services provided by the new digital and FinTech market entrants. Consequently, there is a tension between commercially imperative evolution and the comfort of legal certainty; the development and implementation of AI needs to find the right balance between the two. The survey suggests that the current balance is weighted too heavily in favour of adopting AI even though the legal risks associated with AI are not yet fully understood. If this is the case, it could be seen as a relatively high-risk approach, and the balance should be reset to one where legal risks are identified and mitigated before unexpected problems arise.
To-Dos for FIs
FIs need to fully integrate their legal department experts into the development of AI so that the legal risks inherent in the emerging applications can be identified, measured and mitigated. In the past, legal teams were often involved too late in the process, and the opportunity to input during the design phase was missed. This leads to wasted time, effort and cost, as well as slowing speed to market. The legal experts supporting the AI teams must approach the subject thinking both horizontally and vertically to ensure all risks have been identified.
Three questions legal teams should ask themselves are:
- Is the legal team sufficiently closely involved in the development of AI?
- What is the process for identifying the possible legal risks? Is it bottom-up (from potential users) as well as top-down? Does it look beyond the obvious and across the full breadth of the commercial application, scanning the whole horizon? A narrow, siloed approach will be too restrictive.
- What is the optimum way to measure and then mitigate the identified risks so that the commercial benefits can be achieved within the risk appetite of the firm?
The message from the survey is clear: the legal risks of AI are not yet sufficiently understood by organisations. To fix this deficit, FIs' legal teams need to engage proactively with the AI developers and business planners in order to identify legal risks at an early stage and implement effective risk mitigation strategies. Otherwise, avoidable legal risks will be run, leading to liability issues.