In 2022, the emergence of generative artificial intelligence (AI), capable of producing human-like text, analyzing vast amounts of data and creating new content, caught the public’s imagination. It has turbocharged interest in potential AI use cases while heightening concerns over the inherent risks.
Over recent years, developments in AI and machine-learning technology have driven gradual AI adoption within financial services. Fraud management, for example, is one of the most promising AI use cases in the financial industry. AI is also likely to be applied more widely to impact investing, in which fund managers seek social or environmental benefits from their investments alongside financial returns. Developing algorithms that identify portfolio investment targets with high-potential carbon-reduction technologies is another prime area for AI use. Although AI and Gen AI will likely affect every aspect of financial services within the next five years, the sector is currently a long way from realizing the potential benefits to companies and to consumers.
There is, of course, considerable concern over AI: how it works and the potential for unfairness or abuse if it is not properly used. Limited explainability remains a major legal risk for institutions, one that is compounded by weak AI governance. In terms of accountability and governance, it is important to establish a robust risk and control framework for AI that strikes the right balance, setting clear parameters for its use that reflect a proper understanding of AI risks and their potential consequences. Other important concerns include fairness (e.g., data bias), respect for intellectual property rights and the need to provide for contestability and redress. In this regard, businesses in the financial sector should consider appointing chief AI officers responsible for establishing effective governance of this new technology.
In this light, it is not surprising that many governments are minded to regulate AI, or at least to put appropriate governance in place, and different regions are taking varying approaches. Financial regulators in the US are starting to scrutinize the financial sector’s use of AI and the representations firms make to the public about it. The EU’s AI Act could become the global standard for AI regulation, as it seeks to set out a comprehensive approach to protecting individuals from potential AI harms.
It is too early to tell who the winners from AI will be, but there is an expectation of significant efficiency gains and cost savings. In theory, customers and consumers should benefit from better-tailored services. Investment in the technology is vital, as is rethinking how a business operates so that AI can be put to best use. It remains to be seen whether the winners will be the larger financial institutions, which have the resources to invest and greater access to data, or smaller, more agile fintech start-ups. On this point, what is problematic for AI-enabled competition and innovation is the dominant access that larger financial institutions are likely to enjoy to the data that feeds models and drives their improvement. It is likely that only a few key players will have the resources necessary to collect, analyze and gainfully use this data. This could reduce innovation in the financial industry, thereby lessening the impact of AI, contrary to current expectations.
To learn more, listen to the latest episode of our global financial institutions podcast, FInsight, in which Bradford Newman, a partner in Baker McKenzie’s Palo Alto office and one of the leaders of the Firm’s North America AI practice, expands on these AI themes.