
Artificial Intelligence And Financial Markets – With Great Promise Come Great Risks

Financial services, traditionally the largest spender on technology, has been quick to embrace AI and its latest iteration, generative AI

A few days ago, the British government hosted the Artificial Intelligence Safety Summit, bringing together governments, academia and industry. Elon Musk and Sam Altman (chief executive of OpenAI, the maker of ChatGPT) were present, underscoring the importance of the meeting, essentially a brainstorming session to find common ground on regulating this frontier technology. Symbolically, the summit took place at Bletchley Park, the storied home of computing, with the halo of Alan Turing’s genius around it.


Financial services, traditionally the largest spender on technology, has been quick to embrace AI and its latest iteration, generative AI (GenAI). According to the OECD, firms are expected to spend over $100 billion on AI in 2024. From regulatory compliance to risk underwriting, from customer service via bots to managing money, AI has numerous exciting possibilities. But with great promise also come great risks. These risks are non-linear in nature, which brings a sobering realization: human beings are unlikely to become redundant anytime soon.

Embedded Bias

In a paper written in 1996, Batya Friedman and Helen Nissenbaum described embedded bias in terms of computer systems that systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others. Bias can arise from incomplete data, or from data inputs that carry inherent existing biases. Take, for example, a fully automated, AI-driven credit underwriting system, one where the algorithm takes the place of human underwriters. If the design of the algorithm and/or the data fed into it has embedded biases against, say, women borrowers, the underwriting will tend to keep more women applicants out. Contrast this with a system with human intervention, where the biases of one credit underwriter affect only a small set of applicants, while other underwriters with no such biases rectify the bias to some extent.
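To make the mechanism concrete, here is a minimal sketch in Python, with entirely made-up data and a standard scikit-learn classifier; it is not drawn from any real underwriting system. A model trained on historical approvals that penalised women reproduces that penalty even for applicants who are identical on the genuinely relevant feature.

```python
# A minimal, hypothetical sketch of embedded bias: the training labels
# (past human decisions) carry a penalty against women, and the automated
# underwriter faithfully learns it. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Applicants: income (the genuinely predictive feature) and gender (0 = male, 1 = female).
income = rng.normal(50, 15, n)
gender = rng.integers(0, 2, n)

# Historical approvals: past underwriters approved on income,
# but systematically penalised women -- the embedded bias.
approved = (income - 8 * gender + rng.normal(0, 5, n)) > 45

# The "fully automated" model is trained on those biased historical decisions.
model = LogisticRegression(max_iter=1000).fit(np.column_stack([income, gender]), approved)

# Two applicants with identical incomes, differing only in gender:
test = np.array([[50, 0], [50, 1]])
print(model.predict_proba(test)[:, 1])  # the female applicant scores visibly lower
```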


As AI gets more ubiquitous, other digital tools, such as search engine optimisation (SEO) engines, will be geared to influence how AI algorithms are “trained”. Such non-random, potentially unbalanced training will result in the algorithm embedding biases that get more exacerbated over time.
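A toy simulation of how such a feedback loop compounds, with purely illustrative parameters (the 1.3x ranking advantage is an assumption, not a measured figure): content optimised to rank well gets over-sampled into each training round, which in turn makes it rank better in the next.

```python
# A toy model of training-data capture: SEO-boosted content is over-sampled
# into each training round, and the model's resulting skew makes that content
# rank even better next time. Parameters are illustrative assumptions.
boost = 1.3   # assumed ranking advantage of SEO-optimised content
share = 0.5   # optimised content starts at half the training corpus

for generation in range(1, 7):
    # Boosted content crowds out the rest of the next training set.
    share = (share * boost) / (share * boost + (1 - share))
    print(f"generation {generation}: optimised share of training data = {share:.2f}")
# The skew compounds with every retraining round -- the "exacerbation over time"
# described above.
```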

In brass-tacks terms, a chatbot trained, say, to underplay the risk of equities could end up presenting a lower-than-warranted illustration of the downside risk of equities to an inquiring investor. Regulators, however, won’t excuse the money manager whose chatbot did so from the fiduciary responsibility that follows.

In short, without periodic human assessment, judgement and intervention, algorithms are likely to generate sub-optimal, inefficient and, in extreme cases, societally dangerous outcomes.

LTCM – Embedded Biases Leading to Catastrophic Outcomes

Long-Term Capital Management (LTCM) remains the storied case of a hedge-fund blow-up, one so severe that it required a mini systemic bailout orchestrated by the US Fed. Founded by smart bankers, with Nobel Prize winners on its board, LTCM went from star to bankrupt entity in a matter of weeks. The primary reason for the blow-up was leverage: the fund was leveraged roughly 30:1 when the run on its equity capital started. Part of the reason the smart traders at LTCM remained sanguine about such high leverage was their faith in their algorithms; the models, based on past data, showed the likely paths of, and correlations between, various asset markets. What the algorithms didn’t (or couldn’t) predict was a non-linear event like a default on local-currency sovereign debt, a once-in-many-centuries occurrence that happened in 1998 with Russia. Black-swan events like that are hard for anyone, man or machine, to predict, but what an “unbiased” algorithm would have done is place greater weight on worst-case scenarios when generating trading signals. In a world where all investment managers use AI-led investment models, the risks will not be limited to one hedge fund but can quickly become systemic. If one LTCM could nearly cause a melt-down, imagine what embedded biases in multiple large institutions can do.
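The arithmetic of that leverage is worth spelling out. The stylised sketch below (round numbers, not LTCM’s actual book) shows how little room 30:1 leverage leaves for an event the models treated as near-impossible.

```python
# A stylised illustration of 30:1 leverage: small asset-value declines
# translate into near-total equity losses. Numbers are illustrative only.
equity = 1.0
leverage = 30                     # units of assets funded per unit of equity
assets = equity * leverage

for asset_drop in (0.01, 0.02, 0.033):
    loss = assets * asset_drop
    remaining = max(equity - loss, 0)
    print(f"assets fall {asset_drop:.1%} -> equity loss {loss / equity:.0%}, "
          f"equity left {remaining:.2f}")
# At 30:1, a mere ~3.3% decline in asset values wipes out the entire equity
# base -- which is why a run on LTCM's capital unwound it in weeks.
```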

Synthetic Data Issues

Given privacy issues with most financial services data, using real, live data to train AI algorithms is often impractical (and at times impossible, due to regulatory injunctions). Synthetic data is a viable alternative being increasingly adopted. Synthetic data is basically a machine-generated data-set with a statistical distribution profile that mimics the real data-set being studied. However, the process of generating synthetic data, itself a human-engineered algorithm, is prone to producing a non-random data-set. In simple terms, in the real world, human beings make mistakes, and different people bring different biases to the problems facing them. Real, live data-sets capture real emotions, biases and mistakes. A synthetic data-set, engineered by a few, could leave out the mistakes and biases of the larger population. This could make the system blind to them, with errors magnifying over time.
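A small illustration of that blindness, using toy distributions and assumed parameters (none of this reflects any real financial data-set): a synthetic generator fitted to the real data’s mean and variance reproduces the bulk of the distribution while engineering away its fat tail.

```python
# A toy demonstration: synthetic data matched to summary statistics can
# understate the rare, extreme behaviour present in real data.
# All distributions and thresholds are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)

# "Real" data: mostly routine observations, plus rare, large human
# errors and outliers -- the fat tail.
routine = rng.normal(100, 10, 99_000)
mistakes = rng.normal(100, 120, 1_000)
real = np.concatenate([routine, mistakes])

# Synthetic data from a simple generator: a single normal distribution
# fitted to the real data's mean and standard deviation.
synthetic = rng.normal(real.mean(), real.std(), real.size)

for name, data in (("real", real), ("synthetic", synthetic)):
    tail = np.mean(np.abs(data - 100) > 60)   # share of extreme observations
    print(f"{name}: share beyond +/-60 = {tail:.4%}")
# The synthetic set matches mean and variance, yet its extreme tail is tens of
# times thinner -- a model trained on it is blind to exactly the rare events
# that matter most in finance.
```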


See, Touch, Feel – Critical to Financial Services

To the connoisseur, AI-generated music cannot replicate Lata Mangeshkar’s voice. Financial services works at more mundane levels of human endeavour, but human intervention remains key there too. The enormous potential of AI is best leveraged with sustained, trained intervention by human beings to nudge the systems the right way.

The author is the Chief Investment Officer, ASK Private Wealth. The views and opinions expressed in this article are personal.
