
    AI in financial services: is this the new dotcom bubble?

    Artificial intelligence (AI) is all around us – the Amazon Alexa and Apple Siri digital voice assistants, for instance, rely on AI for learning and interpreting instructions; AI enables face ID for unlocking mobile phone home screens; and it is AI that curates social media and video streaming feeds.

    Other AI use-cases are less visible but perhaps even more critical to modern life, such as the fraud detection that operates for every electronic transaction made through the major payment platforms, or AI's role in determining consumer credit scores and analysing insurance claims. We might not even think of many of these things as being AI – as John McCarthy, one of the founding fathers of artificial intelligence, put it, “as soon as it works, no one calls it AI anymore.”

    The recent proliferation of ‘generative’ AI tools – such as ChatGPT for text generation and DALL-E for producing images – has put the spotlight back on artificial intelligence. Many commentators anticipate deep economic disruption across multiple industries – so what will AI mean for financial services?


    Am I hallucinating?

    Generative AI differs from traditional AI in that it is free to be creative. When it comes to text, this gives generative AI a tendency to “hallucinate”, producing outputs that are factually incorrect or even nonsensical. Microsoft Bing’s recently launched AI chat search interface carries the disclaimer, “AI can make mistakes… Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate or inappropriate.” This tendency – common across generative AI models – drastically limits their usefulness in fields where accuracy and precision are paramount, such as financial services.

    A better customer experience

    In the heavily regulated financial services industry, AI models that could give clients incorrect or even illegal advice simply won’t gain the necessary licence to practise. Where generative AI will play a growing role, however, is in customer service, where interactions will become faster and more personalised. AI will even enable users to “program” their own custom investment dashboards and generate interactive reports from nothing more than brief natural-language instructions.

     

    Detecting user fraud

    AI has long been an important part of fraud detection in the financial services industry. Visa’s AI fraud detection service, for example, watches for unusual patterns of spending in up to 65,000 transactions per second, and detects suspected fraud within 300 milliseconds of a transaction taking place. It’s a facility that doesn’t come cheap – in 2020, firms spent more than USD 217 billion on AI for fraud detection and risk assessment.
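
    The pattern-and-anomaly screening described above can be sketched, in highly simplified form, as an outlier test on an account’s transaction history. The snippet below is an illustrative toy, not Visa’s method: it flags amounts that sit far from an account’s typical spend using the median absolute deviation, a statistic that is not distorted by the very outliers it is hunting for.

```python
from statistics import median

def flag_anomalies(amounts, threshold=5.0):
    """Flag transaction amounts far from an account's typical spend.

    Uses the median absolute deviation (MAD), which - unlike the mean
    and standard deviation - is not dragged around by the outliers
    we are trying to detect.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all amounts (nearly) identical: nothing stands out
        return []
    return [a for a in amounts if abs(a - med) / mad > threshold]

# A history of small card payments with one outsized transfer.
history = [12.5, 9.9, 14.2, 11.0, 13.7, 10.4, 12.1, 9800.0]
print(flag_anomalies(history))  # → [9800.0]
```

    Production systems score far richer features (merchant, location, timing) with trained models, but the underlying idea – learn what is normal for an account, then flag deviations within milliseconds – is the same.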

    Increasingly, AI models are able to detect activity that fits patterns of money laundering, while future AI models will be deployed to anticipate as yet uncommitted frauds by probing for ‘loopholes’ or weaknesses in the way that regulators, institutions and investors interact with one another.


    Biggest data wins

    The ability to identify patterns of spending and detect anomalies requires vast datasets – the bigger the dataset, the more “training” an AI model can undergo, and the more reliable its fraud detection is likely to be. This favours incumbent institutions, which have both access to the largest datasets and the investment power to gather and coherently store billions of data points. To take Visa as an example again, at the end of 2021 the firm held 60 petabytes of data – the storage equivalent of 60 million standard-definition movies. This creates a formidable ‘moat’ for the world’s largest payment networks, which already hold decades of payment information.

    Targeted services

    This dataset gives established institutions a further advantage – the ability to build a detailed picture of each user’s habits, likes and lifestyle. With analytics performed by AI, information such as when and where a customer tends to travel, which streaming services they subscribe to, or to whom they regularly transfer money allows firms to offer financial services in a targeted way – for instance, a personalised insurance quote following the purchase of a car, or budgeting and investment ideas for savers. As datasets build and AI becomes more sophisticated, this targeting will become increasingly accurate.

    Vast power needs

    According to Ian Bratt, leader of the Machine Learning Technology group at British semiconductor and software design firm Arm, the energy needed to train new AI models is growing exponentially: “If you look at the amount of energy taken to train a model two years back they were in the range of 27 kilowatt-hours… today, it is more than half a million kilowatt-hours” – an increase of more than 18,000-fold.
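
    As a quick sanity check, the 18,000-fold figure follows directly from the two quoted energy numbers (both approximate):

```python
# Training-energy figures quoted above, in kilowatt-hours (both approximate).
energy_two_years_ago = 27
energy_today = 500_000  # "more than half a million kilowatt-hours"

fold_increase = energy_today / energy_two_years_ago
print(f"roughly {fold_increase:,.0f}-fold")  # → roughly 18,519-fold
```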

    Though unlikely to be a limiting factor in the short term, this raises the prospect of AI progress stalling as the energy required by AI (and increasingly vast data storage) approaches our capacity for energy production. Energy efficiency is likely to become increasingly important as firms’ reliance on AI grows.


    Detecting internal fraud

    At some of the biggest financial institutions, AI’s fraud detection guns are being turned inwards – the same pattern and anomaly detection that analyses customer behaviour is also being used to monitor employees. From following online and even physical activity at work, through to identifying stress or unusual speech patterns during conversations on company telephones, AI advances are making it harder for employees to take fraudulent advantage of their access to customer data.

     

    Garbage in, garbage out

    The output from AI will only ever be as good as the input data – where input data is unreliable, financial institutions may be exposed to unexpected risks. For example, in the US, the datasets held by consumer credit rating agencies are largely composed of the transaction histories of rich, white populations – the result is lower credit ratings, on average, for other demographic groups. This has drawn criticism from politicians and represents missed opportunities for mortgage lenders. IBM estimates that the cost of poor input data across all sectors in the US alone amounts to USD 1.3 trillion. At worst, AI decisions based on unrepresentative or poor-quality data could lead to fines and reputational damage.

     

    Cybersecurity

    AI represents an ever-evolving cybersecurity threat to firms across all sectors, and to financial firms in particular. So-called “phishing” attacks (where consumers are tricked into handing over personal information that is then used to breach security) become easier with AI’s ability to rapidly create profiles and gather data at scale, including via voice and image generation. And with new AI-powered hacking techniques continuously being developed, financial institutions must both invest in defensive cybersecurity and put in place contingency plans for when their data or AI models are compromised.

    Unfortunately, 20% of listed companies across all industries fail to employ even basic cyber protection, meaning they carry “known exploited vulnerabilities” – and as AI-driven attacks become increasingly sophisticated, these vulnerabilities will be exploited ever more quickly. According to analysis by consultancy firm McKinsey, this fast-growing threat points to a cybersecurity market opportunity worth USD 2 trillion1.

    A winter of discontent?

    The total AI market, including services and hardware, is expected to reach a value of USD 900 billion by 2026, a compound annual growth rate (CAGR) of 19%. Despite this impressive growth, investors with only a short-term horizon may find themselves disappointed.
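
    For context, a CAGR compounds like interest: value × (1 + rate)^years. The sketch below works backwards from the quoted 2026 figure under a hypothetical five-year window (the source does not state the forecast’s base year), purely to illustrate the arithmetic:

```python
def cagr_project(value, rate, years):
    """Project a value forward at a constant compound annual growth rate."""
    return value * (1 + rate) ** years

# Hypothetical: if USD 900bn in 2026 caps a five-year run at 19% CAGR,
# the implied starting market is about USD 377bn.
implied_start = 900 / (1 + 0.19) ** 5
print(round(implied_start))  # → 377
```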

    AI hype is not new – since the field’s beginnings in the 1950s there have been several cycles of high expectation followed by so-called AI “winters”, when progress slowed and anticipated breakthroughs failed to materialise. The latest excitement over generative AI has driven company valuations to rival those of the dotcom bubble – setting the stage for another AI winter, in which inflated expectations are likely to disappoint.

    Over the long term, however, the AI revolution is very real. For investors, the key is to look beyond the marketing hype, analyse how AI is truly affecting company fundamentals, and seek out firms adapting to both the risks and opportunities AI is creating. AI is unlikely to turn the financial services industry on its head overnight – instead, the winning companies will be those that invest in AI for sustained innovation in efficiency, security and customer service.


     

    1 New survey reveals $2 trillion market opportunity for cybersecurity technology and service providers | McKinsey

    Important information

    This document is issued by Bank Lombard Odier & Co Ltd or an entity of the Group (hereinafter “Lombard Odier”). It is not intended for distribution, publication, or use in any jurisdiction where such distribution, publication, or use would be unlawful, nor is it aimed at any person or entity to whom it would be unlawful to address such a document. This document was not prepared by the Financial Research Department of Lombard Odier.


     
