SFS: Tackling the evolving AI journey
05 May 2026 US
Image: jStock/stock.adobe.com
Opening the fourth annual Securities Finance Symposium in Boston, emcee Olivia Russell of GLMX welcomed the first panel, 'Shaping the Future of AI in Securities Finance', which explored the shifting conversation around AI within financial services.
The session was moderated by Chelsea Devereaux, head of US asset owners client management, Financing Solutions at State Street, and provided an engaging and interactive discussion among market participants on how they are using AI in their day-to-day work.
The speakers identified a number of areas in which AI is used — such as a learning assistant or for coding — as well as within client workflows (largely onboarding workflows) and for scalable interpretation of legal documents.
Adopting AI models typically starts with a business or technology use case with defined outcomes, for example a need for faster client onboarding or easier contract and fee extraction. From there it is a build versus buy decision, according to one speaker. Firms need to determine how AI solutions such as custom copilots, chatbots, and assistants can be embedded into workflows, rather than remaining standalone tools.
Commenting on the question of buy versus build, one panellist highlighted that clients are looking to build for alpha and buy for speed. Firms are more likely to build when the tool directly affects the business and its generation of alpha, while other firms look to partner to help speed up their processes.
Further, it was noted that Copilot, ChatGPT, and other frontier models are increasingly commoditised. On hallucination risk, which occurs when generative AI models produce confident but incorrect, misleading, or fabricated information, one panellist believes the issue is not as great as it may appear, as these models are becoming increasingly sophisticated and more nuanced in how they answer.
A key point highlighted in the discussion was that this is not a model problem, but a data problem. A magnifying glass is being placed on the data — its structure, databases, data lakes, and so on. In the end, what will differentiate firms in terms of AI will be the data and how it is structured.
Shifting the direction of conversation on AI, it was noted that there is some fear and curiosity around what comes next and how the AI journey will evolve. From this, the audience discussed whether there is enough governance around artificial intelligence.
Turning to the audience to share their views, one audience member noted that AI consumes a great deal of energy. They feared that the largest risk was the loss of independent thinking, adding: "If we can't contribute to AI and make it even better, AI will dominate humans. AI will govern humans rather than humans governing AI."
Another member of the audience noted the importance of not remaining complacent in implementation. Despite believing there is sufficient oversight and governance in how AI is rolled out, the person asked: if there is a one-in-1,000 chance a model could produce a bad output, and that error later appears, how can firms build a robust system to ensure the output is caught and the risk to the firm is reduced?
To tackle this issue, panellists noted the need to be transparent and provide traceability, as well as auditability, of all records being produced.
Further, it was stated that 90 per cent of enterprise data is unstructured. Therefore, if a business is using this unstructured data to feed its AI models, the result will be poor. Using a creative analogy to drive home this statement, one panellist said: "You can have a brand new kitchen, new appliances, and a Michelin star chef, but if the food's expired, it doesn't matter what you're cooking, it is still going to taste bad."
Concluding the discussion, market participants were reminded that training is important. They were advised to be critical of answers that come from AI models when asking a broad question, and that they should also refine their prompts and add files where possible.
