
HSBC Emerging Markets Spotlight Podcast Series - Digital innovation in Markets

HSBC Global Viewpoint

Generative AI is one of the latest technological megatrends. In this episode, our experts take an in-depth look at some of the financial, ethical and societal concerns that come along with this technology, as well as the broader AI ecosystem and the benefit it brings.


Emerging Markets Spotlight is a podcast miniseries created and hosted by HSBC that seeks to explore and understand the complex and critically important issues facing the world’s emerging markets. For further insight and information around emerging markets, visit Accessing Emerging Markets | HSBC



Transcript

Introduction to Global Viewpoint Podcast

00:00:02
Speaker
Welcome to HSBC Global Viewpoint, the podcast series that brings together business leaders and industry experts to explore the latest global insights, trends, and opportunities.
00:00:13
Speaker
Make sure you're subscribed to stay up to date with new episodes.
00:00:16
Speaker
Thanks for listening.
00:00:17
Speaker
And now onto today's show.

Overview of GEMS 2023 Panel on Digital Innovation

00:00:24
Speaker
Hello, and welcome to our GEMS 2023 panel on digital innovation, specifically focused today on generative AI in capital markets.
00:00:36
Speaker
Some of you might have been with us last year where we just talked about AI and machine learning.
00:00:40
Speaker
This is almost the next step, and we'll see a lot has happened in a year.

Expert Introductions and Credentials

00:00:45
Speaker
My name is Jeff Wertheimer.
00:00:47
Speaker
I am the global co-head of electronic sales and the Americas head of distribution platforms.
00:00:53
Speaker
We have three domain experts here today.
00:00:56
Speaker
I'm going to introduce them if that's okay.
00:00:58
Speaker
And then we're going to ask some questions.
00:01:00
Speaker
So in no particular order, we have Dr. Dara Sosulsky.
00:01:05
Speaker
Dara is the head of AI and model management for markets and security services.
00:01:10
Speaker
She's led a variety of teams specializing in data science, analytics, and modeling.
00:01:15
Speaker
So she's a good person to have with us today.
00:01:18
Speaker
She also has a neuroscience PhD from Columbia University, which I think is actually relevant as well.
00:01:25
Speaker
She then went on to University College London on a Royal Society fellowship.
00:01:30
Speaker
So, Dara, thank you for being with us.
00:01:32
Speaker
Thank you.
00:01:33
Speaker
Next, we have Mark McDonald.
00:01:35
Speaker
So, Mark is the head of data science and analytics for HSBC Global Research.
00:01:40
Speaker
Mark's the lead author of HSBC's flagship quant publication, Data Matters, and is responsible for our proprietary risk-on risk-off analysis, the prize indices, and the Little Mac FX valuations.
00:01:55
Speaker
Mark also advises investors on building systematic models and applying machine learning techniques to the financial markets.
00:02:02
Speaker
So again, another good expert here, someone who's out talking to clients, and will not just have an HSBC view, but will have a good sense of what's happening out there
00:02:10
Speaker
with our client base.
00:02:12
Speaker
He has a doctoral degree in applied mathematics from the University of Oxford.
00:02:18
Speaker
And last but certainly not least is Tom Croft.
00:02:20
Speaker
So Tom is the head of EFX options distribution for markets and security services.
00:02:27
Speaker
He runs the distribution for electronic FX options.
00:02:31
Speaker
He has an MA in math from Edinburgh and an MSc in maritime engineering
00:02:36
Speaker
from Southampton, which I think is the most interesting degree I've heard.
00:02:40
Speaker
I will find out if that is relevant or not, but what is very relevant is Tom is the lead on a new HSBC product called AI Markets that is employing some of the technologies we will talk about today.
00:02:56
Speaker
Without further ado, let's get started.
00:02:59
Speaker
I appreciate people have different levels of understanding, but let's just make sure we're all on the same page.

What is Generative AI?

00:03:04
Speaker
So Mark, I'd love to start with you if that's okay.
00:03:08
Speaker
And ask, what is the difference between generative AI and just historical old artificial intelligence?
00:03:20
Speaker
That is a great question because obviously there have been many previous waves of hype and excitement about AI.
00:03:29
Speaker
And you could argue that this current wave of AI hype began back in 2012, but it's really been kicked into overdrive since sort of November last year when OpenAI released ChatGPT.
00:03:44
Speaker
And what's really changed with generative AI rather than normal AI is generative AI, these are machine learning models that can create new content.
00:03:54
Speaker
And until recently,
00:03:56
Speaker
This act of being creative, that's something that we thought of as like a uniquely human capability, but it turns out that machines are really good at this too.
00:04:05
Speaker
And as a result, people have been amazed by the ability of machines to do this.
00:04:10
Speaker
I think, you know, people who
00:04:12
Speaker
who are not experts in the space, were probably not expecting the capability to arrive so soon and so suddenly.
00:04:17
Speaker
And so it's led to a great deal of excitement.
00:04:21
Speaker
And of course, with excitement tends to come a lot of hype.
00:04:24
Speaker
And so one of the challenges for people is trying to work out what's hype and what's real in this space.
00:04:28
Speaker
Thanks, Mark.
00:04:30
Speaker
Dara, maybe a question for you.
00:04:31
Speaker
So we've all seen the expansive growth.
00:04:35
Speaker
My child is talking about it.
00:04:36
Speaker
My mother is talking about it.
00:04:38
Speaker
What was that catalyst to go from
00:04:41
Speaker
I can use the term geek, right?
00:04:42
Speaker
I mean, certainly people have been looking at this for a while, but now it's gotten very mainstream.
00:04:49
Speaker
Do you have a sense of what really created that inflection point with the technology?

Mainstream Adoption of Generative AI

00:04:53
Speaker
Yeah, of course.
00:04:54
Speaker
I won't be offended by your use of the word geek either, Jeff, to describe us old guard who have been using AI/ML for many years now, basically.
00:05:03
Speaker
So I think the sea change really happened for two reasons with generative AI.
00:05:09
Speaker
First is that it's widely applicable.
00:05:11
Speaker
So like you said, the reason that your kids or my husband won't stop talking about it is because for the first time, I think that people realize that these tools can be used for use cases that aren't the traditional purview of kind of analytics and data.
00:05:26
Speaker
So these are things like Q&A, chatbots, summarizing documents, creating content, images, music, you know, what have you basically.
00:05:34
Speaker
And the other really key thing here is that for the first time ever, a lot of these tools are accessible to a wide variety of people.
00:05:41
Speaker
So I'm sure many of the people on this call have already experimented with ChatGPT and maybe Stable Diffusion or Midjourney, some of the other offerings we already have in gen AI.
00:05:51
Speaker
And that's because these tools are being presented to people.
00:05:54
Speaker
on the web via interfaces where pretty much anybody can pay or just have access to them straight off the bat and kind of get a sense of the art of the possible, really.
00:06:05
Speaker
Thank you very much.
00:06:07
Speaker
Not to dive in too deep, but I think now might be the appropriate time, Mark, to talk about the large language models.

Training of Large Language Models

00:06:14
Speaker
How are they able to generate
00:06:16
Speaker
the text, which just sounds so informed and intelligent.
00:06:20
Speaker
I'm also curious about just, you know, how is it and where is it pulling all this data from?
00:06:25
Speaker
Yeah, of course.
00:06:26
Speaker
So the way that these models are trained is you feed in lots of examples of text, of words in a sentence, and you ask it to predict missing words.
00:06:38
Speaker
So either words that are missing in the sentence or the ones that we use for generative models, trying to predict what words come next.
00:06:44
Speaker
And so these models are really trained to just
00:06:46
Speaker
produce plausible continuations of an input.
00:06:50
Speaker
And it might seem bizarre that a model like this can appear intelligent when trained at such a simple seeming task.
00:07:01
Speaker
But let me give you an example.
00:07:03
Speaker
Let's say I start a sentence and say, as a data scientist, I am good at.
00:07:07
Speaker
Now there are some words which are likely to follow that.
00:07:09
Speaker
So maybe coding or statistics or data science.
00:07:12
Speaker
Um, there are some words that are not likely to follow that.
00:07:14
Speaker
If I said, as a data scientist, I am good at cage fighting.
00:07:17
Speaker
Now that is a weird sentence, right?
00:07:20
Speaker
And anyone who's like half listening to this call while checking their emails would suddenly be like, hang on a minute.
00:07:25
Speaker
What did he just say?
00:07:26
Speaker
It's clearly absurd.
00:07:28
Speaker
And we all know it's absurd because we have the context of how the world normally works and what are typical things for people to say.
00:07:36
Speaker
For an AI model to be able to complete any sentence on any topic in a sensible manner, it's actually a very hard task and it needs to encode a lot of knowledge about how the world typically works in there.
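To make Mark's next-word example concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers and PyTorch packages and the small, publicly available GPT-2 model (chosen purely for illustration, not anything referenced by the panel), of how a causal language model scores likely next words for a prompt.

```python
# Minimal sketch: score candidate next words for a prompt with a small
# open-source causal language model (GPT-2, used purely for illustration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "As a data scientist, I am good at"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next word
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>15}  p={prob.item():.3f}")
```

Ordinary continuations of the prompt come out with far higher probability than an absurd one like " cage", which is the world knowledge the model has to encode just to sound sensible.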
00:07:51
Speaker
And this is how you train them.
00:07:53
Speaker
You give it pretty much all the text you can find on the internet.
00:07:58
Speaker
And this
00:08:00
Speaker
Once you have a big enough model that's powerful enough, it can encode a lot of information about how language is typically used.
00:08:07
Speaker
And as a result, it's seen the whole internet.
00:08:10
Speaker
So it's seen all the classic works of literature in every language.
00:08:14
Speaker
It's seen vast amounts of technical information on any topic.
00:08:19
Speaker
It's seen huge amounts of commentary on news and current affairs.
00:08:24
Speaker
And so it can seem intelligent and well-informed about all these things.
00:08:29
Speaker
The downside of course, is that it's seen all the things like Reddit and 4chan and perhaps less salubrious forms of commentary.
00:08:36
Speaker
And this is one of the reasons why one of the big challenges about these large language models is how do you constrain the information that they produce so that they don't end up encoding and exacerbating existing biases.
00:08:52
Speaker
And I have to say, Mark, that's probably the clearest description I have heard on generative AI so far.
00:08:58
Speaker
So I really do appreciate that.
00:09:00
Speaker
That's incredibly useful.

Ethical Implications of AI Models

00:09:02
Speaker
Tom, maybe a tougher one for you, but when we did this session on AI and machine learning last year, one of the interesting points was sometimes when a model is running, and we were talking about more economic models or trading models, the best outcomes
00:09:21
Speaker
could come about through illegal means, right?
00:09:24
Speaker
Through ways of engaging the market that are considered unethical, perhaps, but if it's just a machine that's learning these things, you know, maybe it is possible.
00:09:37
Speaker
So just curious how we avoid outcomes that we,
00:09:43
Speaker
you know, we certainly don't want to see.
00:09:45
Speaker
No, that's a good question, Jeff.
00:09:46
Speaker
I think that's, you know, that area of risk management is really the interesting one, or at the core of a lot of conversations on this.
00:09:54
Speaker
I think the best way to look at it is that large language models are just that.
00:09:59
Speaker
They're a model.
00:09:59
Speaker
They're a way of deriving some output from a set of inputs.
00:10:03
Speaker
And, you know,
00:10:04
Speaker
the techniques that might have a large language model or generative AI at their core, potentially, as you say, doing some trading or generating some text or even generating some code, are really just the same kind of things that we do and have done for many, many years.
00:10:21
Speaker
So, you know, we have governance forums and we have model review that looks at whether you've got a trading model or a mathematical model or, you know, some sort of system that's designed to do something else,
00:10:34
Speaker
taking some inputs and producing some modeled output.
00:10:37
Speaker
We have governance and we have framework around that.
00:10:39
Speaker
And this is no exception.
00:10:41
Speaker
So I think it's fashionable and it's current, but in many ways, the ethical side of it in terms of bias in one way or another is akin to bias or ethical review that's needed in trading models.
00:11:00
Speaker
All right.
00:11:00
Speaker
Thank you, Tom.
00:11:01
Speaker
I appreciate that.
00:11:02
Speaker
I was fortunate
00:11:03
Speaker
earlier this year in a training course to hear Peter Hinssen speak.
00:11:07
Speaker
Peter is a well-known speaker and author who wrote The New Normal and The Day After Tomorrow, which were books that sort of examine these topics.
00:11:17
Speaker
He said the transformation we're going through right now with AI and Gen AI is not that uncommon.
00:11:22
Speaker
If you look back at major technology shifts, whether it was the railroad, the phone,
00:11:28
Speaker
the internet.
00:11:29
Speaker
He said the difference is if you look, it used to take eight to 10 years for real change to come about, and now it's happening just at lightning speed.
00:11:39
Speaker
Dara, maybe you're the best positioned to answer this.
00:11:42
Speaker
I'm just curious, what are the dangers of moving this quickly?
00:11:46
Speaker
Is it as simple as the change just happens faster, or are we at risk for potentially frightening outcomes?
00:11:53
Speaker
Oh my gosh, it's such an interesting and multifaceted and difficult question to answer, right?
00:11:58
Speaker
I guess just to pick up on a few threads I think are worth drawing out.
00:12:03
Speaker
So, a great start from Tom there in terms of risk management and our approach to balancing some of these challenges and the risks that are posed by these technologies, while still making sure we can adopt them in a really agile manner that best benefits the business, right?
00:12:19
Speaker
And so in most cases, I think sometimes people act like we've reached a singularity and the models, like the minute you open the box, they can just do whatever they want and they can all collude and distort the markets or, you know, whatever.
00:12:32
Speaker
In reality, it's a lot
00:12:34
Speaker
more rote.
00:12:35
Speaker
And we have really well-trodden risk management frameworks and processes for model development and moving models into production, and then monitoring models to make sure they're performing as expected and no biased
00:12:47
Speaker
outputs are being produced and everybody's behaving properly.
00:12:50
Speaker
So that's the thing that I think it's worth just reminding people, because sometimes when we talk about generative AI, we think it's like we're unleashing some kind of beast.
00:12:58
Speaker
But in reality, at least at HSBC, it's certainly not the case.

Generative AI's Role in Financial Markets

00:13:02
Speaker
Thank you.
00:13:02
Speaker
Thank you.
00:13:03
Speaker
So now we've level set what gen AI is.
00:13:06
Speaker
I actually probably understand it a bit better than I did when I went to bed last night.
00:13:10
Speaker
I'd like to talk a little bit more about how it's being used today, and then we'll kind of work our way into what does that mean for the markets?
00:13:19
Speaker
Maybe it's a better question just to say, is it used today, right?
00:13:22
Speaker
So I know AI is used in capital markets.
00:13:25
Speaker
Is generative AI really used today?
00:13:29
Speaker
in markets today.
00:13:30
Speaker
And actually I'd like to hear from Mark and Tom.
00:13:32
Speaker
Why don't we start with Mark, but I'm curious what Tom thinks as well.
00:13:36
Speaker
Yeah, of course.
00:13:37
Speaker
So I think, as Dara said, people have this idea that something like ChatGPT can do anything now.
00:13:45
Speaker
And so there are definitely people who think the way generative AI will be used is it will come up with your trade ideas for you and it will pick the stocks you should be investing in and it will do the research for you.
00:13:57
Speaker
I think that is unlikely.
00:14:00
Speaker
As I mentioned earlier, these models are trained to produce plausible sounding continuations of text.
00:14:06
Speaker
And what that really means is that they're trying to come up with words that are the sort of things that everybody else would say.
00:14:12
Speaker
Now, as an investor, if you do the sorts of things that everybody else does, then you tend to get the same results as everybody else, in which case you may as well go for a passive investment product.
00:14:23
Speaker
It seems unlikely that a generative model will by itself come up with ideas and investment ideas that lead to outperformance.
00:14:32
Speaker
Having said that, generative AI is having a huge impact on financial markets today.
00:14:38
Speaker
And, you know, there are kind of several ways that that's happening.
00:14:41
Speaker
So one is obviously there are some sectors like tech and semiconductors where the prospect of potential future earning streams from generative AI are having a significant impact as those future earning streams are being priced into asset prices today.
00:15:00
Speaker
And then there's the sort of second-round impact, I would say.
00:15:06
Speaker
Now, if you had to summarize what generative AI is likely to do in a sentence, it's that it will dramatically reduce the cost of producing content.
00:15:15
Speaker
So any industry or any sector or any section of a company which is primarily involved in producing content is likely to be severely disrupted.
00:15:27
Speaker
And so probably the winners and losers within those sectors of the economy are likely to be those that can use this technology more effectively in order to
00:15:36
Speaker
make their existing workforce more effective in producing this content.
00:15:41
Speaker
Good content?
00:15:43
Speaker
Will it produce good content or just content?
00:15:45
Speaker
Right.
00:15:45
Speaker
I'm sort of curious; based on what we're saying, it's really good at following patterns, right?
00:15:51
Speaker
I think if I think of what our HSBC research team does today, it's, you know, it certainly looks at patterns, but then we come up with our own ideas.
00:16:00
Speaker
Just curious if it can get there or there's a level it will never touch.
00:16:06
Speaker
I mean, never is a dangerous word given the pace of change in this area.
00:16:12
Speaker
I mean, if you look at the sorts of things that ChatGPT can do or models like that, people have this idea that, oh, it's just reproducing exactly the same stuff it's seen before, but it's not quite doing that.
00:16:24
Speaker
It is more creative than that.
00:16:26
Speaker
It shows elements of genuine creativity.
00:16:29
Speaker
But it needs somebody to provide the kernel of a great idea.
00:16:33
Speaker
So you need to tell it what you want and you can tell it to do something ridiculous, like please answer all these questions in the form of a haiku.
00:16:41
Speaker
And it will do that.
00:16:42
Speaker
It's clearly not reproducing haikus about this subject.
00:16:46
Speaker
It can be creative, but it didn't come up with the idea to do it all in the form of haiku itself.
00:16:50
Speaker
Basically, the way I think about these technologies is they're kind of like auto-complete on steroids.
00:16:56
Speaker
You can give it some bullet points of what you want, and it will produce coherent text.
00:17:00
Speaker
You could say, please reformat that existing text that I created in the form of a Twitter thread or, you know, like the script for a video.
00:17:09
Speaker
You can use the image generation models to produce like visual assets for slides in a PowerPoint deck.
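As a purely illustrative aside on the "auto-complete on steroids" workflow Mark describes, here is a minimal sketch of how bullet points might be turned into drafting and reformatting prompts; the function names are hypothetical, and the prompts would be sent to whatever chat-style model is available, not a specific product.

```python
# Hypothetical sketch: build prompts that draft content from bullet points,
# then recast the draft into another shape (e.g. a Twitter thread).
from typing import List

def draft_from_bullets(bullets: List[str], audience: str = "clients") -> str:
    """Prompt asking a chat-style model to expand bullet points into coherent text."""
    points = "\n".join(f"- {b}" for b in bullets)
    return f"Write a short, coherent note for {audience} covering these points:\n{points}"

def reformat_as(text: str, target: str = "a Twitter thread") -> str:
    """Prompt asking the model to reformat existing text into a different format."""
    return f"Reformat the following text as {target}:\n\n{text}"

bullets = [
    "Generative AI lowers the cost of producing content",
    "Winners will be those who use it to make existing teams more effective",
]
print(draft_from_bullets(bullets))
# The model's reply could then be passed to reformat_as(reply, "the script for a video").
```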
00:17:16
Speaker
And I think for many investors, it's quite a rare investor that would have the luxury of being able to come up with an investment idea and then just immediately go and implement it.
00:17:26
Speaker
Generally, you've got somebody that you've got to go and convince, whether it's your boss, an investment committee, you've got to convince end investors to give you capital to invest.
00:17:37
Speaker
Somebody needs to be convinced.
00:17:39
Speaker
And so
00:17:39
Speaker
The process of coming up with investment strategies and then getting them implemented for many people will involve creating some kind of content.

Impact of Generative AI Across Industries

00:17:48
Speaker
And I think generative AI can help a lot with that sort of area.
00:17:53
Speaker
I think actually a more urgent problem for investors at the moment is trying to sort out whether what's currently priced into
00:17:59
Speaker
stocks that are strongly influenced by this technology is appropriate or is just hype.
00:18:05
Speaker
Because we've seen with previous waves of market excitements about disruptive technologies, the market tends to get excited too early.
00:18:14
Speaker
Now, in this case, I would say that it's not too early.
00:18:16
Speaker
This technology is, it's technology that
00:18:23
Speaker
is widely available.
00:18:24
Speaker
General consumers can get easy access to this technology.
00:18:27
Speaker
So it's not like some disruptive technologies where people get excited about it years before it's come to fruition.
00:18:34
Speaker
But the longer the excitement goes on, there seems to be a tendency for markets to either price in too big an impact or price in an impact too early or to price in an impact that will happen, but will actually be a winner take all kind of impact.
00:18:48
Speaker
They price it in broadly.
00:18:50
Speaker
So all the stocks in a sector see the benefit from it, even though in reality, one or two of them will end up dominating as a result of the new tech.
00:18:58
Speaker
So that's kind of how I see it influencing markets at the moment.
00:19:01
Speaker
But interested to hear what Tom thinks.
00:19:04
Speaker
I think it's really interesting to listen to Mark's view, obviously from a kind of equity and firm and stock perspective.
00:19:10
Speaker
I think the way I look at it is more about some of the diverse use cases.
00:19:15
Speaker
Overall, you know, the question I think is, is it being used?
00:19:18
Speaker
Right.
00:19:19
Speaker
And I think broadly the answer to that is no, not really.
00:19:21
Speaker
At the moment, I think we're at the beginning of the journey.
00:19:24
Speaker
I think it will be a very interesting journey.
00:19:26
Speaker
I think when you, especially when you start to consider some of the areas that generative AI will lend itself to.
00:19:31
Speaker
So I think we've talked quite a lot about text generation and content generation.
00:19:34
Speaker
I think that's really interesting.
00:19:35
Speaker
I'm sure we'll see that feeding into all aspects, whether that's something you do as part of your day job, whether that is your job generating content as a research analyst, whatever it might be.
00:19:44
Speaker
There's lots of other areas that I think it will permeate into potentially more quickly where it's very, very good at some more specific things.
00:19:51
Speaker
Maybe that's helping with data science.
00:19:53
Speaker
Maybe you need a one-line operation to run to work something out.
00:19:58
Speaker
Maybe it's looking at what your revenues were in a particular sector on a particular data set or what a client's doing or whatever it might be.
00:20:06
Speaker
You know, maybe you're looking at a chatbot example or natural language processing like we do within HSBC AI markets.
00:20:13
Speaker
And I think when you start coming down to some of those more niche areas, no, we're not live yet.
00:20:18
Speaker
But I think in many cases, the journey to using it as an engineering tool will be easier and quicker.
00:20:25
Speaker
And we'll start to see those coming out quite a bit sooner.
00:20:27
Speaker
In fact, maybe we won't.
00:20:28
Speaker
Maybe there'll be tools like other software development tools that will just kind of sit there and feed our products and services more quickly.
00:20:36
Speaker
Thank you.
00:20:37
Speaker
I'm still thinking about AI Markets, the product that you work on, which uses natural language processing, NLP.
00:20:44
Speaker
I'm just curious, as I'm thinking about it, the ability to process language is a component of it.
00:20:53
Speaker
Yeah.
00:20:53
Speaker
How important is it?
00:20:55
Speaker
Are they in lockstep, the development of the two?
00:20:57
Speaker
Are they dependent on each other?
00:20:59
Speaker
Just curious how that plays out.
00:21:00
Speaker
Yeah, that's a good question.
00:21:01
Speaker
They're not dependent on each other.
00:21:02
Speaker
They're not in lockstep.
00:21:03
Speaker
But I think, you know, what we offer in AI markets is really a portal for accessing the best of HSBC.
00:21:10
Speaker
So, you know, whether that's drawing data from our global desks, whether it's accessing liquidity and accessing prices.
00:21:16
Speaker
And in many, many cases, those are specific things that one of our clients is looking to do, or a member of our staff.
00:21:22
Speaker
And
00:21:23
Speaker
that kind of contextual awareness that generative AI brings, you know, understanding the meaning and some of the context and the background, is not applicable.
00:21:32
Speaker
And we have, you know, AI Markets live now using many, many non-generative AI applications, and it's very successful.
00:21:39
Speaker
I think,
00:21:40
Speaker
The interesting thing will be where that contextual awareness and understanding a little bit more about what you're trying to do can expand what we're trying to do in our markets.
00:21:51
Speaker
To give an example, we look at research and maybe some of the content that Mark's putting out.
00:21:57
Speaker
Mark's publishing,
00:21:58
Speaker
you know, a piece every day on, let's say it's on AI or on emerging markets or whatever it might be, you know, our clients can read that and can access that in a number of places, but maybe they want a four-line summary of the last one week's discussion on emerging market or property in China or on elections in Turkey.
00:22:18
Speaker
And generative AI can help distill both the question
00:22:22
Speaker
and understand that context and drive the search or help distill the content and give summarization.
00:22:28
Speaker
And we're looking at all of those things with AI Markets.
00:22:30
Speaker
So, you know, I think it will be, they're not in lockstep, but it will be a very, very interesting aspect to it.

Transparency and Regulation in AI Usage

00:22:37
Speaker
So how do companies who are employing generative AI give clients confidence by providing a level of transparency?
00:22:48
Speaker
And if that's not a hard enough question,
00:22:51
Speaker
you know, are the regulators up to speed?
00:22:54
Speaker
You know, is this something that the regulators will be able to get their hands around?
00:22:58
Speaker
But I am curious just what you think in terms of transparency.
00:23:01
Speaker
How do you give your clients comfort without giving away your intellectual property?
00:23:08
Speaker
Yeah, I'm going to answer that in two parts.
00:23:10
Speaker
I'll take the transparency bit, number one, and then I'm going to hand back to Dara, who I know is the expert on the regulatory view.
00:23:17
Speaker
For me, I think
00:23:18
Speaker
When you're thinking about transparency, the first thing to think about is how much you're asking of your generative AI.
00:23:25
Speaker
So I think if you're saying, tell me everything that I need to do and I need to know about the world in three sentences and you don't look at anything else, you're asking an awful lot of a model.
00:23:34
Speaker
And obviously, the accuracy threshold on that needs to be very high.
00:23:37
Speaker
So the first thing I'd say is where we're looking at it in practice, we're looking at it on the applied AI side, is to try and keep those use cases
00:23:44
Speaker
small and succinct.
00:23:46
Speaker
And, you know, we're asking for individual things where the measurement of transparency is easier, right?
00:23:51
Speaker
And that might be, you know, give me some news or give me some commentary on something.
00:23:56
Speaker
And there, rather than relying on, you know, on that three-line output, for example, we'll provide the references.
00:24:01
Speaker
And I think that kind of traceability, you know, yes, we've got a four-line summary of what's happening in the market.
00:24:06
Speaker
You know, here's the six links.
00:24:07
Speaker
I won't name names, but, you know, to some well-known news sites that are going to explain where I got that content from.
00:24:13
Speaker
So you've got that summary, but it's not a black box, right?
00:24:17
Speaker
But I think having that transparency in those early client-facing and internal live-facing models is really, really important.
00:24:25
Speaker
And I think being able to dig into, you know, we've got some of these things that we're looking at in development, and I can ask for summaries on news, summaries on market events.
00:24:33
Speaker
But the first thing I'm doing is validating that.
00:24:36
Speaker
So if I see there's been something interesting happening in my two-line summary or eight bullet points that I've asked for to summarize a particular piece of market activity, the first thing you're doing is cross-checking that and you're understanding the detail behind it.
00:24:47
Speaker
And I don't think that's just for me.
00:24:48
Speaker
I think that would be true in all cases, you know, with this kind of thing: that transparency from front to back is something that's going to be really key.
00:24:55
Speaker
And also, I think that's a place that we can
00:24:58
Speaker
You know, we can differentiate ourselves because if you can have an implementation that's not a black box, you're using some of that generative AI technology under the bonnet, but you can provide those references out to the world to allow people to a certain extent do their own independent research.
00:25:12
Speaker
You can provide that reliability and that confidence, you know, that your distillations are right.
00:25:19
Speaker
Dara, to the regulatory point, let me hand back to you.
00:25:24
Speaker
Sure.
00:25:25
Speaker
I actually think the regulators, particularly in the US, are being very clever about how they proceed with requirements for two things in particular, which everybody has probably heard about in the context of AI and ML.
00:25:38
Speaker
So that's transparency, as we just spoke about.
00:25:41
Speaker
So first, disclosing when AI is being used, first and foremost, which isn't always obvious, right?
00:25:46
Speaker
And two, enabling people to understand how an AI system is developed, trained, operates, and how it's deployed.
00:25:54
Speaker
And that's something that, like Tom says, is model risk management 101.
00:25:57
Speaker
So that's nothing new there.
00:25:59
Speaker
In terms of having to reveal IP or anything like that, actually, among the regulators
00:26:04
Speaker
the consensus view is that's actually not required in most cases.
00:26:07
Speaker
And they appreciate the fact that in some contexts, it's not possible.
00:26:11
Speaker
In some, in other contexts or in many contexts, it's not actually that valuable for someone to just send a whole bunch of code or an enormous, you know, data set or something across and saying, well, here it all is.
00:26:21
Speaker
So they don't even bother making that a requirement to begin with.
00:26:24
Speaker
And a lot of the way regulators certainly in the US are proceeding as well is to look at that question of, okay, we know that a lot of risk management frameworks already exist, model risk management, data risk management, third party risk, all sorts, right?
00:26:38
Speaker
What are the gaps?
00:26:39
Speaker
Are there any AI specific gaps that need to be plugged here?
00:26:43
Speaker
If so, what are they before they start blanketing us with kind of new requirements, which I must say is very, very appreciated in my line of work, certainly.
00:26:53
Speaker
Mark, can gen AI tell you how it derived

Accuracy and Data Security Concerns

00:26:57
Speaker
an answer.
00:26:57
Speaker
Can I ask it how it derived an answer, and does that have an impact on information security for our clients?
00:27:06
Speaker
Meaning, you know, these, these models are going out and trying to collect patterns and collect information.
00:27:13
Speaker
it almost seems like we have to be more careful than ever to make sure that we're only exposing levels of information that we are comfortable exposing.
00:27:20
Speaker
So I guess two questions.
00:27:21
Speaker
Number one, can I go ask it to tell me how it derived an answer and is it as simple as seeing it?
00:27:27
Speaker
So you can ask a generative model how it derived the answer.
00:27:33
Speaker
If you just go direct to a generative model and say, answer this question, tell me where you got this information from,
00:27:40
Speaker
because it's just generating plausible continuations of text, it's likely to give you a confident sounding answer and then make up fake references.
00:27:50
Speaker
So it'll say, oh, I found it in this academic article by some academics with names that sound plausible and a journal name that sounds plausible but doesn't actually exist, or like some lawyer discovered in the US.
00:28:04
Speaker
that it will generate a legal case for you filled with precedents that don't actually exist.
00:28:09
Speaker
So generally the sorts of workarounds to this are actually constrain the way that you ask it the question.
00:28:17
Speaker
So for example, in many of the sort of question-answering applications that we're starting to see in heavily regulated industries, a user puts in a question.
00:28:27
Speaker
What actually gets sent to the generative model is constructed in a couple of steps. The first step is
00:28:33
Speaker
that question is searched in a normal way, like standard technology that's been around for a long time, searched in a database of reference material, whether it's,
00:28:45
Speaker
let's say it was an HSBC specific thing and you were looking in all the HSBC documents that were internal.
00:28:51
Speaker
I'd say, go and look in this database and find records in this database that seem to be good search responses for this question.
00:29:00
Speaker
And then what actually gets sent to the generative model is please answer this question using this information.
00:29:06
Speaker
And then what the generative model can send back is hopefully the generative response will then take the little snippets of information that it's given and answer the question just using that rather than making things up.
00:29:21
Speaker
And as Tom was mentioning earlier, the system can be built in a way that will show the links and say, if you want to go and verify this information, this is the information which a standard search would say,
00:29:34
Speaker
should answer these questions.
00:29:35
Speaker
So you can actually go and if you want to interrogate this in more detail, you can like read the raw source material yourself.
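To show the shape of the retrieve-then-answer pattern Mark outlines, here is a minimal sketch, assuming scikit-learn for the plain search step and a placeholder generate() function standing in for whatever language model is used; the document names are made up, and none of it reflects an actual HSBC implementation.

```python
# Minimal sketch of "search a reference database first, then ask the model to
# answer only from what was found", returning the references for verification.
# Assumes scikit-learn; generate() is a hypothetical stand-in for an LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "fx_options_note.txt": "Internal note on electronic FX options coverage ...",
    "em_inflation_summary.txt": "Research summary on emerging market inflation ...",
    "model_risk_policy.txt": "Policy document on model risk governance ...",
}

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to a large language model.
    return "[answer produced only from the supplied reference material]"

def retrieve(question: str, k: int = 2):
    """Rank documents against the question with a plain TF-IDF search."""
    names = list(documents)
    matrix = TfidfVectorizer().fit_transform([documents[n] for n in names] + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)[:k]

def answer(question: str) -> dict:
    hits = retrieve(question)
    context = "\n\n".join(documents[name] for name, _ in hits)
    prompt = (
        "Answer the question using ONLY the reference material below. "
        "If the answer is not there, say so.\n\n"
        f"Reference material:\n{context}\n\nQuestion: {question}"
    )
    return {
        "answer": generate(prompt),
        "references": [name for name, _ in hits],  # shown so the user can verify
    }

print(answer("What does the policy say about model governance?"))
```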
00:29:42
Speaker
That seems to be a much safer way than just asking the model, because the model is trained on everything that humans say, and humans sometimes lie and make things up too.
00:29:51
Speaker
And that's, that is the downside.
00:29:54
Speaker
Mark, Dara, Tom, thank you so much for your comments.

Conclusion and Future of Generative AI

00:30:01
Speaker
Generative AI, emerging technology, these are things that clients are looking to understand.
00:30:09
Speaker
I think we've started a conversation here.
00:30:11
Speaker
You guys have given me a really good sense, first of all, of exactly what is generative AI.
00:30:16
Speaker
But I think I have an understanding of where we are on the continuum.
00:30:20
Speaker
And it feels like there's value to be derived today, but it's nothing compared to the value that we're going to see play out.
00:30:29
Speaker
in the course of the coming days, months, and years.
00:30:31
Speaker
So I look forward to an update, you know, hopefully sooner, but if nothing else at next year's GEMS panel.
00:30:39
Speaker
And I can't wait to see how far we've gone.
00:30:42
Speaker
Thank you for joining us at HSBC Global Viewpoint.
00:30:46
Speaker
We hope you enjoyed the discussion.
00:30:48
Speaker
Make sure you're subscribed to stay up to date with new episodes.