
Rui Falhas Santos - Unfolding Successful AI Use Case in Banking

Straight Data Talk

Rui Falhas Santos is the Manager of Data Analytics at one of the largest banks in the world. He joined us to discuss the AI applications they have implemented. The creativity and the use case are very captivating, as it is applied in one of the most regulated and legacy-bound industries. Rui is a hands-on manager with an extensive background in data. His data team helped build a solution that reads, on average, 300k articles a month to catch any alarming signals about their clients. We also discussed responsible AI, both on the decision-making side of model output and in how data is collected for model processing.

Transcript

Introduction to Straight Data Talk

00:00:01
Speaker
Hi, I'm Yuliia Tkachova, CEO and co-founder at Masthead Data. Hi, I'm Scott Hirleman. I'm a data industry analyst and consultant and the host of Data Mesh Radio. We're launching a podcast called Straight Data Talk, and it's all about hype in the data field and how this hype actually meets reality. We invite interesting guests, first of all data practitioners, to tell us their stories: how they're putting data into action and extracting value from it. But we also want to learn their wins and struggles. And as you said, we're talking with really interesting folks that a lot of people don't necessarily have access to, and these awesome conversations typically happen behind closed doors. So we want to take those wins and losses, those struggles, as well as the big value that they're getting,
00:00:47
Speaker
and bring those to light so that others can learn from them. We're going to work to distill those down into insights, so that you can take these amazing learnings from these really interesting and fun people and apply them to your own organizations to drive significant value from data. Yeah, and every conversation is unscripted, in a very friendly and casual way. So yes, this is us. Meet our next guest. And yeah, I'm very excited for it. Hi, everyone.

Meet Rui: AI Enthusiast

00:01:18
Speaker
I'm Yuliia, and today, together with Scott, we are hosting Rui, whom I met about three weeks ago in Amsterdam at a Google Cloud event where he was sharing their fantastic experience applying AI for good
00:01:34
Speaker
in a very guardrailed industry. What was super fascinating to me about Rui is that he sees AI, and the disruption it brings to the world, to technologies, to our lives, in a very optimistic and beautiful way. So, Rui, thank you so much for coming. Please introduce yourself. Thank you, Yuliia and Scott, for inviting me. It is a big pleasure to be with you in this conversation. So my name is Rui. I'm Portuguese, and I moved to the Netherlands a few years ago, more or less four and a half years ago. I have a passion for data and a passion for technology, so that's my background. Previously, in Portugal, I was a consultant for more or less 20, 21 years, working at many
00:02:24
Speaker
companies like Deloitte, PwC, Capgemini, among others. And when I moved to the Netherlands, I started working at ING, where I'm having really great experiences as well. A bit from my personal side: I am married, I have a daughter and two cats. I really like sports, so I run a lot. I also practice nunchaku. I am a photographer too, and in the past, influenced by my wife, I did some volunteering. Well, in a nutshell, that is more or less who I am. Just to highlight, for this podcast

AI in Banking: Use Cases and Benefits

00:03:04
Speaker
I am representing just myself, not my company, so these are my own opinions. Like we discussed, Yuliia, that Google event was really, really great.
00:03:13
Speaker
Yeah, it was. I think it's a bright way to approach this conversation, because we're going to have a few controversial questions for you. First of all, I would love to kick it off with the successful use case you have at your current place. What I was telling Scott before you joined is that it is a fascinating use case, because basically you had this problem already solved with machine learning, as far as I remember, but you were able to streamline the entire process by applying AI, and not in a single place but in a few services, which helped you
00:04:06
Speaker
have results of higher quality, deliver the product faster, and simplify lots of things. Could you onboard us on what kind of problem you're solving with it, why it's difficult to do with a machine learning model, and why AI was such a good fit for you? And we're going to keep asking you new questions along the way. Yeah, so let me go a bit deep on that. If you think about this type of company, in financial services, or even insurance or other industries, we need to know who our customers are; it is really important.
00:04:45
Speaker
And not just for the company itself, but for the entire world. Because, for example, there are now several banks in the world that cannot be allowed to go bankrupt at all. And so they are heavily scrutinized. They have a lot of processes to keep everything in place, all the security. And in this case, we really need to know who our customers are, who we are doing business with. And this business case was quite interesting. Because what we are doing is getting to know our customers better: we need to know their reputation and how articles on the internet, all over the world, are speaking about our customers.
00:05:37
Speaker
So what we have built, from a risk perspective, was a way to collect all of these articles from the internet. They are public information, so everybody can access them, and everybody can do this type of work manually: reading an article and saying, okay, this is about entity A or entity B, and it says there is a fraud problem, or a bankruptcy problem, or a possible bankruptcy problem at these companies. This is quite easy to do if you read one article. But if you think about it, there are around 300,000 articles a day that you would have to read, so it is quite an impossible mission. It's impossible; you would need maybe 3,000 people doing this on a daily basis to face all of these articles. No company can do this.
00:06:34
Speaker
So what we have done was create a pipeline to read all of these articles, and machine learning models to identify whether an article is about bankruptcy or fraud or whatever else, the main topics that we wanted to have in our system. There is also another, more advanced model to identify: is this talking about a customer of ours or not? Because if it is not, we don't want it; we just want our customers' articles. And then, in the end, in a nutshell, we know we have articles that are about the topics we want to see, and in a negative way, by the way; we are trying to find negative articles. This is also important because, for example, if you ask me about human rights, there are a lot of articles on the internet about human rights and about a given entity, but they could be positive.
00:07:26
Speaker
An article could say this entity treats its employees quite well, does this, this, and that; that is about human rights in a positive way. We don't want that. We just want the bad news. That's the part that is not good in this project: we are always facing the bad news. But that is how it works from a risk perspective; we need to know about the negative sentiment in this case. And then, in the end, we can say to our users: okay, you are responsible for this customer, and there is a potential problem with this customer because of these, these, and these articles. And it is wonderful that we were doing this already with traditional machine learning models.
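(To make the shape of the pipeline Rui describes concrete, here is a minimal Python sketch. Every name in it, Signal, entity_matcher, topic_models, sentiment_model, is invented for illustration; this is a sketch of the stages as described, not the team's actual code.)

```python
from dataclasses import dataclass

@dataclass
class Signal:
    article_id: str
    entity: str
    topic: str  # e.g. "fraud", "bankruptcy", "human_rights"

def screen_article(article, entity_matcher, topic_models, sentiment_model):
    """Run one article through the stages described above; return a
    Signal for the responsible user, or None if the article is dropped."""
    # Stage 1: is this article about one of our customers at all?
    entity = entity_matcher.match(article["text"])
    if entity is None:
        return None  # not about a customer: discard

    # Stage 2: one classifier per risk topic (bankruptcy, fraud, ...)
    for topic, model in topic_models.items():
        if model.predict(article["text"]):
            # Stage 3: keep only negative coverage; a positive
            # human-rights story is not a risk signal
            if sentiment_model.is_negative(article["text"]):
                return Signal(article["id"], entity, topic)
    return None
```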
00:08:09
Speaker
And using generative AI, it's even better. It is better because of a lot of things. The first is that we don't need to think about translating the articles. In the world we have a lot of languages, and we are collecting articles from all around the world, from every country you can imagine; in total we have 54 different languages. Before, we needed to translate them all to English. When you use generative AI models, you don't need to do that, and this is a big advantage: you can do the prompt engineering in English and use articles, for example, in Portuguese.
00:08:52
Speaker
And the answer will be in English; this is a great advantage of using those models. Even the accuracy of these models is better, also a great plus. But the biggest thing is the next step. Say a human reads an article and tells me: this article is talking about fraud, and it is about this entity. Okay, done. But then I can ask something that our traditional machine learning models cannot answer: okay, Yuliia, but why are you saying this is about fraud?
00:09:33
Speaker
And you can explain: because I read this, this, and this here, which is about fraud. Nice. Generative AI models can do that, and this is a tremendous added value: we can answer the why, and not just tell the users that this is about fraud, or about human rights in a negative perspective, like I said. You can say this is about fraud, and why: because it says this here, and this here. So you can automatically highlight the parts of the article that explain why we are saying it is about fraud.
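(As a rough illustration of both points, the translation-free prompting and the "why": a single English prompt can ask for the classification and the supporting quotes over an article in any of the 54 languages. The template below is invented for illustration, not the team's actual prompt.)

```python
# Invented prompt template: the instruction is in English, the article can
# be in Portuguese, German, or any other language, and the model is asked
# to answer in English and to quote its evidence.
PROMPT = """You are screening news articles for a bank's risk team.

Article (may be in any language):
---
{article_text}
---

1. Is this article about fraud involving the named company? Answer yes or no.
2. If yes, quote the exact sentences that support your answer, so they can
   be highlighted for the analyst who reviews this signal.

Answer in English."""

def build_fraud_prompt(article_text: str) -> str:
    return PROMPT.format(article_text=article_text)
```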

The Role of Human Oversight in AI

00:10:12
Speaker
And so imagine what you can do in other industries, in other use cases, using this type of model. So when you're looking at this, how do you think about
00:10:23
Speaker
who the user is, right? Is this the internal risk manager, or is it the client director for the bank, where you're saying: hey, you're doing business with this company and we think things are going negative for them? What is the actual point of it, rather than just saying, okay, we're finding a lot of these articles? And I'd love to hear how you think about managing bias from these articles, because, you know, let's say I've got a short position against this bank and I just start putting out all these negative articles. How do you think about bias, and credibility rating as well?
00:11:06
Speaker
Indeed, in this case, it's internal. It is internal, so it does not go outside, partly because this is quite sensitive information. And we are not yet at a maturity level where we can have this 100% automated. Let's say, for example, these models discovered that you committed fraud, and the system automatically wrote you a letter saying, okay, we cancelled all your accounts, and imagine that it was a false positive. We are not there yet, so we cannot do it. In this case, there is always
00:11:49
Speaker
a very last step that is still a human step. The human will need to decide what the last action will be. It is like driving your car: you say, I want to go to Rotterdam, and it shows you three different routes, and you choose one. And in the settings you can decide whether, if there is traffic, you allow the system to change your route; you can choose to have that or not. So we are not yet at the point of having this fully automated.
00:12:30
Speaker
So what you're saying to me is that you have to surface and find insights quickly, but not necessarily make automated decisions, which totally makes sense, right? It's kind of supervised, as we used to say about machine learning, but now it's supervised insights. Yes, but with my passion for data and technology, in this type of use case, and I have some examples from the past at other companies as well, my Nirvana, as I usually say,
00:13:05
Speaker
would always be to move from a predictive model to a prescriptive model, where we can fully automate the decision. In most cases we already have the technology to do that, absolutely, like in cars, for example. Cars can fully automate
00:13:29
Speaker
the driving process and drive you wherever you want. But, well, at least I don't completely trust the fully automated car yet. I like the technology, but I think we are not there yet. I don't know if it will be in our generation or in the next two generations, but I believe all cars will be fully automated, and at that stage we will solve the traffic problem, because the cars will be able to communicate with each other, and it will be amazing. But I don't know if we will see this in our lives. I hope so.
00:14:10
Speaker
Yeah, okay. I'm really curious about something before we jump to other use cases. Can you please share with us how much the output of these insights you were generating for your internal users improved by applying LLMs? Did you measure it? I believe you measured it. How did you measure it, and how much was the improvement? In this case, there is a lot of stuff you can measure. The most obvious is the accuracy of the models: how many false positives or false negatives we have. And the tests we have done were really, really great using this type of technology.
00:15:02
Speaker
But then you can measure it in other ways, some of which are not tangible; it is more a feeling. For example, in this case we had 15 traditional machine learning models. And when we wanted to create new models, and we had this experience, I think more or less two years ago, when we needed to implement two new models, what we needed to do was build them, not completely from scratch, but almost from scratch. And why? Because, for example, if you think about
00:15:44
Speaker
this case: one of the models was the human rights model. Of course, you can copy-paste the model from, for example, the sanctions model, but then all the words behind it are completely different, because sanctions are one thing and human rights are another. You need to train on different labels, different datasets; the positive words and the negative words are completely different. So we did spend some time on this. And now it's completely different: what you need to do is spend the proper time refining, doing prompt engineering, refining the questions we are sending to the models, and you don't need to create or recreate the model from scratch. That is a huge advantage at implementation time.
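(In other words, adding a topic becomes a prompt-engineering task rather than a model-building one. A minimal sketch of that idea, assuming a generic llm.complete call and invented prompt wording; none of this is the team's actual setup.)

```python
# One foundation model, one prompt per risk topic. Adding a new topic such
# as "human_rights" becomes a new dictionary entry plus prompt refinement,
# not a new model trained from scratch.
TOPIC_PROMPTS = {
    "sanctions": "Does this article report sanctions against the company?",
    "fraud": "Does this article report fraud involving the company?",
    "human_rights": "Does this article report human-rights violations by the company?",
}

def classify_topics(llm, article_text: str) -> list[str]:
    """Return every risk topic the model flags for one article."""
    flagged = []
    for topic, question in TOPIC_PROMPTS.items():
        answer = llm.complete(f"{question}\nAnswer yes or no.\n\nArticle:\n{article_text}")
        if answer.strip().lower().startswith("yes"):
            flagged.append(topic)
    return flagged
```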
00:16:32
Speaker
It also helps in maintenance, because in this case you have one big model that handles all of these questions, all of these prompts, and not 15 different models that you need to pay attention to one by one. That is a huge advantage in, how to say, the back office of it, the technology back office. The users don't see this, but for us it is a huge advantage. It's also time to value, right? You're going from updating the code base, redeploying it, doing all the merge requests and everything, to getting there, I assume, in a couple of hours
00:17:19
Speaker
instead of a day or something. Or more. Or more. Yeah, and you were speaking only about the release part, because the implementation and testing and so on, that is huge; it's a completely different story. After you have the basis, the foundation, it becomes much easier to release new models. Absolutely. How did you think, when you were going through this, about training from scratch versus existing industry models? And then, like you said, when you're going to a new use case, how much did you go back to these models that are quite similar? Maybe you've got one kind of supermodel, and that's not the right term, but
00:18:06
Speaker
you've got some sub-models that are like: hey, this is our module for fraud, here's our module for human rights, here's our module for bankruptcy risk, and things like that, but it's still managed in the same code base. How did you think about that? Exactly what you're talking about: that bifurcating code base where, hey, we started with kind of the same thing, and then each one is manually managed totally individually, and then they become things that look very similar but behave so differently that the reliability engineering underneath becomes even harder than if they were super, super separate, because they look so similar yet behave slightly differently, those fun little emergent behaviors.
00:18:46
Speaker
That is exactly it, and it is one of the topics I was mentioning about how you train a model. Again, for example, if you need to train a new model, you need to think about the words that relate to that model. In the generative AI case, what we have done was set up the platform with some datasets, in this case about finance and about risk, and those foundations can serve all the models that you have. And then you can focus on the prompt engineering and not, like before in traditional machine learning,
00:19:30
Speaker
on finding all the words, the positive and the negative ones. Of course, one thing that is always important is to have a good dataset. This is valid for traditional machine learning, and now for generative AI: have a good dataset for training the models. But there are big advantages in using this new world, like the ones I already mentioned. And it is added value not just for the users, but also for us who are implementing it. I have a question. Okay, Scott, I just wanted to be first. Listen, you mentioned prompt engineering, but that requires not only knowing data, but knowing a lot of context, and being very much into the depths of every single problem, like what you're trying to
00:20:23
Speaker
highlight in those articles and those materials. So how does this prompt engineering happen? Do you collaborate with, are you sitting at the same table with the data science team, trying to keep up with those prompts and understanding what you're looking for, how you can catch everything? Or do you brainstorm with the business? How does this prompt engineering happen for you? For that, I have several comments. One is, in this specific case, we already know what we would like to answer. That's a good advantage that we have. Like a core one.
00:21:06
Speaker
Yes. And for us that is an advantage, because then we don't need to take so much time from the users helping us. Of course, we did have some help from a kind of super user, but it was mostly work from data scientists who already knew the model. But you touched on one topic that is quite important: the context. And here I can give you a real example, and you will understand the complete difference between traditional machine learning models and generative AI. Because machine learning models know: okay, this is about business, this is about financial services, this is about doing business with these companies.
00:22:01
Speaker
And at some point we found an article about Jaguar. The machine learning, in the traditional way, confused jaguar the animal with Jaguar the entity. When you give the context to these LLM models, okay, this is a business world, a financial world, they can perform much better at reading the articles and identifying whether the Jaguar in this article is the animal or the company.
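(A tiny sketch of how such business context might be injected into a prompt; the wording and the llm.complete interface are invented for illustration, not taken from the actual system.)

```python
# Invented system context: telling the model it operates in a business and
# financial-services setting is what lets it read "Jaguar" as a company
# rather than an animal.
SYSTEM_CONTEXT = (
    "You are analysing news for a financial-services risk team. "
    "Entity names in questions refer to companies we may do business "
    "with, not to animals, places, or products."
)

def is_about_company(llm, article_text: str, entity: str) -> str:
    prompt = (
        f"{SYSTEM_CONTEXT}\n\n"
        f"Is this article about the company '{entity}'? "
        f"Answer yes or no, and explain why.\n\n"
        f"Article:\n{article_text}"
    )
    return llm.complete(prompt)
```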
00:22:45
Speaker
And the result was completely different, for the better in this case. This is a good example of how context can influence the final result, if the model doesn't know that this is not a zoo, this is business. This is so funny, because when we human beings communicate, we can be very much on the same page, in the same conversation, and still attach different meanings to the words. My recent one was tables: tables in a database, or tables in a cafe. And we kept running into that; it's face-to-face communication and we still imply different things in the same conversation. I think all languages have this. Portuguese, for example, has a lot of those things, a lot. German might be the only one with fewer, just because they decided to have insanely specific words that are 900 letters long, and you're just like,
00:23:47
Speaker
okay, why do we need that? It's like, oh, I guess it's specific communication, so it is not possible to play with words like in Portuguese. I don't know if there are good puns in German; maybe German listeners could tell us about good puns and such. That is such a beautiful case. But also, when you talk about collecting this information from all the media in the world, there have been scandals all around the world where, if I'm not mistaken, OpenAI was using New York Times articles, which are considered
00:24:33
Speaker
intellectual property, and they have their content behind a subscription

Ethical and Legal Implications of AI

00:24:40
Speaker
only. So basically OpenAI accessed those articles and exposed them to the whole internet, right? Which is very much an infringement of intellectual property. So how do you guys deal with this? Yeah, well, first, most of the articles are public. You can go to the websites, and it is free, so you can read them and do whatever you want with them.
00:25:15
Speaker
But in this case, like I said in the beginning, because we face a lot of scrutiny, we need to have a lot of security, and in other industries I think it is mostly the same: for these kinds of tools, these kinds of processes, we need to have a license. So we collect this information from third parties that are licensed.
00:25:43
Speaker
Okay, I'm going to ask a very controversial question right now. Heads up, tell me if you want me to cut it out. Yeah. The question is: do you have licenses for your model? Do you have licenses for account managers? In this case, the way it is used is that we have a license to collect all of this information, and it's a kind of enterprise license.
00:26:17
Speaker
But, for example, we have another case where the licensing is completely different and the license is nominal, per user. Every time we add more users, we need to extend the license. In this case, it's different because it is a kind of enterprise license, so you can spread the information across the users. So you play according to the rules. No questions asked. No, no, that's for sure. We have plenty of internal procedures that we need to follow, and we cannot work around them in that case. No way. But there you are also touching on the field of, if we go back to artificial intelligence, responsible AI: how you do it, how companies do it, whether you trust or don't trust those companies. That's another world to discuss.
00:27:25
Speaker
So, I'd love to hear, what are you actually exposing to your users? Is this like a fraud score, or a bankruptcy score, or a human rights score? It is simply, if you are responsible for specific clients, for knowing their health and reputation, a flag saying: hey Scott, this is your client, we found this article, and we are not sure whether this company is performing badly or whatever it is, but we found this, have a look at it, take your decision; maybe there is a potential risk.
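(Since the output is deliberately a flag plus evidence rather than an action, here is a hedged sketch of what such a signal record might contain; all field names are invented.)

```python
from dataclasses import dataclass, field

# Invented field names; the point is what the record does and does not
# contain: evidence for a human to review, and no automated action.
@dataclass
class ClientSignal:
    client_id: str
    article_url: str
    topic: str                                  # e.g. "fraud"
    highlighted_quotes: list[str] = field(default_factory=list)  # the "why"
    note: str = "Potential risk found in public news: please review and decide."
```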
00:28:03
Speaker
How are you thinking about, how have your users reacted? One of the reasons I'm asking this specifically is that I'm very big on the FYI, the for-your-information: this is incremental information for you, with no exact follow-up required. A number of people get very frustrated by that, simply because they're like, well, what am I supposed to do? You gave me something, therefore I'm supposed to react. And it's like, no, you're supposed to add this to your corpus of information, your understanding of the client you're dealing with.
00:28:41
Speaker
Are you finding that people understand what this is for? It feels like banking is a bit more of a mature industry, so people kind of get why you're doing this, but are you finding that people say, well, why don't you just tell me whether this is a problem or not, instead of flagging something to me? Or are your users receptive to it? Yeah, that's also a sensitive question. In this case, we want to move to a next step. I cannot go too deep on this because it is kind of sensitive, but we would like to have some actions on these signals, because of several things: regulations and compliance and those kinds of things.
00:29:26
Speaker
But again, one thing I said in the beginning, because it is so sensitive, and that is the word in this case: it is always human. We are not yet at a stage where we can say, let's stop doing business with this company because of these alerts that we have created. We cannot do that, at least yet. It is always a human decision in the end. So in this case, and in many use cases we could discuss, we are using artificial intelligence to accelerate some steps and to try to reduce human error.
00:30:15
Speaker
Like I said, 300,000 articles a day is quite impossible to process; we would need a lot of people. So we want to accelerate this, but in the end the human decision is still there. Okay, my question about this: so you're sending out these newsletters? No, it is more a kind of, it's not newsletters, it is a kind of signal. Okay. A signal saying... It's an internal platform. Yeah, yeah. Out of the 300,000 articles that you're reading, how many end up creating a flag? I don't have the exact numbers in my head, but I think it is around 3,000 or 2,500
00:31:08
Speaker
articles at the end of the pipeline, some with signals and some without. Because, for example, if you are responsible for a specific client, you would like to see all the things that could be a potential problem, but also other articles that are talking about your client, so you can be aware. In this case, we send both. Oh, interesting. So somebody can tune it. That's interesting. Sorry, Yuliia. No worries. My question is, if we're talking about any analysis, there is a quantitative and a qualitative part to it. You mentioned that you are measuring the performance of the models: the false positives, all those shiny, nice metrics. Do you collect feedback from the users of this platform, about whether it was helpful or not?
00:31:58
Speaker
Yes, in this case we have a feature for that. And I think this is really important for these kinds of models, whether traditional machine learning or generative AI: to have feedback from the users. It is quite important, and yes, we have it. And more than having feedback from the users who are using the tool, in this case specifically we have a team that is dedicated to giving feedback, so not using the tool for the business itself, but just
00:32:36
Speaker
reading the articles we have there and saying: okay, this is fine, this is not fine, to correct the output. And if you have the option to do this, the model can be retrained. It is like a child: the more a child tries to crawl and stand up, the faster they will walk. This is the same: the more feedback you have, the more you try, the better the models will perform. I have mixed feelings about it. You say, well, we started with machine learning and then we upgraded all these services to an LLM; it helped us deliver more robust outputs, it helped with maintenance and everything. And yet you have a layer of
00:33:33
Speaker
a dedicated team to review the output before it even gets to account managers. It is important, and then you are able to retrain the models with that feedback and improve them. Well, there is also a part of why I'm having mixed feelings about it: this is what you're saying from the technical standpoint, we are able to refine the result and make it better over time, but it also signals that you are being super responsible
00:34:08
Speaker
about the output of the model, because you totally understand the sensitivity of any decision that a person at the end of the pipeline could make. In this case, at least two people are looking at the same output before it is processed further or acted on, which I think is a beautiful example of responsible AI. Yeah, absolutely. And in these big companies you have a lot of processes for security, specifically for security in IT, but also around the data. This touches on what you were saying about responsible AI: it
00:34:57
Speaker
depends on each company, the way you implement new models using these new technologies. And then all of us can decide whether to trust those companies, and whether to buy or not to buy products from them. For example, in this specific case, for the good and for the bad, we have huge internal processes to review the models. We have teams just to review, technically speaking, the models, to review the process end to end: what we are doing, which models we are building, what the output is, why we need that specific piece of information.
00:35:42
Speaker
Do you really need that information to be there? These kinds of questions are always raised in these cases. So you really need to think deeply about the whole process, to have the best possible model that you can implement. And this also leads, in some cases, to the part of responsible AI about whether the model is biased or not on some specific topics.
00:36:21
Speaker
I think we have a lot of use cases that we use on a daily basis that could be biased, depending on whether it is a human or not who decides what to do. You know, reflecting on what you just mentioned: machine learning, like any model, is highly influenced by the culture of the company where it was developed. One of the most prominent cases is the release of, I don't remember which model, by Google.
00:36:57
Speaker
You remember those images they were generating? You mean the Gemini model from Google? Yeah. They were generating, like, Black Nazi pictures, and, you know, "show us a senator from the 1800s" and it would show people of all different skin colors, when there were only white people in that role. If you ask specifically for that, yes, it should be able to generate it if you want it to, but it just kept doing these things that were very questionable. Just like, I mean, Google Search is doing that right now, where
00:37:33
Speaker
somebody asked it what to do about depression, and it said one Reddit user said the best way is to jump off the Golden Gate Bridge. But, for example, if you think about that, I believe the machines could perform better than humans. No, for sure. It's just the influence of the internal culture: they wanted to do their best, obviously, to be inclusive and show diversity, but it didn't reflect reality. There were no Nazis like that.
00:38:07
Speaker
Yeah, but in that case, it is, I would say in a nutshell, a matter of training the model by the

AI's Impact on Jobs and Work-Life Balance

00:38:17
Speaker
book, right? Exactly. In an impartial way of deciding, of reaching such a decision, without looking at race, gender, or whatever it is, or whether it is a friend of mine. So whatever it is, the question here is: do you want the truthful answer, or the answer that is correct according to their internal policies?
00:38:46
Speaker
I would rather prefer a truthful answer. Look at me: I would rather prefer a truthful answer that resembles reality. I mean, I might be biased. No, but you should always want that, and that's why the companies need to train the models, get feedback, and retrain the models using that feedback, because it is quite impossible to think that a specific model you implement is just there and you never touch it.
00:39:22
Speaker
That's not realistic. It is normal: you will fail. You need to understand where the model is failing and fine-tune it in a good way so it fails fewer times. I believe that should be the way of implementing it. But it is not realistic to think that AI will solve all the problems and will be done right the first time. We also need to give the models a chance, because quite often you implement a model and the first results are not so good. And then you have the users saying: no, this will never work, this is not good, so let's not do this. And that is not fair. You need time to implement, to train the model,
00:40:21
Speaker
to make mistakes; everybody makes mistakes, and the models do too. Well, I have a question on that. How do we distinguish false positives as a mistake of the model from its fundamental biases, or its fundamental disregard of intellectual property? In some cases it could be just a bad configuration of the model. And this is what I'm saying, yes.
00:40:53
Speaker
For example, you could read something in a newspaper and tell me: yeah, this is a mess, this company was doing this, this, and this. And then I read the article and say: well, you missed this part; this, in reality, is meant in the good way, not the bad way. Then you read the article again and say: oh yeah, you were right. We humans understand that we can make mistakes, so we should also understand that the models will make mistakes, and they need to learn. Because I think in most cases what is in the human head is that the machine needs to perform right the first time. And this is not true.
00:41:37
Speaker
And specifically in AI. Because if you give a specific task to a computer or to a robot that is always the same, nothing to think about, it is always that, then yes, maybe you can expect that the machine will never fail, if you don't cut the electricity, of course. But if you think about artificial intelligence, there is something like a way of thinking in the machine, so it is normal that it will fail a lot in the beginning.
00:42:15
Speaker
It's normal, and people need to understand this, and we need to be a bit more... I'm missing the word. Forgiving? Yes, maybe. Understanding. Yes, understanding of that. I think that's tricky: what were the expectations? That is what it's called. You can have high expectations, but you need to understand that models will fail in the beginning and they need to learn. Like a child, again: a child will not start walking on the first try. They will fall down a lot of times. All of us did it; we failed a lot. And then suddenly, one day, we start walking, and then we start running. But even once we can walk, we still fall down sometimes. Fewer times, but sometimes we fall down.
00:43:01
Speaker
And maybe if you start running, you fall even harder, and that is understandable. I think we need to have this attitude too, and it is a huge change in human behavior to accept that the machine can make mistakes and that we just need to retrain or fine-tune it. It just implies that there is a margin of error. That's it. As we have for everything. Yeah. Absolutely. But people have an inherent trust issue with data, because people think the data is either a one or a zero: it's right or it's wrong. And so having that understanding of
00:43:42
Speaker
a degree of correctness, and, like you said, having that forgiveness, that understanding: we have to train people to understand, hey, it's not going to get it right 100% of the time. You don't get it right 100% of the time. You can't expect that of something that's trying to understand human behavior and language and things like that. Language isn't precise. "Is it a one or a zero?" is a much different question than "What is this article saying about this company, and is it even about that company when there are five companies with the same name? Which one is it about?"
00:44:17
Speaker
And sometimes it's just, how to say this, maybe conservative thinking, because I can give you a real example. I'm getting comments from people saying to me: oh, don't go into artificial intelligence, don't work on that. And I have been working in this for a long time, and I love it. But: don't work on this, blah, blah, blah. They say this even before asking me: okay, what are you doing?
00:44:48
Speaker
They don't know; they are just judging, as human beings do. And for me, this is not normal. You first need to understand what is behind the scenes: what are you doing, why are you doing that? And then you can judge. Of course, everybody can have an opinion, that's true, and we will not all agree in this revolution that we have been living through for a long time, and it will continue, for sure.
00:45:21
Speaker
We will not agree on everything. We know that a lot of steps will not work. We will face a lot of issues, that's true. That's the price for this evolution; we need to face it, and it is normal. It is only a behavior change. Yeah, so this is what I wanted to come back to, because during our prep call you mentioned that you see LLMs and everything that is happening today as the biggest disruption of humankind.
00:45:56
Speaker
And I could argue with that, because I think the biggest disruption was inventing the nuclear bomb. I feel like that is the biggest threat, while AI could just be a flavor on top of all of it. Well, how do you think about that? Yeah, we're getting philosophical. Yes, but I know we're about to finish. Yeah. In every revolution we have had in humanity, with every new invention that pops up, I think there are good things and bad things.
00:46:38
Speaker
For example, a simple thing like a rope was a good invention, because we can use it for a lot of things; everybody uses it on a daily basis. But there are people who use it in a bad way. So, of course, all of these new inventions and revolutions have good things and bad things. Well, I hope that all people will end up on the good side of the revolution.
00:47:14
Speaker
But there are, well, in this case of the technological revolution, and specifically AI: for example, one of the good things about technology is that it's available 24/7. But this is also a threat, because humans are not available 24/7. And so you can say: oh, okay, so we are going to have more unemployment. Yes, unfortunately, yes. But what we need to think about is what the people who lose these jobs can help with in
00:47:56
Speaker
other areas. One simple example we had, I don't know, maybe 20 or 30 years ago, as I'm trying to remember: when you went by car and needed to pay on the highway, first you always needed to have cash with you. Then there was a huge advance: you didn't need cash anymore, you could use cards; you gave the card to the person, and that was good. Then came another evolution: you go there, there is no person, you insert your card yourself and just pass. Now you have a box in your car, you don't even use your card; you just drive through,
00:48:39
Speaker
and your account is charged. So now you have several of these toll plazas with maybe two or three people where before there were ten. This will also happen in the future with artificial intelligence, and what we need to do is be prepared for that, and try to be creative about different types of jobs, or maybe working fewer days a week. Why not work four days a week, or three days a week? Then we can spread the work across more people, and we have more time for ourselves.
00:49:23
Speaker
I was reading something a few days ago, I don't know where, that one of the negative things about AI would be that it will make humans lazy, the next generations. And I can understand this: lazy in the sense of, okay, the machine will do this for me. But if you look at it from a completely different point of view, I think it could be a benefit, because you will have more time for yourself. If you have machines doing some of the tasks you usually do, that's great. Let's use that time to do some sports, or to read a book, or to do nothing, to do whatever you want. I think it would be great. So it depends on the point of view we take and how we face it. Okay. It sounds like maybe we will
00:50:21
Speaker
finally be able to achieve a little work-life balance, let's put it this way. I hope so. Yes, I hope so. It would be great if we can use this revolution for that. Because over the last maybe 30, 40, 50 years, I think the mental health of people has been getting worse because of the stress, because we are always being pushed to do this, this, this, and this, with more and more in less time. And, for example, here in the Netherlands, one of the really good things I can see is that there is a lot of respect for your life.
00:51:06
Speaker
And this could be something that helps humanity have more time to do other things, and to leverage what technology can do for us. Very beautiful thinking. Rui, thank you so much for being with us. It was a great pleasure listening to how empoweringly and inspiringly you think about AI, and it's so beautiful how you managed to implement it in banking, do such great stuff, and also be respectful about human rights, intellectual property, and everything.

Closing and Appreciation

00:51:44
Speaker
Thank you so much for being with us.
00:51:45
Speaker
Thank you also, again, for the opportunity to be with you. I think we could speak about this for two or three more hours, but thank you for the invitation. It was a great pleasure to be with you both. Thank you. Thank you.