
E 23: Turbo-chart CEO chats construction planning and schedule risk analysis

E23 · The Off Site Podcast

In this episode, Jason & Carlos are joined by construction project planning and schedule risk analysis powerhouse Santosh Bhat.

The three discuss the world of risk assessment in construction, whether AI will play a positive or negative role, and how the benefit of planning might not be the plan, but the process. 

Follow Carlos on LinkedIn | Follow Jason on LinkedIn

Transcript

Introduction to Episode 23

00:00:00
Speaker
Hello and welcome to episode 23 of the Off Site podcast, where we chat all things construction and technology. My name is Carlos Cabrera. And I'm Jason Lancini. Good day, Carlos. How are you doing? Yeah, pretty good. Thanks.

HS2 Project News and Implications

00:00:12
Speaker
Interesting news yesterday on the HS2 front. The good news is they're going to build Euston Station. So that little leg into London is being done. If it was Old Oak Common, I wouldn't use it. So that's handy. But they have cancelled everything north of Birmingham. So we're spending
00:00:27
Speaker
32 billion pounds to reduce a train journey by 25 minutes, which, uh... I wasn't going to say it like that. Yeah. You didn't tell me you were going to bring this up, so I didn't even think about what to say regarding HS2, and it needs a trigger warning, but I did hear someone say that, like...
00:00:48
Speaker
This is a risky conversation point, but it was like: is this essentially the UK saying they don't have the capability to build a train line between London and Manchester? I had no idea. The worst part of the press conference yesterday, with the prime minister actually announcing it, was that the trains will continue up to Manchester, but they're going to run on existing lines and not at high speeds. So they're just... Yeah, it's fine. You get out, everyone pushes a bit, and you just get there in the end.
00:01:18
Speaker
Yeah, we'll see. Anyway, he did commit to spend the remaining 30-something billion pounds on, and I quote, hundreds of infrastructure projects. So yeah, let's see how quickly they can go through planning approval. Or what the hell the saving is then. Yeah, exactly. Good news: we're not going to spend the extra 20 grand on the car, we're just going to buy 20 grand worth of lollies, Xboxes, and, like, toys.
00:01:47
Speaker
Yeah, wait for the problem to surface again in a few years time.

Meet Santosh: Schedule Risk Expert

00:01:51
Speaker
Yeah, the last episode last week was one of the first that I've actually listened back to in full. And I've got to say, even I was interested in it. So that's a win. You don't listen to the back and forth? I can't stand my voice, as with most people that listen to it.
00:02:09
Speaker
Yeah, to be fair, listening to your own voice, that's not good viewing for anyone. But yeah, no, it was a decent episode. So today, we have a guest who's a bit of a legend in the planning world. He's one of the most renowned schedule risk experts in Australia, has huge experience on major projects, and even founded his own application in the planning world called Turbo-Chart.
00:02:32
Speaker
So pretty serious CV. Welcome to the podcast, Santosh. How are you? Thank you, Carlos. I'm very well, thank you. Santosh, I was saying to Carlos the other day that you do a lot of, uh, schedule risk work for most of the, or many of the major infrastructure projects across Australia and probably
00:02:53
Speaker
uh, somewhat beyond, and that would probably have the potential to, uh... like, have you ever counted up the number of projects that you've worked on, or for? Because it's got to be one of the largest numbers out there.
00:03:07
Speaker
I've never actually put a figure against it. I've certainly tried to put some sort of representation of the type of projects I've been involved in and at what stages. I've lost track. I mean, I've been doing schedule risk for well over a decade, and some of them have been very simple and basic ("here's a schedule, run an analysis, and we're not telling you anything else", which is a bit pointless), right up to being very heavily involved.
00:03:32
Speaker
been very heavily involved in a lot of infrastructure projects from that schedule risk perspective, I guess. Yeah. So it's very topical, because you're almost like your own AI model.
00:03:44
Speaker
Uh, trained on many projects. Yes. I mean, I'd be careful in saying that, but yeah, um, certainly... I mean, my approach to it is, first of all, I say I'm not a risk manager. I've never been a risk manager. I come from a planning and scheduling background, but my perspective is that schedules are quite complex beasts. You know, they have a lot of behavioral things around them. And I think
00:04:11
Speaker
schedule risk often requires an understanding of how a schedule functions and how a schedule behaves to be able to do that risk analysis. And that's where I got interested in it. And that's where my background in planning and scheduling, I think, fits in nicely with doing that risk analysis. But I do explicitly like to say I'm not a risk manager per se. I guess to kick off the real discussion there, it'd be great if,
00:04:35
Speaker
considering our audience of engineers, what is QSRA in, like, its simplest form? Right.

Understanding QSRA for Engineers

00:04:42
Speaker
So QSRA, or quantitative schedule risk analysis, I guess is an analytical way of assessing how risks can affect schedules, or the program, as we call them here in Australia as well. And it can mean a few different things, right? And the easiest way that I like to explain it to engineers specifically is that I generally don't like to approach it from
00:05:03
Speaker
the typical risk perspective of, you know, start diving into probabilities and distribution shapes and things like that. What I like to say to engineers is to say, just tell me how things could change from what you expect, right? And if you view risk as being how things can change, then that actually makes it quite a simple perspective, right? You say,
00:05:26
Speaker
while we're doing this, here are all the things that may happen; that makes me think about what can affect that thing that I'm doing. And if you just look at it simplistically like that, well, that is risk, in a way. Some risks are good, where productivity is achieved or exceeded, things go well, and you may do things faster and cheaper than you thought.
00:05:48
Speaker
And risks can also mean bad things, negative outcomes, where you're saying it takes longer and costs more money, you've hit some problems. But the good thing about explaining it that way to engineers is that engineers are already doing that. You know, I always like to say, if there's one thing engineers are very good at, it's planning. And that's, you know, not planning as a role, but planning as in the actual function of thinking about what we're doing. And
00:06:10
Speaker
Engineers are very good at also thinking, well, what if this happens? What if that happens? What else could happen? It's already in their heads, really. And I guess the job of the schedule risk analyst or the risk analyst in that case is actually to tease that information out and formalize it a little bit more. That's the approach that I like to take is to say that the smarts and the knowledge is not in me. I'm not the person that's going to be able to tell you how to do your job well.
00:06:38
Speaker
Yeah, it's like playing out scenarios, right, which the engineers are good at. And I've never... I might sound like an idiot now, but I've never heard someone talk about risks as good and bad. I've never heard of, like, a positive risk being that you go faster. Is that me being ignorant? Or is that, like, a kind of normal thing?
00:06:55
Speaker
No, I mean, I think the reason why most people think of risks that way is because they often hear it in terms of risks and opportunities. You know, people say, have you got an R&O register? Yeah. So risk is often perceived as being a threat, and the other side of the equation is an opportunity. But if you look at the definitions of what risk is, like the ISO standards, what they say is risk is uncertainty, right? It's just saying things are unknown or variable, and that can mean good or bad,
00:07:23
Speaker
right? So in pure risk management terms, you know, we talk about threats and opportunities, or in some cases both. It can be either side of that equation. Yeah, so Santosh, I guess to also tie in Carlos's question about QSRA.

AI's Role in Schedule Risk Analysis

00:07:42
Speaker
We had been having these conversations over and over again with different teams on projects, both UK, Oz, and New Zealand, about artificial intelligence and construction, and whether AI would have any impact on construction. A lot of the conversations then seemed to shift towards schedule and schedule analysis. And we wanted to kind of explore, I guess, what it is, where it is,
00:08:09
Speaker
from the perspective of someone on the ground delivering a project, you know, is it something I should be aware of today, and what can it do for me? So last week we spoke with Dev, CEO of nPlan,
00:08:22
Speaker
as, I guess, a representative of a vendor of that type of technology. We asked questions around, like, what can it do? And one of the areas that he talked about, and obviously that's where their product is, is around this process of QSRA. So Carlos's question around schedule risk analysis and QSRA is directly linked to this being an area where some companies are saying that AI can make a difference. So,
00:08:52
Speaker
with that in mind, I don't know if you have a general thought about that. And, I guess, as someone on the ground, do you see, maybe if I push you for it, are there elements of the process that you run in a QSRA on projects that could be assisted, conceptually, by some form of artificial intelligence, do you reckon?
00:09:16
Speaker
I think there is definitely an opportunity for it. And, you know, whilst I'm generally a little bit of a skeptic on AI, it's not because of AI as in the technology and what it's capable of, but fundamentally because of the data, the schedules, that we use for this artificial intelligence.
00:09:38
Speaker
I just think it's not ready. It's just not suitable for it. And we can elaborate on that in a moment. But when you think about what QSRA is, what we're doing with QSRA is we're saying, here's a schedule. It gets us from start to end. And it consists of activities, relationships, durations, all that business, all that fun stuff. And what we're applying is we're saying, what happens if those things change?
00:10:05
Speaker
So typically what we used to do with QSRA is we used to take durations of activities and modify them. We would say it's not 10 days, it's 15 days. What does that do to the end date? And we can apply that up and down the schedule and we use that as a quasi measure for risk because we're saying the risk is that those durations may change.
00:10:27
Speaker
Now, to me that's a very basic or fundamental form of risk analysis. I wouldn't even call it risk analysis, actually, because I would say all you're doing is checking the sensitivity of the durations, right? You're saying if durations change, my results are different. In fact, you don't even need a highly paid consultant and fancy technology to come to that conclusion, right?
00:10:48
Speaker
So where AI can fit into it is to say, if you have this huge pool of data, and you find similarities in activities and projects, you could potentially tweak durations to understand what impact that had. And say, based on my pool of data, here's the average, or P80, as we say, 80% confident duration for that range of activities, and here's the result it's produced.
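That P80 idea can be sketched as a tiny Monte Carlo simulation. This is an illustrative sketch only: the triangular distribution is one common, simple choice, and the numbers are hypothetical, not drawn from any real project pool or any specific QSRA tool.

```python
import random

def p80_duration(low, likely, high, trials=20_000, seed=42):
    """Sample an activity duration many times from a triangular
    distribution and return the value that 80% of samples fall at
    or below -- the 'P80' (80%-confident) duration."""
    rng = random.Random(seed)
    # random.triangular takes (low, high, mode)
    samples = sorted(rng.triangular(low, high, likely) for _ in range(trials))
    return samples[int(0.8 * trials) - 1]

# Hypothetical activity: planned at 10 days, but ranging 8 to 18 days
# based on an assumed pool of comparable past projects.
print(round(p80_duration(8, 10, 18), 1))
```

Note that the P80 here (around 14 days for this skewed range) sits well above the 10-day plan, which is exactly the kind of "your 10 days should be 15" statement discussed next, with no "why" attached.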
00:11:17
Speaker
The big problem with it that I see is that I would say, well, that's nice. I mean, that's interesting, but it's not very helpful. Because the first thing is, and I put it to you in this position: you said to an engineer who was responsible for a piece of work that your duration of 10 days is wrong, it should be 15 days. The first question they're going to ask is why?
00:11:38
Speaker
What is it that's going to make it take 15 rather than the 10 that I expect? And that's where I think the whole thing falls over. Because, first of all, the context that tells you why it's 15 not 10 is just simply not there in schedules. I've lived and breathed schedules, as Carlos said, for the better part of two decades. I understand how people develop schedules and what's contained in schedules.
00:12:06
Speaker
And all we essentially have is descriptions that say install steel, excavate trench. We have no understanding of what's behind that. And that's where, as per my comment earlier, the data behind the AI tools is just not ready.
00:12:25
Speaker
I guess, Santosh, to steelman, to steelman the opposite case, if I repeat back to you a little bit of what you said: okay, maybe AI could tell us that it believes there's a risk of something happening, or that a part of our schedule is more risky than another, but what it struggles to describe at the moment to, let's say, an engineer is why, and maybe even what to do about it.
00:12:55
Speaker
If that's a kind of fair, if bad, summarization of you, then if I was to steelman the other side: isn't there always this problem in schedule risk analysis of, like, unknown unknowns, and the fact that there is a risk just because you can't describe what it is?
00:13:13
Speaker
Because, aren't you, I guess, to say it another way: if you're only describing the risks that someone can explain why they're a risk, then, yeah, aren't you only looking at the core things that you think about? For sure. I mean, you know, that's one of those things that we always need to think about: and what else? You know, what else is there? What other risks can happen?
00:13:36
Speaker
You are relying on all those biases that exist (recency bias, confirmation bias, there's a whole stack of them) to provide you the input. The problem, again, is that AI can sift through schedules, but none of that's ever contained in schedule information. I mean, you can say, here's our planned schedule and here's the actual schedule that we worked to, and the difference between the two is never explained anywhere in that schedule data.
00:14:05
Speaker
It may be captured somewhere else in the delay register. It may be captured somewhere in the forensic analysis. In fact, that's probably why a whole industry exists for doing that sort of analysis. Yeah, but certainly if you talk about just XER or MPP files, it's not there, right?
00:14:24
Speaker
Very rarely would someone go to the diligence to actually put "duration changed due to XYZ". And that's the big concern. Now, where I think that sort of level of duration analysis does work really well
00:14:41
Speaker
is when you're using schedules at a very macro level. If you're looking at a very simplistic overall duration for a project, you're saying, this piece of highway, we plan on 36 months, but based on other information that we have, we know that similar projects are done in 28 to 40, et cetera. The problem is that sort of analysis is okay only if you're not in control of those durations.
00:15:10
Speaker
But if you are in control of those durations, if you are charged with the delivery and execution of those projects, you are in control of the work, of the risks, of managing the risks and managing the duration. So to simply say the duration may change and the result might be different is not good enough. You need to dive a bit deeper to understand, well, what is it? What is causing that, potentially?
00:15:32
Speaker
Not to sound pessimistic, but what AI could offer us is to actually start telling us what is the information that we need. If we could produce schedules that contained all these various pieces of information, and they could be captured into a repository and used for the future, then what would that be? And I think, as an industry, we're not there yet. Certainly not in the scheduling space.
00:15:58
Speaker
Yeah. And I think I agree with you; you know, we've had previous conversations to this effect. Like, there's just so much that happens on a project that's captured, maybe, on a piece of paper shoved under the seat of a supervisor's truck, that really isn't in the schedule.
00:16:14
Speaker
And when we spoke to Dev last week, he made this statement, or maybe I prodded him for the statement, that you can think of AI as someone that's had lots and lots of experience, on hundreds of thousands of projects. But then, the counter-argument I made last week was that if you were on those projects, you would have had more senses. You would have been in the meetings, you would have been hearing, you would have been seeing, not just staring at the project through the XER file.
00:16:40
Speaker
If you look at it through that lens, like, okay, maybe an AI has experienced 700,000 projects through the XER file, it's kind of like they've experienced it, but had, like, a reduced set of senses to experience those projects.
00:16:57
Speaker
Is there like a case for AI giving, to use another analogy, I'm just stacking analogies on analogies, but to use another analogy, like if I wanted something beautifully written about me or about my company or about Carlos or something, I could go pay a very experienced writer to write something. It would be great. It'd be expensive. It would take some time.
00:17:15
Speaker
But if I want something, like, average written, I can go ask a large language model to write something average; it'll cost me nothing and it will be done instantly. And maybe that's useful in lots of use cases, or maybe there's, like, a hybrid. Is there, like, a case that you can see for projects where maybe they get 80% or 70% of the value for one one-hundredth of the time or cost?
00:17:41
Speaker
So I think where this is kind of heading is this sort of automated schedule generation. You know, you're essentially saying, here's a model of what we're building, generate the schedule for me based on predefined step libraries. Yeah, I was more thinking almost the opposite; we stay in the, like, risk assessment world.
00:18:01
Speaker
There's, okay, we could have this periodic QSRA process with a team of people or one person or an expert or whatever. But maybe between then and the next time we do that, we're making all sorts of decisions about our schedule. And maybe we could use some sort of gut check of how we're affecting things.
00:18:23
Speaker
It's almost like having a taxonomy or a categorization of potential risks that may affect your project, right? So you're saying, you're on a railway project, your potential risks are getting access to track, the weather, specialized equipment, commissioning engineers, all that typical kind of stuff.
00:18:44
Speaker
Now, you could very well generate that sort of high-level categorization of potential risks. You'd have the design delays, you'd have approval delays, all that generic stuff. But my question would be, how practically useful is that? Because if you're on your project, it's easy for someone to say, design approvals are always a problem. Well, yes, we know that. But how is that going to affect my project specifically?
00:19:11
Speaker
And that's where you kind of need some information that essentially would say, based on all these parameters, your risks would look like this. Now, my question to you is, where is that knowledge coming from? It certainly isn't in schedules. That's the big problem that we have: when you think about what schedules are, they're conceptual models.
00:19:35
Speaker
It's essentially just a model. You may have heard the expression, a map is not the territory, right? So a schedule is not a factual statement of what will happen, and should never be treated that way. It is just a guide.
00:19:51
Speaker
Yeah, if you want to be really, really negative about it, you say it's a best guess, right? You need something that then says, right, to improve the schedule you need to understand what are all the things that could vary, what are all the things that can happen that you don't expect.
00:20:09
Speaker
It would be fantastic to be able to pull something off a shelf or in the digital world, you tap into a database that says, here are the typical type of risks on your type of projects based on your specific project, based on some parametric sort of values. Here's how those risks could manifest. And I think that would be a spectacularly useful product or a useful tool set to have. I just don't think schedules are the source of that information.
00:20:37
Speaker
Yeah, right. So my optimistic view is: Santosh comes and helps us do a QSRA at this point on a project, and at the same time, we run our little AI model to look at the risk, and we consider that the baseline we compare against. It says there's this much risk on the project; Santosh has done a real-world human version of it.
00:21:00
Speaker
And then in two months' time, we try and make changes to the plan, and okay, the end date might've come in a week, but it's saying risk has gone through the roof, so maybe we should tread carefully. You know, that's kind of... you're saying, like, that's lovely, Jason, but if you're just relying on the schedule data, you might as well just shake the magic eight ball. I don't know what the usual magic eight ball answers are, but you get the idea. Yeah. Absolutely. All signs point to yes.
00:21:27
Speaker
I mean, really, if all you're saying is let's adjust the durations and see what the answer is, it goes back to understanding why. Now, the beauty of the way QSRA has evolved is that we are moving towards methods that are not just about adjusting durations. We always need to give an explanation as to why.
00:21:48
Speaker
When you're saying your 10 days may grow to be 15 days, let's not just put that as schedule uncertainty. Let's actually give you a reason for why that happens, and we can trace back to all those reasons. Now, obviously, we still have that issue where we're saying, well, have we captured all the right reasons and have we captured those risks with the right numbers?
00:22:11
Speaker
That is a significant step forward compared to what we used to do a decade ago, where we would just put plus and minus values on durations. And certainly some industry bodies have moved to that, what we call a risk-driver approach, that says, based on your analysis, the driving risks are the following items, right? And we can do that now with the technology that we have. That's not AI in any way.
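The risk-driver approach described here can be illustrated with a minimal simulation. Everything below is hypothetical: the risk names, probabilities, and impact ranges are invented for illustration, and real QSRA tools are far richer than this sketch.

```python
import random

# Hypothetical risk drivers: (name, probability the risk occurs,
# (min, max) days it adds to the project if it does).
RISKS = [
    ("design approval delay", 0.4, (5, 20)),
    ("wet weather",           0.6, (2, 10)),
    ("utility strike",        0.1, (10, 30)),
]

def simulate(base_days=100, trials=10_000, seed=1):
    """Monte Carlo over the risk drivers: in each trial, each risk
    fires with its probability and adds a sampled impact. Returns the
    P80 finish and the risks ranked by total simulated impact, so
    every extra day traces back to a named risk rather than to
    anonymous 'schedule uncertainty'."""
    rng = random.Random(seed)
    totals = []
    impact_by_risk = {name: 0.0 for name, _, _ in RISKS}
    for _ in range(trials):
        total = base_days
        for name, prob, (lo, hi) in RISKS:
            if rng.random() < prob:       # does this risk fire?
                days = rng.uniform(lo, hi)
                total += days
                impact_by_risk[name] += days
        totals.append(total)
    totals.sort()
    p80 = totals[int(0.8 * trials) - 1]
    drivers = sorted(impact_by_risk, key=impact_by_risk.get, reverse=True)
    return p80, drivers

p80, drivers = simulate()
print(round(p80, 1), drivers)
```

The ranked `drivers` list is the point of the approach: when the model says "allow extra time", it can also say which risks are driving that answer, which is the "why" discussed next.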
00:22:37
Speaker
Where I think you could use AI is as a double-check or a sanity check. But again, it goes back to my point: if you're presenting the results of an AI analysis to a team of people responsible for delivering the work, and you say AI is telling us that you should allow an extra three months,
00:22:54
Speaker
their first question is going to be why. Yeah, you know, they will want to know why, because they're the ones responsible for doing everything they can to not extend that project by three months, right? So, yeah, if you're asking me to allow three months for unknown reasons, I might as well run the gauntlet and see if it does slip three months.
00:23:17
Speaker
Yeah, you know, save yourself a lot of time and effort and just do the typical 12.5% contingency factor. And there's a number of... I mean, even the way that I see schedule analysis, and this is not something new to AI, this has been happening for the last decade or so, ever since someone looked at an XER file and said,
00:23:41
Speaker
there's hundreds of thousands of rows of data here that we can suck into these wonderful tools to pull out all this information. It's like fossicking for the golden nugget, hoping that something in there will reveal: this is the reason why your project will fail, because you didn't put the right type of links in. That's not the case. It's just not true. I mean, the schedule is just a model. We use the schedule to help guide our decisions.
00:24:09
Speaker
The quality of that information is where I think we have a problem, but it's better than nothing, you know. And the thing to keep in mind, Jason, and you're very aware of this, is that the benefit of planning and scheduling a project is not the schedule. It's the process you go through to develop that schedule, right?
00:24:30
Speaker
So it's getting the people to think about it. It's challenging those people's assumptions. Then you realize: you can't do this, because that over there is happening. Oh, I didn't know that. Oh, okay, well, now you do, right? That's the actual benefit. The schedule is just a byproduct, really.
00:24:49
Speaker
Touching back on your point that the information isn't in the schedule.

Debate on Scheduling Limitations and Needs

00:24:54
Speaker
So we want to know specifically like what the risk is and ideally what the best mitigation measures are based on history, which is that sort of experience aspect.
00:25:06
Speaker
Is it the fact that we just don't add that information to the schedule? Is it a limitation of schedules? Is it the fact that these applications don't pull data from other sources as well as the schedule, which could be coded in a way to provide that sort of link? What do you think is the limiting factor at the moment as to why we can't do that?
00:25:23
Speaker
I think it's a bit of both, definitely, Carlos. I mean, first of all, think about the way we develop schedules: schedules consist of activities that represent scope of work, right? Each activity, each bar that you see on a schedule, represents effort.
00:25:41
Speaker
And for us to actually sit down and explain what each activity represents, you'd be writing an essay for almost a year; there'd be a work method statement for every activity. So we don't do that. That's why we write "install pipe". That's the level of information that we go to.
00:25:58
Speaker
But what we also then don't do is maintain that information very well. Like, when we say install pipe, but we hit a service while doing it, we don't put that type of information in. A simple step might be to tag it to a delay register or tag it to something. And I think this is where technology can fit in: to say that the schedule is not just the schedule, it needs to be this repository of connected information.
00:26:26
Speaker
One of the classic examples that I often draw upon: someone might say, look, this 10-day task in your plan has actually turned into a 15-day task in your actuals, and there'll be this big hoo-ha about it being five days longer, you took this delay, and it'll take someone to just go, hang on, it rained for five days.
00:26:46
Speaker
That information is not captured in the schedule. It doesn't specifically say five days were lost due to rain. But if you went through site diaries, if you went through the superintendent's records, you might find, jeez, a whole week we lost because of rain. It still took us only 10 days to do the actual work, so we were actually as productive as we planned to be. And again, that's a very simple example of where schedules just don't tell you that information.
00:27:11
Speaker
But if we had the connected pieces of data, you could quite easily tease that information out.
00:27:17
Speaker
You need to be scraping everything to form this overall narrative of a project, to then give that real context as to why the plan changed and what we did. And I think it's worth doing, because when you think about what the schedule represents on a project, it's the only tool that we have for time-phasing information. Think about what a schedule gets used for: it gets used for resource loading, it gets used for cost loading, our sequencing of work, our communication.
00:27:45
Speaker
A schedule is rarely just one thing used for one purpose. So if there's anything that we should be spending a lot of effort to improve the quality of, it's the way that the schedules are generated. But that takes me to another angle, which I put on all this. And I touched upon it earlier that the value in the schedule is not
00:28:06
Speaker
the schedule itself, but the process that it generates, the thinking and the communication that it generates. And so, to me... I mean, you guys may have heard of things like the DCMA 14-point checklist. It's like, yeah, we run all these metrics. Oh, it's Carlos's favorite thing. You love the 14-point.
00:28:28
Speaker
I mean, we've gotten to a point where I've seen instances where that is the sole measure of how good a schedule is. And that really shouldn't be the case. I've been exactly there. Yeah. And it's, like, one of these... No lags.
00:28:49
Speaker
Your project will fail because you have 10 activities with high float. To me, and this is something I read from a guy called Murray Woolf a long time ago, he wrote a book called Faster Construction Projects with CPM Scheduling, and to put it in a nutshell, he said there's only one good measure of a schedule... there's only one measure of a good schedule, sorry. That is: is it being used? That's it.
00:29:17
Speaker
Simple as that, right? If people are using your schedule, then that's a good outcome. If they're not using your schedule, it could be for a vast number of reasons, but that's not... yeah. And that kind of goes back to my point that, you know, the schedule as an object is not as valuable as the method, or the process, that sits around it.
00:29:42
Speaker
It's a very loose abstraction of what actually happened on the project. And the other little thing that I always like to make fun of is that very rarely does a project run on one schedule. You guys are across this very well. And think about how many different... and I don't just mean there's one schedule that gets used by the executive team, one schedule used by the delivery team, and so on.
00:30:13
Speaker
I don't mean it in that sense either. There's also, at any given point in time, 15 different scenarios of the potential schedule. Yeah, why is it that on every project... I think, without fail, every project that
00:30:28
Speaker
we onboard as a company, there's always someone in the planning team that says, uh, we might need to hold off a week before we import into Aphex, because we're currently going through a re-baselining exercise. Everyone is going through a re-baselining exercise.
00:30:48
Speaker
You know, it's like when you're using your typical office spreadsheet type documents, you know: Rev1, Rev1A, Rev1A draft X. The schedules are exactly the same. At any given point in time, there could be a multitude of schedules on a project. And so the joke that I always make is: the first thing you need to do is identify which schedule you're going to be using for this analysis. Come back to me when you've identified which is the one you want to use.
00:31:18
Speaker
I was just going to say, Carlos, I wonder, if you're consuming those schedules en masse as a model, how do you even pick apart which ones are versions of the real schedule and how many are just what-if schedules sat in the P6 database that someone's made on a whim?
00:31:41
Speaker
Yeah, and to take it to the extreme: if you looked at it as pure data, you could potentially have two schedules with exactly the same number of activities, exactly the same activity IDs, even the same descriptions, with slightly modified durations between the two, and they can produce vastly different outcomes. Right? But from a
00:32:11
Speaker
pure data difference, you'd probably look at the two and think they were exactly the same. All you'd have is the difference in durations, but behind it there'd be a whole different set of reasoning for why those durations are different. Here's a really simple one: wet versus dry programs, or wet versus dry schedules. How would the machine know the difference, other than being able to say that one calendar is different to another calendar?
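[Editor's note: the "pure data" comparison Santosh describes can be sketched in a few lines of Python. The activity IDs, descriptions, and durations below are hypothetical, not taken from any real project; the point is simply that two revisions can look identical except for their durations.]

```python
# Two hypothetical revisions of the same schedule: identical activity
# IDs and descriptions, differing only in a couple of durations.
rev_a = {
    "A100": {"desc": "Excavate foundations", "duration_days": 10},
    "A110": {"desc": "Pour concrete", "duration_days": 5},
    "A120": {"desc": "Cure and strip", "duration_days": 7},
}
rev_b = {
    "A100": {"desc": "Excavate foundations", "duration_days": 14},
    "A110": {"desc": "Pour concrete", "duration_days": 5},
    "A120": {"desc": "Cure and strip", "duration_days": 10},
}

def diff_durations(a, b):
    """Return {activity_id: (old, new)} for durations that changed."""
    changes = {}
    for act_id in a.keys() & b.keys():  # activities common to both
        da = a[act_id]["duration_days"]
        db = b[act_id]["duration_days"]
        if da != db:
            changes[act_id] = (da, db)
    return changes

print(diff_durations(rev_a, rev_b))
```

The diff surfaces *what* changed, but, as Santosh notes, nothing in the data explains *why* those durations differ.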
00:32:40
Speaker
These are all problems that can be solved. They're not challenging problems from a data and analytics perspective, but it goes back to understanding why. Have you got the understanding to say there's a difference in calendars, and here's the reason why? Because I can tell you one thing: XER files won't tell you why those calendars have days marked as work days and non-work days. Yeah.
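[Editor's note: the calendar point can be illustrated the same way. A minimal sketch, with made-up workday flags rather than data parsed from a real XER file: the machine can detect *that* two calendars differ, but the reason, such as a wet-weather allowance, lives outside the data.]

```python
# Hypothetical weekly work-day flags for a "dry" and a "wet" calendar.
cal_dry = {"Mon": True, "Tue": True, "Wed": True, "Thu": True,
           "Fri": True, "Sat": True, "Sun": False}
cal_wet = {"Mon": True, "Tue": True, "Wed": True, "Thu": True,
           "Fri": True, "Sat": False, "Sun": False}

def calendar_differences(a, b):
    """Days flagged as a work day in one calendar but not the other."""
    return sorted(day for day in a if a[day] != b.get(day))

print(calendar_differences(cal_dry, cal_wet))  # ['Sat']
```

A real comparison would read the calendar tables out of the XER export, but the gap is the same: the data shows the difference, not the reasoning behind it.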
00:33:08
Speaker
I have one last question, Carlos, if I can fit it in.

AI's Influence on Risk Analysis Roles

00:33:13
Speaker
So there's a podcast that I listened to where they talk about how writers are scared about losing their jobs to AI, and
00:33:24
Speaker
the common phrase is that the writers are not going to lose their jobs to AI, they're going to lose their jobs to another writer who knows how to use AI. So from the perspective of people doing schedule risk analysis at the moment, whether that's a risk manager or someone else, do you see a world where the AI is kind of a co-pilot, an assistant, that type of thing, to the human process?
00:33:48
Speaker
I absolutely think so. I mean, we're doing it already in lots of other aspects of our day-to-day jobs. There are so many ways AI can be used to do things quicker and easier already. Maybe not yet in the sense of generating schedules and cost estimates and all that, but it will get there.
00:34:13
Speaker
The biggest fear I have with AI is not AI itself, but what it's going to do to gaining that experience. We say AI will do all the grunt work and the easy work for you so you can focus on the value-adding work. But, and I think I posted something about this previously,
00:34:32
Speaker
doing that grunt work and those menial tasks is how I believe I actually gained the experience to understand what's right and what's wrong. And if you take that away from someone, they can't instantly become graduates with 20 years' experience. What happens in 10 years' time? Yeah, you go from an experienced person being assisted by AI to someone who is basically just asking, what does the God box say?
00:35:01
Speaker
Yeah. And the thing is, we're assuming the AI will be able to give the right answers. The biggest risk we've seen, certainly with things like ChatGPT at the moment, is how confidently incorrect they can be. And if you're not experienced enough to recognise that it's confidently incorrect, you'll just look at the confidence and say, well, that must be the case. Confidently incorrect is my nickname internally, Santosh.
00:35:29
Speaker
That's a good spot to end it. Absolutely, I think that is time. Santosh, thank you very much for coming on, we appreciate your time. And thank you very much, everyone, for listening.