
Episode 1: Understanding GenAI and its Impact with Dr. Balaji Srinivasan

S1 E1 · Observability Talk
184 plays · 9 months ago

In this episode of VuNet's "Observability Talk," Bharat Joshi, co-founder of VuNet Systems, speaks to Professor Balaji Srinivasan, a distinguished academic at the Wadhwani School of Data Science and AI, IIT Madras, about the latest advancements in Generative AI and their evolution over the years. Dr. Balaji also highlights the emergence of multimodal models and their potential to revolutionize observability by enhancing incident management, root cause analysis, and real-time business analytics.

The conversation further explores the dynamic interplay between Generative AI technologies and the domain of observability, emphasizing the significance of domain-centric approaches in refining the accuracy and efficiency of observability platforms. Dr Srinivasan elucidates how Generative AI can be leveraged to predict potential threats, monitor system health in real-time, and facilitate a deeper understanding of business analytics, thereby driving innovation and efficiency in enterprise IT operations.

Transcript

Introduction to Professor Balaji Srinivasan

00:00:11
Speaker
Welcome to a new episode of Observability Talk. Here we discuss everything observability, and peripheral topics that impact enterprise IT operations.
00:00:21
Speaker
Our topic for today is understanding generative AI and its impact on observability. Today we are glad to have Professor Balaji Srinivasan. Dr. Balaji is a professor at IIT Madras in the Wadhwani School of Data Science and AI. With a BTech from IIT Madras, an MS from Purdue University, and a PhD from Stanford University, Dr. Srinivasan brings a wealth of academic expertise.
00:00:46
Speaker
Previously, he held positions at IIT Delhi and was a postdoctoral fellow at the University of Michigan Ann Arbor. His current research delves into the frontier of computational algorithms, employing a mix of probabilistic models, PDE-based approaches, and data-driven methods to address diverse engineering challenges. A warm welcome to everyone to Observability Talk.
00:01:11
Speaker
Let's start with a very simple question.

Trends in Generative AI: Multimodal Models and Open Source

00:01:14
Speaker
What are the latest advancements and trends you see in generative AI that you find most exciting? Hi Bharat, thanks for inviting me to the podcast. Yeah, so coming to your question about the latest advancements and trends in GenAI: of course, GenAI really became hot after ChatGPT.
00:01:41
Speaker
So, the last year, year and a half, has seen a lot of developments.
00:01:45
Speaker
Of course, generative AI is older than that. But amongst the latest developments, if you come to technical advancements, the most exciting trend has been the idea of multimodal models, that is, models that can take text, video, audio, and pictures. Amongst these, of course, you would have seen that just a few days ago
00:02:12
Speaker
we had Sora from OpenAI, which is a video generation model.
00:02:18
Speaker
Then Gemini Pro has been coming up with really, really good stuff; their video models are also exceedingly good. These were kind of unimaginable just a year and a half ago, and it's really exciting to see all these things come up, because that makes so many more applications possible downstream, and it looks like this is really accelerating. So that I would pin as the most exciting thing. Apart from that, we have what I would call standard
00:02:46
Speaker
engineering developments, but they are nonetheless exciting, because in AI they happen at about 10x or 100x the speed they normally happen anywhere else. One really good development has been increased democratization: more and more open-source models. This of course helps enterprise
00:03:06
Speaker
development, and generally, whether it was Transformers or anything else, part of the reason AI has developed so fast is because all of us had the tools, both hardware as well as software, at least in their incipient, nascent form. In fact, Google, alongside Gemini, came up with Gemma just yesterday.
00:03:27
Speaker
Some of you might be aware of this. So again, just like Facebook, or rather Meta, had LLaMA, Google have now open-sourced models as well, I think perhaps due to peer pressure, but they have open-sourced 7-billion-parameter models.

Evolution of AI: From Neural Networks to Deep Learning

00:03:41
Speaker
This is really, really good. So basically it means the field is going to accelerate immensely. There is competition amongst the larger enterprises to make these things accessible. Then of course, there is greater token length.
00:03:55
Speaker
Larger inputs that we can take care of, again in Gemini last week. It's ridiculous: if we talk about this, perhaps next week the developments would already have changed. It's now like a million tokens, all of a sudden, up from the 200,000 tokens that were there before. And smaller models are coming up as well.
00:04:13
Speaker
So I think overall, on both the engineering end and the technical end, we are seeing an unprecedented growth, historically unprecedented for any kind of field. That's the kind of growth we are seeing in AI in general, and in GenAI in particular.
00:04:30
Speaker
Thank you so much, Dr. Balaji. As you rightly pointed out, I think we are going into, or living in, a very interesting time, with generative AI becoming much more useful for end users also.
00:04:44
Speaker
Right. I just wanted you to go a little bit deeper into the history of ML, and GenAI in particular, so that our podcast listeners at least get a view of where all this is coming from and when it started developing. Right. So, looking at AI, machine learning, and of course GenAI in particular,
00:05:10
Speaker
the history is really long, but if we focus on the last century, the term AI itself was coined by a whole bunch of people. It started with neural networks, simple ideas around the 1940s and 50s, starting from Turing, von Neumann, et cetera, and people also working partly on the Manhattan Project. There was increased interest in computation and in automation. And of course, people had started thinking about how you can automate thought,
00:05:40
Speaker
and the initial ideas were to simulate the brain, which is where the idea of artificial neural networks came from, somewhere in the 1940s and 1950s.

Mechanics of AI: Simple Ideas to Complex Systems

00:05:51
Speaker
Now, the term AI itself was coined in 1957, really long back. So that's like a 70-year-old term, but then we have had rebranding
00:06:01
Speaker
multiple times: we have had machine learning, then we have had deep learning, then we have had expert systems, all these kinds of ideas. Overall, the idea is just to see what can automate cognition. If you look at machine learning, these are specific types of AI systems which learn.
00:06:16
Speaker
Other AI systems were systems which you could give rules to, or give broad directions to. Machine learning systems address problems that you cannot solve just by specifying rules, for example grammar, or even face recognition. It's really hard to give a finite set of rules
00:06:34
Speaker
in order to encompass all the cases that are possible. So that's where machine learning took off. There was slow development, then ups and downs, the usual peaks-and-troughs cycles, till, I'd say, around a decade ago, around 2012.
00:06:52
Speaker
Then we had these convolutional networks, started by Hinton and lots of other pioneers of the field, and from there AI, or deep learning in that terminology, machine learning slash deep learning, took off.
00:07:07
Speaker
The seeds of GenAI were laid a long time ago, and maybe I'll briefly talk about it technically shortly. But in 2014 we had these things called Generative Adversarial Networks, by Goodfellow. There's a famous book by him; he was at MIT then.
00:07:26
Speaker
Generative adversarial networks were the first picture generators. Really surprising things came out of that. What often surprises people in the field, including myself and of course the pioneers, is that it seems to give more than what you have put in. You give it very few rules, but what comes out is far more sophisticated.
00:07:48
Speaker
Then finally, of course, we had ChatGPT. There were two schools: schools that felt that just this kind of simple approach, which I will tell you about shortly, could not possibly capture the complexity of human language, whereas some people would bet that simple statistical ML approaches can capture the complexities of language and images and creativity, etc. And at least temporarily, the school which says that simple rules are sufficient
00:08:17
Speaker
to produce such complexity seems to have won over. I think almost nobody can deny the efficacy of these systems over the last year and a half. Coming to the simple idea: if you will excuse me, I will talk mathematically just for a few minutes. The simple idea is this: if you see the sun outside right now, and I ask you what you think the chances are that it will rain

Generative AI vs. Newton's Laws: A Comparison

00:08:43
Speaker
over the next 30 minutes, you would probably give a number which is really low. You might even say it's impossible, by looking at the clouds, etc. Now, in probabilistic language, you would simply say the probability of rain, given that there is sun right now, is very low. Now it turns out that if I can chain such probabilities, then in generative AI you start looking at, let's say, language models.
00:09:13
Speaker
The fundamental idea that underlies them is: if I can assign a probability to every sentence, then I can generate really, really long sentences. This is a very simple idea; in fact, it is at the heart of generative AI. If I can tell you the probability of a bunch of pixels,
00:09:33
Speaker
then I can generate any image. Right, so this is a very simple idea. Now, if you break this idea down into even simpler ideas, it is: if I can just predict the probability of the next word. So suppose I start a sentence, "today is a good", somebody might say "day", because that's the most probable next word. Now, if you can continue this indefinitely, you basically have generative AI.
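As a compact restatement of the idea just described, here is the standard chain-rule factorization that next-word prediction rests on; the notation is ours, not from the episode. The probability of a whole sentence of words w_1, ..., w_n is a product of next-word probabilities, and estimating each factor is exactly what a language model learns to do:

```latex
P(w_1, w_2, \ldots, w_n) = \prod_{t=1}^{n} P\left(w_t \mid w_1, \ldots, w_{t-1}\right)
```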
00:10:02
Speaker
So like I said, it seems impossible, or at least implausible, that you can generate really complex things using just this next-word or next-token prediction, or next-pixel prediction, which is basically what image generators and video generators are doing: predicting one small thing and doing it continuously. But we have a history of this even in, let's say, physics. In physics, before Newton, we thought that apples that fall on Earth are following
00:10:32
Speaker
completely different laws from planets that are going around the solar system. But essentially, according to Newton, it's all falling. The Earth is falling towards the Sun, the Moon is falling towards the Earth or vice versa. And just that one law repeated millions and millions and millions of times, basically covers the entire complexity of not just what happens on Earth, but on the solar system and outside.
00:10:57
Speaker
So this one single bet, that if I can predict the next word I can do any task, is at the heart of generative AI. The history over the last two and a half years has been sort of explaining this one thing: just predict the next thing and you are good to go, provided you can somehow find out the probability of that next thing. All the tricks are basically to find that out, how you find out the probability of the next thing. So that's at the heart of GenAI.
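To make the "just predict the next token, repeatedly" point concrete, here is a minimal, illustrative sketch of greedy autoregressive generation. The `next_token_probs` function is a purely hypothetical stand-in for any trained language model; the toy distribution exists only so the sketch runs end to end.

```python
from typing import Dict, List

def next_token_probs(context: List[str]) -> Dict[str, float]:
    """Hypothetical stand-in for a trained language model:
    returns a probability for each candidate next token."""
    toy = {("today", "is", "a", "good"): {"day": 0.8, "idea": 0.15, "<end>": 0.05}}
    return toy.get(tuple(context[-4:]), {"<end>": 1.0})

def generate(prompt: List[str], max_tokens: int = 10) -> List[str]:
    """Greedy decoding: repeatedly append the most probable next token."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        best = max(probs, key=probs.get)  # pick the highest-probability token
        if best == "<end>":
            break
        tokens.append(best)
    return tokens

print(generate(["today", "is", "a", "good"]))  # -> ['today', 'is', 'a', 'good', 'day']
```

Real systems sample from the distribution rather than always taking the top token, but the loop structure is the same.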
00:11:26
Speaker
I think the future is very, very promising, because a simple idea has worked out and it scales really well; that's at the heart of it. The fact is, if you have a simple enough idea, it can be made to scale computationally, and that's what OpenAI and other companies have bet on, and that bet has played out.
00:11:45
Speaker
That's a fantastic way of explaining GenAI in a very simple manner. Thank you so much. My next question actually was going towards where all this will go in the future.

Generative AI's Societal Impact and Accessibility

00:11:57
Speaker
Like, how do you foresee AI, and GenAI particularly, evolving in the next few years, right? And what sort of impact do you see it having on society as a whole, AI plus GenAI?
00:12:14
Speaker
So yeah, so as we were discussing, basically if a simple idea, and perhaps because it's a simple idea, it scales so well, the effects that it can have are really, really profound. And usually even in slower moving times, we as a society have been poor at projecting what the downstream effects are. What we know is that the effects are going to be massive.
00:12:41
Speaker
I do not think there is going to be any field that is left untouched, purely because if a simple idea works so well, you can basically make it both efficient and scalable. If it is a really complicated idea, you need a lot
00:12:58
Speaker
in order to make it scalable. But this idea is simple, and it has been shown to be really profound in its effect. So my view is, for the next 5-10 years, of course you can play out multiple scenarios of what is possible, but even assuming normal technical evolution
00:13:18
Speaker
and not even assuming massive funding coming in, even assuming normal technical evolution, we are looking at something that's at the scale of the industrial revolution. It's just going to transform society, hopefully in a good way. It's of course going to transform the specific fields we are looking at, like observability, etc. Increased security, of course; but with any such rapid growth, there is always the danger of the flip side,
00:13:46
Speaker
which is: what happens if you have unconstrained growth? So with that in mind, I think it is going to affect science, technology, economics, healthcare, finance; all these industries are immediately downstream. Governance, of course, as well: even in India the Supreme Court is looking at using GenAI. It makes so much accessibility possible for common people if you can talk in the local language.
00:14:16
Speaker
So, I am particularly excited about the democratization of resources in society because of this. Financial inclusivity, etc., I think, are all possible because GenAI is now something which is really powerful.
00:14:31
Speaker
Yeah, I think with the way GenAI has changed the game in the last two and a half years, a lot of people believe that it is going to be very impactful for almost all sorts of sectors. So my next question is towards what VuNet actually has been doing.

Observability and AI: Enhancing Incident Management

00:14:50
Speaker
In our context of observability for business journeys,
00:14:54
Speaker
how do you think GenAI technology enhances observability for enterprises, from incident management on one side, to root cause analysis, to real-time business analytics? Right, so when you look at observability, as you said, you talked about three specific
00:15:14
Speaker
kinds of verticals, in some sense, in observability: one is incident management, then root cause analysis, and then business analytics. If you think about what we were discussing about what GenAI is: when you see incident management, you have something like either trying to predict,
00:15:37
Speaker
you know, trying to predict potential threats, trying to monitor current problems, having real-time alerts. Now, the fundamental technology that underlies this is trying to predict the probability of a certain event happening. So once you can rank the probabilities of these events happening, you actually have
00:16:04
Speaker
the basic technological roots, or the technological underpinning, of what can function as an incident management platform. So, if you can predict what will happen, and if you can rank it by probability, saying this event will happen, then this event, and so on, that is essentially GenAI. If you think about it, you are trying to generate all possible future scenarios,
00:16:28
Speaker
given what you are already observing about the current scenario. Within this, there are two parts. One is of course the mathematical prediction, the roots of which we in fact already had within OPI, which is there at VuNet. So this was the fundamental idea. We didn't call it GenAI, but it was essentially based on exactly the same idea, which is to chain together a series of events and actually assign a probability to that event.
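As a rough illustration of "rank candidate future events by their predicted probability", here is a minimal sketch. The candidate scenarios and their scores are invented for illustration; in a real platform the probabilities would come from a model conditioned on current telemetry, not hard-coded.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    probability: float  # P(event | current observations), from some predictive model

def rank_scenarios(scenarios: list[Scenario]) -> list[Scenario]:
    """Order candidate future incidents from most to least likely."""
    return sorted(scenarios, key=lambda s: s.probability, reverse=True)

candidates = [
    Scenario("payment API latency breaches SLA within 30 minutes", 0.42),
    Scenario("database connection pool exhaustion", 0.31),
    Scenario("no incident", 0.27),
]
for s in rank_scenarios(candidates):
    print(f"{s.probability:.2f}  {s.description}")
```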
00:16:55
Speaker
The other part is then to talk to the people on the ground in a language understandable to them. Now, coming back to this language understandable to them: of course, currently, if you see OPI, it just comes out as a score, which is a little bit opaque. What you require for human beings to function, and ChatGPT showed this really, really well, is that if you have a good UI
00:17:19
Speaker
In fact, you would have read this: people at OpenAI did not expect it to be as successful as it was. It caught all of them unawares, because really all they had was fundamentally a UI layer, of course with some InstructGPT etc. thrown in, a UI layer thrown on top of what was already available in their API.
00:17:41
Speaker
So, similarly for OPI or things like that: when you have an incident management system, and you throw in a UI which people can either query, or which throws up reports, which I think VuNet is already doing in a human-readable format, you have dashboards, which is the other portion of GenAI which we can also do, which is trying to throw up images
00:18:06
Speaker
which are representative of what is going on in the system. So if you see, GenAI has both these layers: in some sense dealing with numbers or events, and then dealing with the human interface, which makes this thing far more accessible. Similarly, if you look at RCA, root cause analysis: root cause analysis is essentially prediction in reverse. It is still trying to assign a
00:18:32
Speaker
probability to a past series of events which has led to what is happening right now. It is again GenAI, but it is GenAI in reverse; or you do multiple simulations, agent simulations, and try to map the probability of whether this was the sequence of events that led here.
00:18:50
Speaker
How do I trace it back into the past and identify the most probable root cause? So to test hypotheses you can actually have a GenAI layer, you can have what is called GOFAI, or good old-fashioned AI, layers, plus whatever rule-based systems and indicators you have. Put together, these can tell you what really happened in the system.
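One way to read "RCA is prediction in reverse" is as posterior inference over candidate causes: score each hypothesised root cause by how well it explains the observed symptoms. A minimal Bayes-style sketch, with all causes and numbers invented for illustration:

```python
# P(cause | symptoms) is proportional to P(symptoms | cause) * P(cause)
priors = {                      # how often each cause occurs at all (assumed values)
    "bad deployment": 0.05,
    "network partition": 0.02,
    "traffic spike": 0.10,
}
likelihoods = {                 # P(observed symptoms | cause), assumed values
    "bad deployment": 0.70,
    "network partition": 0.40,
    "traffic spike": 0.20,
}

unnormalised = {c: priors[c] * likelihoods[c] for c in priors}
total = sum(unnormalised.values())
posterior = {c: v / total for c, v in unnormalised.items()}

# Most probable root cause first.
for cause, p in sorted(posterior.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{p:.2f}  {cause}")
```

In practice the likelihoods could come from a generative model simulating "if this were the cause, what would we expect to see?", which is the reverse-simulation idea described above.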
00:19:16
Speaker
Business analytics, in some sense, in mathematical language, as an academic would put it, is essentially the same problem, except at the layer of the business. So you are still evaluating, as people do in the market, possible future scenarios,
00:19:40
Speaker
evaluating the probabilities of these, and weighting the gain for each one of these events by its specific probability, and that tells you what business decisions you can take. You can of course do this in real time if you have really good data. Basically, you can do what-if analysis: if this or that happens, what is the chance of future business given this happens? Except you are now looking at wider
00:20:04
Speaker
data, and less within the details of the operating system that you are trying to monitor. So it's observability at the level of the business rather than at the level of, you know, the CPUs, GPUs, etc. that are running the system. So, to summarize, I think GenAI is key both at the prediction level and at the UI/UX level. You can think of it as two different verticals there.
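The "weight the gain of each scenario by its probability" step is just an expected-value calculation. A toy what-if sketch, with scenarios, probabilities, and gains all made up for illustration:

```python
# Each scenario: (probability of the scenario, business gain if it happens)
scenarios = {
    "transactions grow 10%": (0.50, 120_000),
    "flat traffic":          (0.35,  20_000),
    "payment outage":        (0.15, -80_000),
}

# Weigh each outcome by its probability and sum.
expected_gain = sum(p * gain for p, gain in scenarios.values())
print(f"Expected gain: {expected_gain:,.0f}")  # -> Expected gain: 55,000
```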
00:20:31
Speaker
Very, very true, and it is good to see that GenAI kinds of technology are going to help observability further, enhance capabilities like root cause analysis, or bring in the OPI part which you talked about. Very, very interesting times. Can you extend on the application of these technologies like GenAI,
00:20:57
Speaker
combining GenAI with observability?

Domain-Centric AI: Accuracy, Speed, and Reliability

00:21:00
Speaker
What sort of applications do you see? Do you also believe that some sort of domain centricity, when we are building some of these LLM models and so on, really helps the end user? Right, right. So I think you bring in an important point, the idea of domain centricity in these issues. One thing we know, again from a pure
00:21:26
Speaker
probabilistic or machine learning AI perspective, is that you need to ground these probabilistic maps in a specific domain. If you think about it, your machine, your AI, is basically trying to build a map of what reality looks like, purely based on the data that you are giving it.
00:21:52
Speaker
Now, if I give it specific knowledge that I have gained from the domain, the map (a) becomes more accurate, (b) becomes faster to build, (c) becomes more reliable, and (d) becomes more interpretable. All four of these are general problems in GenAI, such as having an interpretability layer, otherwise we cannot trust it. And of course, biases and all these other issues come in generally in GenAI.
00:22:21
Speaker
And then there is always the problem of how much data you need. Domain knowledge in some sense becomes a substitute for the existence of data, because we simply understand the system above and beyond what the data can convey. So, these few things are really, really important in terms of building specific applications. Let's say you are making, and I think VuNet has this, an anomaly detection network.
00:22:50
Speaker
Now, something looks like an anomaly only within the context of a specific application. Just like with our ECGs or EEGs, or whatever we measure for a human being, you will see spikes, but not every spike is bad. There is a certain amount of nondescript variance
00:23:08
Speaker
that exists within any system. Amazon also banked on this for a long time. So there is a certain amount of variance in every enterprise: there is just seasonal variation, and some of it is just unexplained variation. There is variance which is expected; people behave in different ways, systems behave in different ways. Now, when you talk about anomaly detection, what is an anomalous event? It is always defined within the context of the domain.
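A small sketch of the "not every spike is an anomaly" point: compare each observation against the other observations in the same seasonal slot, and only flag deviations beyond the variance the domain says is normal. The series, the 24-hour season, and the threshold are all invented for illustration:

```python
import statistics

def seasonal_anomalies(values, season=24, z_threshold=3.0):
    """Flag points that deviate strongly from the other observations
    in the same hour-of-day slot, rather than from one global mean."""
    flags = []
    for i, v in enumerate(values):
        # Same slot in every other season, excluding the point being tested.
        peers = [values[j] for j in range(i % season, len(values), season) if j != i]
        mu = statistics.mean(peers)
        sigma = statistics.pstdev(peers) or 1.0   # guard against zero variance
        flags.append(abs(v - mu) / sigma > z_threshold)
    return flags

# Hourly transaction counts for three days, with one genuine outlier injected.
series = [100 + (i % 24) * 5 for i in range(72)]
series[50] = 900
print([i for i, is_odd in enumerate(seasonal_anomalies(series)) if is_odd])  # -> [50]
```

The routine daily peak is never flagged because every slot is compared only against its own history; that is the domain knowledge (here, a daily cycle) baked into the detector.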
00:23:32
Speaker
So domain centricity and having domain expertise are of great importance. Because of that, of course, for RCA, the domain tells you what maps you can already search through; otherwise the search space for almost any root cause is infinite.
00:23:49
Speaker
You can keep on saying this could be the cause, that could be the cause; it is the domain expertise which actually pins it down. Coming specifically to GenAI, let's say when you are trying to generate reports or build a knowledge base, you have of course this thing called retrieval-augmented generation, which is RAG.
00:24:08
Speaker
Then you have things like domain fine-tuning. All these things are meant to ground your GenAI in your specific application. For example, ChatGPT: if you ask it something about VuNet, it won't know it, or even though it doesn't know it, it will start generating something, because purely as a generative model it simply does not have the mechanism to stop. Remember, like I said, it's only going to give you the probability of the next sentence.
00:24:37
Speaker
I mean, when I ask a question, saying "I don't know" is actually a very low probability event for it. The more probable thing is, if I ask "what is VuNet", it will start the answer with "VuNet is..."
00:24:49
Speaker
and it will keep on continuing from there. It just looks at likely sentences rather than domain-centered sentences. So to center it in a domain, you need these additional things. You can fine-tune, or use what is now known as RAG. These are developing technologies. You ground the model in a text and say, okay, only refer to that.
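A schematic sketch of the RAG idea as described here: retrieve the most relevant domain documents and prepend them to the prompt, so the model answers from that text rather than from a generic next-sentence guess. The retrieval below is a toy keyword-overlap scorer (a real system would use embeddings), and the knowledge-base entries are invented; the resulting prompt would then be sent to whichever language model is actually used.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model is told to answer only from it."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer using ONLY the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

knowledge_base = [
    "Alert A-113 fires when checkout latency exceeds 2 seconds for 5 minutes.",
    "The payments service runs in the Mumbai region with a warm standby.",
    "Weekly maintenance happens on Sunday 02:00-03:00 IST.",
]
print(build_grounded_prompt("Why did alert A-113 fire?", knowledge_base))
```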
00:25:09
Speaker
You use your general language-generation capacities, but you ground your knowledge in this retrieved text. So these are some things that you can do. You can do anomaly detectors, all of these; you can do RCA, you can do business analytics predictions. In fact, you can do things like data quality improvement also: you can have GenAI see certain examples of good data,
00:25:34
Speaker
or good, nice, grounded logs, and you can ask it: given this bad log, generate a good log. So that also is something that is possible. There are a lot of possibilities, even within the context of observability in general, and VuNet in particular. A lot is possible with GenAI. This is very interesting, and it actually improves my confidence in the work we have been doing for our customers, where we have been saying
00:26:04
Speaker
that a generic observability platform may not be very useful, but as soon as you make it a domain-centric observability platform, then, like you said, you are basically getting the right data, which will help our customers assess their incidents, catch their anomalies, and so on.
00:26:22
Speaker
Thank you so much for explaining that, Dr. Balaji. My next question is towards professionals who are looking at starting their career, or who are already working in some form of AI. What advice would you give to some of these professionals today?

Advice for AI Professionals: Collaboration and Expertise

00:26:40
Speaker
How do they enhance their skills in AI, ML, or GenAI particularly? What would you say? Yeah, sort of related to our previous question, I think
00:26:52
Speaker
it depends on the person's age. The older they are, the more they should hold on to the domain expertise they have already gained. That is very hard to come by; it will be hard to come by even for AIs. Very true, very, very true. The earlier in your career you are, of course, the more you can play around with coding, etc., which I would encourage everybody to do, to play around with these systems. Like the saying that is now quite popular: it's not AI that will beat humans,
00:27:20
Speaker
it is an AI-enabled human who is going to beat other humans, and enterprises who are not trying to use AI. So basically it is AI plus human. Kasparov also talked about this a long time back; he has this version of chess where a human plays together with a machine. You can actually take a mediocre human and a mediocre machine and pretty much beat either an expert machine or an expert human.
00:27:44
Speaker
So it's the process of how you interact with AI that tends to actually give you an extra leg up, both in your career and as an enterprise. So I would encourage everybody to play with these systems at various levels. If you see the system as a car, you don't necessarily have to become a car designer or a car engineer or a car mechanic in order to use a car.
00:28:08
Speaker
The purpose of the car is to get somewhere, so you just need to know where you want to go with this. Some people are excited about the technology itself, like me; we are long-term academics, so we want to break this open and see what's going on. That's my job. But not everybody needs to do that; everybody should, however, use it. An old lady with a car will beat Usain Bolt.
00:28:31
Speaker
So it doesn't matter who you are, how smart you are, you do need to use these systems because they just are phenomenal in terms of how much they know, how much data has been stored within them in a compact form, what access you have everywhere. So everybody should definitely play with them.
00:28:50
Speaker
Be generally aware of what is going on. It is impossible even for academics to keep up with developments; like I said, the weekly advancement is far more than what happened over decades in other fields. Try to use it as much as possible. In fact, I think future-proofing yourself, in terms of thinking about what it is that you do today that can be automated away
00:29:14
Speaker
by GenAI, is something everybody should think seriously about, not in terms of fear, but because it tells you what the next-level technology is. Then you can work at that layer, and you can also skill yourself at that layer. The material available is humongous now; almost anything we want to learn about is available for free and in abundance. So yeah, I think these are exciting times; rather than being afraid of it,
00:29:40
Speaker
it is actually a good idea to be fascinated by the field and see how much you can use it in the field you are currently in, especially if you are older. If you think of it as an assistant, I think it functions better and you actually get more of a career leg up also.
00:30:00
Speaker
You are very right when you said that it has democratized the field. In fact, my teenage kid has started using ChatGPT to get certain answers, or to try to learn more. So that way, like you said, it should be treated as an assistant who will take you to the next level.
00:30:20
Speaker
Thank you so much, Dr. Balaji, for spending so much time with us today. It was very, very insightful to learn the history of GenAI, what it is going to be in the near future, and then a little bit about how it helps on the observability side of things. We thank you so much for your time. Thank you very much.
00:30:43
Speaker
Thank you for joining us at Observability Talk. Please subscribe and rate us wherever you listen to your podcasts. Also, if you think someone will find this purposeful and insightful, please share it with them. For more information, please visit us at www.vunetsystems.com. Thank you.