
Episode 6: Crafting Resilient Products - A Closer Look with Sarika Atri

Observability Talk

We explore the rapidly changing world of modern application development and the critical role observability plays in navigating its complexities. Our guest, Sarika Atri, a seasoned tech leader with experience spanning Google, Flipkart, and Hotstar, takes us through her fascinating journey from building monolithic applications to leading in microservices, cloud, and serverless architectures.

Sarika shares key insights into the challenges of managing deployments in a CI/CD environment, balancing speed with quality, and the evolution from traditional APM to advanced observability. We also dive into the ongoing 'build vs. buy' debate for observability tools, continuous monitoring best practices, and the essential architectural considerations for scaling observability in large systems.

In this conversation, Sarika offers her perspective on embedding observability from the very start of application development, and the future role of AI in shaping how we monitor and improve complex systems. Tune in for practical takeaways and thoughtful reflections on the future of observability.


Transcript

Introduction to Sarika Atri

00:00:14
Speaker
Welcome to a new episode of Observability Talk. Through this podcast, we often aim to look at observability from different perspectives. Today, we are glad to introduce you to Sarika Atri. Sarika is a seasoned engineering and technology leader with over two decades of experience spanning diverse domains.
00:00:36
Speaker
During her tenure, she has worked with a wide spectrum of system types, ranging from high-throughput, low-latency distributed systems to event-driven, high-volume data processing pipelines and machine learning solutions aimed at enhancing data quality. Sarika is one of the co-founders at exemplify.tech. Exemplify is a team of battle-hardened architects infusing their expertise into new-age startups.
00:01:06
Speaker
As architects in residence, she and her co-founders get in the trenches and work side by side with teams to help them find and achieve their tech product goals.

Career Evolution from Adobe to Serverless

00:01:19
Speaker
Hi, Sarika. Welcome to Observability Talk. Hey, thank you so much for having me here. It's a pleasure. Yeah.
00:01:27
Speaker
Given the wide experience you have had over the years, you have seen application development and deployment transform beyond imagination. We started way back with a single server, or two-tier and three-tier server architectures, moved to distributed deployments, and now to serverless architectures.
00:01:49
Speaker
I want to kick off this podcast by asking you to share your fascinating journey, starting way back around 2000 with two-tier and three-tier application development and deployment, all the way to serverless development and deployment today.
00:02:11
Speaker
Yeah, that's a very interesting question. When I look back at when I started in the industry, the first couple of projects I worked on were desktop applications. I started with Adobe. We were building tools for creative artists. These tools, while they were monoliths, were fairly complex and extremely modular, with very good interfaces between modules and so on. But they were burned onto a CD and shipped, quite literally.
00:02:38
Speaker
So I started my journey back when we had to put software on CDs and ship them out. The development and release cycles were fairly long. We were doing releases once a year, or once in two years, because you couldn't fix things once you had shipped. So you had to be really, really careful about testing and making sure there were no bugs in what we were shipping out.
00:03:00
Speaker
From there, I moved to chip design software in the EDA space, the electronic design automation space. This software still ran on-premises, on our clients' servers, and a single application ran on a single server. There was no multi-server support or anything like that. Things were a little different: we were shipping patches frequently, because we could send them over the wire and the client teams could install them on the boxes with the help of our application engineers.
00:03:28
Speaker
In both these places, you have to note, these were modular applications, but in no way less complex.

Google: Data Pipelines and Distributed Systems

00:03:35
Speaker
Then I moved to Google. At Google, I worked with the data engineering team for Google Maps, and I got exposed to these massive data pipelines. We were doing data preparation for machine learning models, training models, and deploying those models, and these models were doing evaluations in real time. All of them were running in the cloud, all over the world. We had three different DCs where our services were running, in a model where you have a primary DC, a secondary, and a tertiary.
00:04:03
Speaker
They were extremely resilient and extremely distributed; even different components in the same call path were running in different DCs altogether. And we were not even aware of it most of the time, because, Google being Google, we were shielded from all the complexity. There were dedicated teams taking care of those aspects of deployment and post-deployment management.

Managing Microservices at Flipkart and Hotstar

00:04:26
Speaker
And then I moved to Flipkart and Hotstar. Nothing could have prepared me for the scale and the complexity I saw there. At Flipkart especially, we had hundreds of microservices running on tens of different types of data stores,
00:04:41
Speaker
each one of them managed by the engineering team themselves. The developers were managing them: extremely high complexity, a lot of overhead, a lot of learning. Hotstar was a step jump, because we were doing all the same things, but we were not doing it
00:04:57
Speaker
in our own data centers. We were not managing them. We had cloud providers, AWS and GCP and others, providing those kinds of managed services. So things changed a little, not very drastically, but yes, they did change. When I look back, it's a complete shift from where I started, to what it was in between, to what it is today. It has gone through so many levels of change.
00:05:25
Speaker
A very, very interesting journey you have had, Sarika. You are somebody who has been in the driver's seat, moving from one technology advancement to another every three or four years, it looks like. You started from monolithic application development and deployment and moved to microservices-based architectures. And now people are talking about CI/CD: you talked about actually sending software on a compact disc, the CD of that time, and now that CD has changed to continuous delivery. What sort of changes, challenges, and issues have you seen?
00:06:13
Speaker
Because obviously nobody wants to know the challenges of the past, but everybody wants to know: if you are managing or developing a highly complex, distributed, enterprise-grade application with hundreds of microservices, what sort of challenges and issues are people seeing with CI/CD, development, deployment, and so on?
00:06:38
Speaker
I think there are two sets of challenges that I have seen in the last few years working with

Challenges of Microservices in CI/CD

00:06:45
Speaker
microservices. One is that there are so many moving pieces. When you have to do deployments, there are so many things you have to consider, because you could have done the deployment of one piece, but others were depending on you, and your contracts may have changed somehow. Even minor changes will impact other systems, and if you don't
00:07:03
Speaker
go through the cycles of canary testing, integration testing, and a lot of such things, you are bound to run into problems. So the complexity of doing a deployment has grown multifold, primarily because there are so many moving pieces. The individual deployment has become easier, but you have to make sure the whole system works altogether. So that is an added complexity.
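The canary step mentioned here can be sketched as a simple error-rate comparison between the stable version and a small canary slice. This is a stdlib-only illustration; the function names, thresholds, and routing rule are invented for the sketch, not taken from any particular deployment tool.

```python
import zlib

def canary_healthy(baseline_errors, baseline_total,
                   canary_errors, canary_total,
                   max_relative_increase=1.5):
    """Promote the canary only if its error rate is not substantially
    worse than the stable baseline's (a deliberately simple criterion)."""
    if canary_total == 0:
        return False  # no canary traffic observed yet, so don't promote
    baseline_rate = baseline_errors / max(baseline_total, 1)
    canary_rate = canary_errors / canary_total
    # Small absolute floor so one failure on tiny traffic can't block forever.
    return canary_rate <= max(baseline_rate * max_relative_increase, 0.01)

def route(request_id, canary_fraction=0.05):
    """Deterministically send a small slice of traffic to the canary build."""
    bucket = zlib.crc32(request_id.encode()) % 100
    return "canary" if bucket < canary_fraction * 100 else "stable"
```

Real canary analysis also compares latency and saturation, not just errors, but the shape is the same: observe a small slice, compare against the baseline, and only then roll forward.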
00:07:27
Speaker
The other aspect is that, because it's so easy to do a deployment, the cost, at least the notional cost, of a deployment is so low that we have stopped thinking about quality like we used to decades ago. At that point, as I said, you could not afford to make a mistake, because if you did, you had to ship CDs all over again. That is no longer the hurdle, no longer a barrier, because you can do a deployment in a matter of seconds, or minutes at most. And that makes people think it's okay: if some issue comes in, I will figure it out, I will fix it. That has become a part of our DNA, a part of our culture, which leads to production issues and regressions coming in every now and then, no matter what team, what organization, what space we are dealing with, day in and day out.
00:08:19
Speaker
Yeah, very, very true. The kind of flexibility and speed this technology chain has brought in has also created other issues and challenges along with it. And one of the underlying challenges, which I completely agree with, is the quality part. People have started feeling: as soon as a problem comes, I will be able to fix it very fast and deliver within a few minutes or a few seconds. So the aspect of quality has become an issue like that. Coming to that same quality part:
00:09:05
Speaker
earlier, around the 2000 timeframe, a new sort of tool started coming out to figure out the quality of your application, infrastructure, deployment, and so on, and how your application is performing. That's how the whole APM tool category came into the picture,
00:09:27
Speaker
to really get a better understanding of how your application is performing. But then slowly this got pushed to IT operations as well, because the application was getting deployed and people wanted to figure out, at any given point in time, how it was performing, so that if there were problems they could go ahead and fix them. So from that perspective, starting from siloed APM, server monitoring, and database monitoring, people have moved to something new called observability.

Evolution of Observability in Development

00:10:01
Speaker
And this has happened along the lines of what you talked about: starting from a monolith-based application architecture and moving to a microservices architecture, which requires more distributed-tracing kinds of monitoring, and that's where observability comes in.
00:10:17
Speaker
How have engineers and developers adopted processes to make sure the applications they develop and deploy are actually more observable, or more monitorable? I mean, somebody is able to monitor them much more easily and figure out that problems are happening, and where those problems are. So what sort of changes are developers and engineers bringing into their day-to-day work?
00:10:47
Speaker
Yeah, I think you're very right in saying that the world has changed. Because you've gone from monoliths to microservices, a side effect has been that it is no longer a single team that can do the monitoring,
00:11:02
Speaker
because that single team doesn't even know what all the services mean, or what your critical flows are. They have no idea whether a service is a P0 service or a P1 service or whatever else. So the onus of making sure that your service is up and running and behaving as expected is on the team that built the service: the developers who built it. So more and more, what we see is that every team we work with has their own on-call schedule.
00:11:33
Speaker
The on-call is primarily responsible for maintaining hygiene and making sure that if any issues come in, they are looking into them. So as part of that, APM tools and all the different kinds of tooling out there are becoming part of their day-to-day life. Whether it is monitoring, alerting, looking at the logs, or distributed tracing: when an issue comes in, you pretty much pull in every tool you have in your arsenal, and you want to deploy it to get to the fix as soon as possible.
00:12:03
Speaker
So these have become a day-to-day part of the life of every developer working on any production system. You can no longer shy away and say, oh, my job is just to do the development and then throw it over the wall to the SRE or DevOps team. It just doesn't work anymore.
00:12:20
Speaker
Very true. And given the kind of complexity both the underlying infrastructure and the application architecture bring in, it definitely cannot just be handed to some third party with "boss, now you manage and maintain it."
00:12:34
Speaker
So, coming from there: you have worked on multiple, almost citizen-scale solutions, whether Flipkart or Hotstar and so on. What key architectural considerations would the technical or principal architects designing some of these applications usually have from an observability standpoint?
00:13:02
Speaker
So, let's say I'm an architect building an observability tool. Of course, there are going to be a ton of functional requirements I need to take care of. Along with functional, there are going to be these non-functional aspects. Scale becomes very big. When you talk about citizen scale, or the scale
00:13:19
Speaker
at which products in India operate, it's very, very high. So you have to be prepared for that kind of scale, because it's inherent that if you are an observability tool, you will be plugged into a system running at that high scale. So you have to be able to manage that.
00:13:35
Speaker
Apart from that, I would say, from the other side: when I am building a service and I want to integrate with a tool, one of the key things I want from the tool is a minimal integration requirement. If it requires me to write a lot of code
00:13:52
Speaker
to integrate with an observability tool, it becomes a blocker upfront, because I can't have all of my teams write that piece of code and spend days or weeks doing the integration. The other one is performance overhead, especially when we are running user-facing services. For services serving a user request at sub-second latency, any overhead is not acceptable.
00:14:14
Speaker
Even if you are going to give me all the insights I need to debug a production issue, if it is going to slow down my systems for my users, it just doesn't work. So I think these are the key considerations, from both sides, that one has to take into account.
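The "minimal integration, minimal overhead" requirements Sarika lists are why most tools aim for one-line instrumentation. Here is a rough stdlib-only sketch of the idea; the decorator and in-process metric store are invented for illustration, and real APM agents typically do this through auto-instrumentation rather than hand-written decorators.

```python
import time
import functools
from collections import defaultdict

# In-process metric store; a real agent would ship these to a backend.
LATENCIES = defaultdict(list)

def observed(func):
    """Record wall-clock latency per call, adding only microseconds of overhead."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            LATENCIES[func.__name__].append(time.perf_counter() - start)
    return wrapper

@observed
def handle_request(x):
    # Stand-in for real request-handling logic.
    return x * 2
```

The integration cost for a service team here is a single decorator per handler, which is the kind of near-zero friction she argues an observability tool must offer.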

Build vs. Buy: Observability Tools

00:14:30
Speaker
Yeah, very true. We have been talking to customers, and there is always a sort of debate that starts ensuing: should we build it ourselves? And if we are building it ourselves, the points you talked about become very, very critical. The second part is: should we buy it off the shelf,
00:14:53
Speaker
from an OEM vendor like us, who has actually been working on a business journey observability platform? So when this build-versus-buy debate is on, and with so many other observability tools also available,
00:15:10
Speaker
what would your recommendation be to some of these enterprises out there? How do they decide what is the right choice for a specific project, application, or deployment?
00:15:25
Speaker
Yeah, thank you. I was wondering if you were going to ask me whether to build or buy; I don't want to give an answer there. It is just so contextual. But you're asking me for a decision framework, and that I can definitely help with. It is a debate that happens pretty much everywhere, and it becomes very, very passionate and heated as well. So, some of the things that we look at
00:15:48
Speaker
to decide which way to go. First, we have to write down our requirements. We have to say: what is it that we are trying to accomplish? And whether I'm going to build something or buy something, is that going to accomplish it? Many a time I've seen people, when they list down the requirements, go into a far future, saying, oh, I will hit so much scale in the next six months or the next year, and hence I'm going to build for it. And sometimes that is very, very optimistic.
00:16:18
Speaker
And people say, oh, I may not hit it in a year, but I will hit it in two years, so I need to think about that scale today. My suggestion would be: think about the requirements you know today, and maybe six months out. Be optimistic, but not overly optimistic. When things change, you can also change these decisions
00:16:37
Speaker
and shift again. The other thing is: don't miss the non-functional requirements. Like we just talked about: what is going to be the cost of integration, what is going to be the hit on my performance, and, on top of that, what is going to be the cost of building and maintaining? When we say buy, it's a cost which is very, very visible, because you have to pay every month or every year.
00:17:04
Speaker
But when we talk about build, we kind of assume it comes for free. That is not right. Building also has a lot of cost, not just of building it at that point in time, but also of maintaining it. I've seen teams where somebody decided to do a build using some open-source tools, and then that person left, and no one else in the team knows how that thing operates. Now what do you do? So those kinds of things you also have to keep in mind.
00:17:32
Speaker
If you're using open source, that's probably okay; if it's an open-source library or tool which people are aware of, it's fine. But if you're writing something custom, you should do it only if it's really, really necessary. So, keeping these things in mind, laying down all of your requirements and your context, one can try to make a decision. It's still going to be a little, I would say, personally biased by the person who's going to make it, because it also depends on what I've done in the past. A lot depends on how comfortable I am with making a choice. But at the same time, by listing these things down, you can try to make the decision more objective.
00:18:11
Speaker
Right, yeah, bang on. I have seen both types of customers. Some customers believe this is very critical for them, they understand it very well, and they start doing it very early on. In those places, I have seen it be at least quite successful
00:18:31
Speaker
when they have built something themselves. But then whatever they have built is very, very attached to their business processes, their infrastructure, or their application. I have also seen people claiming they want to build the solution internally using open source, and they have tried for six months, or almost nine months, or a year, and it has not gone anywhere.
00:18:57
Speaker
And that has happened for various reasons, like you said. Some of them are skill problems, some are tool-selection problems, some are integration problems, which you talked about. That is a very, very important part of selection: am I able to integrate with all my other tools? Because you cannot go ahead and replace all of them just because you have selected something. So this is very true. Now, let's say the tool has been selected. And we are living in a CI/CD sort of world today.
00:19:25
Speaker
What do you think, from a process or best-practices perspective, should happen, or what would you recommend, for continuous monitoring? Because this complex environment we have been talking about is quite volatile; things keep changing. Like we discussed, people are continuously patching their applications, networks are flapping, things are being moved around, a VM moves from one group to another, and so on.
00:19:57
Speaker
So, in that context, what are your recommendations on the continuous monitoring part?

Continuous Monitoring and Alert Noise

00:20:02
Speaker
Yeah, continuous monitoring has to be there. You can't survive without it. Most teams, fortunately or unfortunately, realize it very early on, because they run into production issues and then they realize it. Some know it upfront, and that's good.
00:20:15
Speaker
But whenever you are deploying a service, there has to be a standard set of metrics that you are going to measure and put your alerts on. You have to identify what is important to you and your business: which services are P0s, and within those services, which APIs are P0s, and build your alerts accordingly. But monitoring has to be on pretty much everything.
00:20:41
Speaker
That is one part. Another part is: while you may have 10 different dashboards for 10 microservices, you should ideally have one single place that can at least give you red lights. If something is going wrong and all of them are alerting, to be able to connect the dots as to where the problem might be happening, it's good to have a single dashboard that provides that view.
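The single "red lights" view described here is essentially a rollup of per-service health. A toy sketch of that aggregation, with made-up service names and a three-colour status model assumed purely for illustration:

```python
def rollup(service_status):
    """Collapse per-service statuses into one top-level light.
    Any 'red' wins; otherwise any 'yellow'; otherwise 'green'."""
    order = {"green": 0, "yellow": 1, "red": 2}
    worst = max(service_status.values(), key=lambda s: order[s], default="green")
    # Also surface which services are unhealthy, for triage.
    failing = sorted(name for name, s in service_status.items() if s != "green")
    return worst, failing

status = {"cart": "green", "payments": "red", "search": "yellow"}
light, failing = rollup(status)
# light is "red"; failing lists payments and search for triage
```

A real implementation would derive each service's colour from its alert state rather than hard-coding it, but the "worst status wins" rollup is the core of a single top-level view.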
00:21:08
Speaker
Another thing is understanding wherever you are doing error handling. We tend to do that a lot: I don't expect it to fail, but if it fails, I will handle it and have a degraded behavior. Most times we forget to indicate that this is a failure, because our customer didn't see the failure. But in the system, it is a failure. Unless I emit a metric and put an alert on it, I will never know.
00:21:40
Speaker
So these are what I call silent failures, and they stay silent unless you actually put triggers there. So that is also important. With all of this, another key question is: how many alerts are you getting, and are they really useful or not? I have been in teams where the on-call keeps getting paged every hour, and more than half the time they just acknowledge the page, and that's it.
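The silent-failure pattern she warns about, a degraded fallback that emits no signal, is avoided by instrumenting the error handler itself. A minimal sketch, with invented names and a Counter standing in for a real metrics client:

```python
from collections import Counter

FAILURES = Counter()  # stand-in for a metrics client

def fetch_recommendations(user_id, backend):
    """Return personalized items, degrading to a default list on error.
    Crucially, the degraded path emits a metric so an alert can fire."""
    try:
        return backend(user_id)
    except Exception:
        FAILURES["recommendations.fallback"] += 1   # the signal
        return ["bestseller-1", "bestseller-2"]     # user never sees the error

def broken_backend(user_id):
    raise TimeoutError("upstream timed out")
```

The customer still gets a sensible response, but because the handler increments a counter, an alert on `recommendations.fallback` will expose the failure instead of it staying silent.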
00:22:06
Speaker
And when you ask them what happened: oh, I know this comes every day, I know it subsides every day, so I'll just acknowledge it and move on. The effect is that you get desensitized to those alerts. You start taking them lightly. We had a situation where, because we used to get so many pages, the on-call would put their phone on silent and go to sleep. In the middle of the night, we had a massive problem, and we couldn't wake the fellow up because the phone was on silent. So be very cautious about
00:22:36
Speaker
actual value versus noise, that segregation. Yeah. No, I think this makes a lot of sense, because I was talking to somebody some time back, and that person was telling me that thousands of alerts were getting generated every few minutes. And then he brought in a system where, if no action is taken on an alert, that alert gets disabled.
00:23:00
Speaker
And after some time of continuously disabling alerts, out of 100, only about 5 or 6 alerts remained, which they really take action on. Yeah, those are the ones you need in your system, right? Yeah.
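The pruning approach described here, disabling alerts that nobody acts on, can be sketched as a simple acted-on ratio per alert. The thresholds and alert names below are illustrative only:

```python
def prune_alerts(history, min_events=10, min_action_rate=0.2):
    """Keep only alerts that were acted on often enough.
    `history` maps alert name -> (times_fired, times_acted_on)."""
    kept, disabled = [], []
    for name, (fired, acted) in sorted(history.items()):
        if fired >= min_events and acted / fired < min_action_rate:
            disabled.append(name)   # pure noise: on-call only acks it
        else:
            kept.append(name)      # actionable, or too new to judge
    return kept, disabled

history = {
    "disk-almost-full": (50, 45),   # almost always actioned: keep
    "cpu-spike-nightly": (200, 2),  # acked and ignored daily: disable
    "new-alert": (3, 0),            # too little data to judge: keep
}
```

The `min_events` guard matters: a brand-new alert with three firings and no actions yet should not be disabled before there is enough evidence that it is noise.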
00:23:13
Speaker
So, I mean, this alert noise, or alert blasting from various monitoring tools, has been creating a huge problem, and that is where a lot of new technology comes in. We are also working on an ML feature called event correlation:
00:23:32
Speaker
can I get you one high-fidelity alert out of the hundreds of alerts coming in, so that you get only one actionable alert, and all the noise you were getting is reduced? And 99% of the time, people say: no, no, please enable the warning alerts also, because people miss alerts; that way, if not on the warning, they will at least actually act on the critical.
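Event correlation of the kind described, collapsing hundreds of raw alerts into one actionable incident, can be approximated by grouping alerts that plausibly share a root. The sketch below groups by service and time bucket, a deliberately crude stand-in for real topology-aware or ML-based correlation:

```python
from collections import defaultdict

def correlate(alerts, window_seconds=120):
    """Group alerts by (service, time bucket) and emit one incident
    per group. `alerts` is a list of (timestamp, service, message)."""
    groups = defaultdict(list)
    for ts, service, message in alerts:
        bucket = int(ts // window_seconds)
        groups[(service, bucket)].append(message)
    # One high-fidelity incident per group, with the noise count attached.
    return [
        {"service": service, "alerts_collapsed": len(msgs), "first": msgs[0]}
        for (service, _), msgs in sorted(groups.items())
    ]

raw = [
    (10, "payments", "latency high"),
    (15, "payments", "error rate up"),
    (40, "payments", "pod restarted"),
    (500, "search", "latency high"),
]
incidents = correlate(raw)
# Two incidents: one for payments (3 alerts collapsed), one for search.
```

Production correlation engines use service dependency graphs or learned patterns rather than a fixed key, but the output is the same shape: one incident the on-call can act on, instead of a storm of pages.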
00:23:55
Speaker
And that increases the number of alerts generated. So obviously, like you said, people switch off stuff, and then... Yeah, I would rather be on call where I get alerted maybe once in my one-week stint, or maybe never, rather than being alerted every day with non-actionable alerts.
00:24:14
Speaker
So we have been talking about observability, monitoring, and application deployment. Every application developer, when developing applications, writes a lot of logs. And in a lot of conversations, people tell me they have a lot of tools, but they finally go to logs as the ultimate source of truth: whether things happened or not, and what really happened. How do you see logs playing a crucial role in improving business resiliency and system performance? Do you still see them as relevant? Do you still believe that, even though we have so many other metric-driven observability tools, logs will still play a role?
00:25:03
Speaker
Logs are definitely relevant. There is no question about it. This is the sort of tool that saves you when things go wrong, because it's the only thing that can come in handy to actually see what had happened. But in my observation and experience, it is also the most misused tool,
00:25:26
Speaker
and I'll explain what I mean by that. I've been in teams where you will see barely any logs, because we don't want too many logs; it becomes clutter. Then there are times when you don't know what happened, because the logs were not there at the points where they were supposed to be. They never got added. We only had logs which were very, very slim and didn't have enough information.
00:25:48
Speaker
I've also seen places where you have log line after log line after log line, generating petabytes of data, and it is useless because you can't query that data. You can't look into it; it is just so humongous. And when you start looking at it, you can't make sense of it, because it's all over the place.
00:26:06
Speaker
My hope is that, with the new set of tech coming in, we'll be able to manage it much, much better, because now we have technology that can actually look into it and summarize it for you, or identify what is meaningful and what is not. So it will hopefully become more useful with new interventions coming in. But as of today, I feel it's required, but there has to be discipline in how to use it.
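Part of the discipline she asks for is emitting structured, queryable logs rather than free text. A stdlib-only sketch using a JSON formatter; the field names here are a common convention, not a standard, and a real deployment would add timestamps and more context:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object so logs stay queryable."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Extra context attached via `extra=` lands on the record.
            "txn_id": getattr(record, "txn_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("payment settled", extra={"txn_id": "T-1001"})
```

Because every line is one JSON object with stable keys, a log store can index and filter by `txn_id` or `level`, which is exactly what a wall of unstructured text makes impossible at petabyte scale.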
00:26:36
Speaker
Because more often than not, people end up misusing it. Very true. One of the things we have seen is that while an application might be handling only maybe 100 or 200 transactions, the amount of logs it emits is just humongous. Managing and maintaining that
00:26:56
Speaker
even becomes difficult. In fact, RBI has directed banks to store logs for almost seven years. That is what I was told some time back: for any critical application where user transactions are coming in, they have to store logs for that long, and it's more from a compliance perspective.
00:27:19
Speaker
This is very interesting. Let's switch gears a bit. I just wanted to get your thoughts: you have seen three or four different advancements in application development and

AI in Automating Coding Tasks

00:27:33
Speaker
deployment. With AI, Gen AI, and so many other things coming in, what future innovations and advancements do you foresee from AI in both the application development and deployment parts?
00:27:47
Speaker
In development and deployment, we are already seeing people come up with all kinds of copilots, whether for writing code or for generating your test cases. People are toying with the idea of generating logs automatically, generating comments automatically, and so on. All of these are definitely going to help, because some of these things we don't think are important enough for me as a developer to do, so I would much rather hand them over to an AI bot, like writing documentation and writing test cases. Some of these things we don't think enough about, especially when it comes to test cases. If I have to write my own unit test cases, I write two or three and then I'm done, but there might be many, many more error scenarios and corner cases that can be auto-generated.
00:28:40
Speaker
So those kinds of things are definitely going to improve our productivity a lot more. When it comes to deployments, I really hope that, just as we have CI/CD more or less standardized now, continuous monitoring, like you mentioned, will also become a little standardized, because even today we see places where it's not happening as it should. One part is just doing the monitoring; the other is also understanding what has happened when a metric is off or when a trigger comes in. I think there, as well, we do have tons and tons of knowledge bases out in the open,
00:29:20
Speaker
or, in some cases, within the teams, we have tons and tons of playbooks. But when an alert comes in and I have to debug, it sometimes takes me minutes, even hours, to figure out the right document and the right procedure to see what has actually gone wrong and what I need to do. So some intervention or innovation there, I feel, is going to be really, really helpful. Correct.
00:29:48
Speaker
So, you talked about two things there. One is Gen AI for copilots. In fact, a few days back, I was toying with ChatGPT 4.0, and I was quite surprised: I gave it a prompt for generating Python code,
00:30:06
Speaker
and it generated the Python code very well. Not only did the code work, it gave exactly the output I was looking for. So for smaller, less complex problems, I believe generating code from ChatGPT 4.0 will become the norm very soon. That is very interesting from the copilot perspective. The second part you mentioned is more interesting to me, which was the continuous monitoring part.
00:30:35
Speaker
From the application development and deployment perspective, we at ViewNet have been talking a lot about embedded observability.

Observability from the Start for Resilience

00:30:47
Speaker
What it really means is that even before you have started writing the code for your application, wherever it gets deployed, in whatever form or fashion, you start thinking about how you are going to observe it.
00:31:02
Speaker
During the whole application development, you keep developing the observability part alongside it. For example, we talked about logs. Like you said, it should not be that a transaction comes in and a humongous amount of logs gets thrown out.
00:31:20
Speaker
So: the log format, the log content, the traceability of your services, how you pass a trace ID from one service to another, or maybe to another application, what sort of infrastructure you will have. We feel that, since I have to have monitoring and observability enabled for whatever application I'm creating, why not start thinking about it from day one? Do you have any thoughts around it?
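The trace-ID propagation mentioned here can be sketched very simply: reuse the caller's trace ID if one arrives, otherwise start a new one, and always attach it to outgoing calls. The header name below is an assumption for illustration; real systems typically use the W3C Trace Context standard (the `traceparent` header) via a library such as OpenTelemetry rather than hand-rolling this:

```python
# Minimal sketch of trace-ID propagation between services over HTTP-style
# headers. The header name "X-Trace-Id" is assumed for illustration only.
import uuid

TRACE_HEADER = "X-Trace-Id"

def inbound(headers: dict) -> str:
    """Reuse the caller's trace ID if present, else start a new trace."""
    return headers.get(TRACE_HEADER) or uuid.uuid4().hex

def outbound(trace_id: str, headers: dict = None) -> dict:
    """Attach the current trace ID to headers for a downstream call."""
    headers = dict(headers or {})
    headers[TRACE_HEADER] = trace_id
    return headers

# Service A starts a trace; service B receives and continues the same one.
tid = inbound({})           # no caller, so a new trace ID is minted
out = outbound(tid)         # headers for the call from A to B
assert inbound(out) == tid  # B picks up A's trace ID unchanged
```

Making this decision before writing the application, as suggested above, is exactly what lets every service in the chain share one trace from day one.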
00:31:54
Speaker
Well, that's very interesting. Like I said, these things are going to become more and more critical. Even if, let's say, I start a new company today as a novice engineer and don't think about these things, the moment my system is out in production and the first problem hits me, I am going to realize that this is important.
00:32:15
Speaker
And from that point onwards, it is going to stay important. So it is becoming a large part of every developer's work, because it rarely happens that you start a company and the first hit comes. That is a very small window of time and it happens only once, but after that you always have to be aware of it and think about it.
00:32:35
Speaker
So it is becoming a part of each and every developer's day-to-day life. It is something that has to be solved, and it's great that you are taking leaps in trying to solve that problem. I'm sure you'll do great, and I am really excited to see how it comes out.
00:32:57
Speaker
Thank you so much, Sarika. Just one last question. Given your wide range of experiences and how long you have been in the industry, it would make a lot of sense for our audience to know of a specific book or podcast you always go back to for

Recommended Reading for Soft Skills

00:33:22
Speaker
insights. Do you have any recommendations for the audience?
00:33:26
Speaker
Yeah, when it comes to books, I am a heavy reader, but I read mostly fiction. Nonfiction I pick up only when I need it. When I wanted to learn about design patterns, I picked up that book; likewise for any new tech stack I was going into. For example, when I moved to chip design, I started learning about synthesis and digital electronics. So I don't really have anything I can recommend on the tech side. On the non-tech side, I firmly believe there is a set of skills that everybody should learn to be successful, and they are going to become more and more important. These are what we call softer skills: understanding how to work with people, understanding what influences our behavior and other people's behavior.
00:34:16
Speaker
There is a very good book called Influence. It's a very, very popular book. That's what I'm reading these days, and I'm really loving it. I know there are tons and tons of book recommendations; I try to stay away from them, unless you ask me for fiction. For fiction, I'll give you tons of recommendations. Okay, so which particular author do you believe is the best? Oh, I don't do that. I get into trouble with my daughter if I pick her favorite.
00:34:43
Speaker
Okay. Thank you so much, Sarika. It has been a pleasure talking to you and getting tidbits and insights from your varied and deep experience. Thank you so much for joining us today. Thank you so much. It was a pleasure talking to you. It was a nice start to the morning. Thank you so much. If you enjoyed today's episode,
00:35:09
Speaker
Please consider sharing it with colleagues who have similar interests. It will help us spread the word. Discover what Sarika and her team are up to at exemplify.tech. For more information about ViewNet, please visit us at www.viewnetsystems.com.