
Inside India’s AI Cloud Gold Rush | Sharad Sanghi (Neysa) on GPUs, LLMs & Startups

Founder Thesis

"Infrastructure always beats applications."   

This is the core philosophy that made Sharad Sanghi one of India's most successful tech entrepreneurs.   

While others built consumer apps, he built the data centers that powered them. Now at 56, he's doing it again with AI infrastructure, showing why betting on the plumbing beats betting on the applications.  

Sharad Sanghi is a pioneer who built India's technology infrastructure backbone. He founded NetMagic in 1998, India's first data center company, scaling it to ₹3,600 crores revenue and 19 data centers before selling to NTT Communications for $116 million. Under his leadership, the company grew to serve 1,500+ enterprise customers with 300MW of IT capacity. After a successful exit, he's now building his second unicorn - Neysa, an AI cloud platform that raised $50 million in just 6 months. With degrees from IIT Bombay and Columbia University, and experience building the early internet backbone in the US, Sharad brings 30+ years of infrastructure expertise to India's AI revolution.  

Key Insights from the Conversation:  

👉Infrastructure-First Strategy: Building foundational technology layers creates more defensible businesses than applications 

👉Crisis Management: Survived dot-com crash and 2008 financial crisis through disciplined focus on profitability over growth 

👉Zero Layoff Philosophy: Maintained employment through multiple economic downturns, proving empathy drives long-term success 

👉80-20 Cloud Strategy: Competed with hyperscalers by focusing on 20% of features that 80% of customers actually use 

👉AI Market Opportunity: India's transition from labor arbitrage to product innovation creates massive global opportunities 

👉Supply-Constrained Business: Data center and AI infrastructure businesses benefit from demand exceeding supply 

👉Second Innings Mentality: Starting a new venture at 56 proves age is irrelevant when solving transformative problems 

This insightful conversation was conducted by Akshay Datt, serial entrepreneur and host of Founder Thesis, India's leading startup-focused podcast featuring deep conversations with disruptive founders.

 #DataCenter #CloudComputing #AIInfrastructure #StartupPodcast #TechEntrepreneur #IndianStartups #AICloud #Infrastructure #Entrepreneurship #TechFounder #CloudAI #DataCenters #AIRevolution #TechLeadership #StartupJourney #IndianTech #AIAdoption #TechInnovation #CloudInfrastructure #aibusinessideas   

Disclaimer: The views expressed are those of the speaker, not necessarily those of the channel.

Transcript

Introduction to Hyperscalers and Data Center Essentials

00:00:00
Speaker
Why are these Amazon, Microsoft, Google, these companies called hyperscalers? Yeah, hyperscalers because they've got massive infrastructure across the world. The two lifelines of a data center are fiber and power.

Advice for Aspiring Founders: Think Global with AI

00:00:13
Speaker
What's your advice to the current generation of founders? First of all, it's dream big, right? And especially with AI, you've got this opportunity to create something for the world. You don't have to look at India alone.

Guest's Background: A Pioneer in Tech Founding

00:00:40
Speaker
Welcome to the Founder Thesis podcast. You are a pioneer, both as a tech founder and in the space of cloud computing.
00:00:51
Speaker
I want to take you back to 1998 when you started your first venture. What was it like? What had you gone through? How old were you when you thought that you should start a venture? And 1998 means pre-VC era, pre-tech-startup era, and you were doing a tech startup at that time. So I just want to understand what that era was like and what you were like.

Building the Internet Backbone: NSFnet and Beyond

00:01:19
Speaker
First of all, Akshay, pleasure to be on your show. So I did my undergrad from IIT Bombay and then my master's from Columbia University, New York.
00:01:30
Speaker
At Columbia, I was fortunate to work on the first large backbone of the internet, called the NSFnet. So in the early days of the internet era, I was fortunate to work for this company called ANS, which ran the largest internet backbone, and which was funded by the National Science Foundation, hence called the NSFnet.
00:01:50
Speaker
The main purpose of that backbone was to connect research and education institutions over a high-speed network.

Commercializing the Internet in India: Government and Consultancy Roles

00:01:58
Speaker
Right. And of course, then we connected that to the Commercial Internet Exchange, and then it connected to the overall internet as well.
00:02:05
Speaker
I moved back to India in '95, when the government announced that they would offer commercial internet services, that they would allow the internet to be commercialized in India.
00:02:17
Speaker
It was still a monopoly of Videsh Sanchar Nigam Limited. So from '95 to '98, it was a monopoly of VSNL. I was again lucky to be appointed as a consultant to VSNL. I actually went there because there were some issues they were facing, and I told them that I could fix some of those because I had a prior background in the US. And they asked me to fix them on the fly. They said, a lot of people come to us saying that they can fix things, why don't you show us what you can do? And I was able to fix some of that stuff, so they gave me a consulting contract. Was it hardware stuff or software stuff?
00:02:52
Speaker
Well, it was basically configuring of the routers. Yes, they used to use Cisco routers, and some of them were not configured properly. And so I was able to fix those.
00:03:03
Speaker
And so they gave me a consulting contract to help fix and grow the internet backbone in the country. So that actually helped me. I started a consulting practice where I consulted not only to VSNL, but to a few large corporates like Reliance Industries and others.

The Rise of ISPs in India and New Business Ventures

00:03:26
Speaker
where basically my aim was to help scale large networks. So it could be satellite-based networks, it could be fiber-based networks, etc. And from '95 to '98, I was pretty much doing that.
00:03:42
Speaker
And so I was doing consulting for various companies and for Videsh Sanchar Nigam Limited. In '98, the government decided to allow private players to enter the internet arena, right?
00:03:56
Speaker
So the ISP license was started. (Internet service provider.) Yeah, the internet service provider license was issued in the year 1998. And that's when I said that it's a good idea for us to set up a company. Again, initially we didn't have capital. So we said we'll set up a company, and instead of me running the consulting practice alone for one or two customers, there was now a whole bunch of customers, because many people were asking for internet licenses: there was Satyam Infoway, there was Hathway Cable, Tata Internet, et cetera.
00:04:31
Speaker
So a group of me and two or three others that I had found in the journey from '95 to '98 decided to set this up. And while doing this consulting for private players (we had not raised any money), we noticed that most of them were doing only dial-up access or broadband access and catering to end users, but nobody was really focusing on how the internet could be used for mission-critical business.

Data Center Ventures and the Impact of the Dot-Com Bubble

00:05:01
Speaker
And that's when I was also lucky that at one of the TiE conferences in Delhi, I met the founders of Exodus. Since I was a consultant to VSNL at that time, they were looking to see what they could do with India, between Exodus US and India. Exodus, by the way, was basically the first company that started internet data centers in the US,
00:05:22
Speaker
and at one point reached $25 billion in market cap. So I met B.V. Jagadeesh at a TiE conference, and I told him my plans: look, I've seen all these service providers focusing on retail users and broadband access, but nobody's looking at how the internet can be leveraged for mission-critical use.
00:05:45
Speaker
And, you know, is there a possibility... I would like to figure out if I could raise money to actually set up internet data centers, much like what you have done in the US. And given my background with NSFnet, etc., he took a liking to what I had to say and invited me to the US.
00:06:05
Speaker
So he showed me a few Exodus data centers and also agreed to angel invest, in late '99, early 2000.
00:06:16
Speaker
And once he decided to invest, by then, as I told you, Exodus was at its peak. And everybody knew these founders, B.V. Jagadeesh and K.B. Chandrasekhar.
00:06:26
Speaker
So the moment other investors heard that Jagadeesh was investing in us as an angel, we had no difficulty in raising venture capital. We raised $4 million and set up our first data center in Mumbai.
00:06:41
Speaker
It was a very small data center, 5,000 square feet expandable to 15,000. But you know, what happened was, within a year of setting it up, the entire industry collapsed because the dot-com bubble went bust.
00:06:58
Speaker
And, you know, many clients in these third-party data centers in the US were dot-com companies that basically went out of business, ran out of money. And so there were a lot of bad debts, et cetera.
00:07:10
Speaker
And many companies had leveraged a lot of money, because these data centers are very capital intensive. They had taken on a lot of debt that they could not service. So many companies, including Exodus, actually collapsed.
00:07:24
Speaker
And yeah, but it was very exciting times. We fundamentally believed in the model: you are providing a solution, you're letting enterprises focus on their core business, you're taking over the infrastructure headache from them, and you're sharing the costs across multiple enterprises. So to me it was a win-win relationship.
00:07:46
Speaker
And so we just kept at it. Yeah, I want to pause this journey a bit here and understand a few of the terms that you used.
00:07:58
Speaker
You spoke of the internet backbone. What is the backbone of the internet? How does the internet run, fundamentally? Is it fiber optic cables and servers, or what is it? No, no. So what I meant was the main network that connects everything. In the early 90s,
00:08:18
Speaker
basically, you could connect various locations using either fiber or satellite or whatever, but obviously now it's mostly fiber. At that time, and I'm talking about the early 90s, the speeds that we were running were about 45 megabits, which was called a T3 network or DS3 network, right?
00:08:40
Speaker
And the internet backbone, prior to ANS getting the contract to run the large NSFnet backbone, was running at T1 speed, which was 1.5 megabits per second.
00:08:54
Speaker
Today we have... (Backbone refers to the pipe between...?) Yeah. So basically, the internet is a network of networks. It's multiple networks that are interconnected,
00:09:07
Speaker
all using the IP protocol, right? The Internet Protocol. So they are able to exchange data, and basically there's an addressing format called the IP address.
00:09:19
Speaker
And based on the source and destination... it's like when you write a letter and you have a postal system; think of the analogy to the postal system. You write an address, and there is certain uniqueness in the address, like the pin code or whatever.
00:09:32
Speaker
Here you have IP addresses that are unique. And based on the address, a packet can go through multiple post offices across the world. The postal system is a network of networks; similarly, the internet is a network of networks. The post office here is equivalent to a router on the internet, which looks at the destination IP address and figures out the next router to send the packet to, until it reaches the destination.
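
The router lookup he describes can be sketched in a few lines. This is a toy longest-prefix-match in plain Python (the routing table and next-hop names are invented for illustration; real routers do this in specialized hardware):

```python
import ipaddress

# Hypothetical routing table: prefix -> next-hop. A more specific
# (longer) prefix always wins over a less specific one.
ROUTES = {
    "0.0.0.0/0": "upstream-isp",      # default route: everything else
    "10.0.0.0/8": "core-router-1",
    "10.20.0.0/16": "regional-router",
    "10.20.30.0/24": "office-switch",
}

def next_hop(dst: str) -> str:
    """Pick the most specific prefix that contains the destination."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p in ROUTES
         if addr in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    return ROUTES[str(best)]

print(next_hop("10.20.30.40"))  # office-switch (the /24 is most specific)
print(next_hop("8.8.8.8"))      # upstream-isp  (only the default matches)
```

Each router along the path repeats this lookup independently, which is what carries the packet "post office to post office" to its destination.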
00:09:53
Speaker
So the backbone refers to the core network that connects all these routers together. And the backbone that we built in the early 90s was the backbone that connected all the research universities; we upgraded it.
00:10:08
Speaker
So it was the research and education network that we built, and obviously that was connected to the other, commercial part of the internet through internet exchanges. Okay, got it.
00:10:19
Speaker
And the internet still works like that today also, like there are routers which... Yeah, speeds have gone up severalfold. We were talking about 45 megabits; now it's running at gigabits per second and terabits per second. Backbones have actually gone to several terabits per second.
00:10:39
Speaker
So think of it this way: when you're scaling a large network, you have a core network and then you have local networks in a particular region. And then you can have a local area network in your office, right? That connects to a regional network, which connects to the main backbone network.
00:10:55
Speaker
And there are multiple backbones running that are all interconnected. They could be interconnected through private peering, where they connect directly with each other, or they could be connected through an internet exchange, where again you could use either private peering or public peering.
00:11:10
Speaker
What's an internet exchange? An internet exchange is a neutral place where multiple service providers bring their equipment together. So they typically co-locate the routers, or the switches.
00:11:26
Speaker
And it's a means to exchange traffic based on a particular policy that the exchange follows. Typically, the most common policy that people follow is what's known as private peering, where you have bilateral agreements between providers: this is how we'll exchange traffic.
00:11:44
Speaker
But you also have public exchanges available, where anybody can exchange traffic with anyone. So depending on the policy, you exchange traffic accordingly. Okay.
00:11:55
Speaker
What is the difference between a router and a switch? So, when I say a switch... if you look at the layers of TCP/IP, right?
00:12:07
Speaker
When you loosely use the word switch, you typically mean something like an Ethernet switch. That is something that operates at layer 2. A router operates at layer 3. So a router actually recognizes IP addresses and routes based on IP addresses.
00:12:22
Speaker
But now you can use a layer 3 switch like a router. Actually, switches can be either at layer 2 or at layer 3. (What is layer 1, layer 2, layer 3?) Yes. So there's this seven-layer model, where you have the physical layer, which is layer 1; the MAC layer, which is layer 2; the IP layer, which is layer 3; the transport layer, which is layer 4; etc.
00:12:45
Speaker
And then the application layers are on top, right? So when you're looking at Ethernet frames, at the Ethernet layer, which is your typical local area network that you use in your offices, the device that interconnects, for example, your personal computers is an Ethernet switch, which operates at layer 2.
00:13:08
Speaker
Now, you can also have an Ethernet switch that operates at layer 3, so it can actually route between different Ethernets. But that's a separate session that we can go into. Yeah.
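
The layering he walks through can be sketched as nested envelopes. This toy encapsulation uses simplified field names (not real wire formats; the addresses are made up) to show why a layer 2 switch only sees MAC addresses while a layer 3 router digs one level deeper to the IP header:

```python
# Each layer wraps the payload handed down from the layer above.
def app_layer(data: str) -> dict:
    return {"payload": data}                                    # layer 7

def transport_layer(segment: dict) -> dict:
    return {"src_port": 50000, "dst_port": 443, "data": segment}  # layer 4

def ip_layer(packet: dict) -> dict:
    return {"src_ip": "10.0.0.5", "dst_ip": "142.250.0.1",
            "data": packet}                                     # layer 3

def ethernet_layer(frame: dict) -> dict:
    return {"src_mac": "aa:bb:cc:00:00:01",
            "dst_mac": "aa:bb:cc:00:00:02", "data": frame}      # layer 2

frame = ethernet_layer(ip_layer(transport_layer(app_layer("GET /"))))

# A layer-2 switch forwards on frame["dst_mac"] alone; a layer-3
# device strips the Ethernet header and routes on the IP inside it.
print(frame["dst_mac"], frame["data"]["dst_ip"])
```

A "layer 3 switch" is simply a device that is willing to unwrap one more envelope than a plain Ethernet switch.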
00:13:20
Speaker
So you said that at an internet exchange, internet companies will come and exchange data with each other. The internet companies could co-locate only the Ethernet switches, or they could even co-locate the layer 3 devices like routers, and then exchange traffic.
00:13:41
Speaker
Okay. And these companies are like Airtel, Jio; these are the internet companies? Yeah. In India, for example, we have an internet exchange called NIXI, the National Internet Exchange of India, which is again a neutral exchange where you have service providers like Airtel, Tata, NTT and others, all interconnecting and exchanging traffic based on a particular policy.
00:14:06
Speaker
So what would be the journey when I open my phone, open the Netflix app, and start streaming a movie?
00:14:19
Speaker
Yeah, so what happens when you open an app is this: on the internet, or on the cloud as people commonly call it, there will be a bunch of servers running somewhere in a data center.
00:14:34
Speaker
Okay. And when your app connects to the server, there is a request that is sent. A packet is sent to the server saying, I would like to view this content.
00:14:45
Speaker
The server gets the request, maybe does some authentication to see if you're a valid user. And once it does the authentication, it streams the data to your device. And all the data is routed through the Internet Protocol in the way that I mentioned.
00:15:02
Speaker
That's the way it would route. So the Netflix servers could be sitting maybe in a different country, or hopefully in your local country. Now, typically, Netflix has distributed their setup across the world using a public hyperscaler provider like Amazon and others.
00:15:23
Speaker
And since Amazon is present in India, the US, and many countries, at least in those countries you'll have serving happening locally. You also have this concept of caching, where you can cache the content.
00:15:36
Speaker
You know, caches are these repositories with fast memory where you capture the content locally, and then, based on the request, you can stream it from the nearest cache. So when you click on the Netflix app on your iPhone, that app will connect to the Netflix server in the cloud, it'll do the authentication, and then you request it to stream the data.
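
The request flow he describes (authenticate, check the nearest cache, fall back to the origin) can be sketched like this. Everything here is illustrative: the names `ORIGIN`, `edge_cache`, and the token check are invented for the sketch, not any real CDN's API:

```python
ORIGIN = {"movie-123": "<video bytes>"}   # authoritative copy, far away
edge_cache: dict[str, str] = {}           # fast local repository

def stream(content_id: str, user_token: str) -> str:
    if user_token != "valid":             # server-side authentication step
        raise PermissionError("not a valid user")
    if content_id in edge_cache:          # cache hit: serve locally
        return edge_cache[content_id]
    data = ORIGIN[content_id]             # cache miss: fetch from origin
    edge_cache[content_id] = data         # keep a copy for the next viewer
    return data

stream("movie-123", "valid")       # first request goes to the origin
assert "movie-123" in edge_cache   # later viewers are served from the edge
```

The first viewer in a region pays the long round trip; everyone after them streams from the nearby cache, which is why popular content feels fast everywhere.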
00:16:14
Speaker
Okay. So you started a data center. What is the difference between a data center and a cloud?
00:16:24
Speaker
Great question. So again, there are a lot of loose definitions of these terms, and there's no one correct definition, but typically a data center is a place where you house server infrastructure.
00:16:38
Speaker
So you have fewer people in that facility (it could be a lights-out facility), and a lot of servers are typically housed there. The two lifelines of a data center are fiber and power. So you need to make sure that power is available around the clock, and you need to ensure that connectivity through multiple service providers is available around the clock.
00:17:03
Speaker
And of course you have other things, like proper cooling. Because these servers generate a lot of heat, you need to make sure they don't overheat. So you have to have temperature control and humidity control.
00:17:15
Speaker
And of course you have to have security, so that nobody can just randomly come into a data center. So there are many aspects to it; that's what is called a data center. Cloud, again, has multiple definitions. When you say that something is running in the cloud, it's running somewhere on the internet and you don't know where; that's one way of looking at it. The stricter definition of cloud has multiple layers. You have something called infrastructure as a service: if I house some servers inside a data center
00:17:49
Speaker
which is connected to the internet, and I provide that infrastructure as a service, so you don't actually buy the infrastructure, you rent it as a service, then that's called infrastructure as a service, or IaaS.
00:18:02
Speaker
The next layer is if I run some software on top and provide you a platform which abstracts the infrastructure underneath. A developer just needs to develop using certain APIs and doesn't need to really worry about the underlying infrastructure. That's called platform as a service.
00:18:17
Speaker
And then the third is software as a service, where I offer you the actual software as a service, like Netflix in this case, or Gmail, which is a classic example of software as a service. So these are all different layers of cloud. Some people refer to the infrastructure-as-a-service layer as cloud, some people mean platform as a service, and some mean software as a service.
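
One way to keep the layers straight is to list who manages what at each level. The split below is a common simplification (layer names chosen for the sketch; the exact boundary varies by provider):

```python
# Stack from the building up to the application.
LAYERS = ["facility", "servers", "virtualization", "os", "runtime", "application"]

# What the provider takes care of under each model.
MANAGED_BY_PROVIDER = {
    "co-location": {"facility"},
    "iaas": {"facility", "servers", "virtualization"},
    "paas": {"facility", "servers", "virtualization", "os", "runtime"},
    "saas": set(LAYERS),                  # provider runs everything
}

def customer_manages(model: str) -> list[str]:
    """Layers left to the customer under a given cloud model."""
    return [layer for layer in LAYERS
            if layer not in MANAGED_BY_PROVIDER[model]]

print(customer_manages("iaas"))  # ['os', 'runtime', 'application']
print(customer_manages("saas"))  # []
```

Reading it top to bottom: co-location is the "real estate" end of the spectrum, and each layer up (IaaS, PaaS, SaaS) moves one more slice of responsibility from the customer to the provider.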
00:18:41
Speaker
A data center typically refers to the physical location where these servers are housed. So a cloud is typically housed in a data center. But again, a lot of these terms are used interchangeably.
00:18:55
Speaker
And you could have a captive data center, which is not connected to the internet. But what I'm referring to is an internet data center, where you have a lot of connectivity to the internet so that any application housed in the data center, whether it's Netflix or Gmail,
00:19:10
Speaker
can be accessed from across the globe. Okay. So does a data center also buy the servers, or is a data center like a real estate business, where you have a building and you have cooling and electricity? Great question. You have multiple models, right? There is something called a co-location provider. A data center can be a pure co-location provider, where they provide the mechanical and electrical infrastructure.
00:19:39
Speaker
They provide the connectivity infrastructure. And for connectivity, they don't need to own the fiber; they can just provide a meet-me room where they bring in all the telcos, and provide a cross-connect service to the person who puts their servers in, and nothing else. So that could be a pure co-location provider.
00:19:58
Speaker
A data center may also provide some services by putting up their own cloud inside the data center, like we did at NetMagic. In addition to providing co-location services to others, we also put in our own servers.
00:20:14
Speaker
A lot of customers didn't want to buy the hardware and said, okay, can you actually provide me the infrastructure as a service or platform as a service, in which case we provided not only the infrastructure but also the platform on top.
00:20:27
Speaker
Again, the majority of data centers across the world are co-location providers. There are really large players. They provide a facility with very high uptime, round-the-clock power, round-the-clock connectivity, and very good security, with proper cooling and humidity control.
00:20:51
Speaker
Typical clients, and some of the anchor clients in the world, are the hyperscalers like Amazon, Google, and Microsoft. And then of course you have a lot of enterprise clients like banks, financial institutions, manufacturing companies, retail companies, et cetera.
00:21:06
Speaker
Okay, got it. So infrastructure as a service is not this real estate business; infrastructure as a service includes the servers also. Yeah. The underlying real estate with the mechanical and electrical equipment is the data center piece. Infrastructure as a service actually means I'm providing, typically, the servers for my cloud offering.
00:21:29
Speaker
Yeah: servers, network, storage, etc. But are you responsible for installing an operating system on the server and upgrading the software? We are. So if I'm just doing co-location, then I'm not responsible for anything else.
00:21:45
Speaker
Then the client is bringing his own servers and is responsible for everything else. Or I could be providing him the servers and be responsible for all these things. Again, there could be two flavors. I could do it as a public cloud offering, much like what Amazon, Google, and Microsoft do.
00:22:01
Speaker
Or I could do it as a private cloud offering, where I have dedicated infrastructure that's not multi-tenanted; it's dedicated to one particular client. And I may be responsible for setting up the operating system, setting up the web servers, setting up the database servers, etc.
00:22:19
Speaker
Again, it depends. If the client is more comfortable setting it up on his own, then the client does not outsource these. But typically a cloud provider will do all of this. I'm not very clear on infrastructure as a service versus platform as a service, because in infrastructure as a service also, you are giving servers with the software installed and all of that. Platform is another layer of software on top where you abstract the infrastructure. The client then just connects through APIs.
00:22:47
Speaker
I could provide a service like a Kubernetes cluster, and a client just needs to connect to that through an API. There's one additional layer of software that's provided to abstract the infrastructure below.
00:23:04
Speaker
So the client doesn't need to know. Typically, developers like to use platform as a service. Typical examples are what Amazon, Google, and Microsoft provide as global hyperscalers.
00:23:17
Speaker
And a lot of people just use their platform as a service, without worrying about the underlying infrastructure; the platform takes care of managing it. So in infra as a service, your atomic unit would be the number of servers, whereas in platform as a service, it would maybe be something like GB or something like that, where you don't care about how many servers are being used, but about how many GB, or something to that effect.
00:23:47
Speaker
Typically, infrastructure as a service will be virtual machines and containers, because I will take a physical server and virtualize it. And in platform as a service, there are different ways that people can bill it. In AI compute, it could be just a number of...
00:24:11
Speaker
So there are different ways I can bill it. If it's a serverless offering, then it would be the number of tokens. If I'm using a platform, it could be the number of API calls.
00:24:22
Speaker
Although now API calls have become very difficult to bill; it could be just the cost of using the platform. Okay, got it. And typically, not the AI cloud, but the regular cloud, how is that built? Like an AWS?
00:24:40
Speaker
So you have a whole set of servers. You have a virtualization layer on top. You could use something like VMware, or you could use an open-source virtualization layer like KVM.
00:24:51
Speaker
And so you can abstract the physical infrastructure. Let's say I've got hundreds of servers, I've got storage, I've got network. I can virtualize the whole thing so that I can create multiple virtual machines. Because what people realized is that on a single server,
00:25:09
Speaker
once you have multiple servers, a lot of the compute was lying unutilized, right? And so that's why virtualization was born, and the company that revolutionized virtualization was VMware.
00:25:19
Speaker
And then of course there were open-source versions of virtualization software, or hypervisors as they were called. So you can create virtual infrastructure from physical infrastructure to improve the utilization of that infrastructure.
00:25:34
Speaker
So on one server, I can have 10 virtual servers. On a group of servers, I can have hundreds of virtual machines. And of course, hypervisors like VMware give you a lot of other qualities: if a physical server goes down, you can automatically migrate all your virtual infrastructure to a different physical server, etc.
00:25:56
Speaker
And so that's the real benefit of virtualization. So you have first the server layer, you have the hypervisor layer, and you have the storage, where you store all your data.
00:26:07
Speaker
You have the network that connects multiple things together: both the east-west network, to connect multiple servers within the data center, and north-south, to connect it to the internet.
00:26:19
Speaker
And then you have security infrastructure, like firewalls, and you have load-balancing infrastructure. When you have a lot of traffic coming for a particular application, let's say Netflix, you want to load-balance it across multiple servers, so you have load balancers, et cetera. All of that is the physical infrastructure that you deploy to set up a cloud offering, or infrastructure as a service.
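
The two virtualization benefits he names, packing many VMs onto few hosts to raise utilization and re-placing VMs when a host fails, can be shown with a toy first-fit scheduler. The capacities, VM names, and host names are all invented for the sketch; real hypervisors automate this as live migration:

```python
HOST_CAPACITY = 16  # CPU cores per physical server (illustrative)

def place(vms: dict[str, int], hosts: list[str]) -> dict[str, str]:
    """First-fit placement of VM core demands onto hosts."""
    used = {h: 0 for h in hosts}
    placement = {}
    for vm, cores in vms.items():
        # Take the first host that still has room for this VM.
        host = next(h for h in hosts if used[h] + cores <= HOST_CAPACITY)
        used[host] += cores
        placement[vm] = host
    return placement

vms = {"web1": 4, "web2": 4, "db": 8}
placement = place(vms, ["host-a", "host-b"])   # all fit on host-a

# host-a fails: re-place its VMs onto the surviving host.
failed = "host-a"
stranded = {vm: vms[vm] for vm, h in placement.items() if h == failed}
placement.update(place(stranded, ["host-b"]))

assert all(h != failed for h in placement.values())
```

Without virtualization, each of those workloads would have claimed a whole physical server; with it, they share one box, and a hardware failure becomes a re-placement problem rather than an outage.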
00:26:43
Speaker
Okay. And typically a developer using AWS will be paying per VM, per virtual machine? Yeah, again, many developers use their platforms.
00:26:56
Speaker
Like in the AI space, their platforms are called SageMaker and Bedrock. So they use the platform itself and don't need to worry about the underlying infrastructure, though they may still get exposed to it. So then it's just per token or something like that? No, no, not per token. That's when you do endpoint as a service or inference as a service.
00:27:16
Speaker
Here there'll be a platform fee that they'll pay, right? When you're doing infrastructure as a service, you're only paying a fee for the VM or for the container.
00:27:32
Speaker
Here you'll be paying a platform fee in addition. Okay, got it. Why are these Amazon, Microsoft, Google, these companies called hyperscalers?
00:27:43
Speaker
I noticed you used hyper in that hypervisor also. Yeah, hyperscalers because they've got massive infrastructure across the world. They've set up these massive server farms that they've either built themselves or leased from third-party data center providers like NetMagic and NTT, and then they provide their services on that. And their scale is huge, which is why they're called hyperscalers.
00:28:08
Speaker
Okay, got it. So when you started NetMagic, who were your competitors at that time? I don't think AWS even existed at that stage. So when I started NetMagic in 1998, we were the first data center provider.
00:28:21
Speaker
Within a month of our starting, I think, Satyam Infoway, which is now Sify, started a data center. And then others like Tata and Reliance all started data centers as well.
00:28:32
Speaker
Okay. And were you providing virtual machines to your clients, or co-location? Initially, it was a combination of co-location and what we called managed services. We would allow the customers to bring in their servers, and we would also install and manage the operating system, database, and web servers for them.
00:28:51
Speaker
So it was a combination of co-location and managed services. Later on, in 2009, we set up our first cloud offering. The cloud business started in the US through AWS in the 2003-2004 timeframe, if I'm not mistaken. We were the first ones to start an indigenous cloud in India, called Simply Cloud, in the year 2009.
00:29:14
Speaker
So you raised $4 million through an angel round to set up the data center business. By the time you were setting up the cloud business, had you raised more funds, or were you able to continue expanding through the cash flows of the business? That's a great question. So we raised the first $4 million; it was a combination of venture capital and an angel
00:29:35
Speaker
round. And then before we could raise the next round, as I said, the dot-com bubble burst. So we then ran on the cash flows of the business. Our business became cash flow positive within 13 months.
00:29:50
Speaker
And obviously, we did not waste money doing massive ads and all, which was the case in those days. People spent a lot on marketing, and you could see the dot-com ads on billboards and such. We did not waste any money.
00:30:05
Speaker
You were in the B2B space, targeting only enterprise clients, right? You didn't even need to go to mid-level companies.
00:30:18
Speaker
Yeah. So our initial customers came through our venture capitalists. For example, eVentures was our first investor, and they had some dot-com investments who also ended up hosting with us.
00:30:32
Speaker
So we also had some of those bad debts, but we focused more on enterprise and gradually built the business. When we knew we couldn't expand our data center footprint till we got more capital, we started offering more value-added services, the managed services I was talking about: operating system management, database management, web server management, security management, network services, selling more internet bandwidth, et cetera. So we did all of that.
00:31:04
Speaker
And in 2008, we actually raised our second round of funding through Fidelity and Nexus. Nexus bought over the stake from eVentures and then put in some more money. We raised $15 million in 2008, and we set up two more data centers with the money that we raised.
00:31:18
Speaker
Of course, cash flows were also very positive; we were hugely profitable when Nexus and Fidelity decided to invest in us. And then we raised our third round in 2010 through strategic investors like Cisco and Nokia.
00:31:31
Speaker
So in addition to Fidelity and Nexus, we had Cisco and Nokia. And then finally, in 2012...
00:31:38
Speaker
How much was the third? Pardon? The third round was $12 million. And then in 2012, NTT acquired us and took a majority stake.
00:31:50
Speaker
I still kept my shares, but NTT bought out all my other investors. They gave an exit not only to my investors but also to my employees. And in 2017, they acquired my balance stake because, at that time, they were thinking about how to merge their multiple businesses together.
00:32:08
Speaker
So that was the journey. Okay. Why doesn't India have any Indian-born hyperscalers? All the hyperscalers are American, or Chinese companies also, I guess, but there are no Indian hyperscalers.
00:32:27
Speaker
Yeah, hopefully we'll change that in the AI space. So my second venture, Neysa, I started in July 2023. I continued at NTT until June 30th, 2023, and now I'm on the board of the India company, but I'm not an employee. And I set up Neysa in 2023.
00:32:51
Speaker
And so at Neysa, we're looking at building an AI cloud platform which we want to scale across the world. So yes, you're right, so far we haven't had any major... But why, is it because of demand? Because the Indian demand was very small? If a US company can set up data center infrastructure in other markets, including India, Europe, Asia, everywhere in the world, there's no reason why an Indian company can't do that.
00:33:28
Speaker
But a lot of it is to do with the entire ecosystem. If you look at it, Amazon started, Microsoft then caught up, and Google was a late entrant, but even Google has managed to catch up.
00:33:43
Speaker
They've spent billions of dollars in doing this. Of course, it's hugely profitable for them also, but I'm saying they've spent a lot of money doing it.
00:33:54
Speaker
And they've got a few thousand engineers working on automation, et cetera. So there's no reason why an Indian company cannot do it, and hopefully that will change in the AI space.
00:34:05
Speaker
But after a point, as their footprint grew, the barrier to entry kept increasing as they proliferated. Even for Google, it took a while before they could become a serious contender in the space.
00:34:21
Speaker
And the Chinese hyperscalers have done very well in China. I'm not sure how much success they've had outside China, but obviously China is a huge market for them. An Indian hyperscaler just in India, yes, you may be right. For example, in the AI space, India's AI footprint is much smaller than that of the US and China currently.
00:34:42
Speaker
Hopefully that'll change over a period of time, but there's no reason why you can't replicate what you do in India across the world. It's just difficult to break into: it requires a lot of capital, and it's difficult to break into an already established ecosystem, which is why there's inertia. People have rather said, okay, rather than me trying to fix something when there are already multiple global players doing it,
00:35:05
Speaker
why not look at how I can leverage those players and build something else? That's the global inertia, right? Okay. Why did you decide to sell to NTT? That's a great question. I'll tell you what started happening in 2011-12.
00:35:21
Speaker
There were a series of M&A transactions in our space. There was an acquisition in Brazil, and there was a company called 21Vianet that went public in China.
00:35:34
Speaker
And so naturally, as these global players were looking to expand into other markets, India, Brazil, and China were three of the hottest markets.
00:35:45
Speaker
So people started looking at India. In India at that time, you either had the big players that were already public, like Tata and Airtel and Reliance, or you had a mid-sized player like us. We were probably the only player of reasonable scale that was not public.
00:36:03
Speaker
So we were the only player of reasonable scale that had multiple data center locations and a whole suite of managed services offerings, including our cloud offering, but that was not public.
00:36:15
Speaker
By the way, we were looking to go public. We had held board meetings to say that in two or three years, once we reached a certain revenue threshold, we should go public. But suddenly we started seeing inbound offers.
00:36:27
Speaker
We had multiple companies approach us asking whether we'd be interested in doing something with them. I met one company, then two, and then I went to my board and said, look, I'm getting all this interest, but I just don't have time to meet everybody, go through all the song and dance, and also run my business.
00:36:46
Speaker
At that time, as I said, we were internally discussing when we could take the company public. So the investors said, look, why don't we hire a banker and let them run a process, so your time is saved.
00:36:58
Speaker
And if you can get a valuation like what you would get if you were to go public, then why not, right? Yeah, it's safer.
00:37:10
Speaker
Yeah, so why don't we do that? So that's what we did. We hired Credit Suisse as the banker, and they ran a process; they had about 12 parties interested in us.
00:37:22
Speaker
Finally, after some discussion, we shortlisted it to two: a large US private equity player and NTT of Japan. And why we selected NTT was that we found a lot of similarities in our aspirations. I didn't want to exit the business at that time, and I found that with NTT, I could continue to grow the business myself.
00:37:43
Speaker
Plus, NTT was giving me a platform to take some of our services global. By the way, in 2022, I was running the global business of NTT as well. And we just liked the cultural match. I still remember the day I met NTT for the first time in Bombay, in our data center: there had been a series of bomb blasts in the city, and I thought they'd cancel the meeting, but they still came. They were not fazed by that. A lot of such things. And when I spoke to my co-founders and the people who had interacted with NTT, there was a lot of comfort working with them, which is why we selected NTT over the US private equity player.
00:38:34
Speaker
Okay. You said it's hard to compete with the hyperscalers, that there are a lot of barriers to entry and you need to spend billions of dollars. Is this spent on the infra? Like, you need to have real estate and hardware.
00:38:53
Speaker
No, no, it's infra and it's people, right? They've got an army of close to 10,000-plus people working on this cloud automation, and they've been doing it for so many years. So how do you just overnight catch up to that? That's number one: they've built these barriers over a period of time. There's also the infra, the kind of infra that they've scaled across the world.
00:39:11
Speaker
So you need a lot of investment. It's not that you need billion-dollar infrastructure on day one; you obviously grow into the business. But it is difficult. Plus, over a period of time, Amazon, Google, and Microsoft have built other services that they cross-subsidize.
00:39:26
Speaker
Microsoft cross-subsidizes the licensing. Google will give you access to YouTube and search engine optimization, et cetera. Amazon will also give you a whole bunch of other benefits beyond just the infrastructure. So it's non-trivial.
00:39:45
Speaker
I personally felt NTT could have taken our cloud, Simply Cloud, from India to other parts of the world. They decided not to, and decided to resell instead. While our data center business did really well, the cloud business was limited just to India.
00:40:01
Speaker
I personally felt NTT should have attempted to take that across the world. Okay. What are the automations that cloud providers provide?
00:40:13
Speaker
Couldn't you just get a white-labeled software solution to put on top of your servers and offer the same thing AWS offers? No, no.
00:40:28
Speaker
So there is, for example, the orchestration, where you provision a virtual machine or a container and then allocate resources to it. Yes, there are providers that sell that software, which you can directly deploy on the servers, but that's not all. There's an entire platform layer on top that these guys have built over a period of time, and not everything that has been built is available as third-party software.
00:40:55
Speaker
So your Simply Cloud business, how did you manage to scale it when you didn't have the kind of technical workforce to build these automations that the hyperscalers offer? The way we did it was 80-20. If there are 100 features that the hyperscalers have, what we found is that 90% of customers only use 10% of the features, or 80% of customers only use 20%.
00:41:25
Speaker
So we focused only on that limited set of features. Obviously, there were some customers we could not target because we didn't have certain features. So we also had alliances with Amazon, Google, and Microsoft: if we found that a feature the customer wanted was not in our portfolio, we would resell that.
00:41:40
Speaker
Otherwise, our objective, obviously, was to sell our own first. A typical customer didn't care where you hosted it as long as you did the job right and gave a service level agreement.
00:41:54
Speaker
In a lot of cases, we were able to fulfill customer requirements with just 10% of the features that an Amazon, Google, or Microsoft would have. And that's how we scaled that business. We have a lot of customers, enterprises and startups, still using our cloud to date.
00:42:11
Speaker
So in 2012, you were at roughly 150 crore in revenue. How much of that was the cloud business, and how much was data center? Data center was, I would say, the bulk of the revenue. If I were to split that revenue into four buckets: data center, cloud, managed services, and network, roughly...
00:42:38
Speaker
Network would be 10%. I think data center would be 50%, managed services would be 25%, and the balance would be cloud. Okay, 15%. Got it.
00:42:50
Speaker
So once the NTT investment came in, how did it evolve? You focused more on... Then we started building data centers in multiple locations. We used the investment.
00:43:00
Speaker
First, we had used the Fidelity investment to build Chennai and Bangalore. Then we used the NTT investment to build larger data centers, because as the footprint grows larger, some of the overhead gets spread over a larger area, and so it becomes more cost-effective.
00:43:13
Speaker
More energy-efficient, larger data centers. So when I left NTT, we had 19 data centers with close to 300 megawatts of IT load in production. Clearly India's number one: out of India's 1.2 gigawatts, we were close to 300 megawatts, almost 25% of India's capacity.
00:43:34
Speaker
Nineteen data centers. So we used the money mainly to build our data center capability. Of course, we grew the team as well and launched a whole bunch of new services.
00:43:46
Speaker
We also offered remote managed services to other NTT companies across the world. NTT had clients like Sony and others across the world that we were managing from India.
00:43:57
Speaker
So there was a lot of synergy. We used some of their best practices when we were building our data center at Noida. NTT shared best practices on how they build data centers in an earthquake-prone region like Japan.
00:44:09
Speaker
So we used some of those best practices. We used dampers to mitigate the risk of an earthquake, so that the whole building can shift if there's an earthquake.
00:44:20
Speaker
So they shared some best practices with us, we shared a lot of our knowledge with them, and we built a very sizable business in India. Combining network, cloud, data center, and managed services, by the time I left, it was between 400 and 450 million dollars.
00:44:41
Speaker
Wow, that's amazing. That's massive growth from 2012. Okay. So NTT is largely a data center company; they didn't want to be a cloud company?
00:44:55
Speaker
No. NTT in Japan started out as a telecom company. They were the equivalent of BSNL and VSNL in India: the national domestic carrier as well as the international carrier of Japan.
00:45:11
Speaker
And then they got into data centers, and now they're a full-stack company. They've got NTT Data, which does software and application development, including in AI. They've got NTT Global Data Centers, which runs the data center infrastructure. They've got the second or third largest submarine cable network for network services, and they're a tier-one ISP, one of the largest ISPs in the world. They also have a system integration business; they acquired a company called Dimension Data, so they also provide hardware and manage it for clients. So there's a whole managed services and hardware business as well.
00:45:51
Speaker
Okay, got it. And so what was the itch that you could not scratch within NTT that made you want to step out?
00:46:07
Speaker
So I'll tell you. NTT was very nice to me, not only in terms of the acquisition, where we got a fair price; I think it was a win-win because NTT benefited a lot too. It was a very good relationship. I've still continued on the board because of that great relationship with NTT.
00:46:25
Speaker
And by the way, NTT has also invested in my new venture. So, my last role at NTT: one of the things NTT did was separate the asset-heavy business from the asset-light business. The managed services, the integration, and the software development became one piece, although everything came under a global head. The asset-heavy side, the data center co-location business and the submarine cable business, they made me in charge of for the entire world. I reported to Abhijit Dubey, NTT's global head; he was running NTT outside Japan, and I was running the data center division and the submarine cable division under him.
00:47:20
Speaker
Till about 2021, I was running everything, including the data centers and the managed services. But from 2022-23, I ran only the asset-heavy business. Now, around late 2022, early 2023 is when ChatGPT became popular and AI took the world by storm, right?
00:47:43
Speaker
And we were getting requests from customers on our Simply Cloud offering for AI workloads. They were saying, can you provide GPU infrastructure as well?
00:47:54
Speaker
Right. At that time, providers like CoreWeave had emerged, in 2023, and I tried to socialize this within NTT, saying, should we look at doing this?
00:48:13
Speaker
They weren't sure, right? Even today, the world is divided on whether that's a good business model or not, and only time will tell how good a model it is. But maybe because of their conservative nature, they decided not to do it.
00:48:26
Speaker
And that's one of the reasons I said, okay, maybe I can come out of NTT and start something. Also, because my role was only co-location and
00:48:37
Speaker
submarine cable, I was not able to participate in a lot of the IT side and all the happenings in AI. And I didn't want to leave that behind, because it's something close to my heart.
00:48:50
Speaker
And therefore I said, okay, maybe I can come out and start this on my own. Fortunately, when I decided to start something, based on the success of NetMagic and NTT, I had investors willing to back me without anything on the ground.
00:49:04
Speaker
And so Matrix India, which is now Z47; Nexus, which was an early investor in NetMagic; and NTT VC, NTT's venture capital arm, backed us.
00:49:15
Speaker
We did two rounds: one in March 2024, a seed round of $20 million. At that time, we were focusing more on the software piece: the orchestration, the platforms, et cetera.
00:49:27
Speaker
And we bought some GPUs with that money. When we saw good traction, we raised another round from the same investors in October, another $30 million, and set up larger GPU infrastructure.
00:49:40
Speaker
Our GPU servers are actually housed in the NTT data center itself. Okay. The biggest customers for data centers would typically be cloud companies, right?
00:49:54
Speaker
They would have the maximum demand for data centers. Yeah, hyperscalers are the anchor customers. Even for NetMagic in India, the biggest customers were the cloud hyperscalers.
00:50:11
Speaker
So hyperscalers came to India in 2015. That's when data centers became mainstream in India. Till 2015, there were largely five companies driving the data center business in India.
00:50:22
Speaker
We were focusing more on enterprise customers. So we had banks, financial institutions, insurance companies, broking houses, manufacturing companies, pharma companies, media companies, and a lot of startups as our clients. But when the hyperscalers came, the floodgates opened. Between 2015 and 2017, Amazon, Google, and Microsoft all entered India, and the data center industry became one of the hottest in India, growing very fast.
00:50:51
Speaker
One of the fastest growing among the top five markets in the world, in terms of growth. Do these hyperscalers also set up their own captive data centers, or do they typically work with others? Yes, they do both.
00:51:07
Speaker
They do their own captive. They have the teams and are totally capable of building their own, but they also use third-party data centers because of the rate at which their services are growing; they can't fully do it captive, because speed is important. Sometimes, say, they're building a new facility and suddenly they need a large amount of compute infrastructure deployed for their clients, so they need space for that.
00:51:32
Speaker
And so then they would contract with a third-party provider like NTT. I think internet traffic is growing exponentially, right? Data and traffic are both growing exponentially. Okay.
00:51:45
Speaker
Okay, interesting. So what is the difference between a regular cloud hyperscaler and an AI cloud? Okay. The hyperscalers offer both in their cloud offering. For example, in addition to the platform I described earlier, Amazon also offers SageMaker and Bedrock, where you can build your models, train them, and run inference on them. You can also get inference endpoints: for example, a Llama model or any open-source model, like Llama, DeepSeek, et cetera, available as a service that you can then directly use.
00:52:21
Speaker
Right. So Amazon, Google, and Microsoft offer both traditional compute as well as AI compute.
00:52:32
Speaker
And they offer AI training, fine-tuning, inferencing, everything, with end-to-end platforms to enable all of these. What we're doing at Neysa is focusing only on AI workloads.
00:52:44
Speaker
So we're not trying to do everything. We're focusing on AI workloads, and we have a flexible model where we offer private clusters for high-performance computing, but we also have public offerings so that people can collaborate, startups can use it, et cetera.
00:52:59
Speaker
So NetMagic didn't have the AI offering; we had the cloud offering, but we didn't have GPU compute. One interesting thing we've now done in India is connect our GPU compute through APIs to the old NetMagic Simply Cloud.
00:53:21
Speaker
So now Simply Cloud can leverage our GPUs. If Simply Cloud wants to white-label us and offer that as a service to their clients, they can leverage our GPUs. We are in the same data center.
00:53:32
Speaker
And so we've done a cross-connect between the two clouds. If a Simply Cloud customer wants GPUs, they can orchestrate our GPUs right from their portal.
00:53:46
Speaker
Got it. Okay. So the difference between regular cloud and AI cloud is essentially just GPU versus CPU? Regular cloud has CPUs, AI cloud has GPUs? On the infrastructure side, yes, it is the GPU infrastructure.
00:53:59
Speaker
Also, the networking for that is different: you may use InfiniBand, or even if you use Ethernet, it's a slightly different way of doing networking. That's on the infrastructure side. The storage we use is also slightly different; we typically use popular vendors like VAST and WEKA in our storage infrastructure.
00:54:24
Speaker
But on the infrastructure side, you're right. I could use the same storage even for my traditional cloud. The main difference, from an infrastructure perspective, is the GPU.
00:54:36
Speaker
But then the platforms are different as well, because the orchestration platform is different. The entire machine learning operations, or MLOps, pipeline can be offered as a service, where a person can ingest data, then select a model, train the model, fine-tune it, and then
00:54:54
Speaker
do inferencing. That entire process is automated. And then you have another platform, a low-code/no-code platform, where a business user, without writing any code, can leverage an LLM of his choice and actually build a Gen AI app. So the platforms will be different. And from an infrastructure perspective, you're right, the main difference is providing the GPU compute.
00:55:21
Speaker
Okay. Let's take the example of ChatGPT. How does ChatGPT manage their hardware? Do they have their own GPUs, or do they work with, say, Azure? I'm assuming, since Microsoft is their investor, that Azure provides it.
00:55:40
Speaker
They have a combination. They work with Azure, and now they also have GPUs with CoreWeave, where they rent large numbers of GPUs. CoreWeave is a neocloud provider like us in the US; of course, they're the largest in our space.
00:55:57
Speaker
And so they use a combination of Azure and CoreWeave to run their ChatGPT application on top of. Obviously, they have their own machine learning engineers, their own people who do the fine-tuning and the training.
00:56:18
Speaker
And these are tens of thousands of GPUs, if not hundreds of thousands, that they've deployed over a period of time to train their various models. And then they do the inferencing also on Azure and CoreWeave.
00:56:33
Speaker
I think in some cases Microsoft has taken space from CoreWeave and put the OpenAI infrastructure on top of that, and in some cases OpenAI has directly contracted with CoreWeave, if I'm not mistaken.
00:56:47
Speaker
Okay. What is the life cycle of building an AI model? You used a bunch of terms, machine learning operations, training, fine-tuning, inference; just help me understand these terms. When you build a machine learning model, first of all, you need data. You need to ingest the data, then clean and normalize it.
00:57:10
Speaker
Then you do feature selection: there are different features, and you ask which features are going to make the model relevant. Then you do model selection, which model to use. There are some well-defined models you can use, or you can build a model from scratch.
00:57:29
Speaker
And then you do a training run. You may need multiple training runs, depending on the accuracy you get. And finally, you get an inference endpoint, which you then run as a service. So you take that endpoint.
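The lifecycle just described, ingest, clean and normalize, select features and a model, train, evaluate, and expose a prediction function, can be sketched end to end with a toy model. This is pure Python with invented data; a real pipeline would use an ML framework and the GPU infrastructure discussed in the interview:

```python
# Toy end-to-end ML lifecycle: ingest -> normalize -> train -> evaluate -> predict.
# The data and model are deliberately trivial; real training runs on GPU clusters.

# 1. Ingest: raw (feature, label) pairs, here following y = 2x.
raw = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

# 2. Clean/normalize: scale features into [0, 1].
max_x = max(x for x, _ in raw)
data = [(x / max_x, y) for x, y in raw]

# 3. "Model selection": a one-parameter linear model y = w * x_scaled,
#    fit by the least-squares closed form w = sum(x*y) / sum(x*x).
def train(pairs):
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    return num / den

w = train(data)

# 4. Evaluate: mean absolute error over the training set; if it were too
#    high, we would do another training run (tweak features, model, data).
mae = sum(abs(w * x - y) for x, y in data) / len(data)

# 5. "Inference endpoint": the trained artifact wrapped behind a function
#    that new requests can call.
def predict(x: float) -> float:
    return w * (x / max_x)

print(round(predict(5.0), 2))  # 10.0
print(round(mae, 6))           # 0.0
```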
00:57:46
Speaker
Again, inference endpoints can be offered in two ways. I can do a dedicated inference endpoint or a shared inference endpoint, where I offer, for example, DeepSeek as a service and anybody can use it.
00:58:00
Speaker
Or I could do a DeepSeek or a Llama model just for an HDFC Bank, so they can fine-tune it there, put in their own data, and run their own app.
00:58:14
Speaker
The advantage of that is twofold. One, it satisfies the regulatory requirement, because it's captive and the data is not going somewhere else. And secondly, because it's open source, it becomes very cost-effective as well.
00:58:28
Speaker
You're not paying the huge cost per token that you would pay, for example, to OpenAI. But obviously, since OpenAI is a larger model, its accuracy may be higher. So in some cases people use OpenAI, and in some cases people prefer open source. My gut feel is that over time, 60 to 70% of training is going to be on open-source models, because open-source models are also becoming very, very good.
00:58:54
Speaker
So whether it's DeepSeek, Llama, or Qwen, they're all doing really well. Okay. What is an inference endpoint?
00:59:05
Speaker
So once you train a model, you get an executable, right? That is something I can deploy on some GPU cluster and make available, so people can access it using an API. For example, the ChatGPT application runs in a cluster somewhere, and my app connects to that cluster, sends a prompt, and gets a response.
00:59:31
Speaker
So the output of the training is an executable that I can run as an inference endpoint. At the end of the day, it's a prediction; inference means prediction. You give a prompt and you get a response back, right?
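Concretely, "prompt in, prediction out over an API" usually looks like an HTTP call to the endpoint. Here is a hedged sketch of the client side: the URL, model name, and response shape imitate common chat-completion-style APIs but are invented for illustration, and no network request is actually made (a canned response stands in for the cluster):

```python
import json

# Hypothetical inference endpoint details -- illustrative only.
ENDPOINT_URL = "https://example-ai-cloud.invalid/v1/chat/completions"
MODEL_NAME = "llama-3-8b-instruct"  # assumed model id

def build_request(prompt: str) -> str:
    """Serialize a prompt into the JSON body an endpoint would receive."""
    return json.dumps({
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
    })

def extract_reply(response_body: str) -> str:
    """Pull the model's text out of a chat-completion-style response."""
    payload = json.loads(response_body)
    return payload["choices"][0]["message"]["content"]

body = build_request("What is an inference endpoint?")

# A canned response, standing in for what the GPU cluster would return.
canned = json.dumps({
    "choices": [{"message": {"role": "assistant",
                             "content": "A hosted model you query over an API."}}]
})
print(extract_reply(canned))
```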
00:59:43
Speaker
So I run that on some infrastructure, and that's what I mean by an inference endpoint. So what is the cost differential for a business? Let me not take the example of banks, because there is a data localization regulatory requirement. But let's say I'm running a Zomato, and for customer service we are replacing customer service agents with a chatbot.
01:00:16
Speaker
What is the cost implication of just using an OpenAI API to build my chatbot versus open source, like training something on Llama and then using that?
01:00:28
Speaker
Great question. Again, this will change over a period of time because cost per token is coming down. But what we've noticed is, it matters when you scale. If I have just a few queries, then why take the headache of renting infrastructure, deploying a Llama endpoint, building my app, et cetera? Just straight away
01:00:53
Speaker
send an API call to OpenAI and do your job. But as you scale, right? The cost per token, and a token could be a full word or a partial word, different people define tokens differently.
01:01:07
Speaker
The cost per token adds up to a lot when you have billions of tokens, right? I don't know what the current pricing is because it keeps coming down, anything I tell you may have already changed last night, but the cost per token becomes pretty significant if you use a paid model like OpenAI. Gemini is cheaper than OpenAI, but even that's pretty significant.
01:01:34
Speaker
If I'm using an open source model, then I'm just paying for the infrastructure. The cost per token will typically be lower because the OpenAI charge or the Gemini charge is not there, right? So it's just the infrastructure cost.
01:01:48
Speaker
So when the number of requests or number of tokens is not huge, it just doesn't make sense for people to build everything themselves. But if they believe that over a period of time they're going to use billions of tokens, then it's probably better to use an open source model with some dedicated infrastructure.
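The break-even logic described here can be sketched with a toy calculation. All prices below are hypothetical placeholders (real per-token prices change constantly, as noted above); the point is only the shape of the comparison: per-token API cost grows linearly with usage, while rented infrastructure is roughly a flat cost.

```python
# Hypothetical illustrative prices, NOT real quotes.
PAID_API_PRICE_PER_1K_TOKENS = 0.002    # $ per 1,000 tokens via a paid API
MONTHLY_INFRA_COST = 20_000.0           # $ per month for rented GPU infrastructure

def paid_api_cost(tokens_per_month: int) -> float:
    """Usage-based cost: scales linearly with token volume."""
    return tokens_per_month / 1000 * PAID_API_PRICE_PER_1K_TOKENS

def self_hosted_cost(tokens_per_month: int) -> float:
    """Flat infrastructure rent (capacity limits ignored in this sketch)."""
    return MONTHLY_INFRA_COST

for tokens in (1_000_000, 100_000_000, 20_000_000_000):
    cheaper = "paid API" if paid_api_cost(tokens) < self_hosted_cost(tokens) else "self-hosted"
    print(f"{tokens:>14,} tokens/month -> {cheaper}")
```

With these placeholder numbers the crossover sits at 10 billion tokens a month: below it the paid API wins, above it dedicated open-source infrastructure wins, which mirrors the "billions of tokens" threshold in the answer.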
01:02:14
Speaker
So in this example, say Zomato felt that for their customer service they will need billions of tokens. What would they do?
01:02:25
Speaker
What we would ask them to do is, we would provide them some infrastructure with an open source model. They may have to change their prompts a little bit, because the exact same prompt that works really well on OpenAI may not work as well on Llama, for example. There are actually prompt translators that Meta provides for this. And if they got reasonable output from the open source model, then they could just rent the open source endpoints from us, either as a dedicated or as a shared endpoint, and they would find that the cost per token would be significantly lower than what they would get from an OpenAI or even a Gemini.
01:03:14
Speaker
For the development team at Zomato, it's the same effort? Like, with OpenAI, there's just an API call.
01:03:25
Speaker
Similarly, there will be just an API call. They could do API calls or they could just do prompts. It depends. Do they need to know how to train a model?
01:03:40
Speaker
Like, is there that additional one-time investment? No, they don't have to train a model. But just as with OpenAI, they can fine-tune models. They don't do any foundational model training, but they can do fine-tuning.
01:03:52
Speaker
So they could do fine-tuning, or they could use something like RAG, where they add their own data to bring in the right context. Rather than training on the entire universe, you train on your contextual data.
01:04:11
Speaker
So a combination of techniques like RAG and fine-tuning is something that they could use too. And they could do that with OpenAI or with an open source model. What is an SLM, a small language model?
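The RAG idea mentioned here, retrieving your own contextual data and putting it into the prompt instead of retraining, can be shown with a deliberately tiny sketch. Real systems retrieve with vector embeddings and a vector database; the keyword-overlap scorer below is a toy stand-in, and the documents are made up for illustration.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context so the model answers from company data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Refunds are processed within 5 business days of cancellation.",
    "Delivery partners are assigned automatically by the routing system.",
]
print(build_rag_prompt("How long do refunds take?", docs))
```

The resulting prompt can be sent unchanged to either a paid API or a self-hosted open-source endpoint, which is why RAG combines with both options.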
01:04:26
Speaker
The number of parameters is much smaller. Today, OpenAI's latest models are trained with trillions of parameters, right? Or 671 billion, 700 billion parameters, et cetera.
01:04:38
Speaker
Whereas you could have a small language model with 3 billion parameters, 1 billion parameters, and it may be good for a particular task.
01:04:49
Speaker
It may not have to go through the entire universe of data and have that many parameters. It could do a small task really well, right?
01:05:03
Speaker
The number of parameters that the model uses is what determines whether it's an SLM or an LLM. Okay. Got it. Understood.
01:05:14
Speaker
So at Neysa, what all do you abstract away for your customers? A lot of what cloud businesses do is abstract away complexity, make things easy. The complexity of the network, the storage, the GPU infrastructure, configuring it, getting it up, et cetera, is completely abstracted.
01:05:36
Speaker
You can provision a virtual machine or a container or even a bare-metal GPU; the entire provisioning is automated. You can scale it up, you can scale it down, right? You can add more storage, reduce the amount of storage, et cetera. All that can be completely orchestrated. Then from a platform perspective, we can give you a shared workspace.
01:05:59
Speaker
So if you want to do a collaborative project with multiple AI researchers, that can be done very easily. We can give you Jupyter Notebooks so that you can write your Python code, again in a collaborative workspace environment. And once you finish training, you can just press a button, it creates the endpoint, and with a simple user interface you can publish that inference endpoint and start doing the inferencing.
01:06:26
Speaker
So the entire journey is automated. And we have two developer platforms: one for the entire machine learning operations process, right from data ingestion to feature selection, to model selection, to training, to fine-tuning, as well as a low-code, no-code platform that allows you to build GenAI apps by selecting the right LLM.
01:06:48
Speaker
We have a series of LLMs that you can choose from, some hosted by us, which are open source, and some through an API call to OpenAI, that you can use to build GenAI apps. So the entire underlying infrastructure complexity is abstracted. Okay. So one product is this, which you can use to build a GenAI app. What's the second product?
01:07:10
Speaker
The machine learning one, if you want to build a machine learning model of your own. For example, we have a customer building a machine learning model for fraud detection. Now, they may or may not use an LLM for that. They may use their own model.
01:07:25
Speaker
There's a client that is using one of our models for drug discovery. Again, they're not necessarily using a standard algorithm. They may use something like BERT and do some fine-tuning by changing the last layer of BERT, et cetera. So these are examples of people using our platform. And then we have people using bare metal: some research and education institutions are just using bare-metal infrastructure, which again we provision in an automated fashion. It's very easy for us to provision a whole bunch of GPUs, create a cluster and give it to a client
01:08:00
Speaker
so that they can then use it to run some high-performance computing workload. It could be training a foundational model. Got it. Okay. So the NetMagic business was an asset-heavy business. You would probably be spending a lot of time in procurement, be it real estate, be it hardware, et cetera. What is the Neysa business like? Is this also asset-heavy? So at Neysa, the software piece, the platform that we are building, is obviously not as capital intensive. But the GPU infrastructure that we build, that is capital intensive. So are you working with third-party data centers, or do you have your own data center?
01:08:46
Speaker
No, we are working with NTT, so a third-party data center like NTT. But we could use any data center that we want. We don't have to use NTT, but obviously given the relationship and given the comfort, because I also know the entire team that's doing this, it's the team that I built. And we believe they're the best in India, so we've used NTT as our data center partner.
01:09:08
Speaker
Okay, so you're using their colocation service? Yes. Got it. So here the procurement that you have to do is not the real estate part of it, because you're using colocation. We're not doing any real estate procurement. We're not doing any mechanical-electrical.
01:09:23
Speaker
We're not building any facility with air conditioning, et cetera. We're just renting that from NTT. We're procuring the GPU servers, the storage, the network, the load balancers, the firewalls,
01:09:37
Speaker
setting it all up together, putting our software layers on top and then offering it as a service. Okay. It seems to me like the data center business is a supply-first business. Like, it's a business where the founder needs to solve for supply rather than solving for demand. Do you agree with that?
01:09:56
Speaker
That you build supply, and the demand is there. Yeah, true. At this stage, in certain areas, especially in cities like Bombay, the demand is more than the supply that is there,
01:10:10
Speaker
because it takes 15 to 18 months to build a large facility. So even though there are now multiple players, the demand outweighs the supply.
01:10:25
Speaker
It may not be across India; maybe in some markets there's not so much demand. But in markets like Mumbai, Bangalore, Chennai, the demand is more than the supply. Yeah, that's absolutely true.
01:10:39
Speaker
So what skills does a founder develop in that kind of a business, which is supply-constrained rather than demand-constrained? Well, it's efficiency in executing, building the right team.
01:10:54
Speaker
Finance is important, because you need to raise a lot of capital and do it most efficiently. Then execution: timely execution is important, so project management skills, et cetera, become very important.
01:11:07
Speaker
So it's a combination of HR, finance, execution. Operations is important. It's a very, very machine-driven business. A single second of downtime can bring down the entire business. You can't have a data center go down.
01:11:19
Speaker
So attention to detail, making sure that the operational processes are streamlined and
01:11:29
Speaker
top-notch, becomes very important. So yeah, all of these are very important, even though it's a supply-constrained business. And you have to constantly be looking out for the latest, because technology changes.
01:11:52
Speaker
You want to make it as sustainable, as energy-efficient as possible, and designs change. So you need to also do some R&D, if you will, where you're constantly testing new technologies. At the same time, to make it sustainable, in the data center business we built a lot of captive solar and hybrid wind-solar plants.
01:12:12
Speaker
That's something we did with partners, because it was not our core competence. So there's a fair amount of work in all of the above. There's the entire M&E piece.
01:12:28
Speaker
Then, of course, there's the network. How do you scale the network? How do you interconnect your data centers in the most efficient manner so that you can offer services across data centers as well? There may be some services I'm offering from one data center, but I can seamlessly make them available across multiple data centers.
01:12:44
Speaker
So there are many such things that we have to do. Yes, but I agree with you. Currently, in the data center business, the demand is more than the supply. I wouldn't say supply-constrained.
01:12:56
Speaker
There is supply also, but I think the demand outstrips the supply in certain pockets. Okay, interesting. Is it the same for the AI cloud business?
01:13:07
Speaker
So I believe it will get there. I think it will very soon become that. If I compare my journey with Neysa to my journey with NetMagic, it's very similar. Initially, when we set up data centers, it took a while before there was widespread adoption.
01:13:26
Speaker
At that time, we had to deal with different things. People were scared of outsourcing to a data center. They wanted everything inside their premises. And so we had to completely shift the mindset, and that happened over a period of time.
01:13:38
Speaker
Banks were building their own captive data centers. They were not comfortable outsourcing their core banking infrastructure. But I saw that shift happening over the years. And once that happened, and once the hyperscalers came, it became huge, and it became a business where demand outstrips supply.
01:14:01
Speaker
In the AI space also, I believe this will happen. It's early days. There's awareness, but people in enterprises are still doing proofs of concept. 70% of all enterprise workloads are POCs.
01:14:13
Speaker
They've not yet started. There is some amount of adoption when it comes to things like fraud detection, customer service, et cetera. But there is still a long way to go.
01:14:27
Speaker
And once the models become better and there's more awareness, I think this business will also become like that. If you look at the US,
01:14:41
Speaker
for service providers like CoreWeave, demand is outstripping the supply. So I suspect that something similar will happen in the next 18 to 24 months in India.
01:14:54
Speaker
But right now, I would not say it's supply-constrained. Okay. So you said fraud detection, customer service, these are low-hanging fruits where enterprises are adopting AI. What are the next not-so-low-hanging fruits where you think there can be massive AI workloads? So, there are two broad categories. I won't go into specifics. One is to automate their internal operations, right?
01:15:28
Speaker
Instead of hiring agents, you want to do document summarization. These are things that people have already started doing, right? But when they want to do something
01:15:39
Speaker
where they want to change the way they're doing business, or fundamentally innovate in a way that gives them an edge over competitors, that is something I'm not seeing happen as much. People are piloting it, but not taking it further. Even the voice bots, right? There was a client that did a pilot with us where they were doing loan collection calls using a voice bot,
01:16:03
Speaker
but they didn't take it to production yet. Even though it saved them significant cost, 67%, from 15 rupees a minute down to five rupees a minute, they haven't yet taken it to production, because they need to make sure the accuracy is high. In some cases, there may be a regulatory thing that they need to overcome.
01:16:22
Speaker
But it's only a matter of time. I think it will start happening. You've already started seeing it happen in the West, so it'll start happening in India as well.
01:16:32
Speaker
We are seeing a lot of training happening with the research and education institutes. We are seeing startups use it a lot. We are seeing the early adopters in insurance and banking already using it.
01:16:47
Speaker
But for widespread adoption, you need to have a proper team within the financial institution or the enterprise that understands what they can do with AI.
01:16:59
Speaker
A lot of people are going through that journey, they're doing pilots, but I think the pilot-to-production shift will probably happen in the next 18 to 24 months. And once that happens, I won't be surprised if the demand outstrips the supply.
01:17:13
Speaker
Okay. Interesting. So in this AI cloud business also, capital is a moat, right? The more capital you're able to raise, the more capacity you will have, and more capacity means your overheads get distributed, so you're able to offer a better price. Yeah, but you have to be careful, right? Because GPUs become obsolete fast. So yes, capital becomes important, because if you have access to capital, you can build larger infrastructure, and with larger infrastructure you can get larger clients coming to you.
01:17:46
Speaker
But at the same time, it shouldn't be the case that you build so much inventory that you don't have enough usage. In a data center, even if you went a couple of years without usage, it didn't matter, because that infrastructure didn't depreciate much; it was mostly mechanical-electrical. Here, this is high-end GPUs. In two years, Nvidia would probably have launched four generations of GPUs, right? So you have to be a little careful here. You need to make sure utilization is there. So a pure speculative build-out of GPU infrastructure,
01:18:20
Speaker
you can't do blindly. You need to have some anchor again here. So the common thing in both businesses is that they're very capital intensive and both require anchor clients. But I would be a little more cautious in the way I deployed GPUs than in the way I built data centers, because in the case of data centers, look, real estate in India only appreciates, it doesn't depreciate.
01:18:46
Speaker
And the mechanical-electrical equipment actually has a huge lifetime, right? Data centers can typically depreciate mechanical-electrical equipment over 15 years, but server infrastructure is typically depreciated within five years.
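The 15-year versus five-year depreciation gap mentioned here is what makes GPU utilization so urgent, and it can be made concrete with straight-line depreciation. The capex figures below are hypothetical, chosen only to show the ratio.

```python
def straight_line_annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line method: the same expense hits the P&L each year of life."""
    return capex / useful_life_years

# Hypothetical equal capex in both asset classes, purely for illustration.
mne_capex = 15_000_000.0  # mechanical-electrical plant, ~15-year life
gpu_capex = 15_000_000.0  # GPU servers, ~5-year life

mne_annual = straight_line_annual_depreciation(mne_capex, 15)
gpu_annual = straight_line_annual_depreciation(gpu_capex, 5)

# Identical capex burns through the books three times faster for GPUs,
# so idle GPU capacity is far more painful than idle M&E plant.
print(f"M&E: {mne_annual:,.0f}/yr  GPU servers: {gpu_annual:,.0f}/yr")
```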
01:19:08
Speaker
So you have to be a little careful. You don't want to be too far ahead of the curve and build out so much that by the time the demand comes, the GPUs are obsolete. But I think that's where the past experience helps, because we built a compute cloud as well, which had similar characteristics. That past experience comes in handy in making sure that we know when to order new equipment. Of course, one more difference is that the lead time for GPUs is
01:19:40
Speaker
longer than it was for traditional CPU compute. CPU compute would come in two to four weeks. GPU compute, depending on the model that you order, can take anywhere from six weeks to maybe a few months. So your planning has to be done accordingly. What do the economics of an AI cloud business look like? How much of your spend is on GPUs? A very significant percentage of our spend.
01:20:09
Speaker
Typically, I would say close to 80% of our spend is on GPUs and associated infrastructure. So when you're doing the pricing for a client, roughly 70% to 80% of the cost is GPUs and associated infrastructure, colocation and power, and maybe network, right? Wow.
01:20:36
Speaker
Okay. So the data center business is measured in megawatts, gigawatts, I believe, like how much electricity is being consumed. Is it the same way to measure AI cloud capacity?
01:20:50
Speaker
You can do it in number of GPUs, or you could even do it in terms of the kilowatts and megawatts you consume. But typically, the parameter I've seen used is GPUs. People say CoreWeave has deployed 250,000 GPUs.
01:21:05
Speaker
That may translate to 250 megawatts, but the number of GPUs is the way people measure it. More based on GPUs

Nvidia's Export Challenges and Market Dynamics

01:21:12
Speaker
than megawatts. Okay. So Nvidia's latest chips are not easy to access, right? Like the top end. I believe China cannot access even their earlier Hopper series, the entire Hopper series.
01:21:30
Speaker
So what happened is, there are some countries where the US has banned the sale of these chips. For example, China, Russia, Syria; there are some countries that are blacklisted.
01:21:44
Speaker
Earlier, they would allow lower-generation Nvidia chips to go there. I'm not sure what the current status is, because one week before the Biden administration left, they came out with a diffusion policy rule where they divided the world into three buckets.
01:22:03
Speaker
Some 18 to 20 countries were tier one, where there was no restriction on GPU sales. Is India tier one? No. Then there was tier three, which was Russia, China and all these, where nothing was allowed.
01:22:17
Speaker
And then roughly 150 countries, including India, were in tier two, where there was a GPU cap. This was supposed to come into law on 15th May, but Trump removed it and said he will replace it with something else.
01:22:28
Speaker
So to my knowledge, right now there's no restriction on India, but we don't know; maybe Trump will use it in the trade negotiations with us and with other countries. This could be one of the bargaining levers that he has.
01:22:44
Speaker
So we don't know what restriction, if any, will come in the future. But even other than that, there's so much demand for Nvidia's Blackwell chips that if I were to order something today, it would take at least four months before I get allocated some Blackwell chips.
01:23:05
Speaker
Whereas the Hopper series I can get in four to six weeks. So an H200 I can get within six weeks, but the Blackwell chips take longer. Right now, there's even a huge lead time on the networking gear.
01:23:18
Speaker
So whether you order the InfiniBand Mellanox switches, or the Ethernet switches that are the right switches for AI, sometimes there could be a four-to-six-month lead time there also.
01:23:34
Speaker
So you have to plan this really well, because the demand is so huge in the US. Okay. Do the chips come from China? Sorry, chips come from Taiwan. Do the switches come from China?
01:23:47
Speaker
No, no. Mellanox or Cisco, these guys make the switches, and I think there are multiple places across the world where they make them. There doesn't need to be any dependence on China per se.
01:24:03
Speaker
You don't have to depend on China. There's Taiwan, there's Malaysia, there are different parts of the world where these things are manufactured. Does AMD offer chips comparable to Nvidia? Yeah. AMD has the MI300. We bought some MI300X also, and I think the latest model is the MI350. On raw silicon performance, the underlying hardware performance, they are second to none. But the challenge is...
01:24:34
Speaker
Where Nvidia's moat is, and why they own almost 90 to 95% of the market, is the CUDA libraries and the CUDA software. To do parallel processing, they have these entire libraries that people use.
01:24:50
Speaker
And AMD's equivalent is called ROCm. If I use something like PyTorch to do my development, then the model typically would work on both AMD and Nvidia, because PyTorch has a backend that translates to ROCm.
01:25:10
Speaker
But if I use native CUDA libraries in my application, then I would have to rewrite the code for AMD. And that is where Nvidia scores big. That's why people don't want to bother with all that, and they just go ahead with Nvidia.
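The portability point can be seen at the framework level. To my understanding, AMD's ROCm builds of PyTorch expose the same CUDA-style device API, so framework-level code like the sketch below runs on either vendor, while hand-written native CUDA kernels would need porting. The snippet is guarded so it also runs on machines without PyTorch installed.

```python
# Framework-level code is portable: PyTorch picks the accelerator backend
# at runtime, and ROCm builds answer to the same "cuda" device string.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    # torch not installed in this environment; fall back to the CPU label.
    device = "cpu"

print(f"selected device: {device}")
# A model written against this API (model.to(device)) runs unchanged on
# NVIDIA or AMD; code calling raw CUDA libraries directly would not.
```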
01:25:28
Speaker
And so that's why Nvidia has got roughly 90, 95%. I don't know the exact market share, but over 90% is what Nvidia has. But

Specialized Chips and Shifting Focus to Product Development

01:25:37
Speaker
AMD is making big progress. So I expect AMD to
01:25:42
Speaker
make some strides in the coming years. There are also some companies that have specific chips for, let's say, just doing large language models. Rather than a general-purpose machine learning chip that can do any task, they focus their attention on specific tasks, and the chips are designed that way.
01:26:07
Speaker
So interesting companies to track are SambaNova, Groq, Cerebras. We are in discussions with multiple of these companies to see their future roadmaps and how we could deploy some of these. Some of these are good for inferencing; for example, SambaNova specializes in inferencing, right? So we're looking at that as well. But right now, almost all our GPUs are Nvidia GPUs, and across the world I think that's the case.
01:26:39
Speaker
So there's this massive FOMO, fear of missing out, in India, that India will miss the AI wave, that we will not be able to create a large language model.
01:26:50
Speaker
What do you think is the India opportunity in the AI space? Where do you think we can actually compete? We probably cannot create a large language model because of the billions of dollars of investment needed, but where can we compete effectively?
01:27:04
Speaker
So, for example, there are several companies, and even the IndiaAI Mission is funding companies, that want to build LLMs for Indian languages. That is something India is doing, and it's very important. Although I must say that even OpenAI and others do a great job on Indian languages.
01:27:20
Speaker
But there are so many different languages, so many different dialects, so many contextual meanings that may not get captured. But that's not a global opportunity, right? No, I'm saying for sovereign AI, that is an important opportunity. But if you want to catch this bus, the AI bus, what can India realistically do?
01:27:44
Speaker
So today, for example, typically India was doing back-office work, right? We were doing outsourcing; people here would write software for others, et cetera. Now, because we have access to AI tools locally, we can actually build for the world. We don't have to just do outsourced work for the world, right?
01:28:03
Speaker
Build as in, you're saying, products. Like India could do more... Products. Software products. We can build them from here. Another reason why we have to move to the product mindset is the labor arbitrage question. The negative thing for AI in India is that all these large software companies like Infosys, TCS, et cetera, used to mainly rely on labor arbitrage. Now, if you have
01:28:34
Speaker
access to LLMs and tools like Cursor and others, that labor arbitrage is pretty much gone. Maybe there's still some labor arbitrage, but it'll go in the next phase. Now we don't need a team of 100 people developing. We need a team of two or three people developing using these tools, because you can do much faster development.
01:28:52
Speaker
So that is a risk that we face, because suddenly you don't need that much outsourcing done. But at the same time, it is also an opportunity, because now we have access to the same tools and we can actually build for the world.
01:29:06
Speaker
So I think we need to move from an outsourcing mindset to a product development mindset. That's where the opportunity is. So

Key Entrepreneurial Traits: Insights from Sharad

01:29:15
Speaker
let me end with this. What's your advice to the current generation of founders?
01:29:20
Speaker
So first of all, dream big, right? Especially with AI, you've got this opportunity to create something for the world. You don't have to look at India alone. But I'm going to give you advice based on the lessons I've learned.
01:29:37
Speaker
One of them is empathy towards all the key stakeholders, starting with your employees. One of the things I'm proud of is that I've never laid off an employee in my 25, 30 years of entrepreneurship.
01:29:49
Speaker
There have been tough times. We went through the dot-com bust. We went through the 2008 financial crisis. But I don't believe in laying off people just because there's a blip in the business.
01:30:03
Speaker
Then, you need to be empathetic towards your investors, completely transparent. When things are going well, you obviously will share that news, but a lot of the time entrepreneurs try to hide stuff from investors when things are not going well. Obviously, you have to be transparent.
01:30:20
Speaker
You have to be empathetic towards customers. I still remember an example of a company, today a very large financial institution, which was struggling to stay alive in 2000, 2001 when the dot-com bust happened.
01:30:33
Speaker
Today it is one of India's largest financial companies. The founder came to me and said, look, I can't pay you now, but I can pay you over a period of time. And we worked it out with them.
01:30:44
Speaker
He also couldn't retain some of his employees, so I took over some of his IT employees and they became part of my team. It was a win-win for both. So that's one. Then, surround yourself with people who are smarter than you.
01:30:59
Speaker
You don't want yes-men in the business. Never compromise your core values and ethics. In India, there are a lot of temptations, but just stick to your values. Think long-term, don't think short-term.
01:31:12
Speaker
You have to collaborate, especially with startups. You cannot do everything yourself; there are some incredible startups that you can collaborate with. Next, agility is key. You can't take very long to make decisions. You have to take quick decisions. You may be wrong sometimes, but that's okay. Make quick decisions.
01:31:30
Speaker
Obviously, you have to have self-belief, passion and hunger. If you don't have self-belief, if you're not passionate about what you're doing, how will you ever convey that to your customers or to your stakeholders?
01:31:41
Speaker
This is a cliché, but there's no substitute for hard work. Nothing comes easy. Constantly learn and upskill yourself and your team. It's very easy to find faults in others, but find the strengths in your team and help them overcome their faults.
01:32:01
Speaker
Very often you say, oh, this person is always doing this, or he's always negative. But there are good qualities in a person also, so try and amplify the good qualities. Take risks. Even if you fail, it's okay; fail fast, but don't repeat the same mistake again. And when taking a risk, look at the upside and the downside. If the upside is huge and the downside is limited, it's a no-brainer. But you can't take a risk where the upside may not be that huge but the downside could sink your company.
01:32:33
Speaker
Then, I believe in progress over perfection. I won't wait for something to be perfect before I launch it; I'll take version one, launch it, get feedback, improve it, and so on. I believe you should find a mentor. In my case it's Mr. B.B. Jagdish, who has been a great mentor to me, and even some of my board members are very good mentors. Don't be afraid to embrace the unknown. Sometimes you're in a situation where you don't know if something will work or not, but be fearless and, as I said, dream big.
01:33:12
Speaker
So, that's my advice to entrepreneurs. How old are you now, Sharad? 58. Wow. How do you still find the energy?
01:33:23
Speaker
I started Neysa at the age of 56. Wow, that's amazing. How do you still find the energy, the drive, to take on such a large… It's such an exciting field. There's so much to learn. I'm like a dinosaur in this field. The team that I have is so amazing.
01:33:42
Speaker
The amount of knowledge that they have, and there's so much to learn; that's what keeps me going. Just interacting with my team, learning from clients, learning from partners, learning from my team: it's constant learning. If there were no learning, I would probably not have the energy to do this, but there's so much learning; you learn every day. Thank you so much for your time, sir. I truly hope that by the time I'm 58, I'm like you.
01:34:15
Speaker
Thank you so much. Thank you, Akshay, for your patience and for the lovely discussion.