
India’s IT Infra Leap: Data Centers, Cloud, AI & What’s Next | A S Rajgopal (NxtGen)

Founder Thesis

"Each rack can deliver 2 crores of revenue versus 6 lakh rupees from data centers."  

This striking comparison from A.S. Rajgopal reveals why cloud services generate 10x more revenue per rack than traditional data center hosting - a fundamental shift that's reshaping India's digital infrastructure landscape. 

A.S. Rajgopal is the Managing Director & CEO of NxtGen Cloud Technologies, one of India's leading sovereign cloud providers. With nearly 30 years of corporate experience at Dell, Microsoft, and Reliance Communications, he built and led Reliance's enterprise division to over ₹1000 crores in revenue. Since founding NxtGen in 2012, he has grown it to serve 1000+ organizations with a 44% EBITDA margin and 24% year-on-year growth from existing customers alone. An alumnus of Kellogg and Yale executive programs, Rajgopal is now raising $400 million to build India's largest AI GPU infrastructure, positioning NxtGen at the forefront of India's digital sovereignty movement.  

Key Insights from the Conversation:  

👉Data Sovereignty: Indian enterprises are shifting from cost-focused to compliance-focused cloud adoption due to data localization requirements and foreign jurisdiction risks 

👉Infrastructure Evolution: AI workloads require 10x more power density (100-600 kilowatts per rack) compared to traditional servers, demanding complete data center redesign 

👉Business Model Transformation: Cloud services generate 10x revenue per rack compared to traditional data center hosting while requiring higher but more profitable investments 

👉Open Source Strategy: Leveraging Red Hat OpenStack/OpenShift reduces customer costs by 10x compared to proprietary platforms while maintaining enterprise-grade capabilities 

👉India AI Opportunity: The government's ₹10,000 crore AI initiative creates a ₹20,000 crore market opportunity for sovereign GPU cloud providers

#DataCenter #CloudComputing #AIInfrastructure #DigitalIndia #Entrepreneurship #TechFounder #SovereignCloud #GPUCloud #IndiaAI #Datacenter #CloudServices #TechEntrepreneur #DigitalTransformation #AIRevolution #TechInfrastructure #IndianStartups #EnterpriseCloud #TechLeadership #FounderStory #CloudStrategy #AIComputing #TechInnovation #DigitalSovereignty #CloudInfrastructure #TechPodcast

Transcript

AWS and Cloud Service Leaders

00:00:00
Speaker
If you take AWS, for Amazon I think it is 17% of their revenue but 69% of their profit. Yes, yes, yes. AWS is their profit-making machine. Yeah. AI is also changing their dynamics. So the three guys who dominate cloud globally are AWS, Azure, which is Microsoft, and Google. AWS has lost the race.
00:00:22
Speaker
I mean, they're not there. Clearly.

Guest's Career Journey Begins

00:00:27
Speaker
We are raising about 400 million now, primarily for
00:00:37
Speaker
Welcome to the Founder Thesis Podcast. I want to understand your pre-entrepreneurial journey. Basically, I'm an engineer. I passed out in '91, but I started this entrepreneurial journey in '90.
00:00:58
Speaker
So it's quite a long time back. In my third year, my cousin was building some PCs, so I learned it. We did some good stuff: antivirus cards and a whole lot of things. But running a business is much more than just knowing how to build products, so I kind of messed up. I had to stop doing it, and I actually ended up with a debt of about one and a half lakhs.
00:01:32
Speaker
At that time, it was a lot of money. So I started working at PCL, Pertech Computers.

Dell and Microsoft Experiences

00:01:40
Speaker
That's how I started building a career in the PC industry.
00:01:49
Speaker
There were not too many servers at that time; the same PC board in a larger cabinet was a server. That's how we were doing it. From PCL, I got an opportunity to get into Dell.
00:02:01
Speaker
So Dell was a liaison office, and I was the second employee; we built Dell from scratch in India. We took Dell Direct, we set up a local entity, and post-Dell I joined Microsoft. I was running Windows for SAP.
00:02:19
Speaker
How many years in each role? At Dell you were doing sales, basically? Yeah, it was all sales. We were four of us plus one country manager in India.
00:02:34
Speaker
And I was handling the southern region. One was a coordinator and three of us were sales guys; that's how we started Dell. Today Dell is thousands of people.
00:02:50
Speaker
In Dell, I spent five years. Apart from being successful as a salesperson, Dell gave me an opportunity to run the server and storage business in India.
00:03:03
Speaker
And that's how I started getting into servers and playing with bigger machines. Then we got an opportunity to see what was happening in Malaysia; Dell was making PCs in Malaysia.
00:03:20
Speaker
We also saw an opportunity for Malaysia to take some back-end support, which I thought was an opportunity to fight back and get something into India. So me and a couple of others in the US worked hard for about a couple of years and brought
00:03:39
Speaker
the first 84-person headcount into India for global back-end support. When I quit Dell it was about 8,000, and finally it grew to about 16,000 people.
00:03:53
Speaker
So India started delivering global support. Apart from the thousands of servers we sold, what is more satisfying is the work we did in getting that back-end into India.
00:04:07
Speaker
And from Dell I moved to Microsoft, where I was running Windows for SAP, so it was a short stint. I really couldn't do much in Microsoft, because Windows runs on its own; an individual cannot really make a dent in it. So I felt suffocated because I couldn't really

Technical Insights on Servers

00:04:36
Speaker
do much. It's a fairly large business, but it moves on its own, and I was not making an impact.
00:04:44
Speaker
Like you don't need to sell Windows, it sells on its own; there is no other choice. No, our job at that time was primarily to convert pirated Windows to licensed Windows. Windows was already being used; it was all about how we push a legal version into it.
00:05:09
Speaker
Desktops mostly came pre-bundled at the OEM, but servers were a major challenge, and we did some work around that. After Microsoft, I perhaps would have worked at Intel. In Dell also, you were on the server side of the business, selling servers? Yeah, I did PCs and then Dell launched servers, so I launched servers for Dell in India. And at that time, Dell aligned with EMC for storage.
00:05:45
Speaker
So servers and storage is what I was taking care of for the country. A very basic question. What is the difference between a PC and a server?
00:05:54
Speaker
Today there's a lot of difference; at that time there was not too much of a difference. But one of the options we had was that we could put multiple processors in a server, so you could run much more in terms of compute than a desktop could.
00:06:15
Speaker
What we had in the Dell portfolio as the peak offering was four processors, four server processors in one box.
00:06:28
Speaker
There were of course other machines from Unisys and others which would go to 32. But we had a very good machine. I sold my first four-processor machines to Wipro, actually, to run their Exchange email and all that. So basically a server is lots more storage, more memory, and more compute.
00:06:52
Speaker
Okay. It's like the difference between a car and a 16-wheel truck, for example. Yeah, it is like that. There are a few more things. In a server, you avoid more single points of failure. Like you have two power supplies at a minimum.
00:07:09
Speaker
The newer servers that we are doing now, the GPU ones, have got six power supplies, whereas normally PCs and laptops have one. It's basically built for continuous operation. Typically, once a server is switched on, it is never switched off in its lifetime.
00:07:32
Speaker
Doesn't that cause a lot of heat? How do you manage the heat? It does. The problems when I was in Dell were actually pretty small in terms of heat, but these days it is a massive issue because servers consume up to 10 kilowatts by themselves. That's like a geyser sitting there.
00:07:54
Speaker
There are two ways you can cool a server. Either you put at least 18 fans in it, which run at very high speed and make a lot of racket, or you use liquid cooling. We are deploying liquid cooling for managing the heat. So you can cool it with air, which is inefficient, or you can cool it with liquid.
00:08:22
Speaker
Is a server something that is typically bought off the shelf? Like you would just go to Dell and place an order and you get a server, or is it custom built? For example, you said you decided to deploy liquid cooling. Does that mean you bought a model which had liquid cooling, or is it a custom-built server?
00:08:43
Speaker
Oh no, there are a lot of choices on the server. Today processors come with up to 192 cores, that is 192 mini-processors in one physical die. A processor is, say, an Intel chip.
00:09:02
Speaker
Okay. So what happened was, processors started as one unit. But as people wanted more compute, they started putting multiple units in one die.
00:09:16
Speaker
The die is just the packaging, and you call that whole die a processor. Each processor inside is a core. So today you get multiple processors in one die, and you call it a processor with multiple cores.
00:09:31
Speaker
The best ones available today give you 192 cores, that is 192 processors in one package, and you can put two of them in one server, so you can have 350-plus cores. We have a lot of processing power available on processors today.
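
[Editor's note] A quick back-of-the-envelope illustration of the core counts quoted above, in Python. The 192-core parts and two-socket servers are the figures from the conversation; the four-core laptop is the comparison mentioned a little later. Illustrative arithmetic only, not a specific product spec.

    # Illustrative arithmetic using the figures quoted in the conversation.
    cores_per_processor = 192    # "192 cores, that is 192 processors in one package"
    sockets_per_server = 2       # "you can put two of them in one server"
    laptop_cores = 4             # typical laptop figure quoted below

    server_cores = cores_per_processor * sockets_per_server
    print(f"Cores per server: {server_cores}")                      # 384, i.e. "350 plus"
    print(f"Roughly {server_cores // laptop_cores}x a 4-core laptop")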
00:09:52
Speaker
For a desktop or a laptop, how many cores are there, as a comparative number? So it's all about heat, right? Typically with a laptop, normally you buy two cores.
00:10:10
Speaker
There are models available with four cores, but the more cores you buy, the faster you will run down your battery. Okay. So, coming back to my question, are these custom built or off the shelf?
00:10:25
Speaker
Yeah, so in a server you can start from, let's say, eight cores and go up to 192 cores, so you have lots of options. The first choice you make is the number of cores you want. Typically, you make that decision based on the application you are running.
00:10:41
Speaker
And then you have a choice of memory. Today you can go up to 6 TB of memory, which is about 6,000 GB. Normally, a laptop...
00:10:54
Speaker
The RAM or the ROM? Like there are two types of memory. RAM of 6 TB. Wow. My laptop probably has like 16 GB, compared to that 6 TB.
00:11:09
Speaker
Most laptops have 8 GB. And servers go up to 6 TB. We don't do 6 TB mostly; we do between 2 and 3 TB,
00:11:22
Speaker
because the denser the memory is, the more expensive it is, so you want to balance cost versus performance. So you buy that. Then the third level is the number of disks you can put in.
00:11:38
Speaker
In most of the servers today, you can put 24 disks. Just to give you a perspective, we buy 15 TB drives now, which are NVMes. 15 into 24... NVMe? So NVMe is non-volatile memory. If you remember, earlier there were hard drives where a magnetic disk was rotating.
00:12:03
Speaker
They are not so reliable and they are slow. Now that has been replaced with memory itself. Why is it called non-volatile? RAM is volatile; it will forget if you switch off the power.
00:12:15
Speaker
Whereas this non-volatile memory doesn't lose data when you switch off. This is also called SSD, right? Solid state something, I don't know what it stands for. SSD was the previous generation.
00:12:27
Speaker
Yeah, it is still a solid-state disk, but NVMe is how it is done today. So just to give you a perspective, you do 15 into 24, let's say about 360 TB of storage in it.
00:12:50
Speaker
And that fits into about three and a half inches in a rack. So within about my monitor's size, you can put a petabyte of storage in servers these days.
00:13:02
Speaker
You have much bigger options available, but this is what we play with. So there's a lot of... A petabyte is 1,000 TB? 1,000 TB, yes.
00:13:15
Speaker
1,000 GB is 1 TB. And then 1,000 TB

Linux vs Windows in Server Market

00:13:23
Speaker
is 1 petabyte. Okay.
00:13:25
Speaker
Okay. Crazy. We have 140 petabytes in production. Got it. So, yeah, you make these choices.
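
[Editor's note] A sketch of the storage arithmetic discussed above, using the quoted figures (24 drive bays, 15 TB NVMe drives, decimal units where 1,000 GB = 1 TB and 1,000 TB = 1 PB). Illustrative only.

    # Illustrative only; assumes the figures quoted in the conversation.
    drive_capacity_tb = 15                               # 15 TB NVMe drives
    drive_bays = 24                                      # drive bays per server

    server_storage_tb = drive_capacity_tb * drive_bays   # 360 TB per server
    servers_for_one_pb = 1_000 / server_storage_tb       # roughly 2.8 servers per petabyte

    print(f"Per-server storage: {server_storage_tb} TB")
    print(f"Servers needed for 1 PB: {servers_for_one_pb:.1f}")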
00:13:36
Speaker
Then what? Yeah. You make those choices, then you also make a choice on what type of resiliency you want. There is something called RAID: basically, do you keep two copies of the data, or do you put a parity and then try to recover in case of disk failures?
00:13:56
Speaker
That's another choice you make. And then you make a choice on network. Typically, our laptops and desktops have a one-gig network. For AI, we are now using 400-gig networks.
00:14:11
Speaker
Okay. What is RAID? You said RAID is for resiliency; what does it mean? Basically, it's a controller which sits above the disks, between the memory and the disks. When you're writing, you have multiple choices; you can configure it to write twice,
00:14:35
Speaker
into two sets of disks. Let's say you have 24 disks, 12 as one set and another 12 as another set. Then if you lose one of the disks, you still have your data intact; that's called RAID 10. There are more efficient ways of doing it, where you generate a parity bit so that you recover the data through parity in case a disk fails.
00:14:59
Speaker
So these are some techniques that you use. Normally the best way to do it is RAID 10, where you maintain two copies. What do you mean, recover through parity? What is the parity bit?
00:15:14
Speaker
So what happens in that? It is just a mathematical calculation. Basically, you count all the bits and record whether the total is odd or even.
00:15:27
Speaker
So if you lose one disk, and the count was earlier even, then you know what to put back: if it has become odd, you add a one so that it becomes even again.
00:15:43
Speaker
So it mathematically calculates what is missing. Okay, so it's an algorithmic way of protecting the data that's more efficient than 100% duplication.
00:15:59
Speaker
Yeah. The crude way would be 100% duplication: everything exists in two copies. That's the best way, because in your laptop you have one disk; if it fails, you lose your data. That's why you keep a backup somewhere, in iCloud or whatever.
00:16:15
Speaker
But in a server, you need much more; you have two sets of data. That's the best way to do it. If the data is not very critical, you can do this parity-based thing.
00:16:27
Speaker
But the problem with the parity approach is that if you lose two disks, you can't recover. Okay, so it's 90% safe, not 99% safe, something like that. The closer you want to get to 99% safety, the more you have to do it the brute-force way.
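
[Editor's note] The "mathematical calculation" the guest describes is parity, typically XOR parity in real RAID controllers. A minimal, simplified Python sketch (an editor's illustration on single bytes, not how a controller actually stripes blocks): losing any one block is recoverable from the survivors plus the parity, while losing two is not, which is the trade-off against full mirroring (RAID 10).

    from functools import reduce

    data_blocks = [0b1011, 0b0110, 0b1100]                  # three data "disks"
    parity = reduce(lambda a, b: a ^ b, data_blocks)        # parity "disk" (XOR of all data)

    # Simulate losing disk 1, then rebuild it from the survivors plus the parity.
    surviving = [data_blocks[0], data_blocks[2]]
    rebuilt = reduce(lambda a, b: a ^ b, surviving, parity)

    assert rebuilt == data_blocks[1]
    print(f"Rebuilt block: {rebuilt:04b}")                  # 0110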
00:16:46
Speaker
Okay, got it. Understood. So you make these choices. Yeah, you make these choices, and then you have the server operating system. That's another choice you make, whether you go with Linux.
00:17:01
Speaker
Typically, large systems run on Linux. Some enterprise applications run on Windows, but most of our large customers, like the Election Commission, run on Linux only.
00:17:13
Speaker
And Linux is free to use. And it powers most of the internet, like 99 percent of it. OK.
00:17:27
Speaker
Why did... okay, let's finish this theme first, then I'll ask you about Linux. So you decide which operating system, and then the server is deployed and configured and you can put the application on it. But do you get a vendor to build this for you, or do you just go on a website, click, click, click and order and they'll deliver?
00:17:50
Speaker
In India, people very rarely buy it online. You can, but typically people rarely buy online. So Dell or HP would build the full configuration, and that full configuration is tested.
00:18:06
Speaker
Depending on your quality norms, it's tested for three days or something; it's called a burn-in test. You see whether any components fail
00:18:16
Speaker
once it's all put together. If it doesn't fail, it comes out of quality control, gets packaged and shipped. Today, these kinds of servers are made in India. The GPU servers are not made in India, but regular application servers are.
00:18:32
Speaker
When I was in Dell, it was made in Penang, in Malaysia. Okay. When you say made in India, is that assembled in India? I'm sure the chips would not be.
00:18:44
Speaker
So, if you take any of the OEMs, the processor comes from an Intel or an AMD. The memory comes from the likes of Samsung, Micron, or multiple others like Hynix.
00:19:00
Speaker
There are a few options; the disks we buy from Micron or Samsung or Western Digital. Typically, the motherboard is built by one of the big guys who build motherboards. It's made to a spec for an OEM, but it is not really a high-value item, and it's not made by the OEMs at all.
00:19:24
Speaker
And there is a chassis, which is the aluminium piece that houses the whole thing. So in a factory, typically, people put all these together: they take a chassis, put in the board, put in the processor and memory.
00:19:37
Speaker
So you just put these things together. To answer your question, yes, it is assembly only. And the players who sell servers in India, are these domestic companies or global companies? Whom do you typically buy from?
00:19:55
Speaker
When I was in PCL, it was all Indian. There was PCL, there was Wipro, there was HCL; these were the domestic companies which actually had the majority of the market.
00:20:11
Speaker
When I joined Dell, the PCs were twice as expensive, and I never thought MNCs would dominate. But if you see today, no Indian manufacturer makes PCs and servers.
00:20:27
Speaker
It's fully dominated by global players, so that market has got wiped out. Today, some people are trying to come up, but I don't think you can match the volumes that these large players produce. And if you don't produce in volume, you don't get the cost benefit, so you kind of miss out.
00:20:47
Speaker
But that entire industry died; they just vanished. And from a point where I thought people would not be able to buy these MNC PCs, I was surprised that even among the MNCs there was a lot of turmoil. When I was in Dell, there was a company called Compaq, which was the largest provider, and Compaq died. It doesn't exist; HP acquired it. Yeah.
00:21:15
Speaker
Okay. So the server business is essentially about volumes: to negotiate the best prices for all of these components, like the hard disks, you need to have volume, otherwise you will be priced out of the market.
00:21:32
Speaker
So the more volume you have, the better price you are able to offer to your buyers eventually. And are the people who buy servers typically data centers? No, all enterprises buy. Every company has got some small portion of their compute sitting on premises, especially when it comes to control systems. If you take a bank, their core banking application usually sits with the bank, very rarely in the cloud,
00:22:04
Speaker
because the compliances involved are pretty complex. And then you have ERP systems a lot of the time. So the asset could be owned by the customer; it would sit in a data center like ours or on the customer's premises, but the asset is owned by the end customer.
00:22:24
Speaker
In the case of cloud, the asset is owned by us. That's the way people operate now.

Converting Pirated Software Users

00:22:33
Speaker
Okay. I had that question in between about why Linux dominates. You were selling Microsoft Windows Server.
00:22:43
Speaker
It looked like you were not able to make a dent in the market. Why is that?
00:22:51
Speaker
That's primarily because of the strategy, though I won't know the actual strategy behind it. But at that time, software was pirated big time.
00:23:03
Speaker
Like in Microsoft, our office was in a building called Great Eastern Centre, and we were in there. So you know the place. You go down to the ground floor and you walk around,
00:23:19
Speaker
and either you get chole bhature or you get SAP, Adobe, you name it. You get everything there. So the whole challenge was, how do we make people convert from pirated software? That was the common challenge between us and everybody else who was selling software.
00:23:42
Speaker
But people just wouldn't convert, because there were no big legal hurdles in using it at that time. Today it is much more firm and it's all matured. But when I was there, we couldn't really go down a path where we threatened people with legal action and all that.
00:24:05
Speaker
So it was only praying: you would go to large customers and pray that they please convert. That's how it was. My question is that asking businesses to pay for software is much easier than asking consumers to pay. I would have thought that the server world is a pure B2B world; businesses are the ones who buy the software which runs servers, so Microsoft had a fair opportunity to have a significant presence there. But you're saying that 99% of the internet runs on Linux. Why did that happen?
00:24:46
Speaker
So, as you said, the consumer wouldn't buy. What Microsoft did was primarily bundle it: when you buy the PC, you get Windows. They actually worked with the OEMs.
00:24:59
Speaker
On the server side, until servers had to connect to the internet, people were generally fine. But the moment servers had to connect to the internet for security updates or patches or something like that,
00:25:13
Speaker
all the software guys could actually verify whether it was a legal version, update only if it was, and otherwise keep giving warnings. So I think the change happened because more applications started connecting to the internet and we could see whether the operating system underneath was legal or not.
00:25:37
Speaker
If you take the original thought process, there were these bigger systems which IBM and others made; they were called RISC systems. Most of the x86 machines that we work with, which Intel actually invented, use a complex instruction set, and RISC is a reduced instruction set.
00:25:58
Speaker
There was an operating system called Unix running there. The guy who did Linux actually brought what Unix offered on larger systems onto the x86 platform. The moment he wrote that, it brought in a lot of those legacy thought processes in terms of robustness, transactions, performance, and so on.
00:26:25
Speaker
That aspect came into x86. When that happened, it was the preferred choice. And Linux was free, so you had much more control. It's open source; you have the code available if you want to alter it.
00:26:43
Speaker
If you wanted support, there were people supporting it, like Red Hat, and it kind of took off. Okay. Interesting.
00:26:54
Speaker
Interesting. So the need for more customization, backward compatibility, some of those things were also responsible for Linux dominating. Yeah. And...
00:27:05
Speaker
Yeah, and Windows was actually an upgrade from a desktop OS, whereas Linux has a lineage from larger machines like mainframes.
00:27:17
Speaker
So traditionally, Linux outperformed Windows. And that early adoption meant more support, more developers building applications on it.
00:27:29
Speaker
Like what happened with Android, right? Because the more apps are built on a platform, the more powerful that platform becomes. So something similar happened here. OK.
00:27:53
Speaker
Yeah, it's basically how you process a set of instructions. If you see, the original computers were actually built for mathematical purposes, so the instruction set was not really complex in the sense that they just had to understand a few characters.
00:28:12
Speaker
They were not really working on understanding more than the alphabet and the numbers.

Instruction Sets and Mainframe Transition

00:28:20
Speaker
So it's zero to nine, and then you have a set of letters, and that's what was required. So it's a reduced set, that's how you can look at it. A little more depth is required, but that's roughly how it was.
00:28:34
Speaker
But when complex instruction sets started, people looked at a 256-character base. So the character base had many more symbols available.
00:28:47
Speaker
The machine had to represent and understand more characters, so that it is much more relatable to an end consumer.
00:29:00
Speaker
So Intel is not the original designer; actually IBM invented this, again on the desktop side also.
00:29:13
Speaker
But at that time, two processors were competing: one was from Intel, the other was from Motorola. Those were the choices available, but Intel prevailed, and then Intel took that forward.
00:29:27
Speaker
When I started working, it was 4-bit processors. Now it's... The Intel processor was the x86?
00:29:35
Speaker
Yeah, so I tried with the 8085. The first processor which was basically commercial was the 8088, and that understood the full character base that those operating systems worked on.
00:29:55
Speaker
For mathematical work it was not very good, so you had an option to buy the 8087, which could offload complex mathematical requirements.
00:30:07
Speaker
The way current processors work, they can do a lot more things, but they are not good at math. That's why the GPUs came up, right, which are good at math.
00:30:20
Speaker
So it was, I think, a conscious trade-off, because we were trying to make PCs more versatile, doing many more tasks like graphics, creating documents and so on,
00:30:34
Speaker
and not just mathematical functions. That's how the x86 architecture evolved. What was the difference between Motorola's option and Intel's option?
00:30:47
Speaker
Not much, finally. You had to adhere to this x86 architecture because the chip has to talk to many other outside elements, so the architecture on the outside is the same.
00:31:03
Speaker
It's just that one set was made by Motorola and the other set was made by Intel. But I think Intel took quick steps from the 8086: it made the 80286, then the 386, then the 486.
00:31:18
Speaker
Then there was the Pentium, and so on. My first computer was a 486. So...
00:31:26
Speaker
So till the 486, you had an option to add a 487 if you had a lot of mathematical calculations, like if you were running Fortran or something. OK. And where do RISC and CISC come in? So, I told you RISC was the original mainframe thought process. Those are still available in some form.
00:31:54
Speaker
IBM still has a range of machines called Power Systems, which are the lineage from there. They still have mainframes, which run very large banks and so on.
00:32:05
Speaker
But you don't need them now, because what you could do with one large mainframe machine, today you do with a few of these comparatively inexpensive servers that we have.
00:32:21
Speaker
Okay, so you told me PC and server; what is a mainframe? In our analogy, a PC is a car and a server is a truck, so what is the mainframe? It's like an A380,
00:32:36
Speaker
an Airbus A380. Okay. So it's a lot bigger: a lot more processors or cores, a lot more memory, a lot more storage. And expensive to run, and it is not efficient. Is it not efficient because it uses the RISC architecture?
00:33:00
Speaker
No, not really that. Basically, the volumes are not so many, and the priority is not about... See, efficiency also comes down to the talent that you require to run it.
00:33:15
Speaker
You need much more closer-to-machine-language kind of experience to actually run these things. Of course, today you can compile any language on a mainframe, but those are very, very differently done.
00:33:33
Speaker
Today we use more systems. So how does it happen when we want to run, let's say, elections in India today? We put maybe
00:33:45
Speaker
100 to 200 servers, and there is something called a load balancer in front. I get a request from you, I send it to the first server; the second request goes to the second server. So the load balancer manages the load across hundreds of servers. In the mainframe, it's one machine with 32 or 64 processors that takes all the load and crunches it on its own. So it's very good for a banking application; it could even do UPI very well, but UPI runs on x86 only.
00:34:16
Speaker
But it's meant for a lot of transaction load; it's a single big machine versus multiple small machines. Okay. And so it can handle peaks very well, but it is inefficient because when there is no peak, there's a lot of idle capacity?
00:34:37
Speaker
It's not agile; you cannot redeploy it for something else when you don't have peak load. Whereas with servers, there's more agility to handle peaks of different natures from different sources. So you do that for agility; that is the main difference here. Yeah, and applications have changed a lot. If you take elections, during Lok Sabha elections we need to scale almost 21 times more,
00:35:02
Speaker
from the everyday load to that day's load. For nine days last year, we ran at between 16 and 21x.
00:35:14
Speaker
But during smaller elections, let's say Maharashtra or something, it'll be 2 to 3x. So this is what this new architecture facilitates: you can expand.
00:35:27
Speaker
It's called elasticity: you can expand and contract. Okay. You can't do that in mainframes.
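
[Editor's note] A minimal sketch of the scale-out pattern just described (an editor's illustration, not NxtGen's or the Election Commission's actual stack): a round-robin load balancer in front of a pool that can be expanded roughly 21x for a peak event and shrunk back afterwards. The baseline of 10 servers and the class/method names are assumptions for the example.

    from itertools import cycle

    class ServerPool:
        """Round-robin dispatch over a pool that can grow or shrink (elasticity)."""
        def __init__(self, size: int):
            self.resize(size)

        def resize(self, size: int) -> None:
            # Expand or contract the pool of identical, inexpensive servers.
            self.servers = [f"server-{i}" for i in range(size)]
            self._next = cycle(self.servers)

        def route(self) -> str:
            # The load balancer sends each incoming request to the next server in turn.
            return next(self._next)

    pool = ServerPool(size=10)          # everyday capacity (assumed baseline)
    pool.resize(10 * 21)                # election-day peak: roughly 21x, as quoted above
    print(pool.route(), pool.route())   # requests spread across server-0, server-1, ...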
00:35:36
Speaker
So when there is this kind of massive additional load, like 25x additional capacity needed, how is that capacity generated?
00:35:50
Speaker
As a cloud provider, you need to have the capacity; it's just that the customer pays you on consumption, so you keep the capacity idle. The way you manage it is like a bank: if all of us go to a bank and withdraw all the money we have, the bank won't have the cash. It's the same funda here. So we manage it smartly.
00:36:14
Speaker
The anticipation is that when BookMyShow has got a big movie launch, I'm not having elections, so some capacity is available for BookMyShow because there is no election. But if you have two or three events together, then you need to have much more.
00:36:30
Speaker
But that's how you manage it. There are quite a lot of good tools available, and it's pretty smartly done. Okay. Understood.
00:36:41
Speaker
Coming back to Intel a little more: even in processors, there are all these different processor wars happening

Processor Architectures and Industry Dynamics

00:36:52
Speaker
in a way. Apple has its own M-series chips and they moved away from Intel.
00:36:57
Speaker
And you also have ARM, and ARM is the backbone of, say, a Qualcomm Snapdragon; I believe those are on the ARM architecture. What are these things? I read about them very superficially and don't really have a deeper understanding of what all of this means. Can you explain what the Intel architecture is, what Apple's is, what ARM is?
00:37:21
Speaker
Are there other such options I should be aware of? No, I think there are just two architectures: there is x86 and there is ARM.
00:37:32
Speaker
You always create a standard and you leave it outside while you build the chips. Everybody can follow the standard, but you can also take the standard and use it. Like AMD has taken the x86 architecture and started building processors similar to Intel's.
00:37:52
Speaker
And the original x86 was actually done by IBM. Okay, so IBM, Intel, AMD are all of the same lineage using the x86 architecture. Of course, they are improving it each year and upgrading it.
00:38:07
Speaker
Got it. So that is one type of architecture. And what is ARM? How is it different from x86?
00:38:15
Speaker
So everybody has now gone to a stage where the processor is no longer one compute engine. What they've done is packaged multiple compute engines and then made a small load balancer and router inside which distributes the load and makes it work.
00:38:36
Speaker
One of the problems we have in electronics today is how much can we pack? Because as the transistors get closer and closer, they also create electromagnetic interference with each other.
00:38:53
Speaker
So what you do is keep reducing the power that they consume, so that the electromagnetic interference also goes down, because it's directly proportional to the power you give them.
00:39:07
Speaker
So processors are becoming more efficient, and when you make them more efficient, you can also package them much closer. In current manufacturing processes, the best is two nanometres; that is, there's just two nanometres of space between two transistors. One of the things that happened was Intel couldn't package much more; they got stalled and their chips started falling behind.
00:39:38
Speaker
For example, the speed at which they run kind of plateaued around two gigahertz and they were not really scaling. And also you couldn't pack too much. That was because of the die architecture and all that Intel had adopted.
00:39:55
Speaker
And it was not as good as what the Taiwanese were doing, like TSMC and those guys. So now almost all the big guys are producing their own chips, because chip design is possible now: there are very good tools to design with and a lot of talent available. They design the chip and then give it to TSMC to produce.
00:40:19
Speaker
Since TSMC has got a better manufacturing process, they produce a much more efficient chip. You can package more because they consume less electricity, and you can also design it to meet your requirements better.
00:40:37
Speaker
Intel and Microsoft had two big problems to solve. One is backward compatibility, in the sense that there were so many legacy systems in place that whatever they did had to be backward compatible.
00:40:54
Speaker
Whereas if you see Apple, you can't run the current macOS on older Macs because the silicon is totally different.
00:41:06
Speaker
Okay, interesting. I'm just going to recap some of what you said. So TSMC is what's known as a foundry; they manufacture chips based on customer specs. They don't design and they don't have their own brand. You and me cannot go and buy a TSMC chip for our laptop, but Apple can place an order with TSMC for chips and TSMC will make them.
00:41:32
Speaker
So therefore, for larger companies, the ecosystem exists today for them to have their own chips, because you have an efficient foundry in the form of TSMC and you have the chip design infrastructure, both in terms of enough talent and the tooling. Therefore Apple has its own M series of chips, Google has the Tensor, and so on. Okay, understood.
00:41:59
Speaker
Where does ARM come into this? So the newer players like Apple have taken ARM as the base design. You need to have some foundation design, and ARM is that foundation design.
00:42:14
Speaker
Their designs are, of course, nowhere similar to ARM's own processors, but the outside connectivity, all those other systems, are compatible with the ARM architecture. So just like x86, it's an outside interface.
00:42:32
Speaker
It is how you shake hands with other systems. It's like a protocol? It's a protocol. Okay, so ARM is selling intellectual property to people who want to make their own chips; they have a chip architecture and instructions?
00:42:48
Speaker
I don't think so. See, you can't win like that. They had this very formidable x86, which Intel and AMD were leveraging.
00:42:59
Speaker
Now, if ARM had to compete and they said, okay, you have to give me a royalty, nobody would have taken it. So it's an open design; people leverage it and use it however they want.
00:43:11
Speaker
How does ARM make money then? ARM makes chips; they also make processors. They also make processors? I've never heard of a laptop or a phone with an ARM processor.
00:43:24
Speaker
They primarily make processors. Yeah, you get those, but people mostly use them in servers, and they are very small. Their thought process is smaller units, many of them.
00:43:38
Speaker
Whereas AMD is really putting in lots of cores now and making the processor really powerful. That's because of this four-nanometre, two-nanometre thing. Most server processors are four nanometre.
00:43:53
Speaker
Most mobile processors are getting to two nanometre. Okay, like the Snapdragons, they are like two nanometre? Yeah, because in mobile you want to consume less power and generate less heat.
00:44:06
Speaker
In a server, you are much more liberal. Miniaturisation matters more for mobile devices. Okay. They need to be robust; it's not like you have a cooling fan in your phone.
00:44:20
Speaker
When the processor is running, it needs to cool by dissipating heat within the phone itself. Whereas in a server, you have a fan and a heat sink; there's a whole lot of stuff that works for heat removal.
00:44:36
Speaker
Okay, got it. Fascinating. So we took a long

Entrepreneurial Ventures and Achievements at Reliance

00:44:40
Speaker
detour. Let's come back: you were at Microsoft, kind of frustrated in your job.
00:44:46
Speaker
You were responsible for Windows servers. So how did that lead to your entrepreneurial journey starting? Yeah, so after Microsoft I joined Reliance. Of course, the opportunity was humongous, in the sense that...
00:45:04
Speaker
See, one of the things I never wanted to do was go to the same customer and sell a competing product, because you kind of lose credibility. So from Dell, I didn't join Compaq or Digital or HP; I joined Microsoft, where it was far easier to talk to the same set of customers. Then I moved to telecom. Again, you learn much more; it's a new industry.
00:45:29
Speaker
But the opportunity that Reliance showed was massive. At that time, I was paying 16 rupees for a call, whether I received it or made it.
00:45:41
Speaker
From there, Reliance had a thought process that we should make it 40 paise. You go and hear such audacious objectives and you want to be part of it. Maybe I was among the first 50,000 in Reliance who started that company. I actually joined the version-one telecom, the CDMA one. I was there from CDMA to GSM, but I was on the enterprise side, the larger networks; that's what we were doing.
00:46:16
Speaker
But what happened is that whatever financial goals I had, I kind of met them in Reliance. Obviously Dell paid me very well, then Microsoft paid me better, and Reliance gave me much more. The situation we had at that time was very, very unique; I don't think people go through these kinds of jumps these days.
00:46:42
Speaker
Our salaries perhaps were looked at in dollar terms, and they were very sympathetic towards us and paid us very well. Okay, so they had the strategy that they wanted best-of-breed people and would pay whatever was needed to attract them. Otherwise, they wouldn't have been able to attract someone from Microsoft, right? Yeah, and not only that.
00:47:12
Speaker
Not only that: I spent seven and a half years in Reliance, and they doubled my salary twice for what we did. So it's not just about acquisition.
00:47:24
Speaker
I found that at least my bosses, the people whom I worked for, were very generous, and always generous in proportion to the contribution you were making. So it was very good.
00:47:40
Speaker
So that became a problem for me because... Just one question on this compensation strategy of Reliance. You wouldn't see this strategy even with, say, Google or Microsoft, right? For them to double the salary of people who perform well.
00:47:56
Speaker
What's your take on...
00:47:59
Speaker
Oh, they don't need to, because these are all big brands; that's our biggest problem, right? The talent goes to them first. You pass out of IIT and you want to go and work with Microsoft, Google, or Nvidia now, so they don't pay more. If you see today in the software industry, TCS pays the least, but I have to pay almost two times what TCS pays a developer. So a big company doesn't mean big bucks.
00:48:27
Speaker
That was only during my time. Okay, got it. Reliance was still not as known and respected a brand as it is today with Jio. So therefore, they had that kind of compensation strategy. Okay, understood.
00:48:41
Speaker
Right. And there was the incoming salary, which was a benchmark; they had to match it. And what happened is that when I was in college, I thought I would earn 20 lakh rupees. Okay.
00:48:58
Speaker
Then when I got married, I increased it to 32 lakhs. That was my lifetime goal. But I kept on increasing my goals as my compensation improved.
00:49:10
Speaker
But in Reliance, one of the things that happened is that I had secured my kids' education, that is, they could go abroad and study. I had a house, I had two cars, I was debt free.
00:49:22
Speaker
And I was 38. I didn't know what to do, so one of the things I had to do was actually set some goal. Around 40, I thought I'd get out and start something, because there was too much comfort in my career.
00:49:41
Speaker
I had spent seven years there; they wouldn't sack me so easily unless I blatantly made some mistake. But I had to reinvent myself, because otherwise my life was going nowhere. That's how we started. In Reliance, you were doing enterprise sales of connectivity? I was head of the western region; I was in Mumbai. And selling them internet connectivity, like enterprise-grade internet connectivity? Yeah, at that time we used to do enterprise mobile, and data center was one of the offerings. Okay.
00:50:21
Speaker
Okay. Yeah. So there was an element of compute in that. But my customers were the likes of SBI.
00:50:34
Speaker
For SBI, we did maybe ten thousand ATMs' connectivity, so it's like 10,000 connections. Both SBI and ICICI Bank, thousands of branches, and they kept adding branches.
00:50:48
Speaker
So it was pretty transaction oriented. And at that time, telecom was not as reliable either. See, earlier, people would steal telecom cables because they had copper in them.
00:51:05
Speaker
I don't know why they were stealing fibre, because it's plastic, but they would still steal it. So a lot of things would go down and there was a lot of firefighting.
00:51:18
Speaker
But it was routine. You do it for seven and a half years, it's the same thing; there was nothing new coming. Okay.
00:51:29
Speaker
So why did you choose to become a data center

Starting a Data Center Business

00:51:35
Speaker
operator, when you decided you wanted to move on and become an entrepreneur? Yeah, because we saw that opportunity. At that time, it was the telecom guys who were doing it. But for telecom, this business was so small that they really didn't invest in it.
00:51:55
Speaker
Because of their priorities. Unfortunately, what happened when Reliance was born, the way equipment was amortised was on a seven-year plan, because telecom equipment lasts longer.
00:52:06
Speaker
So you always thought you had seven years to recover your capex. But we launched CDMA, and then GSM took over, then 3G came. So you had a 3G licence to pay for, and then you had to adopt 3G.
00:52:19
Speaker
Then 4G happened, then 5G happened. So suddenly in telecom, the whole thought process changed: instead of a seven-year cycle, you had to plan for a three-year cycle. And the costs were going down dramatically.
00:52:33
Speaker
So the challenges were pretty different in the mobile space, and it was becoming very, very competitive. Tata Docomo did this one-paisa-per-second thing, and that was the end of making money in telecom.
00:52:48
Speaker
Everybody followed that path. It's a downward spiral, and they couldn't get it back. That's why so many companies died, right? So in this whole battle, data center, while it was lucrative, my customers were demanding more space, but I never could provide it.
00:53:05
Speaker
Like I would go and talk to Anil Ambani and say, look, we want to do this thing. Anil would tell a person called Satish Seth, and that man would say, okay, I have a building here, do you want to take it?
00:53:16
Speaker
But it's not like we were making a strategy for a business. So we thought there was an opportunity there; there were other companies like Netmagic trying to do the same thing.
00:53:28
Speaker
So we thought we should build a data center business. But I chose to do it in Bangalore, because real estate in Mumbai is very, very expensive, and for a data center you need a building and so on. So we thought we'd do it in Bangalore,
00:53:46
Speaker
even though, even at that time, Mumbai was the primary market. We started in Bangalore. That's how we started. Okay. Did you start alone or did you have co-founders?
00:54:02
Speaker
Yeah, I had at least six people who worked with me, friends. A couple of friends were chartered accountants, because even though I had a very good idea of finance, I couldn't run finance; mine was more of a business orientation.
00:54:22
Speaker
I did prepare myself: I went to Yale for a CEO course. Prior to that, I went to Kellogg, that is University of Chicago.
00:54:34
Speaker
So even though I prepared, I thought it was good to have them. There was another friend, Baskar, who was in Dell and joined with me, and there is Viral who joined from Reliance.
00:54:50
Speaker
Founding team members or co-founders? Founding team members, and a few of us have got shares and all that. It's nice before you start, but when you start, it's a massive problem because you have to pay 80 people.
00:55:05
Speaker
Right. So why did I need 80 people on day one? Because it was a data center, we needed one guy to design, one guy product managing. We were designing everything in-house.
00:55:16
Speaker
And it's a big project, it was... Was it that corporate baggage because of which you started with 80 people?
00:55:42
Speaker
There is. I'll tell you, there's one guy who resigned from Reliance, just landed up in Bangalore and told me, boss, I've come. So it's not that all these guys pressurised me for salary and all.
00:55:58
Speaker
They worked with me for many years. There are maybe 10 or 12 guys who didn't take a salary for two years. So they are the founding team at NxtGen.
00:56:12
Speaker
There are quite a few who couldn't; their intent was not to take money, but they couldn't manage it because they had personal obligations, a home loan and stuff like that.
00:56:24
Speaker
So the 80 is because it's a larger project. The first dream was to build a 350 crore data center. You must understand where I'm coming from:
00:56:38
Speaker
when you have a certain salary and you want to come out and take a risk with everything you have, the return should be much more than that. Otherwise, it was very, very comfortable in the corporate world and the salaries were very good.
00:56:53
Speaker
Real estate was not so expensive, so we were ahead of the curve. Today, salaries just catch up with inflation, but back then we were way ahead, actually.
00:57:07
Speaker
When you go through that, you need to see what your benchmark is. We were not building a startup which would generate 5 crores of turnover, because if you set that against even one person's salary, it's not going to work.
00:57:25
Speaker
If we were all there getting Reliance salaries, you figure out, okay, let's say it all amounts to 100 rupees.
00:57:37
Speaker
You needed to do a business which was like 1,000 rupees so that you could take 100 rupees back home. So you have to look at it that way: you want to build a larger business, and you need so many people to operate it, otherwise you won't get there.
00:57:52
Speaker
So it's about what you want to get out of it. I had to get something better than what corporate life paid me; that was one benchmark. Of course, you had a lot more control and all that, but genuinely, in Reliance and Dell I had an absolutely free hand and my bosses were like angels, they were too good. I was like a prince in Reliance. So you have to have some goal, some benchmark, to set yourself up.
00:58:27
Speaker
Right. No point putting it all on the line if you're not going to take a big swing. Yeah, so it was intended to be a bigger swing. Okay, understood. So how did you go about it? Did you raise funds? How did you actually get your first data center launched? It sounds like a capital-intensive project.
00:58:48
Speaker
Yeah, obviously I had some thoughts in terms of how we would make it different from what was there, so we were putting that design together. And then, of course, I had an opportunity; I was from the industry, so I had a friend in Intel who referred me to somebody from Intel Capital.
00:59:09
Speaker
I went to meet them just after lunch one day at about 2 o'clock. By 7:30 we had agreed and we got blessed.
00:59:20
Speaker
So Intel agreed to invest about $9 million; 8.8 was the number. And that was the start. You don't get the money immediately; there is a whole lot of stuff that happens after that.
00:59:36
Speaker
There was nothing built yet; it was a paper idea, everything was on paper. So we raised the 8.8, and then we thought the rest of it we would get as debt.
00:59:50
Speaker
But actually, what happened was that once we had this money, we got the land, because we got land outside the city. We never wanted to build the data center in an urban area.
01:00:02
Speaker
Nowhere in the world, except in India, do you find data centers in the midst of business districts. So we figured that the rest of it would come as debt,
01:00:15
Speaker
and I started going to banks. The first thing I realised is that most banks wanted three years of balance sheets. In India, a bank wouldn't fund you, irrespective of your collateral, if you were less than three years old. You could be loss making or whatever, it didn't matter; it just didn't work.
01:00:36
Speaker
So luckily what happened... Was there venture debt at that time? No, that concept wasn't there. But we had started the project; we had got the land and started. And you purchased the land outright instead of leasing it?
01:00:55
Speaker
We got it outright. At that time, the Karnataka government had a single-window scheme: you go and apply saying I want to build a data center, these are the specs.
01:01:06
Speaker
There was an industries minister called Nirani; he had a meeting and he approved it. It's a single window, but you go through the window and you will find 30 doors.
01:01:19
Speaker
But yeah, we also got land at a very low cost. We got 10 acres of land. So the idea was you offer the land as collateral, then you build the building, then you get the equipment. It didn't work in terms of bank funding, but we got some temporary funding from a cooperative bank.
01:01:44
Speaker
There was a set of old people who came, looked at me and said, I'll bless you, no problem. And then we got some 30, 40 crores and started. Subsequently, I had to raise one more round before we completed the project, with IFC, which is the World Bank arm.
01:02:01
Speaker
And Intel participated again. And there was a Spanish firm called Axon Partners; they funded us, and then we completed the project. But what happened with us was that we had to feed ourselves. I wasn't prepared; by that time, your kids' schooling becomes the biggest expense you have.
01:02:22
Speaker
And so I was also looking for some salary. So after the first year, we thought we should start earning some money, not just build the project and deal with debt and complications.
01:02:38
Speaker
So we started doing cloud. I went to Reliance Entertainment; a lot of the customers were my friends. So I went to the CIO and told him, look, this is what I'm doing, what can I do for you? He said, why don't you build a cloud for me?
01:02:53
Speaker
So we built a cloud, and that's how we started our cloud journey. Okay. At this stage, I want to take a detour. Can you define what a data center is and what cloud is? Would you say that...
01:03:07
Speaker
You originally were building a data center business, but then you also built a cloud business. What are the differences between these things?

Cloud vs Data Center Services

01:03:15
Speaker
So we were discussing data center versus cloud.
01:03:20
Speaker
See, the most important thing in electronics is you manage heat and you provide stable power. If you can do these two things, the electronics lasts a long time.
01:03:33
Speaker
So what happens in a data center is you give precision cooling and precision power to the servers. What you build is basically a building which is at least five times stronger than a commercial building, because you are hosting a lot of steel and aluminium in terms of equipment.
01:03:56
Speaker
And then the second thing you do is put in certain layers of security. You put a minimum of three, but normally people put five layers of physical security. Physical security?
01:04:10
Speaker
Yeah, physical security. Then you have the electrical system. If you take our data center, we get power at 11,000 volts. We step it down to 415 volts, which is our three-phase power.
01:04:24
Speaker
So there's a transformer, and there is redundancy there: if one fails, what happens, et cetera. And then you step down and distribute the power. So there is a lot of panelling, big panels; you will see large cables running.
01:04:39
Speaker
One system goes to the cooling. The other system goes to the compute, where you go through UPS systems. Again, with UPS you have one plus one: one fails, the other takes over, and you have multiples of them.
01:04:52
Speaker
And then there is precision cooling. It's like a very good air conditioner, but very high capacity, and it ensures the temperature is maintained. We maintain 22 degrees, plus or minus one degree.
01:05:06
Speaker
That's what we do, plus a lot of other design aspects. So a data center is about a good environment to host compute. The basic expectation is that the customer buys the compute, which is the server, brings it to the data center and hosts it there.
01:05:23
Speaker
I provide a good environment and I also provide some connectivity to reach the server, because you can't have free flow of people; people actually operate it from outside. That's the data center business. It's a very real-estate kind of business, but it's got certain precision characteristics to it.
01:05:42
Speaker
In a cloud, what happens is you're not only hosting a server, you own the server asset as well. And because the server is much larger than what a normal enterprise would need, you split it into multiple pieces and offer it to multiple customers
01:06:01
Speaker
in different use cases. So this is the cloud business, which is one layer above; the data center is the foundation of this whole thing. It's the environment, and in the environment you put compute and it becomes cloud. So in cloud you are creating virtual machines on that server, and each virtual machine has a customer mapped to it. Many customers, the larger ones, have got 3,000, 4,000 virtual machines.
01:06:32
Speaker
No customer has just one. The lowest would be maybe three and the largest would be a few thousand. How do you define a virtual machine?
01:06:45
Speaker
So it's the same specification as a typical server: you have certain cores, a certain amount of memory and a certain amount of storage. But it is all virtual in the sense that you are not physically allocating it to that person.
01:06:59
Speaker
Storage is more or less allocated physically, but everything else is virtually allocated. It's your right to use; if you're not using it, I have the right to give it to somebody else.
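To make that right-to-use idea concrete, here is a minimal sketch of carving one physical host into VMs, with CPU oversubscribed while storage stays physically reserved. The host size, overcommit ratio and VM shapes are illustrative assumptions, not NxtGen's actual scheduler or figures.

```python
# Illustrative sketch only: a toy model of carving one physical server into VMs.
# Host size, overcommit ratio and VM shapes are assumptions, not NxtGen figures.

class Host:
    def __init__(self, cores, ram_gb, storage_tb, cpu_overcommit=4.0):
        self.cores = cores
        self.ram_gb = ram_gb
        self.storage_tb = storage_tb
        self.cpu_overcommit = cpu_overcommit   # vCPUs sold per physical core
        self.vms = []

    def can_place(self, vcpus, ram_gb, storage_tb):
        used_vcpu = sum(v["vcpus"] for v in self.vms)
        used_ram = sum(v["ram_gb"] for v in self.vms)
        used_disk = sum(v["storage_tb"] for v in self.vms)
        return (used_vcpu + vcpus <= self.cores * self.cpu_overcommit  # CPU is a right to use
                and used_ram + ram_gb <= self.ram_gb
                and used_disk + storage_tb <= self.storage_tb)         # storage is physically reserved

    def place(self, customer, vcpus, ram_gb, storage_tb):
        if not self.can_place(vcpus, ram_gb, storage_tb):
            raise RuntimeError("host full")
        self.vms.append({"customer": customer, "vcpus": vcpus,
                         "ram_gb": ram_gb, "storage_tb": storage_tb})

host = Host(cores=64, ram_gb=1024, storage_tb=50)
for i in range(20):
    host.place(f"customer-{i}", vcpus=8, ram_gb=32, storage_tb=1)
print(len(host.vms), "VMs on one physical server")
```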
01:07:11
Speaker
Okay, got it. Understood the difference between cloud and data center. So when Reliance Entertainment asked you to set up their cloud, where did you host the servers?
01:07:24
Speaker
So in my office, we created a small data center. This was kind of the first prototype. In our design, we had a thought process of designing differently in two aspects. Most data centers are designed for peak heat outside,
01:07:45
Speaker
that is, the maximum temperature and humidity that the city has, and then it's designed to handle that peak. Our approach was to design not for the peak but for 40 to 80% of usage, in the sense of asking when it is actually hot. If you take Bangalore, it is hot for maybe four hours in a year.
01:08:12
Speaker
So you don't design for those four hours. You design for the temperature that holds 80% of the time. That's how our design thought process was different. And we had something called cold aisle containment.
01:08:26
Speaker
We built a prototype of that in our office, and we hosted the first cloud in the prototype. What is this cold aisle containment? So the way air conditioning works is that when you have more heat, you can take out more heat.
01:08:43
Speaker
Servers typically take cold air in from the front and push hot air out at the back; there are fans sucking air in and pushing it out.
01:08:54
Speaker
If hot air and cold air mix, you don't get so much difference in temperature, so the heat extraction is that much more difficult. So what you do is contain the cold air, just cold air in front of the server, and take the hot air out without letting it mix with the cold air.
01:09:17
Speaker
So the return temperature stays higher and the air conditioning systems work much more efficiently. That's also why your ACs don't work as well during winter.
01:09:28
Speaker
Okay, got it. Understood. You say ACs don't work well during winter in a household or in a data center? Any AC. Yeah. Household also.
01:09:41
Speaker
Okay. The more the difference, the faster they will cool. OK.
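The efficiency argument here is just the sensible-heat relation: the heat an air-conditioning coil can remove scales with the temperature difference between the air returning to it and the air it supplies. A rough sketch, where the airflow and temperatures are illustrative assumptions rather than NxtGen's figures:

```python
# Sensible heat removed by an air stream: Q = m_dot * c_p * deltaT
# Illustrative numbers only; the point is that mixing hot and cold air shrinks deltaT.

def cooling_kw(airflow_kg_per_s, return_c, supply_c, cp_kj_per_kg_k=1.006):
    return airflow_kg_per_s * cp_kj_per_kg_k * (return_c - supply_c)

airflow = 10.0          # kg/s of air moved by the cooling unit (assumed)
supply = 22.0           # degrees C supplied to the cold aisle

contained = cooling_kw(airflow, return_c=37.0, supply_c=supply)   # hot air kept separate
mixed     = cooling_kw(airflow, return_c=28.0, supply_c=supply)   # hot air diluted by cold air

print(f"with containment: {contained:.0f} kW removed")
print(f"with mixing:      {mixed:.0f} kW removed")
```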
01:09:49
Speaker
Got it. Understood. So you built this prototype for a cloud. Now, for a data center, you told me the inputs: you need electricity, you need power backup, and you need real estate. What are the inputs you need for cloud?
01:10:06
Speaker
Just servers? No, it's much more complex. First you need a virtualization layer. What happens is you have certain resources.
01:10:21
Speaker
Those resources, how do we split them? That's what the virtualization layer is. It is typically an operating system plus certain functionality on top of it. And then you have large networks. And is this off the shelf, can you buy it from a software company?
01:10:36
Speaker
Yeah, there are commercial versions; there are companies like VMware and Nutanix who do it. We use a lot of open source: we do OpenStack, OpenShift.
01:10:48
Speaker
Because we have that much more capability, we don't depend on a commercial version. Typically, if an enterprise wants to build a private cloud, they will take a commercial one.
01:10:59
Speaker
We are a service provider, so we would prefer to work with open source where you have more control. You can also differentiate. right You can create certain things which are different so that your product sells.
01:11:13
Speaker
OK. So you were saying what's needed as inputs: you need a virtualization layer. What else? You need networks. You can't confine the application to sitting within the data center; it needs to communicate with the world. So networks, whether internet or private networks, are critical.
01:11:34
Speaker
But these days, I think the biggest thing is having a great deal of security capability, because in very simple terms, if it is unsecured, any website will be hacked in less than a week.
01:11:50
Speaker
There are systems probing all the time, looking for insecure items, and then they just attack them. So security layering, and the talent and technology around it, is a critical element.
01:12:05
Speaker
And then you work on aspects like reducing latency, improving performance and so on. So it's a much more challenging job than data centers. Data centers follow a globally prescribed design; you don't play too much with it. The design fundamental is concurrent maintainability: even for maintenance, if you want to take out an element, you can't bring down production.
01:12:36
Speaker
A data center, once it's switched on, is never switched off. OK. OK.
01:12:46
Speaker
OK. I understood the virtualization layer, like a VMware kind of solution or something custom-built on open source. What is the security tool that you use? I believe there's something like Cloudflare which is used. Is that what you're talking about?
01:13:07
Speaker
Oh, no, I have many more layers. Cloudflare is typically for when you want to secure your internet-facing side. The way they came up as a company is as a caching company. Let's take the case of Netflix.
01:13:23
Speaker
Netflix doesn't serve every user from a Netflix data center. The movies and series are actually cached closer to the user. When you pause, you can play from there, but it's not being served from Netflix itself.
01:13:40
Speaker
That's what Cloudflare does. They now also offer internet protection, but that alone is not robust enough. So number one, we have to secure the perimeter: you guard the entry points like the internet, private networks, and so on.
01:13:57
Speaker
Typically, these are network firewalls that you deploy. um these Earlier, they were all rule-based, but nowadays you can't just manage with rules. So you have advanced threat protection, where you are constantly seeing what's happening globally.
01:14:13
Speaker
So everybody shares that input. And then you kind of know that, okay, this is one way people are exploiting. And then you quickly deploy that here. So this is the first layer. The second layer is at the application layer.
01:14:25
Speaker
They are called web application firewalls. Then you have another layer called XDR. Then, antivirus doesn't work any longer, because signature-based detection is something nobody relies on now; you don't get virus infections nowadays.
01:14:42
Speaker
You actually get hacked. Then there is denial of service. A lot of what we face today is denial of service. Basically, people flood you with a lot of requests; it could be bandwidth or it could be just sheer sessions.
01:14:59
Speaker
um And then they won't let you work. I mean, yeah, these are things that we have to handle. How do you fight a denial of service attack?
01:15:12
Speaker
So typically you come to know when somebody is at it. These are no longer plain denial of service; they are distributed denial of service. What happens is there are a lot of PCs sitting idle; a lot of people don't switch them off. Actually, we should switch off every PC.
01:15:28
Speaker
They are all sitting idle, and each PC could act as a legitimate user coming to your website. So what you do is keep looking at those patterns. If it's the same guy making multiple requests
01:15:41
Speaker
and it doesn't look normal, you start ignoring him. And you do it at scale. It's not like you get attacked from one IP; you get attacked from thousands of IPs. These could all be machines sitting in offices.
01:15:58
Speaker
Yeah. Okay, okay. Like someone has hacked a lot of PCs and is commanding them. Yeah. So earlier we used to think, you know, China is attempting to hack us. It doesn't happen like that.
01:16:11
Speaker
You will have a global attack which can come from Ukraine or Brazil or someplace. But today you will see that 60 to 70% of the attacks come from within the country.
01:16:23
Speaker
But that doesn't mean that Indians are attacking Indian websites. I mean, it's basically somebody who's hacked them and then they do this. ah Okay, okay and so okay ah okay so ah and understood. now Let's go back to your journey. You started with the cloud that probably got some revenue going in before your data center went live. This was which year when the cloud order came in

Growth Opportunities During COVID

01:16:47
Speaker
from Reliance?
01:16:47
Speaker
The cloud was 2014. We started in 2012, but 2012 was about setting up the company and going through all of that. I think the scheme was called UDA or something.
01:16:58
Speaker
No, getting the land and all that, and then we went through the fundraise. So 2013 is when things started taking shape; we started building the data center, we got the land and all that.
01:17:12
Speaker
By 2014 we had to, because from 2012 to 2014 we were exhausting our savings. You can't build a data center business with your savings, not from salary.
01:17:24
Speaker
Unless your dad has made a lot of money, you can attempt it, but I didn't have that comfort. So we started making money.
01:17:35
Speaker
Then came my second customer. Since we were building a data center and our thought process was new, and Prime Focus was building a small facility for themselves,
01:17:48
Speaker
we presented our thought process in terms of how we were doing it differently, and they liked it. Also, in Mumbai they were in a building which had only nine feet of height, so you had to design it totally differently. Prime Focus was one case where our design was actually deployed. So we acquired some known customers; we knew a lot of enterprises.
01:18:14
Speaker
What is Prime Focus's line of business? What do they do? Oh, they do all the VFX post-production for movies. Okay, got it. So they need a data center. Okay.
01:18:27
Speaker
And it's ridiculously secure, because movies are the most sought-after; everybody wants pirated content, and post-production is the best place to get it.
01:18:39
Speaker
So this is like a build, operate, transfer thing that you did for Prime Focus. Okay. So yeah, these were the first two customers, and now we have a thousand plus.
01:18:51
Speaker
When did your own data center go live? 2015. It took a couple of years to build. What happened was we looked for land where Bangalore receives power; we didn't want to get into the urban infrastructure.
01:19:11
Speaker
Bangalore receives power from five locations on the national grid, but only at two locations was spare capacity available. One was Kolar, the other was Bidadi.
01:19:22
Speaker
So we went and asked the government to give us industrial land around this place. Bidadi was a very mature industrial area; Toyota had been operating there for 20 years.
01:19:34
Speaker
And with Toyota, typically these automotive plants come with a lot of ancillary units, so it was a very mature place. There was a small hill available. They said, you can take this hill, and they gave it to us dirt cheap.
01:19:46
Speaker
It was a 42-metre hill. We chopped it off at 21 metres and built the facility on top. Okay. So what did your revenue trajectory look like once the data center went live? What kind of annual revenues were you doing? How did the business evolve in cloud versus data center; which side was growing faster?
01:20:10
Speaker
The data center side never caught up, because cloud was what all the enterprises were asking for; people didn't want to do capex. Except in banking, where the cost of capital is very low and they prefer to do their own capex.
01:20:26
Speaker
In every other industry the cost of capital was 16, 18%, so they always preferred to take it as a service. So cloud took off big time. I also had another opportunity: there was a company called Dimension Data.
01:20:41
Speaker
Dimension Data had actually built and run a data center business for BSNL. I acquired that business instead of building. So I acquired six more data centers across the country from Dimension Data.
01:20:58
Speaker
Okay. Did you raise more funds to fund the acquisition? No, it's a middle-class mentality: we want to spend the money we earn.
01:21:09
Speaker
Now I'm raising, and I'm raising a very large amount of money. But you see, we generate 100, 150 crores, and we spend that on capex or on people.
01:21:22
Speaker
So my investors are happy. We are growing, it's a growing market, and every year we can put back 100 crores plus, so it's quite good. Unless we were looking for a quantum jump, there was no point raising money.
01:21:38
Speaker
So we cleaned up the whole thing. We are debt free; that original debt I took, I paid off. So I'm now in the kind of position I was in at Reliance. That's why I'm doing AI. Time for the next big swing.
01:21:58
Speaker
Just trying to consolidate. You should always have your foundation rock solid. Maybe it takes away a little bit of time, but it gives you a much better platform to take off from.
01:22:15
Speaker
Okay. Just to help me understand the revenues: by 2018, 2019, what kind of annual revenues were you doing? We are privately held; normally we don't disclose revenues, but you must understand. I think in the first year itself we did more than 10 million.
01:22:38
Speaker
But the key is that we were EBITDA positive. The first year after the data center went live, like 2015, you're calling the first year? No, 2014. We were EBITDA positive then, and from then onwards we stayed EBITDA positive.
01:22:55
Speaker
We never took out capital for paying salaries. We always paid them from the earnings of the business, and that helped me quite a lot because I didn't have to keep looking back to see whether everything was normal. So we had a good opportunity.
01:23:16
Speaker
It is true that it gets questioned. During the same time, I've seen the likes of the Flipkarts and all that happening, and even now a lot of companies lose money.
01:23:28
Speaker
But I keep telling myself the purpose of doing business is making money. We want to make money and deliver value to our customers, and not go down that trajectory of losing money to gain market share. When did you cross 100 million revenue?
01:23:47
Speaker
By which year? About three years ago. So, you know, COVID was actually pretty unique for us. On the private sector side we had tremendous stress. My biggest customer was BookMyShow, and they shut down.
01:24:08
Speaker
There were no movies, no events, nothing. Private customers, everybody, including the Mahindras, Toyota Kirloskar, which is another big customer, and Godrej, they were all shut down.
01:24:20
Speaker
Factories, everything was shut. But at the same time, I had a great opportunity to serve the Ministry of Health. Government business took off, and I don't know whether it was by design or not, but even our income tax refunds came through. During COVID our business grew; we actually gave increments to our teams, and we also housed most of the team in a five-star hotel for about five months, because
01:24:52
Speaker
if I had allowed them to go home or to their PG and come back, they would bring back COVID. So we hired the Marriott in Whitefield, and for five months most of the team stayed there.
01:25:09
Speaker
So it was a contained setup, because we had to work 24 by 7. We did quite a lot of good things during COVID; it was a time of growth. So to answer your question, that was kind of a pivoting point, because it showed that we are a resilient business. You know, I was very surprised.
01:25:30
Speaker
Very large companies started deferring payments to us from the second or third month onwards, whereas we paid all the salaries. So I think our character stood out during COVID.
01:25:44
Speaker
And from COVID onwards, of course, the growth trajectory has been different. What are you at today? Like 300 million plus, or like 200?
01:25:55
Speaker
No, I can't tell you the number, but basically we don't have 300 million revenue now. Our EBITDA works out to around 44%, which is a very healthy EBITDA.
01:26:16
Speaker
But we also have a lot of depreciation, so last year our profit was small. This year we should pay a lot more tax than we paid last year.
01:26:27
Speaker
Because, more or less, this business has got a lot of fixed cost. It's not people dependent; it depends on a lot of automation and technology. So you get to a certain scale.
01:26:41
Speaker
You're not really dependent on people to scale this business; you just need to leverage technology. Amazing. Okay. I want to understand: you said that the data center business did not take off as much as the cloud business did.
01:27:00
Speaker
So essentially the data center business is like your internal vendor, and the cloud business is the primary customer for that data center. Yeah. So we operate five centers now; we actually shut down Jaipur and Ludhiana.
01:27:15
Speaker
So it's the foundation of our cloud. But it's also the first failure level: if a DC fails, the whole cloud goes and a thousand customers go down. We can't afford that. So it's a critical component of the business, but it's part of the offering, not the offering.
01:27:35
Speaker
I mean, the data center is like the base layer of your internal supply chain. That's the way to look at it. It's not necessarily something you monetize by directly selling to end customers.
01:27:48
Speaker
What is being monetized more is the cloud business. So I'll tell you: when I do data center business, I earn on a per-rack basis; I may get about 6 lakh rupees of revenue per annum.
01:27:59
Speaker
And you need to invest maybe 18, 19 lakh rupees to build that rack and the infrastructure behind it. So, including interest, you need about four years to recover it.
01:28:14
Speaker
But that infrastructure can last for quite a long time. Whereas if you go to cloud, each rack can deliver 2 crores of revenue. Wow. OK.
01:28:25
Speaker
Of course, the related investment is higher, but the margins are also higher. So it's a better business to do; it doesn't demand so much capital, and because we make money we can keep reinvesting it.
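To put rough numbers on the comparison, here is a back-of-the-envelope payback calculation using only the figures mentioned in the conversation; the cloud-side capex and margin are labelled assumptions, not NxtGen numbers:

```python
# Back-of-the-envelope rack economics from the figures quoted in the conversation.
# Cloud capex per rack and margin are illustrative assumptions, not NxtGen numbers.

LAKH, CRORE = 1e5, 1e7

# Colocation (data center) rack: quoted figures
dc_capex = 18.5 * LAKH          # ~18-19 lakh to build the rack and infrastructure
dc_revenue = 6 * LAKH           # ~6 lakh revenue per rack per year
print("DC payback (ignoring interest): %.1f years" % (dc_capex / dc_revenue))

# Cloud rack: quoted revenue, assumed capex for servers/storage/software
cloud_revenue = 2 * CRORE       # ~2 crore revenue per rack per year (quoted)
cloud_capex = 1.5 * CRORE       # assumption: servers etc. cost far more than a bare rack
cloud_margin = 0.44             # using the 44% EBITDA mentioned elsewhere as a proxy
print("Cloud payback on EBITDA: %.1f years" % (cloud_capex / (cloud_revenue * cloud_margin)))
print("Revenue per rack ratio: %.0fx" % (cloud_revenue / dc_revenue))
```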
01:28:40
Speaker
Amazing.

Cloud Business Growth and Challenges

01:28:41
Speaker
oh Is this generally the model for other cloud providers also, like say, Azure or AWS? Do they own their own data centers as well?
01:28:55
Speaker
Is this generally how the industry works? They do, but in countries like India they often don't. Though they've started: Microsoft, for instance, has started owning pieces, but they structure it differently.
01:29:08
Speaker
But I will give you the same perspective. If you take AWS within Amazon, I think it is 17% of their revenue but 69% of their profit.
01:29:19
Speaker
Yes, yes. AWS is their profit-making machine. So it's a very good business to do.
01:29:28
Speaker
I'm assuming this is a very low-churn business; once you acquire a customer, they stay with you. Yeah, very rarely do they leave. In fact, last year all the churn was voluntary from our side, because somebody wouldn't pay or they'd gone down as a business, and stuff like that.
01:29:47
Speaker
But people don't move. In fact, the beauty of this business is that existing customers give you 24% growth, because their need for data is only going to increase.
01:30:01
Speaker
Over the last five years, it has been a CAGR of 24% on my existing customer base. So if you serve them well, you actually don't need to acquire new customers. If we grow maybe 35, sometimes 40%, 24% of that comes from the existing customer base.
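For a sense of what a 24% compound annual growth rate amounts to, the arithmetic is simple:

```python
# What a 24% CAGR on the existing customer base compounds to over five years.
cagr = 0.24
years = 5
growth = (1 + cagr) ** years
print(f"An account billing 100 five years ago bills about {100 * growth:.0f} today")
```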
01:30:23
Speaker
So the new-acquisition part is smaller, because it's always difficult to acquire a new customer. Okay. So you're planning a big fundraise. Is that for more new customer acquisition? What is it for, and how much is the quantum you're planning to raise?
01:30:42
Speaker
We are raising about 400 million now, primarily for AI. The reason being that this cloud is not a stationary business; that business is growing.
01:30:55
Speaker
I can't take money out of that and start AI with it; then this business becomes weak. It is on a nice trajectory and I don't want to stress it. So we are raising a new pool of money for AI.
01:31:12
Speaker
So, AI is the option. It's basically a very different thought process from the way people are building businesses now. Perhaps it's my age or upbringing or whatever, but this is our approach.
01:31:28
Speaker
AI is essentially a moment where the industry is going through a change, and that's an opportunity to acquire customers from, say, an AWS, because cloud customers don't churn easily. So either you go after startups and stay with them as they grow, or you look for these industry-changing moments when you have an opportunity to acquire customers. So that's the plan here: you want to acquire the next set of customers through the AI opportunity that is coming in.

AI's Impact on Cloud Services

01:32:02
Speaker
Yeah, so AI is also changing the dynamics. So the three guys who dominate cloud, right, globally, AWS, Azure, which is Microsoft and Google. AWS has lost the race.
01:32:13
Speaker
I mean, they're not there. Really? So it's Microsoft? How did that happen? AWS was the pioneer, right? It was the company that... No, not in AI.
01:32:25
Speaker
In AI it's Google, not AWS. Okay. No, AWS is under severe stress; they're missing this entire bus. Google is doing very well, because they have the tools; they started this whole Tensor thought process. But Google always struggles to succeed in a consumer business.
01:32:45
Speaker
So Google won't ever... I mean, they have never shown that they can; their Pixel phone, nobody buys. The only business they succeeded in was search; beyond that, they never succeeded in any venture.
01:32:59
Speaker
So Google is ahead of AWS, but Microsoft is in a rock-solid place because of MS Office, because Office is the front end to most businesses.
01:33:14
Speaker
You know, Excel today is as good as a BI tool, and if you add AI to Microsoft Office, they are at a significant advantage. And the consumer play is gone because Meta will take the lead.
01:33:31
Speaker
We use a lot of Meta's open source pieces. OpenAI everybody knows, I mean ChatGPT. So we are not going to compete in that space, but we build enterprise use cases for our customers and we run them for the long term. We can do a bot for your customers, we can analyze your corporate videos, things like that. We do various use cases.
01:33:58
Speaker
Why are we doing use cases? Because once you build a use case and put it in place, it runs for a long time and it consumes my infrastructure, which is my core business.
01:34:09
Speaker
Okay, so ah the data center business is like selling real estate in a way. It is a very infrastructure play. Cloud business is one level above that where you are, you abstracted the real estate and the upfront capital investment and made it available on tap through subscription.
01:34:32
Speaker
But it is still somewhat of an infrastructure, but more of a computer infrastructure. Now, in the AI business, you are going more towards applications. It is no longer infrastructure which you are selling.
01:34:44
Speaker
Yeah, because if I wait for somebody else to create these applications, I will lose the opportunity to host them as well. And the beauty about AI is that AI creates AI.
01:35:01
Speaker
So I'll give you an example. There is a popular credit card payments aggregator. They wanted us to build a chatbot for their customers.
01:35:13
Speaker
But they didn't want to give us their customers' credit card bills, because they can't; they are legally bound and they've made commitments to end customers. So what did we do? We asked AI to create credit card bills.
01:35:27
Speaker
So we created 2,000 bills. AI created them with the full format, transactions, everything. We gave that to the model, which is the chatbot, and said, these are the credit card bills.
01:35:39
Speaker
Understand the queries that customers will ask, and then respond. So in a week, without data, we were able to create a use case. That's the beauty of AI.
01:35:53
Speaker
So there is something called SDG, Synthetic Data Generation. That's what is catching on. You really don't need a lot of data to create something; you need a little bit of data so that it understands what it is you want it to create.
01:36:09
Speaker
So for example, I gave my own credit card bill to the model saying that I want you to create 2,000 of these with different names and different transactions. That's the beauty of AI. AI is accelerating so many use cases, and it's far easier to build applications these days. You don't need to code.
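A minimal sketch of that synthetic-data step, assuming a locally hosted open model served through the Hugging Face transformers library; the model name, prompt and field layout are illustrative, not what was actually used:

```python
# Illustrative synthetic-data sketch: ask a locally hosted open model to invent
# credit-card statements so no real customer data is ever shared.
# Model name, prompt and output handling are assumptions for illustration.

from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

TEMPLATE = (
    "Generate one fictional credit card statement as JSON with fields: "
    "card_holder, statement_month, total_due, min_due, and 5 transactions "
    "(each with date, merchant, amount). Use a made-up Indian name. Return only JSON."
)

synthetic_bills = []
for _ in range(2000):                      # 2,000 fake bills, as in the anecdote
    out = generator(TEMPLATE, max_new_tokens=300, do_sample=True, temperature=0.9)
    synthetic_bills.append(out[0]["generated_text"])

# These synthetic bills then become the corpus the chatbot is built and tested
# against, instead of real customer statements.
```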
01:36:31
Speaker
I have a BBA graduate who knows nothing about Python but delivers Python code. And this is all the young generation; these guys work really hard and they are excited.
01:36:46
Speaker
So it's very easy to build these things. So we've already built 38 use cases for enterprises. So you build it and you host it and you run it for the long term. That's how we want to go. Amazing. Amazing.
01:36:58
Speaker
Okay. Let's take another detour. I want to understand the AI opportunity from a zoomed-out perspective. One part of the AI opportunity is what we're seeing with Nvidia, it becoming a trillion-dollar market capitalization company and so on.

AI Applications and GPU Importance

01:37:18
Speaker
And then on the other extreme, you have AI wrapper SaaS products: SaaS products which have an AI behind them with a wrapper on top.
01:37:30
Speaker
And in between, there would be other layers also. Could you take me through it? Maybe let's start from the GPU opportunity. What is a GPU? How is it different from a CPU? Why is Nvidia a trillion-dollar company?
01:37:45
Speaker
If you look at what a game is, a game is typically mathematical calculations behind the scenes. While you see the graphics, everything is mathematics at the back end.
01:37:57
Speaker
So, as I said, regular processors are not able to do this mathematics very well, so they created a dedicated mathematical processor. That is what a GPU is. Though it's called graphical, it is actually doing mathematics, because placing a pixel, everything, is mathematically calculated.
01:38:19
Speaker
NVIDIA was doing a lot of these gaming cards where you plug in the card, and it was a very small company. AI has been there for a long time, but people were doing things like machine learning and deep learning. Machine learning is you learn patterns and then you repeat the patterns.
01:38:40
Speaker
Deep learning is where you go deeper, understand what really happened, and create patterns. So the AI thought process was already happening.
01:38:51
Speaker
One of the beautiful things that happened is people used GPUs to create what they now call generative AI, which is AI that generates content.
01:39:02
Speaker
So far it was not generating content, it was computing; it was just answering a formula. Gen AI is where people started generating something with it, like text or an image or something like that.
01:39:18
Speaker
When that happened, and it happened on NVIDIA, NVIDIA saw the opportunity and started building better and better hardware. They were at the right place at the right time, and then they took off.
01:39:34
Speaker
There are other guys who are doing graphics cards like AMD. They've got a range called Radeon and they are also doing this. They are second in line today. Intel lost the race, i mean, on both CPU and GPU.
01:39:47
Speaker
But what is happening on the GPU front? There are training workloads today, where everybody is looking at building models. There you need these big GPUs and a large set of data, and then you process it and make it learn.
01:40:05
Speaker
And so there are people buying these large GPUs. Today, 80% of the large GPUs that are made are consumed by the top five or six guys: OpenAI, Meta, all these guys, Twitter.
01:40:22
Speaker
All these guys are buying off most of the capacity. Then the rest of it comes to people like us. And, um but what is happening is once these models are done, you don't need so much compute capacity.
01:40:34
Speaker
So there are a lot of new guys coming up who are building different ways of actually querying these models. So that's, they call it inference. So the first set is training.
01:40:45
Speaker
The second thing is you infer from the knowledge the model has acquired through training. So you're inferring, and inference doesn't require so much compute.
01:40:56
Speaker
So there could be you know other people who will come in. So I think for the next three years, there is no question of NVIDIA losing ground. And they also realize that and they're doing a whole lot of work on the inferencing front.
01:41:11
Speaker
They have something called Dynamo. They also help people build use cases; there are tools being built. NVIDIA is way ahead and they have at least three years of runway. Four or five years hence we'll have multiple ways of doing things, but not in the next three years.
01:41:29
Speaker
On the software side, there are these models. I mean, so there are closed models, which is ChatGPT. We don't know what the hell happens behind it, but ChatGPT learns from the user as well. It is closed, but whatever you are asking, whatever you are getting, whatever you're giving, it's learning from that so that it improves itself.
01:41:49
Speaker
Then there are the likes of Meta. Meta processes the data it has, which you have consented to; that's what they say. They build their model, but they are also giving away the technology as open source. Those models are called Llama 3, 4; there are now four versions, which we all can use.
01:42:09
Speaker
So they are actually giving back something to the society at large with open source. There are other companies like Mistral who are building open source. That is the second set of people. um
01:42:22
Speaker
And there is a third set of people who are actually leveraging these models and building use cases by fine-tuning them, altering them, because you have access to the source. That's people like us.
01:42:37
Speaker
um That's the software side of things.
01:42:42
Speaker
Okay, understood. Does open source mean free? When you're saying Llama is an open source model, it means that it's free; you can just download it onto a server and have your own chatbot.
01:42:56
Speaker
Yeah, you can do that. That's what it is. There is a globally accepted license called Apache; there are two or three versions of it. Primarily, they say that in case you alter it and do something better, please give it back to the community. That's what they request. It's not mandated, but you have full access to the code.
01:43:18
Speaker
So why is Meta doing this? How are they going to monetize all this effort? Oh, they are monetizing, right? In your WhatsApp there is... But today, I don't think these big companies are really worried about monetizing. They are focused on the knowledge they are getting in terms of what their models can do.
01:43:38
Speaker
But just imagine what they've been able to do. You can ask a medical question, you can ask a physics question, and they will still answer. So the amount of data they got trained on is not limited to what we are posting on Facebook.
01:43:54
Speaker
They've gone way ahead. And all the models are better lawyers, better doctors, better physicists than 95% of the people.
01:44:05
Speaker
and all combined into one. So that's the lethal combination that they are generating. ah I don't know what is the long-term thought process, but you know none of us can create anything close to that because we don't have access to that data, um nor have the resources to process that.
01:44:23
Speaker
But okay, maybe Meta has multiple sources of revenue. But you said Mistral is also doing an open source LLM. Why do an open source LLM if it's given away for free?
01:44:35
Speaker
What's the business model?

Open Source Contributions in AI

01:44:37
Speaker
With any open source, it's basically smart people who want to showcase their alternative approach to things. Linux was the greatest example, right? It's one guy who built it and it powers most of the internet.
01:44:56
Speaker
So that's the beauty of it. So it's a thriving community. There's nothing that's not available on open source. So if you want to build a drug, there are models available on open source to work with. Today people are also putting voluntary data sets for building models on open source.
01:45:15
Speaker
Even our Indian government has got AIKosh, where whatever data can be shared is put up voluntarily, so that you can take it, leverage it and build models.
01:45:29
Speaker
Okay, understood. But someone like Mistral would have raised funds from investors, from VCs, to build an open source model. The VC... No, it doesn't work like that. In an organization, there are maybe 20 people; it's not very large teams that build it. Actually, it's like 20, 30 people, because large teams are very difficult to manage.
01:45:53
Speaker
Whereas when it comes to open source, there are thousands of people improving it in every single minute step. So the net result is so powerful that no single organization can actually produce it.
01:46:13
Speaker
Okay, understood. So we spoke of the business opportunities: from GPUs, which is the Nvidia opportunity, to training, which is what ChatGPT and Gemini are chasing, to inference, and inference is what you are chasing.
01:46:31
Speaker
Who are the other competitors in the inference layer? I don't really have competition in India. Inference essentially means you have a use case and you run it for a particular requirement.
01:46:49
Speaker
um For example, we built one for one of the state parliaments. We've analyzed a thousand hours of footage of parliament proceedings.
01:47:00
Speaker
So today, if you want to query what the discussion on women and child welfare was, it will go through all the thousand hours and give you snippets of who said what. You can get it as text or as video.
01:47:13
Speaker
When we trained it, the model went through the thousand hours of footage. But when you ask it a question, it doesn't go back through all of it.
01:47:25
Speaker
It will only go back and pick those snippets from the original material, but it knows where they are, who has spoken, everything. So that is inference. Training is learning that whole content and everything related to it.
01:47:40
Speaker
Inference is giving you a response. Okay. There are a lot of companies in the enterprise AI domain, companies which help enterprises adopt AI. Are they not your competitors?
01:47:56
Speaker
No. In fact, a lot of them will have to come to us. What is my advantage? Number one, I have 140 petabytes of production data, so companies need not bring data to me. I take AI to them, because they are already inside my data center.
01:48:12
Speaker
That's my first confidence-building advantage. I don't need to go fetch data; it's with me. All it requires is the customer to ask for AI. We are not asking the customer for data.
01:48:24
Speaker
We are giving the customer AI. That's the first approach. The second thing is that people are all looking at building models, so their focus is on training.
01:48:35
Speaker
They want to train, train, train. We are not really worried about training; we want to infer, infer, infer. That's where we are trying to get to. This market is not big enough today.
01:48:48
Speaker
It will be big enough in two, three years. But the foundation has to be towards inference because I want to go after enterprise inference. There are a lot of people building use cases for consumers and all. We're not going after that also.
01:49:01
Speaker
We are going only after the enterprise customer. Why do I believe in this? Because this play works globally. This is like how Y2K happened for India and the entire software industry took off.
01:49:16
Speaker
I think we can do the same thing with AI, because you no longer need those scientists with the long hair and all that, because AI does it.
01:49:28
Speaker
Every day we have better tools to do it, so smart kids can do it. It's just the understanding of the domain that you need.
01:49:41
Speaker
Like every marketing services agency, digital marketing agency; a lot of these digital agencies are now offering enterprise AI solutions where they help their clients adopt AI.
01:49:55
Speaker
So you're saying that the difference between you and them is that you are already at a much deeper infrastructure level. In simple terms, what they do is very limited: using AI is very different from building AI.
01:50:09
Speaker
They're all making API calls to ChatGPT, Gemini, Bhashini or someplace, and then building a

AI Model Development and Investments

01:50:17
Speaker
use case. Let's say somebody is doing translation.
01:50:20
Speaker
What do they do? They take English from you and convert it into Malayalam. But who's actually doing it? It's Google or Bhashini or somebody at the back end. So I'm not competing with that. That is your wrapper. This is like PhonePe.
01:50:34
Speaker
Is PhonePe doing UPI? That's bullshit. It's the cheapest application you can build, actually, because it's just a wrapper on top with some frills around it. Google Pay and PhonePe are not big application-building challenges, because the real muscle and the architecture is behind them, with UPI.
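For contrast, this is roughly all a "wrapper" translation feature amounts to: a thin call to someone else's hosted model. The client library, model name and prompt here are generic illustrative assumptions; the heavy lifting happens entirely on the provider's side.

```python
# A "wrapper" in one function: the product's translation feature is just a call
# to a hosted model's API. Model name and prompt are illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_to_malayalam(english_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Translate the user's text to Malayalam."},
            {"role": "user", "content": english_text},
        ],
    )
    # All the actual language capability lives behind this API call.
    return response.choices[0].message.content

print(translate_to_malayalam("The meeting is postponed to Monday."))
```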
01:50:53
Speaker
So these are all wrapper companies. What we are doing is very different. We are saying that we build the equivalent of the UPI itself. Well, we can't build the whole UPI; we can build a small piece which is relevant to that particular company. I won't go to Dr. Reddy's and say, I'll help you build a drug. No, I'll not do that.
01:51:15
Speaker
i and I can't do it. I can tell them that, look, there is a drug making model. How do you want to use it? I can make it useful for you and I will fine tune it for your requirement.
01:51:28
Speaker
So you don't make an API call to a ChatGPT to provide a solution to a client? No, no, I have a better model. You should use my Aurora; it's got Llama 4.0. It is beautiful.
01:51:42
Speaker
So you have used open source to create your own model instead of doing API calls. Does that increase your margins, or are there other benefits also? I have no cost in building a model.
01:51:55
Speaker
Because you're using open source, right? I just fine-tune the model for a specific use case. For example, we took some models and built a developer assistant, and some 2,000 developers use it across the country.
01:52:08
Speaker
The good thing is it doesn't learn from the developers; I don't know what they're doing. It is not connected to the internet. It doesn't hallucinate; it only answers what you want it to answer.
01:52:19
Speaker
None of my models is connected to the internet. They can't tell you today's date, for example, or today's weather, because they have no clue. They've got the knowledge from when they were trained, they sit in isolation, and they answer your queries.
01:52:35
Speaker
And whenever there's an upgrade, I upgrade the base model. So this, in a way, is directly competing with the API, right? I could either come to you or directly use an API and build the same solution. Like the example you gave of reading a credit card bill and answering customer queries, that could be done through an API: the bill could be sent to ChatGPT and ChatGPT can read it and answer. Sorry.
01:53:03
Speaker
Right, but ours will be 10 times cheaper, because the problem with OpenAI is that OpenAI has to recover all the billions it spent training the model.
01:53:16
Speaker
Somebody has to pay for that. Right. I don't need to; I only have to recover my transactional cost: power, GPUs and a little bit of manpower.
01:53:27
Speaker
Then why do you need to raise 300, 400 million dollars for this? Oh, I need GPUs. Okay, so the big cost here is the GPU cost. So you are going to build a GPU data center and a GPU cloud.
01:53:46
Speaker
Yeah. And the dynamics are very different. Today, when I'm doing my cloud, I'm working with certain parameters; let's say I was using 10 racks.
01:54:02
Speaker
When I do the same thing with AI, those 10 racks shrink into one rack. I was consuming, let's say, 10 kilowatts per rack; today I need 100 kilowatts per rack. So the dynamics of AI hosting are very, very different.
01:54:17
Speaker
We thought we were smart because we could get to 100 kilowatts per rack. Now they are demanding 300 kilowatts per rack; that is like a decent enterprise data center in one rack. And by this time next year, we will need 600 kilowatts per rack. So it is like 60 racks becoming one rack. That's the density of power and cooling required.
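The jump he is describing is easiest to see as a ratio against a conventional rack, using the per-rack kilowatt figures quoted here:

```python
# Power density per rack, using the kilowatt figures quoted in the conversation.
conventional_kw = 10
ai_generations_kw = [100, 300, 600]   # current builds, current demands, expected next year

for kw in ai_generations_kw:
    print(f"{kw} kW AI rack = {kw // conventional_kw} conventional {conventional_kw} kW racks folded into one")
```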
01:54:42
Speaker
So you need to invest in that. You need to invest in the new-age GPUs, and you can't just buy 10,000 of one model; it doesn't work like that. You buy what is the most modern today.
01:54:55
Speaker
Three months later, it is virtually obsolete. So you buy in chunks of 512, 512, 512, and you end up with better and better GPUs. This is like how we went through 2.5G, CDMA, GSM, 3G, 4G; that's what is happening, but at a much more rapid pace. By this time next year I think it will be six times faster and more expensive. I don't know how it will go.
01:55:24
Speaker
So you need a lot of money to do it. Okay, that's fascinating. So you need money, A, for GPUs; B, you also need to re-architect the power supply and the cooling designs; all of those need to be redone from scratch, along with new hardware for power and cooling. Yeah. And simple things: I can't host it on the first floor, because none of the buildings that exist in this country, including the data center buildings,
01:55:56
Speaker
can host this kind of density on the first floor. So all the multi-storey data centers you see can't get to this level; they would have to leave empty space around one rack.
01:56:09
Speaker
So you have to build it on rock-solid ground, because these racks are heavy. The density means more metal, more water, more of everything. So a full-fledged new investment goes in at the data center layer.
01:56:26
Speaker
OK. Do such GPU data centers exist in India? No, none. We have the first one deploying liquid cooling in the country. That takes us to 100 kilowatts.
01:56:42
Speaker
We are figuring out how to do 300. We have no clue yet how we will do 600, but we'll get there in good time. So you also need money for the technology; you can't do it on the first floor of your existing data center. Okay. So there is a certain amount of technology risk too, for which you need money; some of your bets may not pay off, and you need to experiment with multiple approaches. That's one of the reasons why you should go after inference, because if you go after training,
01:57:14
Speaker
for the training guys the key thing is time to market: how quickly do you get the model out? So they want best-in-class processing so that the training happens faster.
01:57:27
Speaker
So if you're chasing the training market, your previous and even current generation will be obsolete in three months; they will be demanding the better ones. But with inference,
01:57:40
Speaker
it doesn't matter what you run it on. The customer wants the use case to be served; he's not really bothered where you run it. So the hardware will not sit idle, and we can recover the money over a period of time.
01:57:54
Speaker
Okay. This GPU data center, this GPU cloud, are you doing it only for your own inference services, or are you also planning to offer GPU cloud? We are serving a lot of startups, especially through IndiaAI; there are some startups building models and some government agencies building use cases. So the first layer is obviously people consuming GPU as a service and paying on an hourly basis, and we were able to bring down the cost significantly. For example,
01:58:29
Speaker
the going rate for an H200 GPU was like 500 rupees an hour, and we brought it down to 188 rupees at retail. We were able to do that because of liquid cooling and a number of other aspects.
01:58:45
Speaker
um But so that's the first layer you sell. So there could be startups building use cases. um We are also taking some startups to our customers.
01:58:57
Speaker
For example, a very large enterprise group has a lot of retail brands. One of the startups built a very nice use case where you can change shirts, put a new shirt on a model and so on.
01:59:13
Speaker
People think they've seen this everywhere, but you see what's good about it when you take, say, Jensen, who always wears a leather jacket, take off his leather jacket and put a T-shirt on him. It's complex.
01:59:31
Speaker
You need to guess what the body underneath looks like. Those are the aspects the startup brought in. So we are hosting them and exposing them to our customers. So there will be independent software guys,
01:59:45
Speaker
and there will be our own team building things. But the whole idea is we get the inference onto our infrastructure. When you collaborate with these startups, do you also earn something from the sale?
01:59:57
Speaker
No, no. In the IndiaAI case, IndiaAI is giving a subsidy: the startup pays 60%, IndiaAI pays 40%.
02:00:10
Speaker
The subsidy could be 15% to 40%; that's IndiaAI's choice. But there are some startups, for example, we are working with a startup to build a trilingual model.
02:00:22
Speaker
Because in India, for us to get language really precise, it shouldn't be just English to Malayalam; it should be English to Malayalam plus Hindi plus English.
02:00:37
Speaker
Because in our languages we mix in many more terms, so the model needs to be able to talk in the same fashion. It's not that everyone speaks Hindi the way Narendra Modi does; you may not understand some of the terms.
02:00:52
Speaker
um We should be able to use some English terms in the middle. I mean, so we are working with a startup to do some trilingual models um where actually we are working on a model with Hindi, English and Tamil.
02:01:05
Speaker
So there we are co-investing: our investment is the GPU, their investment is the talent. So there are various models we are working with. Very interesting. What is this IndiaAI thing?
02:01:19
Speaker
The government has put up a budget of about 10,000 crores to encourage AI. It has seven pillars, including how to safeguard AI and how to share data.
02:01:32
Speaker
One of the things is to provide GPUs at a low, subsidized cost to specific projects and government entities, so that adoption increases. They will spend roughly about 6,000 crores as subsidy.
02:01:47
Speaker
That's roughly a 16,000 crore opportunity: if 6,000 crores is the 40% subsidy, then the whole market is about 15,000 to 16,000 crores. We are the lowest priced there, and there are three companies currently delivering services there.
02:02:01
Speaker
That's a massive opportunity for us. And they're not doing the subsidy at 40% everywhere, so the opportunity could be about 20,000 crores for us to work with. Wow.
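The market-size arithmetic works back from the subsidy pool: if the government's contribution is a fixed slice of each deal, the total addressable spend is the pool divided by that slice. A quick check of the figures quoted, where the 30% case is an illustrative assumption showing how a lower blended subsidy rate pushes the total toward the ~20,000 crore estimate:

```python
# Back out the total market from the IndiaAI subsidy pool quoted above.
subsidy_pool_cr = 6000          # crores earmarked as subsidy

for subsidy_share in (0.40, 0.30):          # 40% everywhere vs. an assumed lower blended rate
    total_market_cr = subsidy_pool_cr / subsidy_share
    print(f"at {subsidy_share:.0%} subsidy: total market ~ {total_market_cr:,.0f} crores")
```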
02:02:13
Speaker
Massive. Okay. What about the broader India AI opportunity? A lot of people feel India doesn't have its own LLM. Do you think that's a realistic goal to have, or should we focus more on inference?
02:02:30
Speaker
um I think LLM is a one-time thing. And um so okay actually, we should have it for the sake of having it. The reason being DeepSeek happened.
02:02:43
Speaker
The beauty about DeepSeek, which China produced, is it thinks very differently from the way ChatGPT thinks. So, DeepSeek thinks in stages.
02:02:54
Speaker
ChatGPT thinks about it as a whole. You ask it a physics question, it will refer to everything including biology and all that, then determine that certain things are not relevant, and then present you with an answer.
02:03:09
Speaker
Whereas DeepSeek will analyze your question in stages and come up with the answer. So there is political pressure to build our own model. But see, we cannot beat them. If you take Meta, Hindi is ridiculously good on Meta.
02:03:25
Speaker
It can do all the other languages too, maybe not as well as these. One of the problems is this trilingual mixing problem I mentioned, but they will get there.
02:03:37
Speaker
You can't beat a Gemini or a Meta in processing languages.

India's AI Ambitions

02:03:43
Speaker
They do 200 languages. um So India will build it for the sake of building it.
02:03:50
Speaker
But what is useful for India? For example, we are training a model which is a financial services, financial professional assistant. It is trained as a CA.
02:04:02
Speaker
So we trained it on eight CA textbooks, all the tax cases from 1969 to December of last year, the interpretations of all these laws, GST and so on. What does it give you? Precise accounting answers which are relevant to India; it doesn't give generalized accounting answers. There is a lot of benefit in doing domain-specific, India-specific models, which will be much smaller. So our model is roughly about 8 billion parameters.
02:04:34
Speaker
It is not a large language model. We are trying to make it work on a regular server. So a lot of these things will be more popular in the long run. But I think getting an LLM is purely a political, India-ego agenda.
02:04:52
Speaker
Because you can't beat these larger models. It's not possible. Even if somebody produces one, they'll produce a model. To give you a sense of the distinction, there is the 605 billion parameter DeepSeek model.
02:05:08
Speaker
And when you host it, it doesn't run on most systems. When you host it on our systems and you ask it something, the way it responds is totally different. Our best large language model will be an eighteen billion parameter model.
02:05:25
Speaker
Okay. What is the definition of a small language model versus a large language model? There is no firm definition. But typically, if you have a large language model, you need a large amount of memory on the GPU.
02:05:39
Speaker
If even that is not enough, you have to network the memory of a lot of GPUs to actually be able to load it. Like the DeepSeek 605 doesn't load on even the largest of the GPUs as a single unit.
02:05:54
Speaker
You need a minimum of 30 of them just to load it. A small language model is defined by the amount of memory it will take. In a way, you can say that 8 billion parameters and less becomes a small language model, or even much less, so that it can load in a small amount of memory on a GPU.
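A minimal sketch of why parameter count drives GPU memory; the bytes-per-parameter figure below is a typical value for 16-bit weights, not something specified in the conversation, and it ignores activations and KV cache, which add more on top.

```python
# Rough GPU memory needed just to hold a model's weights:
# memory ≈ parameters × bytes per parameter.

def weight_memory_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Approximate weight memory in GB, assuming fp16/bf16 weights by default."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for name, params_b in [("8B small model", 8), ("605B-class large model", 605)]:
    print(f"{name}: ~{weight_memory_gb(params_b):,.0f} GB of weights at 16-bit precision")

# An ~8B model fits on a single GPU (or even a well-specced server),
# while a 600B-class model must have its weights sharded across many GPUs.
```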
02:06:15
Speaker
OK. So typically a small language model would be like the CA model that you created, which is trained on a limited data set.
02:06:27
Speaker
And the 8 billion parameters, does that refer to the amount of data it is trained on? So 8 billion means it is a smaller set of parameters?
02:06:38
Speaker
The 8 billion parameters is what it will do inference on. So what do we do? We take an open source model. We retain the language capability and reasoning capability and ask it to forget all the other knowledge it has.
02:06:52
Speaker
That is, I don't want your biology, nothing. Nothing. Only your reasoning capability and language capability. Then I give it this domain-specific information. This process is called fine-tuning.
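A minimal sketch of what domain fine-tuning of an open-source base model can look like in practice, assuming the Hugging Face transformers, datasets and peft libraries, LoRA adapters as the tuning method, and a hypothetical CA/tax Q&A dataset file; none of these specifics come from the conversation, which describes the process only at a high level.

```python
# Sketch: specialize an open-source ~8B model on domain data (e.g. CA/tax Q&A)
# by training small LoRA adapters while the base weights stay frozen.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"          # illustrative open-source base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach LoRA adapters; only these small matrices are trained.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical dataset of Indian tax/accounting Q&A pairs in a "text" column.
data = load_dataset("json", data_files="ca_tax_qa.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ca-assistant-lora", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("ca-assistant-lora")      # only the small adapter is saved
```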
02:07:04
Speaker
So we strip out all the other unwanted stuff and we add what we want. So it will behave only like this. You can't ask it, you know, how does rain happen? It can't answer.
02:07:18
Speaker
It doesn't have the knowledge. The original model had it; we stripped it out, which we can do because it's open source. Okay. Okay. And typically enterprise chatbots,
02:07:32
Speaker
they're not really small language models, right? They would be using some sort of API call, but with some guardrails built in so that they only give answers taken from the enterprise's information set.
02:07:47
Speaker
So that's the game. Why this domain specificity is important is that in enterprise or professional use, the response has to be precise.
02:07:59
Speaker
You can't guess. It's not like ChatGPT. If it is 2 plus 2, it is 4. You cannot say it could be 4, 5 or 6. That's not an option there.
02:08:10
Speaker
Like you do it in medical. So whenever it becomes domain-specific, the accuracy has to be near 100%, and you cannot afford any hallucination. It cannot think, OK, I thought this is what it is. What happens in ChatGPT is, if it doesn't understand, it will assume and respond.
02:08:31
Speaker
In an enterprise case, you cannot assume.
02:08:36
Speaker
You need to curtail that.
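One common way to enforce that "answer only from the enterprise's own information" behaviour is retrieval-grounded answering with an explicit refusal path. A minimal sketch follows; the retriever and LLM client are left abstract and their method names (`search`, `complete`) are hypothetical, not taken from the conversation.

```python
# Sketch: answer only from retrieved enterprise documents, refuse otherwise,
# so the assistant does not guess or hallucinate outside its approved sources.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float   # retrieval similarity score

REFUSAL = "I can only answer from the approved knowledge base, and I could not find this there."

def answer(question: str, retriever, llm, min_score: float = 0.75) -> str:
    passages = retriever.search(question, top_k=5)          # hypothetical retriever API
    grounded = [p for p in passages if p.score >= min_score]
    if not grounded:
        return REFUSAL                                       # guardrail: no source, no answer
    context = "\n\n".join(p.text for p in grounded)
    prompt = (
        "Answer strictly using the context below. "
        f"If the context is insufficient, reply exactly: '{REFUSAL}'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm.complete(prompt)                              # hypothetical LLM client API
```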
02:08:39
Speaker
Okay. Understood. So essentially, at scale, every enterprise will need a small language model. Multiple, actually. Because you have different use cases. Like what we are doing for an electronics factory in Bangalore.
02:08:57
Speaker
We are doing a model for finding faults in PCBs. Normally a vision language model is one where you feed in pictures and you generate pictures. We're using the same thing, but here we're telling it, okay, only look at this, only these are the faults, this is what you do. So every enterprise could have five or ten models, and as we mature, we'll have more use cases.
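A minimal sketch of constraining a vision-language model to a fixed set of PCB fault classes; the fault list, the prompt, and the `vlm.generate` client API are illustrative assumptions, not details from the conversation.

```python
# Sketch: restrict a vision-language model to report only known PCB fault classes.
ALLOWED_FAULTS = ["solder_bridge", "missing_component", "tombstoning",
                  "cold_joint", "misalignment", "no_fault"]

PROMPT = (
    "You are inspecting a printed circuit board image. "
    f"Respond with exactly one label from this list: {', '.join(ALLOWED_FAULTS)}. "
    "Do not describe anything else."
)

def classify_pcb(image_bytes: bytes, vlm) -> str:
    raw = vlm.generate(prompt=PROMPT, image=image_bytes).strip().lower()  # hypothetical VLM client
    # Guardrail: anything outside the allowed label set is flagged rather than trusted.
    return raw if raw in ALLOWED_FAULTS else "needs_human_review"
```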
02:09:28
Speaker
And this really is the India opportunity: to build small language models for the world. Globally, it is a hundred times bigger opportunity than training.
02:09:41
Speaker
Because for consumers, you will always build one model which gets smarter and smarter. But in enterprise, you will build one for manufacturing, one for accounting, one for HR; it will go on and on.
02:09:56
Speaker
Okay, fascinating. So let me end now; I have taken a lot more time than we had on your calendar. But let me just ask you, do you want to leave our audience with any advice? People who are aspiring to be builders of the future, any advice you'd like to leave them with?
02:10:18
Speaker
Yeah, I think the beautiful thing is that in my lifetime, I've come from a time when we had to wait five years to get a telephone line.
02:10:32
Speaker
to an excess of communication, where you can actually make a video call on the move. If you extrapolate this kind of change over the next 20 years, you can imagine what could happen.
02:10:48
Speaker
And that's what AI is going to fuel. So I think everybody should go and do something with AI. The least we should do is use it. The best we should do is leverage it to create something new.
02:11:02
Speaker
And we cannot miss this bus. There is no option. Thank you so much for your time, Raj. Thank you, Akshay.