
#89 Kimmo Koskinen aka viesti

defn
Join us for another fun episode with Kimmo, a Clojure hacker from the woods of Finland. Also, the first episode without Vijay .. finally :P https://github.com/viesti
Transcript

Co-host Change and Voice Synthesis Joke

00:00:16
Speaker
Holy shit. Here we go. Yeah, we've got to swear at the start. Yeah. Slightly different setup this time. Slightly different setup. We've got Vijay out on urgent business. So swooping in to replace him, well, or at least fill the co-host gap, is Wouter. Hello, Wouter.
00:00:43
Speaker
Hey, so we ran the idea of basically transcribing whatever I say and then running it through a voice synth with Vijay's voice. By this point, which episode number is this? We're running up to 90, 88, 89, I don't know.
00:01:04
Speaker
Somewhere around that number, yeah. Surely that's enough audio to train an algorithm to reproduce Vijay's voice, no? I think so, yeah. Might actually just be fun to do.

Introducing Kimmo and Eurovision Insight

00:01:19
Speaker
Actually, let's just see if we can replace him. But the good thing is that it's not just me and Wouter, we have got a guest as well, which is always a treat and always good to have on defn.
00:01:34
Speaker
So hello, Kimmo. Hello, Kimmo. I say Kimmo, but I'm thinking it might be pronounced some other way, it might be a particular thing that I don't get. Yeah, you actually got it pretty well there. Kimmo. Kimmo is the Finnish way. Yeah. Yeah. But you've also got a Koskinen. Is it Kimmo Koskinen? Yeah, Koskinen.
00:02:02
Speaker
Koskinen. We also want to apologize for not getting the Eurovision victory you so deserved. I'm pretty sure it was a setup, man. We were actually watching the Eurovision with my wife. Well, not that late.
00:02:29
Speaker
Well, the Finnish performance and so on, here on the same terrace that I have here. I don't know. Yeah, but yeah, I guess I have to say that the one who won, they had a good thing going on there, but.
00:03:00
Speaker
But it was Käärijä with a good try. The jury wasn't so easy to please, I guess. I was

Living in Finland and Remote Work

00:03:14
Speaker
talking with my brother about it. And essentially Finland got so many votes from the popular vote that every country must have given at least 10 points to Finland.
00:03:29
Speaker
So in order to get like 380, whatever. Finland almost scored the maximum points you could get with the popular vote. So yeah, it feels a bit set up, you know, like 50 years after ABBA it goes back to Sweden, like,
00:03:49
Speaker
But whatever. Anyway, I truly enjoyed the act. It was everything I wanted out of a Eurovision act, you know, way over the top, totally aware that it's way over the top. It was also kind of nice because you're very friendly with Sweden. So that's all good, isn't it?
00:04:10
Speaker
Yeah, yeah. An interesting awkward friendship going on there. But yeah, so yeah, I just hope that the guy here who was performing doesn't feel too bad about it, because he was, he was really
00:04:37
Speaker
Uh, he got people to move and got them excited. And he was also saying that, okay, it didn't get first place, but at least we tried. So, maybe it's obvious now that you're from Finland, but maybe it's not, so just to nail it down: you're from Finland and you're in Finland right now. Yeah. Yeah.
00:05:04
Speaker
Yeah, I'm from Finland, Southern Finland, I live in Espoo, so like in the woods. People here live in the woods. Well, the people listening to the podcast can't see it, but you pointed the camera around and you were literally in the woods. Yeah, well, it's like 20 kilometers by line of sight to Helsinki city centre from where I live. That's not too bad. Yeah, it's not. Yeah, it's not too much. But like,
00:05:34
Speaker
But when commuting, it takes me an hour to get to the centre. That's not a Finland exclusive. I live 20 kilometers away from Antwerp in Belgium. And if I can get there in an hour by car, it's a good day.
00:05:56
Speaker
Yeah, I guess nowadays it's all kind of remote work anyway. So are you remote in the woods, or do you still pop into the real office sometimes? Yeah, sometimes. Well, for me, when the corona stuff started,
00:06:21
Speaker
having two kids, and skipping the commute to drive the kids to daycare and pick them up from daycare, this remote work was actually a good thing. But yeah, so I should probably visit the office maybe a bit more.

Kimmo's Work and the Impact of the Pandemic

00:06:44
Speaker
But the times that I do go to the office are more cherished. So we usually have
00:06:50
Speaker
something going on, some kind of project meeting thing, or just whole-company meeting things. So that, well, okay, the office, like we have, so I work for Metosin. Oh, do you? Yeah, that is the funny thing, they make
00:07:14
Speaker
funny libraries that are hard to pronounce. The names of the libraries are hard to pronounce. Well, actually, I don't know, I have to find the link, there was the conference, and Mikko did a recording of how to pronounce Reitit and Malli. I have to find the Twitter link.
00:07:43
Speaker
How do you pronounce them? But yeah. I think it's all the other ones though, I can't remember. There was one beginning with S, Sieppari or something, I can't really remember what it is. Yeah, that's the Sieppari one. I mean, I've forgotten what it does now, but I know it's hard to remember. Yeah, it's an interceptor kind of library, but yeah. The name is
00:08:11
Speaker
probably a bit hard. Yeah, it's a bit harder than Reitit. Yeah. Yeah. But okay, so what are you doing over there at Metosin? Yeah, I'm doing project work mainly, currently. We are a small company, but we have
00:08:35
Speaker
a couple of clients, and yeah, and okay, then we try to also maintain the libraries that we have alongside, which is hard, but yeah. This Metosin is actually originally from Tampere. Oh yeah. And so we have offices in Helsinki, but now we have an office also in
00:09:04
Speaker
Oulu, like Northern Finland, and Jyväskylä, which is somewhere in the middle between Tampere and Oulu. So I was just remembering, like, going to the office, so like being specific about which office: the Helsinki office is the nearest office.
00:09:28
Speaker
Well, I mean, Metosin is very famous for their ClojuTre conferences in the past, pre-pandemic, and I went to one in 2019, I think. Yeah, has it been that

Journey into Programming Languages

00:09:40
Speaker
long ago. No, no, yeah, that was the last one before it stopped for a while, or at least before it was put on hiatus, let's say.
00:09:50
Speaker
Yeah, true. When was that? Was that 2020? 2019. I remember. It was 2019. Yeah. But yeah. Let's see when there would be another one, I don't know, I shouldn't get ahead of the marketing guys and say anything stupid. Yeah, don't worry. Don't worry. Yeah. Yeah.
00:10:19
Speaker
But really, really nice times. But yeah, after the pandemic hit, it's been
00:10:28
Speaker
kind of hard to restart everything. Yeah, definitely. I mean, it was famous for its great talks and great parties. So I was really pleased to at least hit one. But I didn't actually hit the sauna, because it was a bit like, I think Tommi asked me to go to the sauna at about two o'clock in the morning and I was like, easy, mate, you know. It's a me-too moment, you know.
00:10:53
Speaker
Not really, it was all very fun. But two o'clock was too late for me. But is it like that when you have your office get-togethers? You basically all hit the sauna together and sweat it out? Yeah, yeah, it depends. But if there's an opportunity, like last week or so, we had a kind of company thing where we went to have dinner. There's this island near
00:11:22
Speaker
the coastal area here, and there's a sauna there. So we had dinner and then sauna, and then of course you get to go to the sea, like dip into the cold sea, and then back to the sauna again. So this podcast recording thing, it's like
00:11:51
Speaker
half past nine here, my kids are sleeping. I was thinking, okay, what's the most quiet room where I don't bother the others? So that would be the sauna. Doing the podcast from the sauna, maybe.
00:12:05
Speaker
That would be on brand. It's the most Finnish thing to do, because I was going to say, you were saying like, yeah, this place with the sauna, and that's probably the most vague location specifier you can imagine in Finland, with a close second being the hut next to the lake. Yeah. Yeah.
00:12:30
Speaker
Yeah. So you've been working with Clojure for a few minutes now. Come on.
00:12:37
Speaker
And so we usually start off here with a little bit of a backstory on how you came to Clojure and what your route was to finding the light. Which darkness, which particular mushrooms were you eating before you arrived at the surface of Plato's cave? How did you discover the parentheses?
00:13:03
Speaker
Yeah, I kind of wonder how to, like, always grow new mushrooms to rediscover it. But yeah, yeah, I guess it's been a while back. I remember when I bumped into Clojure, it was somewhere around 2009, the
00:13:33
Speaker
first time. I was in a product company startup called Ekahau. It was me and one other guy in the company who kind of bumped into the language. And yeah, the company was making like a Wi-Fi
00:14:01
Speaker
planning tool, and an indoor Wi-Fi positioning system. So my colleague there, I learned quite a lot from him. It was after the 2008 economic downturn thing, we got our first big layoffs in the company and so on. So my colleague goes to another team, and he started the
00:14:31
Speaker
project rewrite of this application. And he started the back-end with Clojure in 2009. That's very early days, no? Yeah, it was really early days. The front-end was in Flex, so before this HTML5 era, Flex was where we were. So I was like, hey, cool.
00:14:58
Speaker
But I was left maintaining, in a team that kind of vanished as a lot of people left. So I had this Java code base, 200,000 lines or something, and I was like, I can't ever put a new language in here. But anyway, I had a good time. The thing that I was making there was actually bringing money into the company and
00:15:25
Speaker
it was great. And I kind of thought it has to be statically typed, so I looked at these Scala and Haskell things. For a couple of years I did that, and then kind of circled back around 2012 or so and started to look more into Clojure.
00:15:52
Speaker
Uh, switched teams in 2013, and, I don't know, 2013, 14, I guess I switched to mainly Clojure. I switched to another company too. Well,
00:16:06
Speaker
to another Finnish consultancy. Just before we finish that point, so you said you looked at Haskell and Scala, did you actually introduce them into the company, or was it just a kind of fetish at that time, let's say? Yeah, so 2010 was my first born. I remember reading Learn You a Haskell, like,
00:16:34
Speaker
while being on parental leave and so on. And I kind of tried putting Scala into the app, but, I don't know, the compilation times back then. It was just a bit of a chore. So it was mainly me. But the Clojure app, that was doing kind of good.
00:17:05
Speaker
They had a team going on there, the team was growing and stuff was happening. And eventually I figured, okay, there has to be a kind of reboot of the thing, and then I kind of jumped to that

Java's Stagnation and Alternative JVM Languages

00:17:20
Speaker
team. And yeah, it's so hard, like, I don't know, if you're in a situation in a team and you have a product going on and
00:17:33
Speaker
and you would want to try out something different. It's like, if there's other people and if there's time and space, fine, but if there's only you, you're kind of hitting the walls. Oh, yeah. You're essentially just spending time on upkeep, right? There's no time for experimentation and every experiment is too risky because... Yeah, there's always time for some
00:18:02
Speaker
experiments that kind of go in the general direction where the product is also going. Those are just easier to do, and fun and so on. But then if you have a bigger thing, like changing a language, that's
00:18:20
Speaker
Yeah, you might get a lot of benefit from it, but you're going to have to spend a lot of time down basically. Nobody wants that. Especially when your product is making money for a company, nobody wants to stop and sort of make it faster for the developers. It's like, well, you know.
00:18:39
Speaker
Yeah, I understand your point, but no. I've heard some good things about Kotlin in that sense, that you can introduce it relatively easily and it sort of sits in the toolchain quite easily.
00:18:54
Speaker
You know, it obviously will reduce the amount of code you have to write, but it plays very nicely as, I think, what Scala wanted to be, or what Martin Odersky sort of tried to sell it as, which was a better Java. Whereas Kotlin, I think, really genuinely is a better Java.
00:19:15
Speaker
And of course, people are getting used to it now with it being the Android default language. I used it a bit when I still maintained an Android library. And that's how I thought about it: it's just generally a better Java. But Java has grown a lot of features in the meantime that close the gap.
00:19:42
Speaker
But I think that the main draw of it is because you can have a project, you know, like you can have your classes written as a Java class or a Kotlin class. Like you can have both files in the same project. There's nothing really special to configure. The interop is entirely seamless. So it's easy to just try it out, right? You could be like, Hey, I'll add this data class and I'll try it in Kotlin and I'll add a few methods. And yeah. Yeah. Yeah.
00:20:09
Speaker
I think things like Scala and Clojure are definitely, you know, really changing the language, you're changing the concepts almost, whereas with some of these smaller improvements, like, I know people who did things in projects with Groovy, for example, you can embed Groovy in Java projects quite easily.
00:20:29
Speaker
So there's definitely scope even on Java projects to have other languages working in them, but it really has to be super smooth, it has to be super compatible. I remember when I was joining the company, it was 2005, and it was in the midst of a rewrite, second or third, but they weren't changing the language though.
00:20:56
Speaker
It was still Java, but actually rewriting stuff. But maybe if you hit a phase like that, then, like, okay, people have to weigh things. And, well, I don't know, Clojure has nice Java interop. And I remember later on bumping into the guys who were still working at the company, at some EuroClojure conference, I

Integrating Clojure with Modern Technologies

00:21:25
Speaker
forget where exactly, but they got to, like, try out replacing core parts of the Java thing with Clojure, so like emitting the interfaces and stuff. But yeah, I guess the overall thing is that it changes quite a bit, because for me, the Java that I wrote back then was really awkward, like all
00:21:55
Speaker
static functions, and trying to emulate this, like, functional kind of style. Yeah. So it was kind of natural. I didn't have to, like, try to hide this thing anymore. Having first-class functions actually made the world saner, I guess. Yeah. Yeah. I guess it saved it, in
00:22:23
Speaker
some way. I remember the times, it was like, I don't know, the Sun and Oracle thing, and the language kind of stagnating, more and more. Like you were saying 2005, that's still peak Java, right? That's still when Java was the one language that's going to rule everything, and like Java 1.3, 1.4 maybe around that time.
00:22:51
Speaker
Yeah, 1.5 was, I guess, just about out. Yeah. So I guess it feels to me, at least in my recollection, it's only around 2009, 2010 that there's a lot of steam behind the alternative JVM languages, right? Yeah. I think 2005 still feels like Java was the thing.
00:23:18
Speaker
Object orientation was the thing. The funny thing to me is that what I remember from that period, pre-2010, was that there was a bunch of lambda proposals for Java. Because everyone kind of realized that first-class functions, as you say, were the thing. And I think there were three proposals in front of the Java language committees.
00:23:45
Speaker
I can't remember the details of them. But there were at least three that had relatively broad support. But in the end, they just decided to do nothing about it. And then the Oracle buyout and all these other things. So I think that stalled a lot of innovation in the Java world until it just settled: around Oracle's ownership, what they were going to do with it, how the team settled out, all these kinds of things.
00:24:13
Speaker
But yeah, so all those potential innovations around first-class functions that would have come before Scala and Clojure just kind of fizzled out.

AWS Lambda and Serverless Architecture

00:24:21
Speaker
And then it was, yeah, quite late, wasn't it? You know, it was well past 2010 that they got Java 1.8. I can't remember what the exact timeline is, but it's certainly way after Scala and Clojure got their start. Yeah. Yeah. Yeah. I haven't looked
00:24:40
Speaker
at what modern Java looks like, but probably you can do lots of stuff now. But yeah, I remember the kind of fear of stuckness, like, okay, where is this going, this language? But I guess what saved it is that the VM kind of
00:25:07
Speaker
floated up a bit more, and with Clojure and Scala and Groovy there were a lot of options, like, hey, there is actually this VM that is really good. Yeah. JRuby, and, you know, there were quite a lot of big languages that were coming to the JVM that weren't Java. I think JRuby was the one I remember being the first big one.
00:25:32
Speaker
You know, where it was like, oh, they're running this Ruby thing on here. And it turns out that it was faster in many, many ways than the native version of Ruby, for certain tasks anyway. Yeah. Even Python implementations. Yeah.
00:25:53
Speaker
I remember working for a consultancy company at a time where we had a customer whose requirement was that everything needed to run in a WebLogic application container.
00:26:18
Speaker
And we were made aware of this requirement quite late in the development cycle. So I remember having done a few really unholy things to port Python code, Ruby code, Common Lisp, actually, to run inside of that container. It started when we delivered the project, but I won't vouch for anything more.
00:26:49
Speaker
Right. So, okay. So we've got to Clojure. No, no, it's good. We've had a lot of reminiscing going on here today. Right. So maybe we should have a quick catch-up as to what we started talking about, Kimmo, that motivated you to come onto the show, and give some background about why
00:27:15
Speaker
why we started talking on the Clojurians Slack about performance of Lambdas and Clojure in Lambdas. So maybe you're in a good place to give a bit of background on that. Yeah, so Ray put out some messages in the Clojurians Slack, and then I replied in the thread, and a couple of messages later I got an invite.
00:27:45
Speaker
Something like that. You're very good at executive summaries, yes. Losing a lot of the detail here though, but it's essentially that timeline, yeah. But yeah, it's a funny thing. I don't know, it kind of spans back a bit.
00:28:12
Speaker
Because, well, in my Clojure journey, I was working at another Finnish consultancy, and we had a telecom customer, like the third biggest telecom in Finland, and we were doing this fancy new analytics stuff. Are there any telecom companies apart from Nokia? I mean, I thought that was it.
00:28:40
Speaker
Yeah, but well, there's, like, Telia and Elisa and DNA, which you can, like, go to the store and buy your subscription from.
00:28:52
Speaker
Nokia makes the hardware and other stuff, but then you have these companies who sell the cell phone subscriptions and stuff. So they kind of operate the network, they run the stuff that... Well, actually, I don't know whose hardware they actually use, but yeah, anyway. But this is circling, kind of tying the
00:29:23
Speaker
ends of the rope, or different ropes, of this story. So I got to, I don't know, I got to look into a lot of AWS stuff. I was messing around with Lambdas. This was like 2016. I guess it was maybe 2016 that I watched the talk at the Clojure/conj
00:29:53
Speaker
that Christophe Grand was giving with some other guy whose name I forget, sorry, on what he was working on at that time. Yeah, I might remember this wrong, but he was working with this Apache Spark. So it's kind of batch streaming, a batchy,
00:30:22
Speaker
kind of MapReduce-y kind of thing. And he was making a library to use Clojure with Apache Spark in a nice way. And, like, Spark is written in Scala, and it has a kind of shell, like a Spark shell thing, so you can start the shell, a kind of REPL, and then
00:30:51
Speaker
type code in there and it would compile it, send the bytecode to the cluster and to the worker nodes. There were some Clojure libraries around Spark, but they kind of required you to ahead-of-time compile stuff, so you would lose the REPL there. But Christophe was working on a library called Powderkeg. So you could
00:31:20
Speaker
like, start the REPL, and it captured the bytecode, and it used the same mechanism that Spark had to basically add a jar to the classpath of the worker nodes. So we were also looking into data stuff on that project, and I was like, hey, we have to look into this. But, uh, at the start of 2017, apparently I had my second born, and, uh,
00:31:50
Speaker
And so I kind of left the project. I came back after three months. So during that time, I was kind of... The Americans are all thinking, what is this guy? What is going on here? What do you mean, you're not back after like 15 minutes of parental leave? Anyway, big times, yeah. So you came back. Yeah, actually, in the current project that I'm in, my colleague also left for parental leave,
00:32:19
Speaker
maybe coming back in August too. And that project is for a client that's in the States. That's really extraordinary. Okay. Yeah. So yeah, I kind of bumped in a bit and tried to, like,
00:32:41
Speaker
contribute to the library. Spark was going through an upgrade from 1.5 to 2.x or something, and they had some binary incompatibilities, actually. Well, actually that's a Scala compiler kind of thing, but yeah.
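As an aside: the bytecode-capture trick described above — grab what the REPL just compiled and ship it to the worker nodes — has a loose analogy in Python's standard library. This is only a sketch of the general mechanism, not Powderkeg's actual JVM implementation:

```python
import marshal
import types

# "REPL side": define a function and capture its compiled code object,
# the Python analogue of the JVM bytecode that Powderkeg captures.
def add(x, y):
    return x + y

payload = marshal.dumps(add.__code__)  # bytes that could be shipped to a worker

# "Worker side": rebuild a callable from the shipped bytecode alone,
# without ever seeing the original source.
worker_add = types.FunctionType(marshal.loads(payload), {})
print(worker_add(2, 3))  # prints 5
```

Spark's real mechanism additionally handles classpath distribution and serialization of captured closures, but the shape of the idea is the same.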
00:33:06
Speaker
So anyway, I remember the talk that Christophe gave, he said at the end of the talk that he had some ideas for Lambda. And I was like, hmm. Then I started, like, contributing, and he contacted me, and then, I guess, it
00:33:26
Speaker
I forget, but maybe it was him who suggested, hey, there were some ideas on Lambda. Yeah, I watched your talk, I know, let's work on this. And we submitted a talk proposal to ClojuTre. And it went in. And then, like, some amount of
00:33:48
Speaker
work to develop it followed. This is common in academia, by the way: you supply the abstract, and then basically it gets accepted and you have six months to actually figure out whether the abstract can work.
00:34:07
Speaker
That's how industry works as well, as far as I'm concerned. I make a proposal and I get funding for it, backing for it, then I spend the next six months actually seeing if it could work. Yeah, very similar. You have to get the funding anyhow. Another story, maybe, before jumping back into what I was asked: I remember in those Ekahau times, we were doing this hardware
00:34:36
Speaker
tag to track with the Wi-Fi stuff, and the CEO of the company was going to investors, and the device was quite small. They had the first 3D prints of the device, like mock-ups of the box, but it didn't have any electronics, it didn't have the chipsets and stuff inside. So he told the story that he went to the investors, and
00:35:07
Speaker
when going from the hotel, he put some soap into the box so it would weigh a bit. And then he would try to get the funding for the company. Did they wash their hands of him?
00:35:31
Speaker
I don't remember. Maybe it worked. So, yeah. So, we did this in 2017: I got to give a talk with Christophe at ClojuTre. It was on this Portkey project. Right. So, the idea was that you had the REPL and you defined the function there.
00:36:00
Speaker
And then we captured the bytecode and deployed it into Lambda. And while you did this, you had this function called mount. So you got to define, like, in my REPL I have a function and it takes these arguments, and the mount function took a URL template and you mapped the URL parameters to the
00:36:30
Speaker
function arguments. And when you did the mount, it created an API Gateway route in AWS. The idea was to have a fast way to deploy a Lambda from your REPL and get it running, like a super fast way to deploy. But yeah, that was then. Other interesting stuff, like, spun out of
00:37:00
Speaker
that. I guess Christophe was looking at the Clojure code I was writing to interact with the AWS Java library to create this API Gateway and Lambda stuff, and, I don't know, there are lots of these APIs to configure stuff in. This could be more generic. So we spun out this
00:37:30
Speaker
AWS SDK library, so it's generated clients. It did basically the same thing that aws-api, the Cognitect library, does. And in the same year, the aws-api library came out.
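To make the mount idea from a moment ago concrete: the core of it is mapping a URL template's parameters onto a function's arguments. A minimal sketch in plain Python, with hypothetical names — this is not Portkey's actual API, and the real thing also deploys the function and wires up API Gateway:

```python
import re

def mount(fn, template):
    """Build a dispatcher that maps URL-template parameters to fn's arguments.

    A generic sketch of the idea described above: "/greet/{lang}/{name}"
    routes to fn(lang=..., name=...).
    """
    names = re.findall(r"\{(\w+)\}", template)          # parameter names, in order
    pattern = re.compile(
        "^" + re.sub(r"\{\w+\}", r"([^/]+)", template) + "$"
    )                                                    # each {param} matches one path segment

    def handler(path):
        m = pattern.match(path)
        if m is None:
            raise ValueError(f"no route for {path}")
        kwargs = dict(zip(names, m.groups()))            # bind segments to argument names
        return fn(**kwargs)

    return handler

def greet(name, lang):
    return f"{lang}: hello {name}"

handler = mount(greet, "/greet/{lang}/{name}")
print(handler("/greet/en/Kimmo"))  # prints "en: hello Kimmo"
```

Note that the binding is by name, not position, so the template's parameter order doesn't have to match the function's argument order.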
00:37:48
Speaker
And I was like, okay, really? There was, like, a lot of stuff to do in that era, but then it kind of turns out that this other company has been making this for five years and they put it out. But yeah, so, I don't know, then I went to other projects and actually kind of switched company, but I've been kind of looking at this
00:38:15
Speaker
there's something that kind of stuck with me with this idea of running Clojure in the Lambda environment. And there's this Holy Lambda thing, made by this Karol, I'll probably butcher his name, but this
00:38:43
Speaker
kind of has tooling around making AWS Lambdas with native image compilation. Because, well, you remember how AWS Lambda kind of works, or, I don't know, I'm not on the team, so I don't know, I just know what it looks like from the outside. But this,
00:39:10
Speaker
so, I don't know, they have resources in their compute data centers, and then you define this Lambda and it starts the process. And I kind of like it in the CGI sense, that era, like when you had your Apache and PHP and you just put a file there, and then it starts a process to run the script, and then the script runs and exits, and
00:39:40
Speaker
yeah, you have your HTML in your browser. So, like, there are lots of options for doing this Lambda stuff. You could do JVM Clojure and use the built-in runtime that AWS has, but the startup, because JVM startup isn't that fast. Then all the GraalVM native image
00:40:09
Speaker
stuff kind of came along, and, like, borkdude was doing great stuff with the Babashka thing. And so in Holy Lambda you have, there's a custom runtime. Like, early days Lambda, did it support... I guess it supported Java quite early. I don't know, maybe. But
00:40:39
Speaker
I think it was one of the earliest ones, because, remember, me and Wouter actually worked together at the time that Lambdas came out. And I think it was round about, let's say, August or September that it came out. And I think we all got excited about it, and we did some kind of proof of concept around Christmas time. Yeah. And then we did a project with it the next year, because it was pretty successful, you know.
00:41:06
Speaker
And yeah, Java was definitely there from the start, I'm sure of it. I'm pretty sure we did some in Java and then some in Node.js. I kind of remember writing some Node.js code for it, but fairly certainly Java back then. This was beta, right? Like it was pre-GA. Yeah. Yeah. So, like, they have this "only write business code"
00:41:26
Speaker
kind of thing. But yeah, I kind of think that, like, the pitch would be like, hey, here's the business code. Yeah. But yeah, so, okay, if you want to do Clojure on Lambda, you could do JVM Clojure, but it didn't start that fast. So the other option could be just to
00:41:56
Speaker
abandon the JVM totally and go Node.js. Yeah, yeah. And maybe it's just because I'm, I don't know, this old fella, but, well, I've got to tell you that, like, borkdude, you talked about borkdude early on, you know, he has this Babashka that does it, but he also has another tool, nbb, which is a kind of
00:42:23
Speaker
JavaScript implementation, essentially, of ClojureScript. It uses this SCI interpreter. And with nbb, you can deploy that to Lambdas. And again, I think what you're looking for is a very, very quick start, because I think people have done all these studies that say the performance kind of evens out, because the thing that you kind of missed about the Lambdas, unlike the CGI thing, is
00:42:53
Speaker
there is something called a warm start and

Innovations in Lambda Deployments

00:42:55
Speaker
a cold start. So maybe just pick that one up. Yeah, yeah. Finally, this nbb thing, I remember when it was called TBD, to be done. There's also one trail of thought there. For some reason, I was dabbling in running ClojureScript on top of Node,
00:43:22
Speaker
because of writing this... there's this frontend testing tool called Cypress. So I tried writing this. Well, I was reading their docs, and my colleagues were also like, it was a kind of hype, that this is the thing. But how do you do ClojureScript on this Cypress? And
00:43:45
Speaker
even in their docs, it's like, okay, you have this preprocessor, and you could do a ClojureScript preprocessor. It reads like, hey, who put ClojureScript here? But was there any ClojureScript thing? Well, not really. So I was dabbling in this area, writing a proof-of-concept thing that people at our company looked at. Now it's also in a couple of projects.
00:44:17
Speaker
Good stuff there. But I was kind of dabbling with that self-hosted ClojureScript compilation, running ClojureScript on top of Node, and somehow bumping into borkdude. And I guess he got pumped by this: let's do SCI on Node.
00:44:36
Speaker
A couple of other times too. But that's the moment when I got involved. He did a presentation when nbb was kind of out, and it was also in the slides that I'm contributing a little. But yeah, nbb is really great. And there's a couple of people in the Clojure community looking into this Lambda stuff,
00:45:06
Speaker
and nbb, the Node stuff, is included in that. Yeah, it's good stuff. And I guess borkdude is working on this cherry thing, so you get to compile stuff, not always interpret. So that might be a good way forward also, but
00:45:36
Speaker
I still think it would be neat if there were something for JVM Clojure, so I wouldn't have to abandon the JVM. Yes, yeah, I think everybody thinks that as well. I mean, the truth of the matter is, borkdude is epic, we all know this, and the tools are really getting a lot of community support. But if you go to a large company and say, I want to use this
00:46:05
Speaker
tool that's just hot off the shelf, six months old, from this one guy, it's kind of like, yeah, well, it's OK for experimenting and stuff like that. But if you're going to deploy it for business applications, and I have done that, so I'm not against it, a lot of places you can't do it. I think in startups and very small companies you get a lot more freedom and latitude to do that. But if you're in a relatively large company,
00:46:34
Speaker
and you have some process of getting your technology approved, it's more tricky. I'm not saying it's impossible, and of course it should be possible. But I think a lot of people would quite like to be able to use JVM Lambdas, and it's not just because of
00:46:59
Speaker
the JVM per se, it's because they've got code they already use. They've got functions, they've got libraries. They've got a microservice now, and they just want to split out some of those functions and put them into Lambdas because, you know, they only run once a day or something, that kind of stuff. So there's plenty of ad hoc reasons why JVM Clojure should be a kind of first-class citizen on Lambdas, I think.
00:47:29
Speaker
Yeah, yeah. I was just going to echo the point that you might have a sizable investment in an ecosystem: internal libraries, external libraries that you know how to use, patterns that you've applied that you know work well.
00:47:46
Speaker
Or just integration with existing things, right? Like there's a good client for this thing you have internally. Yeah, Kafka, for one. And those things are immensely valuable. That's why you should not have to abandon them.
00:48:05
Speaker
Because for me, that's also the telltale sign of good technology: it can meet you in the middle, right? If the initial premise of a piece of technology is, you will have to rewrite your stack,
00:48:22
Speaker
it's probably not what you're looking for. You want gradual adoption. You want it to meet you where you are. Although, like we said at the beginning: generally, throw it all in a bin and write it in Clojure.
00:48:37
Speaker
Case in point, why are we here? Because Clojure meets you where you are. It allows you to bring the whole Java ecosystem with you, libraries and drivers. The key selling point of why this Lisp and not another one is that it actually made a very conscious effort to meet you where you are.
00:49:02
Speaker
You are on the JVM with all that infrastructure, so let's embrace that. That's annoyingly reasonable. I'll be banned from the next episode, it's fine. You need to get more VJ style, be more polemic. Kimmo, we're going back to Lambdas now. One thought on using another language:
00:49:30
Speaker
maybe, in a big company, you get into a project that's bought by a separate department, that kind of thing, and then work on the new stuff there. I don't know, I kind of experienced that. Yeah, mergers and acquisitions help. That's for sure. But yeah. And not to say that, like,
00:49:58
Speaker
there's lots of new people coming to the language. And it's not only people coming from a Java background, there's a lot of people coming who haven't actually been using the JVM at all. So it's truly good to be able to support other runtimes too. Definitely, definitely. But yeah, so, I don't know, there's this kind of fancy niche of trying to use the JVM in these places.
00:50:29
Speaker
So yeah, I guess that's the JVM's strength. It's really dynamic. You can load in new code, and, well, Clojure has a good REPL. The Lambda environment is kind of evil in that sense, it kind of fights you back: you have processes that are only alive
00:50:58
Speaker
when an event is handled, and otherwise they are dormant. If you want a REPL, and you're like, hey, I want to REPL into this process and execute something: no, you have to send an event, actually. You can't have anything running in the background. But yeah, I guess there were a couple of
00:51:31
Speaker
One thing I'd say, there's a few bits of technology that have happened in Lambda land, let's say,
00:51:40
Speaker
that are interesting from a deployment perspective and a REPL perspective. We didn't really talk about this, but there was something Amazon did, before the thing we're going to talk about, called Lambda function URLs. That meant you could basically deploy a Lambda and it would be given an address on the network, without needing an API Gateway in front. That makes it super easy to establish
00:52:10
Speaker
a Lambda and a network connection to it. That's really nice from a workflow perspective. If you just want to test out one function, you can do that without any of this fuss or hassle in front, because if you want to use API Gateway, that's another bit of tech you've got to learn, another bit of deployment you've got to do. And, I don't know,
00:52:39
Speaker
I get that API Gateway is useful, but God damn it, it's hard to work with. This Lambda URL thing is really nice for informally testing out your code in Lambda. You can just deploy it and you don't have to worry about all the gateway stuff. Then there's this other stuff, the SnapStart stuff that we started to talk about.
00:53:04
Speaker
Yeah. So maybe you can explain a little bit about SnapStart and Firecracker and all these kinds of things. Because I think it's fair to say that we do know how Lambda is run, not perfectly, but we know there are certain substrates in place, because the Amazon people tell us, so it's not a secret. Yeah. So, I don't know, maybe they have the two-pizza team, or what's the theme there, and there's the Lambda
00:53:32
Speaker
service that has an API, then there's another pizza team who makes the API Gateway, that kind of proxy thing. And maybe, with the function URL, maybe that was a pizza slice coming from the API guys: we can do this function URL for you. But yeah, so, actually,
00:53:59
Speaker
the Lambda stuff: in the current project that I'm working in, we have lots of serverless stuff going on. And there's a bit of an NLP, machine learning thing. The kind of odd thing there is that the application is really infrequently used, most heavily used at the end of the quarter. So there's this pattern of
00:54:28
Speaker
active times and then dormant time. But this kind of Lambda thing fits there. And the way we currently have the infrastructure set up is that there's a different AWS account per client, so it's really separated. So yeah, we were working on Lambda stuff there. And
00:54:55
Speaker
we did a kind of proof of concept first, which was mostly talking from the frontend directly to DynamoDB and so on. But then we realized that having an API in the middle is actually a good thing. So the options were: abandon the JVM, or go Node. Hmm. And then, well, okay, if we go JVM, then,
00:55:24
Speaker
well, there might be this startup stuff. Maybe that wouldn't even have been such a big thing, but I put in this Holy Lambda thing then. We have native image there for the startup. But then, yeah, SnapStart came. But I think Holy Lambda, if I'm not wrong, uses Docker. Well, yeah. It has this custom runtime.
00:55:54
Speaker
In the Lambda thing, they have these built-in runtimes, with the Java runtime, for example: you just put in a jar and implement an interface, and the runtime looks up a class that you give by name. It has to implement the interface, and the runtime invokes it. A custom runtime is such that you have a process that polls events from localhost on a specific port.
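To make that custom-runtime loop concrete, here is a minimal sketch in Python. This is not Holy Lambda's actual code: the function names and the injected `get_next`/`post_response` transport are invented for illustration. A real custom runtime would poll `http://$AWS_LAMBDA_RUNTIME_API/2018-06-01/runtime/invocation/next` over HTTP and POST the handler's result back to the matching `/response` path.

```python
import json

def run_loop(get_next, post_response, handler, max_events=None):
    """Pull events, invoke the handler, post results, until get_next returns None.

    get_next      -> returns (request_id, event) or None when shutting down
    post_response -> called with (request_id, serialized_result)
    handler       -> user code, e.g. a Ring-style handler in the Clojure case
    """
    handled = 0
    while max_events is None or handled < max_events:
        invocation = get_next()        # in reality: blocking HTTP GET to the runtime API
        if invocation is None:         # runtime shutting down
            break
        request_id, event = invocation
        result = handler(event)        # run the user's code on the event
        post_response(request_id, json.dumps(result))
        handled += 1
    return handled
```

The point of injecting the transport is just to make the loop's shape visible: the runtime process, not the platform, owns the pull-handle-respond cycle.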
00:56:25
Speaker
Holy Lambda has its own runtime, and that runtime is then native-image compiled. At the end, you have your Ring handler, a Clojure Ring handler, and you implement your API in that. But I guess it was in December last year, 2022, that
00:56:54
Speaker
AWS came up with this Lambda SnapStart. And specifically, I guess it's currently only for the Java runtime. Yes. But it turns out that, for how they run these processes, they use this Linux hypervisor, KVM, the kernel virtual machine. And there's this Firecracker thing, which I guess is some kind of
00:57:24
Speaker
frontend from them to the KVM thing, a virtual machine manager thing. Yeah. I think they call it a micro virtual machine manager, don't they? Yeah, something like that. This is also something that I bumped into a long time ago when I was studying at university: what if you take a process
00:57:54
Speaker
and freeze it? You save the memory and everything that the process is running, move it to another computer later on, and thaw it. SnapStart is the same thing. When you deploy this JVM Lambda, if you make a published version out of it,
00:58:21
Speaker
it's currently kind of tied to that publishing thing, then they start the process and then they freeze it. And actually, when you do it, if you go and look at the CloudWatch logs, you see that they do it in several rounds, like they take several snapshots when that happens.
00:58:46
Speaker
And the reason they do it is that resuming the stuff is fast. So we have this application. Well, these are back-of-the-napkin kind of stats, but there's a Ring API, with reitit and the other Metosin libraries, and it took in the order of five to eight seconds, the cold start, right,
00:59:16
Speaker
to start. But with SnapStart, you get, in an unoptimized way, something in the ballpark of 500 milliseconds. So I had gone to this effort of putting in native image compilation, where there were a couple of tricky things, like getting, say, the Clojure buddy library, which has this Bouncy Castle crypto thing, to actually work with that. And then this SnapStart
00:59:46
Speaker
comes along and you don't have to do this anymore. So you didn't have to abandon the JVM. You could still be writing Lambdas with JVM Clojure, and you still have the opportunity for them to start fast because of this underlying technology change.
01:00:14
Speaker
Yeah, I think it's amazing stuff. It's just like the virtual machine images we used to get, you know: pre-Docker, people used to make a virtual machine and then send it around, but it's very heavyweight.
01:00:27
Speaker
But like you say, now what they do is start up the process to a certain point, freeze it, and then just rehydrate it on demand. And their argument is that all you do is read it from disk and start the process running, and it's super fast. If I have to wager a guess here, I'm fairly certain it's essentially a VM snapshot.
01:00:54
Speaker
Because basically, your Lambda is an actual VM, right? So this is a VM snapshot. Yeah, and that's what Firecracker runs these as; they're called microVMs. The idea is that at the end of the day, this Lambda is a virtual machine. Yeah. But we don't know, like Kimmo says, how it produces this machine exactly,
01:01:19
Speaker
what the mechanism is exactly. But they're using some kind of virtualization technology to make it happen. Yeah. But I think the most likely reason why it only works on the built-in runtime is because, like you said, you have to create your own class, and then
01:01:44
Speaker
they will call the method on the class, which gets you the event. But that gives them super low-level access, because they can basically boot the VM, and they know, inside the JVM, up until what point it needs to run before their code makes the method call to the thing you've implemented.
01:02:04
Speaker
That gives them this clear line of: this is where we snapshot. The code most likely says, just before we call your method handler, call snapshot on the runtime underneath. That's true, but remember Java has an entry point called main.
01:02:25
Speaker
So there are already mechanisms in Java for this, and plenty of other technologies that can take advantage of it. I think what you're saying is there's a functional contract in the Lambda, which is definitely true.
01:02:45
Speaker
But maybe Kimmo can explain. There are other JVM technologies which are looking at doing something very similar. There's even a standard proposal for it, if I remember rightly. Yeah, the CRaC project. So Linux also has this checkpoint and restore in userspace, CRIU, kind of thing.
01:03:14
Speaker
So if you have a running process, it might be reading from a socket, it might be reading a file, it might be doing all sorts of useful and crazy stuff. And then the kernel comes along like, hey, let's close this thing, let's turn the lights off and put you to sleep. And then you have:
01:03:43
Speaker
hey, but I have this socket here and I'm reading this file. So there's all sorts of state going on, and that is possibly kind of tricky. And it turns out, well, okay, if you go into the Lambda docs, there's a Java interface which you can implement, and which gets called when there's
01:04:12
Speaker
a checkpoint being made. You have a method that gets called when the checkpoint is made, so you can prepare for it: maybe you could pull some machine learning model from somewhere and initialize some stuff, and then when you wake up, you have it initialized and ready for use. And the interface has a method that gets called after the restore.
01:04:41
Speaker
They have some limits. I guess at checkpoint time you have the maximum 15-minute runtime to use, but the restore has to happen fast: I recall there's a two-second limit. Within that, you have to do your stuff, otherwise there's an execution error. But yeah, now I'm blanking out on the
01:05:11
Speaker
other standard stuff. But anyway, this process of hibernating and thawing, that's what allows you to still go and use the JVM.
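To make the shape of those checkpoint/restore hooks concrete, here is a small Python sketch. In real SnapStart the hooks are the Java `org.crac` `Resource` interface with `beforeCheckpoint` and `afterRestore`; the `SnapshotLifecycle` and `ModelCache` classes below are invented stand-ins that mirror the lifecycle: expensive priming happens before the snapshot, and state that must not be cloned across restores (here, random material) is rebuilt after.

```python
import secrets

class SnapshotLifecycle:
    """Stand-in for the platform: drives registered resources through the hooks."""
    def __init__(self):
        self.resources = []

    def register(self, resource):
        self.resources.append(resource)

    def checkpoint(self):
        # Platform is about to freeze the process: let resources prime themselves.
        for r in self.resources:
            r.before_checkpoint()

    def restore(self):
        # Process has just been thawed: let resources rebuild clone-sensitive state.
        for r in self.resources:
            r.after_restore()

class ModelCache:
    """Prime a heavy artifact at checkpoint time, refresh secrets on restore."""
    def __init__(self):
        self.model = None
        self.request_token = None

    def before_checkpoint(self):
        self.model = "weights-loaded"   # stand-in for loading an ML model, AOT compiling, etc.

    def after_restore(self):
        # Crypto/random material must differ per restored clone,
        # which is exactly the crypto-seed caveat discussed later.
        self.request_token = secrets.token_hex(8)
```

The design point is the asymmetry: the checkpoint side gets the generous time budget, while the restore side must stay within the tight restore limit, so only cheap re-seeding belongs there.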
01:05:30
Speaker
Yeah, I mean, maybe we'll park that JVM standard for a minute. But the point is that it's actually quite a general solution.
01:05:44
Speaker
But I think what's interesting, and what we talked about with the library that you made, is how to exploit these hooks in the SnapStart thing to kind of rebuild the Clojure Lambda, so that the cycle for
01:06:12
Speaker
redeployment is super smooth and super quick. Maybe just talk through that a little, because that's pretty exciting, I think. Yeah, yeah. Hopefully there won't be a letdown at the end. But yeah, so I was like, OK, hey, there's this stuff.
01:06:40
Speaker
What if you have new code coming in? We have this checkpoint phase where you could execute stuff. And there's this other thing: Lambda has this layer
01:07:10
Speaker
kind of thing, where you can make a shared layer and then have the application code on top. So the idea that I had was, okay, let's do it so that you could deploy your Lambda, the Clojure code, and
01:07:40
Speaker
for it to run, you have to ahead-of-time compile. But what if you didn't need to do that AOT compilation on your machine: let the Lambda do it. So the idea was that you only compile this minimal class.
01:08:11
Speaker
The Java Lambda runtime works so that it loads a Java class. But you compile only this Java class, not the whole application, and inside the Java class you do requiring-resolve. So you load the Clojure code, and it's only at that time that you have to turn the Clojure code into bytecode. And you might have
01:08:41
Speaker
I don't know, someone might have a lot of Clojure code, maybe macros in there. And doing this at the start of event handling would be not so nice. But hey, let's put this into the checkpoint phase, when the process starts. When you've got 15 minutes, yeah. You've got the checkpoint phase that has 15 minutes of credit. Yeah, yeah. So let's try doing that.
01:09:10
Speaker
So I did this as a Lambda layer thing. But yeah, okay, I kind of continued after that, looking a bit further, and it actually turns out that, well, the laptop that I have is kind of speedy. It's speedier than the Lambda that I have. So why not use it for some amount of the compilation too?
01:09:39
Speaker
But the idea was still to use the checkpoint phase for, like, the greater good on the cloud. Obviously they also bill you for that, so it's not a free lunch. But anyway, it opens some avenues of doing computation during the deployment in order to prepare
01:10:07
Speaker
your stuff so that when the event comes, you're ready. And to tie it to the development phase, so that you'd have a fast way of deploying stuff. Maybe this could be a segue to another experiment that I did after that. Yes. Yeah.
01:10:37
Speaker
Portkey, and hacking with Christophe Grand on that kind of live-Lambda-from-the-REPL reloading thing. I guess one idea left over from that time was: every time you deploy a new Lambda, it has to make a new process. So it has to
01:11:07
Speaker
abandon the old process and start a new one. But if you would not do a deploy, you have a class loader in the JVM, and you could go and load new code into the running program. So, okay, let's do a setup so that
01:11:37
Speaker
I did a library, what's the name, it's this Lambda sideloader. You do this initial compilation: you AOT-compile your Clojure JVM code to bytecode, deploy your Lambda, but include in it a sideloader thing
01:12:06
Speaker
that, when enabled, just goes and adds a new archive to the classpath with a URLClassLoader. So you can make an arrangement where you have your Lambda initially and deploy it,
01:12:33
Speaker
and then you leave a watch running in the background and you write new Clojure code. When you first AOT-compile the Clojure code to bytecode and send the archive to Lambda, you skip the Clojure source code. So you only put the bytecode there. And the way the Clojure compiler works, it looks at: is there bytecode for this
01:13:02
Speaker
like function? And if not, then it looks at whether there's a source file for the namespace, and then it does the compilation. So the idea was, OK, let's do a sideloader. Locally, you start a watch that packages your Clojure code into a zip file,
01:13:32
Speaker
puts it into S3, and when the Lambda runs, it goes to S3 and looks at the specific object. If it has changed, it loads it locally onto the classpath. And then you have to write your event handler so that it's reloadable, and before running the event handler, we basically call
01:14:01
Speaker
require with :reload. So that's a neat hack. I haven't used it in production, and I don't know if others are using it.
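The sideloading trick can be sketched language-neutrally. Here is an illustrative Python stand-in: the `Sideloader` class and the `fetch_artifact` hook are invented for this sketch, and `exec` plays the role that a `URLClassLoader` plus `require` with `:reload` play on the JVM; the real thing checks an S3 object's version rather than an in-memory store.

```python
class Sideloader:
    """Before each event, check for a newer code artifact and (re)load it."""
    def __init__(self, fetch_artifact):
        # fetch_artifact() returns (version, source) or None;
        # in the real setup this is an S3 GET keyed on the object's ETag.
        self.fetch_artifact = fetch_artifact
        self.version = None
        self.namespace = {}

    def refresh(self):
        artifact = self.fetch_artifact()
        if artifact is None:
            return False
        version, source = artifact
        if version == self.version:       # unchanged: keep the warm, loaded code
            return False
        exec(source, self.namespace)      # (re)define the handler from new source
        self.version = version
        return True

    def handle(self, event):
        self.refresh()                    # reload check before handling each event
        return self.namespace["handler"](event)
```

The design point is the same as in the JVM version: the running process stays alive between deploys, and only the changed code is swapped in, so the feedback cycle is bounded by an upload rather than a full redeploy.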
01:14:17
Speaker
But yeah, to me it sort of reminds me of the iPhone: if you're on the iPhone and you want to deploy something, you have to sign it, upload an artifact, go through a kind of check at the App Store, get it signed off, and then it's deployed onto phones and it's in the App Store. And
01:14:45
Speaker
when Apple released this JavaScriptCore, a lot of people started saying, maybe we can load code at runtime without going through the App Store. And it turns out that Apple are OK with this, providing you don't change the nature of the application. As long as you're essentially bug fixing or just adding small features, they're OK with it.
01:15:15
Speaker
And this reminds me of that, but for Lambdas: you're avoiding paying all the cost of the deployment hassle, and instead you're using a really nice trick, let's call it, to essentially load the code at runtime. And no one gets hurt. Yeah, I remember React Native when it came out. Same pitch.
01:15:45
Speaker
I had a chance to do a React Native app, and I was almost doing the sideload stuff, but it didn't quite make it; there was a tight deadline when it had to come out. But yeah, at the end of the day, with Lambda and AWS, they make their environment and they have their
01:16:13
Speaker
idea of how you should use it. But then I was like, hey, there's this idea: you gave me a computer, I could try using it this way. But isn't it also a bit, I mean, it's been a long time since I used Lambda, so maybe the ergonomics have changed, right? But
01:16:38
Speaker
What I still remember is that the dev cycle is kind of long, and the deployment process is kind of painful. Like you said, there's quite a few steps to it. And so it's cumbersome, and it's slow. And especially during development, you want quick feedback. I mean, it's why the REPL is praised in every episode.
01:17:06
Speaker
The fast feedback is especially important, and with Lambda I didn't have that at the time, right? Because the only way to test it was to run it on their infrastructure, which meant you had to do a deploy, which, best case scenario, I don't want to put a number on it, was counted in minutes, right? Which is long enough to get you out of your flow. So I feel like this is,
01:17:34
Speaker
this is meant to scratch that itch, I guess, because you just package a zip and upload it to S3, which goes as fast as your upload link can. Then you fire off the event using curl, I suppose, or something similar, and you test. Which gets you way closer to how you would develop locally on your laptop: you have a REPL, you patch the function up, you hit the same HTTP call, and you check whether the result is now what you want.
01:18:05
Speaker
Yeah, Amazon has this for other languages, not for Java actually, but for other languages it has this console editor. You can go into the console, change your code, and redeploy. That's actually quite nice and quick. And that's one of the things borkdude was trying to push for when he was doing nbb.
01:18:24
Speaker
And depending on the arrangements you make, if the nbb code is relatively light, you can edit it in the console, redeploy, and then you get what you're talking about, Wouter. It's super fast. You've got something, or you've made a mistake on some colon, or some small thing, or some number is out of whack, or whatever.
01:18:44
Speaker
So hey presto, now you can do the same kind of thing without even the bullshit of Amazon's console. You can be doing it from your... let's just put in VJ's little words here: Emacs or some other shit. Exactly. Got to say, the console doesn't have Emacs. It's not Emacs.
01:19:05
Speaker
Well, maybe we can paste in his question now, Kimmo. Emacs or some other shit? Yeah, so it's Emacs. Oh, fuck. Thank God we've got the interesting stuff out of the way first, then. So I had good friends at university, and they were all using Emacs, so I went along. I had my years of Eclipse,
01:19:35
Speaker
like, on Java. And it was when I was coming back to Clojure then. Well, Eclipse had a plugin, I forget the name, but it kind of died off. Clockwork or something, wasn't it? Yeah, Counterclockwise. Counterclockwise, yeah. But I don't know, I guess,
01:20:00
Speaker
I went back to Emacs. Eclipse startup time makes Emacs look fast. It makes JVM Clojure on Lambdas look fast. Exactly. If that's the alternative, I can't blame you. This Lambda and the development thing, maybe it's the first time
01:20:27
Speaker
I said that not everything is awesome. So yeah, it's not that awesome. But I guess there's something to it, to keep looking into, like, okay, hey, maybe they are working on it and stuff like that.

Future and Environmental Impact of Serverless Computing

01:20:45
Speaker
And, kind of, could there be a way? So there's enough of an allure to keep looking into that thing. Yeah. Yeah.
01:20:58
Speaker
Yeah, I think you did a really great job there, Kimmo, actually. Honestly, what it triggered in me was this feeling, and I've had conversations with other people as well, that once the sort of scales fall from your eyes,
01:21:18
Speaker
you realize that this is just the way we should be doing things. Before we go further into that, there's probably something to say about open sockets and networks and seeds for crypto functions, bits and pieces like this. There are design aspects to consider when you're restarting things:
01:21:40
Speaker
there are considerations you have to care about, because not all state is equal. Even though you've got all your bytecode state and you've got yourself started up, if you're relying on state, particular crypto seeds is a great example, then you've got to be super careful. But in general,
01:22:04
Speaker
If you're not doing anything like that, or if you can arrange for those things to be injected after this particular startup period, and usually you can, I think, then it's all good. Then the question comes as to why aren't we just doing this everywhere all the time for everything? Because then what's the point of Docker, basically?
01:22:30
Speaker
Because the whole point of Docker is that it's a runtime that can be shipped everywhere and restarted, essentially. But it's kind of heavyweight for what it's doing. So the interesting thing to me is that this Firecracker thing, which we haven't really talked about in depth but has been kind of in the background, is Amazon's first big open source project. And it's this thing that creates these virtual machines
01:23:00
Speaker
and then is able to restart them and stop them and manage them. It's very lightweight, it's very secure, very fast, and it's completely open source. Even other Cloud vendors can use it. Hint, hint. I don't know what you're talking about. Yeah, I don't know if that's right. I don't get any bonuses from AWS.
01:23:26
Speaker
Well, I'm thinking about my friends in Exoscale who might want to implement Lambdas. Yeah, no comment. No, but the point is that, you know, I don't know if Firecracker will be the thing that Docker became. I mean, Docker seemed to be intent on trying to kill itself.
01:23:50
Speaker
But this idea of being able to start up a small process and run it super quick from the beginning has general applicability. It's not just something appropriate to Lambdas or to small things. So yeah, I think it's generally interesting. Yeah, there's this trade-off with the startup time, there's this process hibernation thing.
01:24:20
Speaker
I don't know how to pronounce it, but generally, you trade CPU cycles for storage, in that sense. There could be, say, environmental aspects also,
01:24:50
Speaker
that when you're about to... Let's not get into the environment, because you get all these Rust fuckwits who tell us that if you're not writing everything in Rust or C or whatever, then you're burning the planet and it's all your fault. Fuck off. Talk to Exxon and BP first, come to me later. Jesus Christ. Okay. Sorry. Yeah.
01:25:20
Speaker
Anyway, well said. You were going to interrupt. Well, no, first I just wanted to say that something we've been experimenting with over the last two weeks at work is running VMs in Docker containers, for hashtag reasons. So it can be done, and there's actually some benefit to it. But no.
01:25:50
Speaker
What I like about it is this theme where latency matters: immediacy, feedback loops, things like that. Generally speaking, slow startup times kill latency. And it's a bit the same if you do full-fat VMs, let's call them that: startup times are slow.
01:26:19
Speaker
But I'm just ranting a bit here, I think general musings, this might not go anywhere. But I think that's also part of the brand of the podcast, so I'll just let it rip. In general, we don't really tend to optimize for that.
01:26:41
Speaker
We optimize for throughput first, and often that leads to batching, and batching is very poor for latency. Then every once in a while latency gets really bad, and some new tech pops up which has better latency. Like you said, it was one of the prime reasons Docker containers got pitched as lightweight VMs: startup time, right? Startup time was about as fast as your program.
01:27:09
Speaker
And the runtime, I mean, there's some overhead to Docker, but it's extremely minimal. And at the end, it's just your kernel. But startup time was a huge argument for it. And lots of different things, like all the work borkdude's been doing.
01:27:25
Speaker
most likely all the work around native image. Like there's so much engineering that's going on, but the one thing why people reach for it is startup time. Cause I can't build my CLI in Java slash Clojure slash JVM. And if I do native image, which cuts this huge chunk of startup, latency gets better. Um, at the expense of throughput, because if you want to run a high-volume web server, you're probably better off with the regular JVM over a native image.
01:27:57
Speaker
But generally, it's kind of like, often, a lot of optimisation gets put into throughput and other things, and we neglect latency, and then every once in a while, something new pops up, because at the end, we like things to be immediate. I think there's also the sliding, sorry if I just barge back in, I think there's also this sliding scale in between
01:28:18
Speaker
like latency, throughput and availability. And I think this last one, availability, is something which is what the sort of Lambda and Fargate and all these kinds of things are playing with. This idea that actually you don't want these things to be on all the time. You want them to be available all the time, but you don't want them to be consuming memory or space or electricity.
01:28:43
Speaker
So if you can do things really on demand, like at the network level, as soon as a packet comes in, you start your machine up, then that's really good, because that means no one's paying for anything at that point. And it doesn't cost a lot. And again,
01:29:00
Speaker
If you've got a very high-throughput, very active website all the time, then this kind of design is horrible with these lambdas, because you end up spending a lot more money. And I think there's been a case in the last week or two where this has come back for Amazon themselves with Prime Video.
01:29:18
Speaker
you know, that they reabsorbed their Lambda kind of design into a monolith. But my point is that there's a kind of, there's a sort of three-dimensional space here that we need to play with, you know.
01:29:32
Speaker
Yeah, I don't know where this Graal currently is, but it also has this throughput problem, that if you run a long time, you'll probably get better results if you would be on HotSpot. Yeah, that's not true. Yeah, and I guess, so I don't know how they do the billing, who figures out how much users would pay,
01:30:00
Speaker
from Lambda, but if you take a Lambda that would be always on, and compare it to EC2, the Lambda prices are higher. But then I was listening to
01:30:21
Speaker
No, not mention it. Don't mention it. Don't mention it. Whatever it is. We're like Elon Musk's Twitter. We don't like external links. Freedom of speech is okay, but not that much. Come on, Jesus. I think I cut out the signal, the Trent name. Anyway.
01:30:46
Speaker
No, but who was it really? It's, it's, not mine, I don't, uh, this, uh, same thing, on some Threads chain thing. Oh, okay. But yeah, there's lots of alcohol in that talk, but yeah, they were going into this garbage collection stuff, memory management things. Like, that's, if you do reference counting
01:31:15
Speaker
garbage collection, it might be kind of snappy, like coming back on the allocations and freeing up. But there might be cases when just doing it in the background, in batches, is actually better. So it's like, not everything is the same shape. But yeah, this
01:31:42
Speaker
this trading CPU for the startup speed, that if you would have a way to do initializations of, say, I don't know, I guess the JVM, and they are, with Project Leyden and so on, trying to build something into the language too, so that you could do work before you execute, and
01:32:11
Speaker
Then while you are executing, you get to apply these throughput-friendly ways, but you could still have a way for the first thing that runs to execute in a fast way. And I guess there might be, when you run it locally on your laptop, it might be different than when you run it in your data center. I guess the data center environment is
01:32:39
Speaker
Well, it's more closed. But on your laptop, I don't know. I don't know. Cats running around. But the environment differs. It's a bit complicated to get the same experience, exactly the same, working in both environments.
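Coming back to the reference-counting point from a moment ago, CPython happens to demonstrate both sides of that trade-off in one runtime: reference counting frees acyclic garbage the instant the last reference drops, while cyclic garbage waits for a batched collector pass. A small sketch:

```python
import gc
import weakref

gc.disable()  # isolate refcounting from the cycle collector for this demo

class Node:
    pass

# Acyclic object: refcounting reclaims it immediately -- the "snappy" case.
obj = Node()
ref = weakref.ref(obj)
del obj
print(ref() is None)        # True: freed the moment the refcount hit zero

# Cyclic garbage: refcounts never reach zero, so reclamation waits for
# the batched cycle collector -- the "do it later, in bulk" case.
a, b = Node(), Node()
a.other, b.other = b, a
cycle_ref = weakref.ref(a)
del a, b
print(cycle_ref() is None)  # False: still pinned by the reference cycle
gc.collect()                # the batch pass
print(cycle_ref() is None)  # True: freed by the collector
```

Neither strategy dominates, which is the point being made: immediacy and batched efficiency are both useful, depending on the workload.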
01:33:06
Speaker
There's this nagging thing in my head, because I know there's a project that did exactly what you're saying here, and I used it. I can't find the name, but where you would start up, and then in runtime code you could basically make a call and say, take a memory snapshot right now, which it would write down. That's the CRaC thing.
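As a loose analogy for this snapshot-and-restore idea (not the actual CRaC mechanism, just a toy Python sketch with invented names): pay the expensive warmup once, checkpoint the resulting state to disk, and let later invocations restore it instead of redoing the work.

```python
import os
import pickle
import tempfile

def expensive_warmup():
    # Stand-in for the slow part of startup: JIT warmup, parsing
    # config, priming caches, and so on.
    return {n: n * n for n in range(10_000)}

snapshot = os.path.join(tempfile.mkdtemp(), "warm-state.bin")

# First run: pay the full startup cost, then checkpoint the state.
state = expensive_warmup()
with open(snapshot, "wb") as f:
    pickle.dump(state, f)

# Next "invocation": restore the snapshot instead of warming up again.
with open(snapshot, "rb") as f:
    restored = pickle.load(f)

print(restored == state)  # True: same warmed-up state, no warmup cost
```

The real mechanisms (CRaC, native-image build-time initialization) snapshot far more than application data, but the trade is the same: disk space and an earlier compilation of decisions, in exchange for startup latency.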
01:33:32
Speaker
Is it? I didn't, well, most likely the pattern that you could start up another invocation and reuse the memory dump that you did before, and therefore regain it. But I didn't use CRaC. Like it was not that, it was something else. Um, but in any case, like you said, this trade-off between startup time
01:33:54
Speaker
and disk space, or even optimizations, because you could, for example, choose to run pieces of your workload, right? Which would get, I don't know, certain pieces of the runtime in a certain shape or state, and then you snapshot that. But for the last hour almost I've been trying to think of the name. It's not coming. But I've used something like that. Yeah, but I think it comes back to this point that you
01:34:24
Speaker
I mean, I don't know. So it's like, you know, we know there's a few things that are hard in computing, and caching is one of them. And that's where, you know, you're essentially caching bytecode here. You know, whether you're doing it through a compiler or whether you're doing it through a snapshot, it's actually the same thing. Whenever you compile anything, you've made some decisions that you're committing to.
01:34:50
Speaker
And so you're essentially caching your decisions, if you like, in a program. But in the reality of the program and the world itself, or the program itself, you're right, Wouter. I mean, half the time, these caches could be just brought up front. And
01:35:12
Speaker
and then brought into being, made live. We're seeing this in the web world, aren't we, where they have this server-side rendering stuff occurring, where they're saying, actually, in order to give fast startup time for most of our customers, rather than render it on the client, we'll render it on the server.
01:35:30
Speaker
They don't want a static website, which is all server-side rendering ahead of time, but again, it's like a mix-and-match situation where they can render the server-side stuff for that particular client, then let the client continue its operations.
01:35:46
Speaker
There's all sorts of technology around that. I think it's very interesting how we're constantly trying to trade this space over. I think there was an idea in the past, well, I've got a computer at the end of the network. It's a fast computer, either it's a laptop or a phone. They've both got fast CPUs on them these days.
01:36:08
Speaker
then you start doing more and more and more in the JavaScript, and it becomes heavier and heavier and heavier. The network load becomes heavier. So it becomes impossible to actually open this goddamn page in less than 10 seconds. Yeah. Yeah. So the next thing that we have to do is some kind of awesome checkpoint-restart thing that we have
01:36:31
Speaker
for the client in a specific state and then the client comes in and wakes up the phone being.
01:36:46
Speaker
Well, that's one of the things, by the way, it's one of the things that always, I mean, I know we're kind of joking about this, but I mean, one of the things I always like about lisps in general, and one of the things I like about your sideloader is essentially you're just shipping code. All you're shipping is words, text,
01:37:05
Speaker
You're putting it somewhere, and then you're letting it turn to bytecode later. Now, obviously, you've kind of played with it in your REPL. Let's say you're confident that it works before you save it to disk or ship it over there or rely on it. But it's very nice to be able to just be sending small bits of text around, to be compiled at the right time. I think this is really nice.
01:37:30
Speaker
That's, I mean, even if you're not in the perfect REPL environment, the idea of being able to send code somewhere just as text, and then let the machinery take care of the deployment and the compilation, those kinds of stuff. This is a wonderful thing, in my opinion. Yeah, yeah, you know, it's the scp-the-PHP-file
01:37:50
Speaker
kind of thing. Yeah. Yeah. Yeah. But did you say "ugh" or did you say "oh, nice"? I'm "oh, nice". Because when you talk with people who came up, or let's say were around, during the early web, right? Me being one of them, you know, like,
01:38:16
Speaker
the joy of deploying PHP, where it was essentially like, I FTP'd the file to the server and it's done.
01:38:27
Speaker
The immediacy of it, like that tight feedback side. I know quite a few people who don't like PHP, but by God, they love the way the deployment flow went and how it got us all excited. Like, I've got this new feature. It's right there. I just put the file. And this gets super close to that, which I think is really... It's why I really, really like it because
01:38:53
Speaker
Again, the fast feedback cycle is what drives joy. And at the end of the day, it gets the endorphins flowing. Yeah. Although, I don't know, maybe I've been clunky in my skills, but the VS Code people have this remote stuff going on and like that. Yeah.
01:39:13
Speaker
the feedback thing there. You can actually edit it over, you can edit it on the server and just hit go, it's live. We sell it as a superpower, right? Like I can open my REPL on my server and hot patch, you know? Yeah. And, I don't know, even the kind of messing stuff up. So that's kind of the reason, the reality is, on this VS Code and tooling around stuff
01:39:43
Speaker
Like if we get a machine that has a GPU, we don't want to maybe leave it running, and then discover at the end of the month, okay, what's on your bill? So there's something in this running stuff remotely and being able to load code and get the feedback fast and use the computers that are there, maybe not on your laptop, I don't know.

Streamlining Development Processes

01:40:10
Speaker
If you have that kind of machine on your
01:40:12
Speaker
home. That might be also nice. But I think the main thing for me is that you can use your own computer and you can use their computer, but you don't have to stop and go through this extra build step. That's the thing which I think tends to kill productivity. It's like, oh, I'm going to deploy something now. Oh, I've got to stop. I've got to think about what my tool chain needs to do to deploy it out.
01:40:37
Speaker
And I've got to think of this extra step, and it's so annoying to me, so tedious. I mean, I get that it's sometimes necessary, but to me it's necessary because we haven't got these other bits of tooling in place, you know. Now there may be some reasons around security and provenance and blah blah blah, which, you know,
01:40:55
Speaker
I'm kind of like, I'm down with that, but at the same time, I want that to be forced upon us, let's say, rather than be an inherent part of our environment. It's context, right? Essentially, for deploying software that needs to go to a fleet of 500 machines, a bit of process is kind of worth it.
01:41:14
Speaker
You want some guardrails and that can take a while, but when I'm actually actively developing something, I don't need to go through the same guardrails.
01:41:34
Speaker
If the technology makes you go through the same thing, then it's not accounting for a context. And so I think, like you said, you're not running the side loader in production, but it's totally fine for a dev environment to get your feedback cycle. And you actually have a nice gradual route to something that will get the seal of approval from the enterprise security team, right?
01:41:59
Speaker
Like you're not excluding it, but at least I get it in the context where it's applicable. Yeah, but I'll just give you one more example of this, though, that borkdude did, so to bring up borkdude again, which is Scittle. Because Scittle is another tool where you can write your ClojureScript code this time, and you just save it to your files, and
01:42:23
Speaker
then you git push it, or if you're working locally, you just save it to your files locally. And then you refresh your page, and you didn't do anything. There was no shadow-cljs compiler running in the background. There was no ClojureScript deploy step. You just save the file, you reload the page, and the program just works. This is nice.
01:42:48
Speaker
And again, you know, maybe it's for optimization reasons, for loading files, you know, for, like you say, well, if you're going to deploy it to the internet and you want it to have provenance, maybe you have to go through this process of condensing it and, you know, whatever these things are,
01:43:05
Speaker
then okay, fine. But for just developing an idea, just to scratch something up, it's super nice not to have to have a bunch of tooling. And it's especially good, I think, for beginners, these sorts of experiences. All the kind of bureaucracy is a sort of inhibitor to the beginner experience, I think. So if you can kind of get rid of the bureaucracy and just do things, then
01:43:34
Speaker
everyone's happier, I think. And then you can introduce these doings of things, like you said, for a particular context, for a particular reason, because enterprise needs it. And then that's a negotiation, you know, or that is kind of, yeah, okay, something we have to do. But it's not built into the tooling. I think that's the important part that we want to talk about. If it is built into the tooling, then the tooling should do all of it.
01:44:02
Speaker
Yes, indeed. Yes. It's just like, don't leave it halfway. I've got the same thing with code generation. It's like, if you start to generate code, it needs to cover all the cases and work all the time, because I don't want to think about it. Yeah. I guess you can easily end up in a works-on-my-machine situation. Yeah, exactly. Don't want to go through the
01:44:30
Speaker
hassle of deployment, because it's slow and takes time. There was a train of thought left from this Lambda annoying-deployment thing. I kind of forgot to say how, in this current project that I'm working on, we just use Jetty locally and we have the Ring handler there. And it's funny, I think
01:45:00
Speaker
people come into the project and they're not realizing that this is running in Lambda, because it's kind of so magical that that's how it works. But yeah, if there's a long-taking, not-so-trivial compilation step that you have to go through, because there are changes that you have to try out in the environment there, then you don't do that, and then it's...
01:45:29
Speaker
So it's like the damn process also. There might be the...
01:45:35
Speaker
can use different products in the corporate way. Right now, Kimmo, I've just looked at the clock here, and without Vijay we're going very long. We're going for it. Yeah. Oh my God. He's the timekeeper. He's the timekeeper. Yeah. Oh, Jesus. Yeah. So we're going to really be annoying the audience here, but fuck it. If they've listened this long, they can carry on for another 10 minutes.

The Joy and Elegance of Using Clojure

01:46:03
Speaker
So one thing we like to talk about, actually, is there's a few standard things we like to go through at least. One part is, and I think we've covered it a little bit, but what kind of things are you really enjoying about Clojure in general? What kind of things can you say? Where's the joy coming from Clojure, from your perspective? Well, like,
01:46:32
Speaker
I don't know, maybe I'm too old, but not actually. Maybe we'll flip the question then, because we can then start with the misery and end with the joy, give you a better chance to work up to the joy. It's really nice that while working in the REPL and working with data and code, when you write the code to match the data,
01:47:01
Speaker
Both ways even out. If you have some data that is awkward, your code will also look awkward. Somehow, when you get the shape of the data right, then you get the shape of the code also.
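A tiny sketch of that heuristic, in Python rather than Clojure for self-containment (data and names invented): when the data is shaped awkwardly, the code inherits the awkwardness; reshape the data and the code straightens out with it.

```python
# Awkward shape: parallel lists force index bookkeeping into the code.
names = ["ada", "bob"]
ages = [36, 41]
awkward = [f"{names[i]} is {ages[i]}" for i in range(len(names))]

# Fit the shape to the domain -- one record per person -- and the
# code reads like the data it describes.
people = [{"name": "ada", "age": 36}, {"name": "bob", "age": 41}]
clean = [f"{p['name']} is {p['age']}" for p in people]

print(awkward == clean)  # True: same result, very different code
```

Both produce identical output; the difference is that the second version has no bookkeeping left for the reader to verify, which is the "shape of the code follows the shape of the data" point.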
01:47:31
Speaker
the REPL. That's the kind of good thing. But yeah, I guess the, I don't know, the downsides. Sometimes you just don't know what's inside this map in here. But there might be something wrong in the code then, that it's gotten too far away from the context of the data that you're working on. That if you need, like,
01:48:01
Speaker
Well, yeah. I used to think that there has to be types, because how else can you find out, if you change the low-level model somewhere, what are the ripple effects? But if you are in the place where you really need that kind of info, there might be chances that there's something else also wrong.
01:48:29
Speaker
But it might not be because of you, it might be for reasons of the project and context. If your type ends up being in every other file of the project, maybe you have a design and encapsulation problem. But I really like that heuristic though, Kimmo. I've heard people talk around about it in certain ways, but I've never heard it encapsulated so crisply.
01:48:54
Speaker
you know, if the shape of your data is right, the shape of your code will be right. So that's really, I like that. I like that a lot. I think this, you know, this is the quote of the episode. It does ring true to me, you know, it really does ring true. Yeah. So, okay, just rounding it out then. So what's next then for Kimmo?
01:49:19
Speaker
For me, I don't know. Summer holidays. For you, yeah. I'm sure he'll find another runtime to hack. Summer holidays, yeah, that sounds good, yeah. Yeah, I don't know. I tend to work on interesting things, and interesting things tend to come my way. OK, so they come at you rather than you going out to them.
01:49:50
Speaker
Yeah, like this opportunity too. But there might be a lot of legwork before those arise. Yeah, nice, nice, nice.
01:50:05
Speaker
Well, this has been really good. Thank you very much. I don't know if you've got any more questions for Kimmo, Wouter? No, not at all. Like, amazing conversation, and thanks for putting up with me. I mean, I'm new to this, you know, like, I very much am the person behind the scenes normally, so.
01:50:27
Speaker
No, but it was amazing, tons of great subjects, lovely ideas, amazing quote, because it stood out to me like Ray said, you know, if the...
01:50:37
Speaker
you know, the shape of the data is right, the shape of the code is right as well. And I was also... I didn't say that. He said that. When he said it, it jumped out. My contribution. Yeah, it's super powerful. And I was also thinking, that's most likely why
01:50:59
Speaker
why, or at least a big chunk of why, I enjoy writing Clojure, because effectively this is true. It doesn't really add any complexity beyond that, right? Like if you get the shape of the data correct, your code will look elegant and everything is in the right place.
01:51:17
Speaker
And I guess that's not true in Java, generally, or at least not old-school Java, right? Where it adds a burden on top of it, so that you don't get to the equilibrium, maybe. I think if you want to end on a Hickey quote, it's like, the only thing you can do with information is ruin it. True.
01:51:37
Speaker
Oh, you know. All right. I'll channel my inner VJ on that bombshell. We've got to put him in at the end on that one. Yeah. Yeah. Thanks very much. It's been really good. Yeah. Thank you. Yeah. Really nice opportunity. Yeah. All right. I'm glad to have been here. Thank you.
01:52:07
Speaker
Thank you for listening to this episode of defn, and the awesome vegetarian music on the track is Melon Hamburger by Pizzeri, and the show's audio is mixed by Wouter Dullert. I'm pretty sure I butchered his name. Maybe you should insert your own name here, Dullert.
01:52:24
Speaker
If you'd like to support us, please do check out our Patreon page, and you can show your appreciation for all the hard work, or the lack of hard work, that we're doing. And you can also catch up with either Ray or me if, for some unexplainable reason, you want to interact with us. Then do check us out on Slack, the Clojurians Slack, or Clojureverse, or on Zulip, or just at us at Defn Podcast on Twitter.
01:52:53
Speaker
Enjoy your day and see you in the next episode.
01:53:29
Speaker
Thank you for listening, and thank you for tolerating this episode. Your patience is very much appreciated.
01:53:38
Speaker
So Wouter and Ray took advantage of my absence and decided to destroy the reputation of low-quality content that I have worked hard to bring to defn. And I apologize for that. Nevertheless, I promise to return in the next episode to bring back my bullshit quality that you all expect from this podcast. Until next time, stay safe.