
#43 - Alex Yakushev

defn
In this episode Alex stuns us with his preferred option in the spaces / tabs debate. He goes on to impress with his work on Clojure performance at scale. Here are some links to his work: Writing: http://clojure-goes-fast.com Code: https://github.com/clojure-goes-fast/clj-memory-meter Follow Alex on Twitter: https://twitter.com/unlog1c Sadly, we didn't get around to talking about this on the show, but it's a great favourite of the community: https://github.com/alexander-yakushev/compliment
Transcript

Introductions and Technical Issues

00:00:15
Speaker
So, defn episode number 43 with Alexander Yakushev. This is Vijay from Holland. Ray from Belgium. And Alex, where are you sitting right now? Oh, sorry, I'm getting echo right now. Oh, crap. Oh, okay. Is it Alexa? That's how we start. What is it? Yeah.
00:00:40
Speaker
Okay, okay. Yeah, I think I sorted this out. Yeah, sorry. Sorry about that. No, I am in Kyiv. I'm based in Kyiv, Ukraine. So that's where I am from. And I'm glad to be here. Actually, Alex, I never really noticed this before. But I was gonna make a joke, actually. But it turns out to be kind of true that your surname is basically yak-shave.
00:01:03
Speaker
Yep. Is that a joke already? Or is that... you know, I don't get it yet. You have to explain it to me. I was referring to the yak shaving part of the surname. Yes. Yeah. Sorry. It's an absolutely disgusting, terrible joke. So we'll move on.
00:01:29
Speaker
You get one per episode. Yeah. Only one.
00:01:35
Speaker
OK, so let's get started.

Patron Shoutouts and Milestones

00:01:38
Speaker
I think, first of all, we'd like to thank our patrons who have been supporting us. And we just crossed 25 — it's 26 right now. So officially, we are over a quarter century of supporters for defn, which is amazing. Actually, it's catching up with the episode numbers. I don't know what happens when we get to, like,
00:02:02
Speaker
the point where the number of supporters hits the number of episodes. I don't know if we all die or something, but I hope not. I've got a few names actually I'd like to give a shout out to, if that's possible.
00:02:12
Speaker
Yes, please. Yeah. So first of all is Fed Reggiato. Thank you very much, Fed. We've talked to you on Patreon as well. You've been a stalwart supporter. Thank you very much. Next up is Donald Bleil. Thank you very much, Donald. It's really good. And Jeremy Field is next. Thank you, Jeremy. Really appreciate it. And last,
00:02:37
Speaker
is Alan, but with no surname, but he's at co-op sauce.org. So that sounds good. Right, so thank you very much all of you guys and girls and all the people that are in between, up and down, whatever you're doing. If you support us, you're awesome. Even if you don't, you're still pretty good. You like Clojure, you're listening to this stuff, not accidentally. So thank you very much. We really appreciate it. Cheers.
00:03:04
Speaker
Great. So let's get into the episode.

Alex's Role at Grammarly

00:03:07
Speaker
So Alex, can you please give a small introduction of yourself? Where do you work? What is your relationship with Clojure? Is it intimate, Alex?
00:03:19
Speaker
Yes. Yeah, sure. Sure. I can start. So I work here in Kyiv at a company called Grammarly. And I work as a backend engineer there doing backend stuff, a little bit of frontend, some NLP, and that sort of thing. And I've been using Clojure for eight years or something like that. So I guess it's pretty intimate by this point.
00:03:48
Speaker
Yeah, it's so intimate that you're involved in the back end. But moving on, yeah, Vijay. So, of course, one of the things that you've been talking about a lot — or at least you're maintaining the blog called Clojure Goes Fast.
00:04:06
Speaker
So you seem to be obsessed with performance around Clojure and JVM stuff. So can you give us some background: what is your opinion about Clojure performance, and can it really go fast?

Clojure Performance Obsession

00:04:18
Speaker
And if it is going fast, how fast is it?
00:04:21
Speaker
Okay, yeah, obsessed is a good word. Probably not even a strong enough one in this case. But yeah, I've been caring a lot about performance on the JVM, for Clojure specifically, for the past year or so.
00:04:38
Speaker
And that comes mostly from the work I'm doing, because the loads we have to deal with, the amount of processing our servers have to do is quite substantial.
00:04:57
Speaker
When doing web servers, web services stuff, any computational problem is at least a big-O-of-n problem. Even if your algorithm runs in some constant time, you still have to run it for every user that uses your server, that kind of thing.
00:05:22
Speaker
The performance part — at some point it becomes important for different reasons. Even though computers are getting faster and faster, most of the time we don't get to see that too much,
00:05:38
Speaker
just because software can always eat away that extra performance year after year so that the experience remains pretty much the same. Well, at least that's for the desktop software, for the user-facing software. For the servers, the situation is a bit better. So the better hardware we get allows us to process more requests to handle
00:06:07
Speaker
more users, but it's still important to know what you're doing, to know how the underlying things work, to be able to extract all the performance from the hardware. Quick question for you actually on that. In your use case, what kind of portion of your website is dynamic versus what
00:06:29
Speaker
can be cached? How much can you do in terms of performance outside of the fundamental processing of forms, or evaluating Clojure code?

Grammarly's NLP Work

00:06:44
Speaker
Those are the obvious easy wins, aren't they, to cache things and to CDN them and to make things static or memoize them or whatever. How much of that kind of stuff are you involved in doing versus
00:06:57
Speaker
Oh no, we really have to have everything live all the time. Okay, so the thing you're talking about is mostly the website side, and what I work on is different. I work mostly on the NLP team. So what we do is we process user texts to find errors in them or some
00:07:19
Speaker
a possibility for improvement of that text. And that involves processing text in real time, extracting semantics from it, extracting grammatical structures and checking them if they are correct, comparing them to some baseline, and then coming up with some suggestions for how to improve that.
00:07:41
Speaker
Basically, there isn't much that you can cache there. On the high level, of course, there are different caches on different levels under the hood, but it's not something that you can compute once and then just return to any amount of users, regardless of how many you have.
00:08:02
Speaker
And how long have you been working at Grammarly with Clojure? So I've been working there for three years now. And I think I started using Clojure there right from the start. Just the intensity of it was changing over time.

Alex's Programming Journey

00:08:20
Speaker
But apart from Clojure, we also use quite a lot of Common Lisp and Java as well. So I've been writing some Common Lisp there, too, and Java.
00:08:30
Speaker
Okay, and before we go deeper into the Grammarly stuff, or the work that you're doing right now: what was the journey before Clojure, and how did you hear about Clojure? How did you get into this? Okay. Yeah, my story, I think it's quite
00:08:46
Speaker
an ordinary one; some people can recognize themselves in it. But I started programming when I was 12, I think. And it all started with Pascal and Delphi. And I was doing some competitive programming, like algorithmic stuff, that kind of thing. And looking back at it, I think I wouldn't even call that programming.
00:09:08
Speaker
Because I knew algorithms, I could come up with a way to solve the artificial task — that was mostly what it was. Was it like TopCoder or something? Yes, some sort of that, but just not online. It was on the school level and college level.
00:09:30
Speaker
So yeah, my programs worked and they could solve the task. But I mean, I didn't use functions. I didn't use procedures back at the time. So it was just one large blob of code. And I didn't use indentation back then. Why would you? Yeah, of course. This is the first time I'm hearing I didn't use indentation. It's like a feature of a language or something.
00:09:53
Speaker
Yeah, I think indentation is a nice feature, and it's interesting. Yeah, it was in Pascal. So you had to define all the variables up front, before the procedure. And since I didn't use any procedures, it was just one big begin-end kind of program. So I had this block of 20, 30 variables with different names on top. Yeah, it was fun. But anyway, when I went to university, they taught us Java there. So I switched to Java and that was kind of like a
00:10:24
Speaker
mind opener kind of thing, how programming actually works, all the abstractions. Yeah, indentation, oh my god. Actually, they forced us to indent our programs, and I still had this very strong opinion about that. So what I did instead, again, I used Pascal to write an auto-indenter for Pascal code.
00:10:51
Speaker
Just so that I didn't have to do it manually. Yeah, I didn't know about good editors or IDEs back then either. But anyway, in the tabs and spaces war, you're just... nothing? Nothing. Yeah, absolutely. It's neither: zero-width space.
00:11:13
Speaker
You've introduced a new category now, you know, that's totally awesome. It's much better than a two-party system, right? We need to have this one. Tabs or spaces? Neither. Yeah, it's a false dichotomy, right?
00:11:33
Speaker
Awesome. Okay. So anyway, Java was fun to me back then, but quite quickly I started looking at some other things, because everybody in our group was learning Java and using Java, and I wanted to try something else,
00:11:50
Speaker
just to stand out from the crowd. And I looked at Ruby, and that also interested me a lot because it was very different from what I'd seen before, what I'd learned before. It was very easy to write these small programs, and I learned about regular expressions. That was also a mind-blowing thing.
00:12:10
Speaker
And then somehow, I think somewhere on a forum, someone wrote about this weird Lisp thing, which is like the best language ever, and everyone who doesn't write Lisp is totally missing out. So I looked that up. I downloaded the thing. I think it was called Lispbox. So it was an Emacs with Common Lisp bundled together.
00:12:35
Speaker
So I managed to install that on Windows. And well, yeah, back then I was using Windows, and it actually worked, for some definition of worked. But I also found the book, the GigaMonkeys book, the Peter Seibel book, I think — Practical Common Lisp. And the first chapters, they were totally mind blowing. I mean, the second chapter is where you get to implement an SQL-like language inside Lisp.
00:13:05
Speaker
Yeah, back then I understood that I certainly was missing out on something, and that programming can be absolutely different. And then quickly I discovered SICP. So I tried Common Lisp for a bit, then played with Scheme a little bit too. Yeah, but then, the assignments we had at the university —
00:13:33
Speaker
We had to do some graphical stuff. I mean, you have to write a program, but then you have to make some sort of user interface for it. And because there were not many options for doing that in Common Lisp or in Scheme, I still had to do that in Java. And at some point, I decided to look for something like: is there a Lisp for the JVM? And after a bit of
00:13:58
Speaker
Googling one evening, I actually found a few. There is this Kawa thing, which is like Scheme on the JVM, ABCL, which is Common Lisp on the JVM, and there are a few others. But I also found Clojure, and Clojure, I think, had
00:14:17
Speaker
the most professionally done website of all those amateurish language websites. I think at that point I made up my mind about what I should try next. So basically you're saying that the level of professionalism and welcomingness of the Clojure website got you going. I mean, I'm pretty sure that's going to get a retweet from Stuart Halloway.
00:14:46
Speaker
Or Alex Miller. I think at that time it was on PBwiki or something, right? The website was not actually like this at that time. It was built by Rich's brother. And the website was that shitty PBwiki thingy somewhere. It's basically a wiki website. Yeah, it could be. It was 2009 or something. No, I think 2010 maybe; by then it was actually the website we have. No, yeah.
00:15:15
Speaker
Not the one right now, but it's like just the white thing. Yeah. Okay. Yeah. It was based on a PB wiki thingy, I think. I don't remember exactly. It's called Peanut Butter Wiki or some of those things. Anyway, good times. That was a long time ago.
00:15:35
Speaker
Okay, cool. So that's how you got into Clojure. So what kind of stack are you using

Lisp in Computational Linguistics

00:15:43
Speaker
right now? Because I know we spoke a bit about your work when you were here for Dutch Clojure Day 2018, which is, I think, nine months ago, I suppose.
00:15:54
Speaker
So, before your work — maybe a quick idea about what your product does? That helps people to build some mental model of Grammarly.
00:16:08
Speaker
Yeah, again, as I was saying, Grammarly is a writing assistant: it checks for errors in writing and suggests improvements in grammar and style and other areas. And we have a big team of computational linguists and researchers who are skilled in linguistics and in natural language processing.
00:16:36
Speaker
And they come up with different algorithms and programs to do all that stuff. And the role of my small team is basically to give them the instruments to do that. So that involves providing some sort of platform where they can develop
00:16:57
Speaker
their algorithms, their code. And then at the same time, it's a platform to ship those features, those algorithms to the end users. And yeah, that's what I'm mostly working on. And then at the same time, once those are shipped, I'm responsible for maintaining that, running that in production. So do you have a DSL then for your scientists?
00:17:27
Speaker
Yeah, actually, we have, and it's written in Common Lisp right now, most of it. But there are some plans to move some of that to Clojure as well. Nice. It's a fantastic product, by the way. I'm a happy customer — I've been using it for a long time now, especially for writing my old MBA assignments and all that shit. It's really nice for switching to formal language, and it's really fast. I really like it.
00:17:59
Speaker
Anyway, Grammarly doesn't sponsor us, but it's a very nice product and I've seen so many people using it already. I think there is already a free version that you can use without any restrictions or something. Anyway, but is it only English or do you plan to go to other languages as well?
00:18:19
Speaker
Yeah, first, thanks for the compliments. It's only English right now, and there are no hard plans to move to other languages yet. And mostly that's because natural languages are all so different. It's like the stuff that we learned for English and
00:18:44
Speaker
the work that we have already done for the English language doesn't actually help much in moving such a product to other languages. You actually have to redo it mostly from scratch. And that's both for the programming part and for the domain language part as well. It's totally different knowledge and a totally different pipeline, and
00:19:12
Speaker
yeah, it's quite hard to just add a new language to a product like that. But how did Grammarly pick Clojure, or why? I think that's mostly because of me.
00:19:28
Speaker
That's nice. At Grammarly, it's only our team that uses Clojure, so it's not a completely Clojure-based company. People use Java as well, and Scala, and of course the front-end teams are doing their front-end things.
00:19:49
Speaker
And so there is no ClojureScript or something like that on the front end. How were you able to sell Clojure to them? Well, since
00:20:04
Speaker
the team that I ended up on used Common Lisp already — so I think that's an even bigger evil, a harder thing to sell. But that also happened historically. Was that your opening pitch? 'Common Lisp is a much bigger evil than the one I'm offering. We're a lesser evil.' Interesting pitch.
00:20:32
Speaker
Yeah, but the Common Lisp system is quite old already. Old not in the sense of outdated — we're actually working on it — but it's been around for quite a long time at Grammarly. And Lisp, historically, was
00:20:51
Speaker
quite popular for AI stuff, language processing as well, for different reasons — a bit for historical reasons, because the first wave of AI happened to coincide with the times when Lisp was developed, when it was actively used and popular. Unfortunately, it's not like that anymore, but some people are still carrying the flag,

Benefits of Lisp Languages

00:21:20
Speaker
so to say.
00:21:20
Speaker
Yeah, but Lisp works quite well for a lot of reasons in this task. Common Lisp itself is a great language, but it's a language that lacks a good runtime
00:21:36
Speaker
like the JVM, for example; the language itself is actually spectacular. And the environment that you can build with it, this interactive live thing that you can then give to your domain specialists — they can work with it, and you can evolve it, you can improve it, you can develop custom things for them.
00:22:00
Speaker
And they can use it without doing those recompilations and restarts and reloads. It's like developing a product and an IDE at the same time. So when I describe our Lisp system to other people, that's what I usually say: without it, we'd have to develop two separate things, a production system and an IDE to develop it. And basically, we
00:22:29
Speaker
cut the development times by half. Yeah. That's a really nice pitch. And so I'm assuming it's Emacs everywhere for you. Absolutely. Of course.
00:22:42
Speaker
Well, for Common Lisp, there are not a lot of options. There are a few commercial ones as far as I know, but I don't know if anyone at Grammarly has ever tried them. I certainly didn't, but Emacs plus SLIME works quite well for us.
00:23:03
Speaker
Nice. So because you have experience in Common Lisp as well, how do you contrast it with Clojure? Common Lisp is certainly much more holistic, in the sense that it's a thing in itself. It was designed from the ground up to be a Lisp, right? Whereas Clojure piggybacks a lot on the JVM, and that involves making some design decisions
00:23:34
Speaker
which are not completely Lisp-like, so to say. For example, Common Lisp has a superior error handling system. And it's... Here we go. Yeah, that's what you were waiting for.
00:23:49
Speaker
No. Error handling in Lisp, it's a lot different from regular languages. The main thing about it is that it's similar to
00:24:05
Speaker
just to exceptions and to catching exceptions. But when you catch them, you don't unwind the stack. And that means that the program is able to go back to the place where the failure occurred and continue processing from that point. I can give an analogy that I like to tell. So imagine
00:24:29
Speaker
Imagine there is a factory, and there is a factory worker — let's call him John — and he's cutting bolts from metal molds or something. And he has a batch of 1000 bolts to make, and then at bolt 527, something breaks. I don't know, the bolt that he was making,
00:24:53
Speaker
like, it got broken. And now this John guy has a few options. He can throw it away and then make another one as a replacement. He can throw it away and not make a replacement, and produce a batch of 999 bolts — but then the amount of source material used will not match. Or maybe he can stop the production line. Maybe he can do something else. So he has all the
00:25:21
Speaker
abilities to make a fix, but he doesn't know which one to do because he doesn't have the authority to decide, for example. So if it's a Java factory, then what he does is he throws away all of his batch. He shuts down the line. He goes to the person responsible for the line and asks, like, what do I do with this bolt? And that person doesn't know. So he shuts down the whole building.
00:25:50
Speaker
And he goes to the director of the factory. And yeah, he powers off everything — actually, he burns everything to the ground. And he goes to the director of the building, the director of the factory, and says: what do I do with this bolt? And the director might say something like, yeah, okay, you have to replace it, or: here is a replacement bolt for you.
00:26:14
Speaker
But at that point, the John guy has stopped caring. He just quits his job, and the whole factory has burned down. And now the director of the factory has to build a new one and hire all new workers and start the production again. So that's how Java works — or Clojure, for that matter. But in something like Common Lisp,
00:26:39
Speaker
the error handling is actually split between the place of the error and the place of the decision making. It's called restarts and condition handlers,
00:26:54
Speaker
and what it means basically is that the code underneath it knows how to fix things and the code on the top knows what decisions to make and if something gets broken then this
00:27:10
Speaker
this error, it propagates to the top. And on the top, the code on the top says, OK, in case of a network failure, you have to restart. You have to try again. And then that failed code, it continues right from the place where it erred. And it means not burning the factory and building a new one. It can continue right from the same place. Actually, the error handling is also very
00:27:37
Speaker
closely tied with the debugger. So if there is no error handling code and, for example, the network failed, then you get an exception. You get a stack trace in your Emacs — hopefully you're using Emacs. And what the programmer can do at that point is just click on a button saying restart, retry that thing. It will not retry the whole operation that you
00:28:07
Speaker
told it to do, but it will retry that little thing that failed and continue. And if there is already some state in your program, it will be retained. It will continue from the failing point. That sort of reminds me of when Windows gets a problem and asks you to debug it.

Error Handling: Common Lisp vs. Java

00:28:24
Speaker
Yeah. Emacs does that too.
00:28:29
Speaker
Yeah, they are both operating systems, right? Exactly. But — because you dig a lot into JVM internals as well with your performance investigations — do you think this is something that can be built into Clojure, the condition system or the restart system? Is it easy? Yeah. Or is it simple?
00:28:57
Speaker
Yeah, it's a bit of both. But it is possible to build something like that on top of what Clojure already offers. Actually, you don't need much more than dynamic variables, because that's what it uses underneath: putting things on the call stack and then being able to look them up.
00:29:24
Speaker
And I have a small library called Perseverance. It's in the Grammarly repository. And it actually implements something like that, just for retry logic: to be able to split the place where the error can happen and where you want something to be retried from the place where you want to decide what to do.
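
A minimal Clojure sketch of the idea described here — low-level code knows how to retry, a dynamic var lets the caller decide whether to. This is purely illustrative, not Perseverance's actual API:

```clojure
;; The decision channel: callers rebind this to choose a policy.
(def ^:dynamic *retry-policy*
  (fn [_exception] :give-up))

(defn with-retries*
  "Runs f; on failure consults *retry-policy* instead of unwinding
  all the way to the top. (A real library would cap the attempts.)"
  [f]
  (loop []
    (let [result (try
                   {:ok (f)}
                   (catch Exception e {:error e}))]
      (if-let [e (:error result)]
        (case (*retry-policy* e)
          :retry   (recur)
          :give-up (throw e))
        (:ok result)))))

;; Low-level code only marks *where* a retry is possible:
(defn fetch-data [url]
  (with-retries* #(slurp url)))

;; Top-level code decides *whether* to retry, without touching fetch-data:
(comment
  (binding [*retry-policy* (fn [e]
                             (if (instance? java.io.IOException e)
                               :retry
                               :give-up))]
    (fetch-data "http://example.com/data.json")))
```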
00:29:51
Speaker
But to be able to do that for the general case on the JVM, I think, would be close to impossible, just because the JVM makes different design decisions in that regard. The Common Lisp stack is more heavyweight, so by the point something errors,
00:30:11
Speaker
Nothing is thrown out. Everything is retained. So you can actually look at the local variables on the stack, like on each stack frame. That's very convenient. But then it brings a lot of overhead as well. So if you want to write a classic Java program that throws an exception 1,000 times per second, which every Java program does, right?
00:30:35
Speaker
then you probably would not be able to do that with something as heavyweight as Common Lisp exceptions. Okay. And because the other thing that I was wondering about is because you're digging deeper into Clojure's performance, are there any surprises or some things that you found out that, okay, this should be better or this is the reason why it is getting slow? Or is it slow?
00:31:05
Speaker
Yes, so the thing about Clojure performance — and I think it's by design, and I'm quite happy about that — is that Clojure always leaves a leeway if you need better performance for a particular thing. If you want that thing to be faster than it is by default, there is a way to do that. So Clojure,
00:31:31
Speaker
most of the time, doesn't make hard choices like: OK, this is going to be slow because we want this to be convenient. This applies in many places, like immutability, because you can opt out of it. It applies to dispatch: you can use interop anytime to call into Java. You can use unchecked math if the safe math doesn't satisfy you.
00:32:01
Speaker
For many such cases, you can still drop down to the raw thing, at least raw in the JVM sense, and use it. So I cannot say that there were some surprises where I found out that Clojure does something slower than it should, because most of the time it's just not a problem: I don't concentrate on that much, and I just use the fastest thing possible.
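
Two of those escape hatches, as a hedged sketch — locally opting out of immutability with transients, and a type hint that turns a reflective call into a direct one:

```clojure
;; Opt out of immutability inside one function: build with a transient,
;; then freeze back into a persistent vector at the end.
(defn evens-below [n]
  (persistent!
   (reduce (fn [acc i] (if (even? i) (conj! acc i) acc))
           (transient [])
           (range n))))

;; Java interop with a type hint: the compiler emits a direct virtual
;; call to String.toUpperCase instead of going through reflection.
(defn shout ^String [^String s]
  (.toUpperCase s))
```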
00:32:32
Speaker
What about the classic Lisp or Scheme things that the JVM doesn't have, like tail-call recursion?
00:32:41
Speaker
Tail call optimization, you mean. Yeah. Sorry, tail call optimization. Yeah, for recursion. Well, in my practice, that never becomes a problem, actually, because for plain recursive algorithms, you can do the loop/recur thing. Or again, there are different ways to write it. And as a
00:33:04
Speaker
generic optimization, where you do the tail call optimization all the time so that you can elide the stack frames and it suddenly becomes faster — that optimization actually comes with a trade-off, because suddenly you don't have the stack. So if something blows up in your face, you cannot look up what were the steps that your program
00:33:26
Speaker
took to arrive at this point. So I think there is a trade-off there. And the JVM, for example, does a lot of different optimizations to improve the performance for such cases without taking the trade-off of TCO. So yeah.
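
For the plain self-recursive case, loop/recur gives constant stack usage without relying on TCO — a small sketch:

```clojure
;; Iterative factorial: recur re-enters the loop without growing the
;; stack. 1N makes the accumulator a BigInt, so large n won't overflow.
(defn factorial [n]
  (loop [i n, acc 1N]
    (if (zero? i)
      acc
      (recur (dec i) (* acc i)))))

;; A naively self-calling version would throw StackOverflowError long
;; before (factorial 100000) completes; this one is fine.
```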
00:33:46
Speaker
What kind of tools do you use to measure these things? Of course, there is VisualVM and a couple of other tools that you get, JTrays and all that stuff that comes with the JDK. But for Clojure stuff, do you use anything different than that? Again, the beauty of Clojure in this case is that anything that works for the JVM applies to Clojure perfectly.
00:34:08
Speaker
And that's one of the reasons why I keep using Closure, because if you take something else, if you take something new or something not as well supported as the JVM, then you most of the time have to dig through some half-finished attempts to have the functionality that is available for the JVM readily, and a lot of people maintain it, a lot of people work on it.
00:34:36
Speaker
And we have access to that just for free, just because we stay on the JVM. So, all the tools I use — yeah, you mentioned VisualVM. It's a nice tool, but it's quite limited in its functionality. So I wrote a few libraries that we might link
00:34:56
Speaker
in the show notes. So there is a profiler, and also tools to measure memory usage of objects. Also, recently I did a thing to measure the allocation rate of the current JVM process. And then there are also some even lower-level tools, like the perf tools from Brendan Gregg, which are not really JVM-focused,
00:35:26
Speaker
but are there for system-level debugging, system-level profiling and benchmarking; they can become quite helpful as well. There is also a set of Unix tools, like the generic vmstat and iostat and all those tools that many people use, and who knows what else. But yeah, there is just
00:35:54
Speaker
a bunch of them and you can collect your tool belt and start using it without having to write them yourself or just...
00:36:05
Speaker
spending a lot of time checking what works and what doesn't. Well, what about that? Because there are some commercial things, aren't there? Like YourKit and stuff like that, which will give you profiling information on the JVM as well, at runtime, and give you causes of memory leaks and things like that. There are some commercial toolkits like that as well.
00:36:31
Speaker
Yeah, there are some. I haven't used them too much. I think I have tried YourKit maybe once or twice. But recently, regarding profilers specifically, there has been like a boom of good free profilers. There is this async-profiler, which is, I think, the best that the JVM has right now.
00:36:56
Speaker
And its kind-of predecessor, Honest Profiler. So there are a few of them. They are quite lightweight and suitable to be used in production as well as in development. And they give these new, alternative ways to render the results using flame graphs, which are quite intuitive and portable as well.
00:37:20
Speaker
I think the commercial profilers can still offer something that's more than is available in the free ones. But in my work, the free ones are often enough.
00:37:33
Speaker
Do you use any profiling in production code? Because recently I started using the Tufte library from Peter Taoussanis — the library basically measures at runtime; there is a macro that you use that is going to collect all the invocations, and it will give you a list of the performance details of function calls, or whatever code blocks.
00:38:01
Speaker
I started using it. Do you use something like that? Yeah, Tufte. Is it pronounced Tufte? Yeah, Tufte. I don't know. I have no idea. I think it's named after Edward Tufte — you know, the visualization guy, the professor.
00:38:23
Speaker
Anyway, yeah. So that library, I tried it before. Actually, I think I use it once or twice. But there are these two ways to do the profiling, CPU profiling, for example. But it applies to others as well. So there are sampling profilers and there are instrumenting profilers. And instrumenting profilers, what they do is they modify, let's say, every
00:38:49
Speaker
method, they put some special code at the beginning and the end. So on the way in and on the way out, that code basically measures the time and records two timestamps. And from the two timestamps you can get how long the method ran. And I think for Tufte you have to manually specify which functions you want to profile.
00:39:15
Speaker
Yes, so you wrap the code block in a macro, and then there is a global-level switch. You can turn it on and off to control whether the macro actually measures at runtime. Yeah, so it works for bigger chunks of code, like for longer-running things. And if you don't instrument too many functions, then that can work.
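
Tufte usage looks roughly like this (the request-handling functions are made-up stand-ins just to have something to measure):

```clojure
(require '[taoensso.tufte :as tufte])

;; One-time setup: print collected stats to stdout.
(tufte/add-basic-println-handler! {})

;; Hypothetical workload:
(defn parse   [_req]   (mapv str (range 10000)))
(defn analyze [parsed] (count parsed))

(defn handle-request [req]
  ;; `p` wraps just the blocks you care about.
  (let [parsed (tufte/p :parse (parse req))]
    (tufte/p :analyze (analyze parsed))))

;; `profile` turns collection on for its dynamic extent and reports
;; per-block call counts and timings through the handler above.
(comment
  (tufte/profile {} (dotimes [_ 100] (handle-request {}))))
```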
00:39:37
Speaker
But in general, the instrumenting profilers, they bring a lot of overhead for different reasons. Because the code that is injected, it brings overhead. Then if you try to instrument smaller methods, it could lead to methods stopping being inlined. So it disables some JVM optimizations. So there are some second-order effects that the instrumenting profilers bring.
00:40:05
Speaker
And then the sampling profilers work differently. The idea is basically: you stop the execution of the JVM, let's say 100 times per second, and then you see what is on the call stack at that moment.
00:40:20
Speaker
And then you just build a statistical profile of what your program was doing all the time. And you render that, and that gives you an idea what is slower, what is faster. So it's less accurate.
00:40:40
Speaker
Actually, the instrumenting profilers are also not very accurate because of all this changing of behavior that they do to your program. And the sampling profilers don't do that. So I actually prefer the second ones and the sampling profilers, and they're also safer to be used in production as well.
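
To make the sampling idea concrete, here is a toy sketch — grab every thread's stack on an interval and tally the top frames. Real samplers like async-profiler are far more accurate and avoid the safepoint bias this naive version has:

```clojure
(defn sample-top-frames
  "Samples all thread stacks every interval-ms for duration-ms and
  returns [frame count] pairs, most frequent first."
  [duration-ms interval-ms]
  (let [deadline (+ (System/currentTimeMillis) duration-ms)]
    (loop [counts {}]
      (if (>= (System/currentTimeMillis) deadline)
        (sort-by val > counts)
        (let [tops (for [^objects trace (.values (Thread/getAllStackTraces))
                         :when (pos? (alength trace))]
                     (str (aget trace 0)))]
          (Thread/sleep interval-ms)
          (recur (merge-with + counts (frequencies tops))))))))

(comment
  ;; Sample for five seconds at roughly 100 Hz:
  (take 10 (sample-top-frames 5000 10)))
```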
00:41:01
Speaker
Yeah, and sampling profilers are non-intrusive, in that you don't need to modify the code; you just monitor the program from external tooling. Yeah, right. Okay, nice. So can you give us some idea — because you wrote a couple of other libraries already, right? To measure memory, the async profiler wrapper. Can you give us some idea about what those libraries are for?
00:41:28
Speaker
Yeah, so let's start with the profiler. clj-async-profiler is actually a very small wrapper around async-profiler, which I already mentioned. And what it gives you is the ability to start and stop the profiler, to run it for
00:41:49
Speaker
some amount of time, let's say for 10 seconds. And once it's done, it generates this flame graph, which is an interactive SVG file that you can open in your browser. And from there, understand where your program spends most of the time.
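
From the REPL, that workflow looks roughly like this — clj-async-profiler alongside clj-memory-meter from the show notes (exact output paths and versions may differ):

```clojure
(require '[clj-async-profiler.core :as prof]
         '[clj-memory-meter.core :as mm])

;; Profile a single expression; writes an interactive flamegraph SVG
;; (by default under /tmp/clj-async-profiler/) to open in a browser.
(prof/profile
 (dotimes [_ 100]
   (reduce + (map inc (range 100000)))))

;; Or profile a running system for a window of time:
(prof/start)
;; ... exercise the code under load ...
(prof/stop) ; stops profiling and generates the flame graph

;; clj-memory-meter: how much heap does a value actually retain?
(mm/measure (vec (range 1000000))) ; => e.g. "4.1 MiB"
```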
00:42:05
Speaker
So it's a thin wrapper, but at the same time, it gives you this interactivity in your Clojure program, which allows you to very rapidly check what the slowest parts are of the function you just wrote or of the sub-module you just wrote. And the experience it provides, at least to me,
00:42:30
Speaker
It's very, very in line with what REPL provides in general. It's like it took...
00:42:38
Speaker
It took the industry inventing a whole methodology — called test-driven development — to get people writing small functions that you quickly check what they do, instead of writing this huge system and then starting it for the first time after half a year. So with test-driven development, people started doing something else. And in the Lisp community, it was there all along.
00:43:05
Speaker
Basically, it was always like that. People were writing small functions, they were building bottom-up systems. And with performance measurements, with profiling or whatever else, it's actually very similar.
00:43:21
Speaker
If it takes a lot of effort for you to see how your program is doing in terms of performance, if you don't want to do it, then you will probably postpone it until the very last moment, when your program runs 100 times slower than it's supposed to. But if it's seamless, if it's
00:43:49
Speaker
very easy to do, and it's also quite entertaining and rewarding in some sense, then you will keep using it. It will just become part of your workflow.
00:44:02
Speaker
So where do you see, because this is all based on JVM and recently there is a lot of noise around GraalVM.

GraalVM and JVM Trade-offs

00:44:09
Speaker
So where do you see this? Isn't that supposed to give us more performance benefits, because it's going to be native code, essentially not running on top of the VM?
00:44:21
Speaker
Yes, so the Graal thing — actually, one team in our company is evaluating Graal for some other project right now. It's an interesting technology. I haven't looked into it too closely yet, but I think Graal works better for languages
00:44:44
Speaker
that made some trade-offs in the past — like, for example, JRuby, which tries to be more like Ruby, right? So it has these design decisions that were already made before JRuby came to life. And so there are many things in terms of dispatch that are slower in JRuby and that GraalVM can optimize.
00:45:11
Speaker
And for Clojure, I think there are not so many benefits that Clojure can reap from GraalVM, because Clojure already made some design decisions that made it fast on the JVM.
00:45:28
Speaker
I think it's mainly just the startup time, which seems to be a bit of a... I don't know why people are so obsessed by that, but... Oh, okay. So you're probably talking about the Substrate
00:45:42
Speaker
VM thing, which is a part of the GraalVM package that allows you to natively compile a Java program — or a Clojure program, for that matter. Yeah, that one is interesting, actually. And yeah, the startup time for some kinds of applications, like command-line tools, can be important.
00:46:11
Speaker
And for that, this substrate thing is interesting, but there are quite a lot of limitations as well. So I'd be happy to check it in half a year or a year or so when more people experiment with it and come up with something interesting.
00:46:31
Speaker
Yeah, but like you say, it doesn't really help the REPL experience, because, you know, that's something downstream. That's more tool-chainy. Because, you know, you can run things in the REPL with Graal, and the main advantage there, as far as I know, is that you can interoperate with R and Python and other languages. So it's this kind of polyglot aspect of the REPL that's interesting, isn't it?
00:46:58
Speaker
Yeah, that's the one benefit. The other one is that, as far as I know, they have a completely different just-in-time compiler. So it's not HotSpot anymore. It's a JIT written in Java itself. And it's some sort of fancy thing. It has its own optimizations, which are more complicated and obscure.
00:47:25
Speaker
Yeah, and it's supposed to be better. Yeah, so we did some experiments, some rough experiments, with GraalVM, and it turned out to be slower than the JVM for the few tests that we ran. So we postponed that for now, at least.
00:47:46
Speaker
Talking of tests that you run, maybe it's coming back onto the main thing. If you're running with a profiler and stuff like this, what do you do in terms of... I used to do a lot of performance stuff as well, to be honest. The biggest complication I think you have apart from
00:48:05
Speaker
With the JVM especially, you can get lots of misreads. If you just put a profiler on something in your development environment, you can get a lot of misreads versus your production code because actually the JIT does all kinds of things to make things faster.
00:48:25
Speaker
There's always kind of warm-up activity on the JVM. So what's your philosophy around guarding against those misreads?
00:48:37
Speaker
Yeah, that's actually a very good question because with performance optimizations and performance benchmarking and profiling, it's actually harder than doing just regular development work in the sense that if you do an error in your program, then most of the time it will throw an exception or write incorrect data or do some other thing that will be apparent that it is broken, that it was wrong.
00:49:05
Speaker
With performance stuff, you are never sure. Whatever you do, whatever you benchmark, whatever you compare against, you never know if you did it correctly — if you accounted for all the moving parts,
00:49:21
Speaker
if you did all the good work in there, or you just made some mistake and now you're just reading garbage. So about the philosophy: there is this guy, Aleksey Shipilev, who is a very vocal JVM performance blogger, and he
00:49:41
Speaker
has a blog and does a lot of conference talks about this topic. He also has a library called JMH, the Java Microbenchmark Harness, which I use and also recommend to others. It simplifies some things in that regard; at the least, it prevents you from making some silly mistakes.
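
JMH itself is a Java harness; for benchmarking Clojure expressions directly, a commonly used analogue (our substitution — not something mentioned in the episode) is criterium, which handles JIT warm-up and reports statistics instead of one raw number:

```clojure
(require '[criterium.core :as crit])

;; Runs warm-up iterations first so the JIT compiles the hot code,
;; then samples many executions and reports the mean with variance.
(crit/quick-bench (reduce + (range 100000)))

;; Contrast with (time ...), which times a single cold execution --
;; exactly the kind of raw number that tells you very little.
```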
00:50:01
Speaker
And there are a lot of silly mistakes you can make when benchmarking performance. Because, like you said, the JVM does a lot of optimizations, and if you forget that it can do that in a given case, then the results you are getting are just garbage. But his main philosophy about doing all those performance things is that
00:50:25
Speaker
numbers don't tell you anything. Until you have interpreted those numbers, until you have built the performance profile of the program in your head, and then verified that performance model against some more experiments
00:50:45
Speaker
that could otherwise disprove your model, you have done nothing. I mean, just raw numbers don't tell you anything. You should not use them as an argument for whether something is faster or slower, whether something works better or not. Those are just data that you have to collect and interpret, and then make some
00:51:14
Speaker
like assumptions and then prove them or disprove them. It's like that. So I agree with him totally in that regard. I try to work like that as well. I mean, the one I always remember from the Java days was things like
00:51:33
Speaker
Adding two strings, for example, was a kind of classic, oh, you shouldn't do that because it's slower. And then it turned out that, well, it wasn't that slow for most cases. So shut up. It's actually much easier to read the code where you just put a plus sign for a start.
00:51:51
Speaker
rather than all this StringBuffer-type nonsense. And then, B, it turns out that, I think in 1.2 or 1.3, the JVM had an optimization for this anyway and essentially rewrote your code. So, you know, you kind of...
00:52:10
Speaker
Every time that you change your code to compensate for some performance issue, you're at risk actually of all that code, the complexity of that code being wasted in the future. Because in fact, the JVM will just get better and your code might in fact end up being slower because you're not taking advantage of the common case that the JVM people are optimizing for.
00:52:38
Speaker
Yeah, yeah, that's true. So like most of the time, or like in general, the more abstract or the more general code you write, there will be more room, bigger room for optimizations. And it's like, yeah, there are several phases of optimizing performance, right? So first, you try to get rid of all the stupid things that you wrote.
00:53:04
Speaker
Like, getting rid of code always helps. That's the ultimate: no code is really fast. Yeah, absolutely. Exactly. Right, so you start with that, until your
00:53:19
Speaker
code is perfect in that regard — it does what you intend it to do and it reads that way. But then, if it's still not enough, you start doing all those micro-optimizations, and you base them on how things work right now; but in the future they might work differently. And there is a trade-off there.
00:53:47
Speaker
I mean, how much in your kind of work, like how many times do you think or how much code do you feel like people genuinely have to shift around versus
00:54:00
Speaker
The kind of feeling that... for instance — I mean, okay, this is a weird, long-winded question, but do you know what I mean? What I find in general is that the performance problems people are suffering are just because they've put a tight loop in the wrong place, or they've done something in a way that is just...
00:54:20
Speaker
There's nothing fancy you have to do with the language to make it better. It's mostly just about, oh, shit, yes, that's where the performance is. And now I just need to rewrite it and reorganize my code a bit more. So it's very rare in my experience that you actually have to go down to the micro code to see exactly how you should align your arrays, for instance.
00:54:44
Speaker
It's like low latency Java people are doing amazing things in terms of rewriting array buffers so they align to the hardware, but that's pretty rare in most cases, I think.
00:54:57
Speaker
Or do you find in Grammarly that you actually have to go along and use those low latency tricks? Yeah, so that, again, that depends on the context, depends on what you're writing. And sure, most of the time you don't get as fancy as just...
00:55:16
Speaker
and doing all those super micro optimizations for some very generic code for some REST API or something. But yeah, I mean, you get a bit of both, a bit of everything. At least I get that in my work. So sometimes it's just some
00:55:39
Speaker
stupid code that can be rewritten and it is suddenly just enough for the performance. And sometimes you see that, all right, everything looks as good as it could be, but then it still can be done faster. So we have to do something else. So yeah, I mean,
00:55:58
Speaker
People are bashing premature optimization a lot. And there was a nice tweet recently that those who dislike premature optimizations probably haven't used any modern software, especially Slack for example.
00:56:16
Speaker
which could use some premature or whatever optimization there could be. But anyway, I think the premature optimization thing is also quite helpful.
00:56:29
Speaker
in the sense that even if it might not be needed, for example, for me it took doing that once on some projects — doing the optimization and seeing that it didn't work, or that it didn't deliver the thing I had hoped for, or that maybe the performance was OK all along — but at least you learned something. So it's like,
00:56:54
Speaker
This performance work is a combination of actually improving something and learning something new, learning the stuff underneath so that the next time it comes up, you know what to do or you know what not to do, which is equally important. And regarding the question, we had a fun story actually on one of the services where the bottleneck was the number of
00:57:24
Speaker
network interrupts that the operating system could do on one core.
00:57:30
Speaker
So there is this optimization in Ubuntu where it will bind all of the interrupts to one core of the CPU, because generally that's the faster way to do it. But then the number of requests per second we were getting on that machine was so high that just one core could not handle all that. So we had to change that.
00:57:55
Speaker
I mean, of course, 99% of the time, you never ever have to know about that. But that 1% of the time, it's better when you do rather than you don't. And then you have no idea where to start looking at where the problem could be. So yeah, I think. So what is the hardware budget for Grammarly? I mean, do you guys get like 20 euro per month so you have to squeeze all the performance out of every machine?
00:58:23
Speaker
These days, everything is cloud, you know. Like: oh, sure, I'm going to throw it on the laptop, I don't care anymore. And actually, that's a valid point as well. I talked about that in my talk at the Clojure eXchange — it is all cloud nowadays, and
00:58:41
Speaker
especially if the problem you're dealing with is embarrassingly parallel and there is no shared state. So why not just spin up a few extra machines and be done with it? And I already said that I have a conspiracy theory that this is a lie spread by Amazon and Google Cloud and all those other cloud providers so that people buy more cloud from them. But in reality, at least in my practice,
00:59:12
Speaker
there is no such thing as horizontal scaling — at least not effortless horizontal scaling. You're always paying with something. It could be operational overhead: you have to spin up new infrastructure to deal with those extra machines. Or suddenly —
00:59:31
Speaker
it's like, you thought you had already solved the problem of monitoring. You have a Graphite cluster for your 25 machines. But then suddenly, when there are 100 machines, your Graphite cluster cannot deal with that. So all of your dashboards take like 10 minutes to load. And your deployments now take two hours to completely roll out.
00:59:54
Speaker
And if you botch a release, you will only know in 10 minutes that something is wrong, because your monitoring system is not good enough — it's delayed by 10 minutes. But by then the deploy is already done, and you have to roll it back, and it takes another hour. So it gets quite painful, actually.
01:00:15
Speaker
And on top of it, you hire more people to deal with that. You hire DevOps and infrastructure staff, those kind of people. And it costs even more money than just the extra machines that you're running.
01:00:31
Speaker
I think it still makes sense to optimize just even for the number of machines you need for the service because it's never free in terms of money, in terms of mental health, all those things.
01:00:48
Speaker
I remember when I was working at Toyota, they have a thing, a Japanese phrase in manufacturing called heijunka. And what that means is evenness, smoothness. So, to your point: if you're not careful with all these edge systems, they become swamped. So the idea is that you always try, across the entire system, to
01:01:12
Speaker
promote a holistic evenness so that you are always emphasizing smooth throughput rather than one specific process which is going a lot faster. Because then you find that if the front-end process is going very fast, then all the back-ends just start to get overwhelmed. It's an interesting architectural thought that I think as to how you do all the buffering at the right places and the throughputs.
01:01:42
Speaker
So I think that's a nice general bit of advice: emphasize smoothness across your entire system, not just performance in one piece. Yeah, makes sense. So I think our friend of the show, Mr. Zach Oakes, is asking about the good old days of Clojure on Android.

Clojure on Android Experience

01:02:03
Speaker
Yeah, right. Hi, Zach.
01:02:12
Speaker
I think it's seven years ago now that I started participating in Google Summer of Code, working on Clojure on Android, actually. And to start with, my first mentor was Daniel Solano Gómez, who is an amazing guy. And I think he still contributes to the Google Summer of Code things on the Clojure side —
01:02:37
Speaker
at least he did a few years back. And then my second mentor was Zach Oakes. And yeah, back in the day, I think he was the sole user of all the things I developed. So it made sense for me to pick him as a mentor.
01:02:56
Speaker
But yeah, it was a fun project. I kept it going for years, and it was quite entertaining and fun as well. I was super hyped to have a running REPL on Android. And I developed a couple of apps that way, and it was a completely different experience from what the Android toolchain offered back in the day. And I think they still have nothing like that.
01:03:23
Speaker
I'm pretty sure they don't. But yeah, unfortunately, I stopped working on Android platform for different reasons. So those projects kind of died out. But it was fun to do that back then.
01:03:38
Speaker
Yeah, I noticed, actually — I can't remember his name now, unfortunately; I think it's Dimitri someone — just in the past couple of days on the Twitter feed, he said he's porting Replete, which is Mike Fikes' REPL for iOS. He's porting it to Android. He's got a few screenshots. So it seems like the REPL for Android is coming back, you know? Yeah. Timely memory there.
01:04:07
Speaker
Yeah, so I think you can still download the 4Clojure app for Android from the Play Market. All right. Yeah, I did that. And it's actually complete Clojure inside — the Clojure compiler is in there. So you can solve 4Clojure tasks
01:04:26
Speaker
from your Android, even without the network or anything. It just runs on the phone itself. And yeah, a fun way to spend time on the subway or somewhere. Did you do any performance analysis on that one?
01:04:41
Speaker
You might think you're joking, but actually, of the two years of Google Summer of Code, I spent more time not on Android itself, but on optimizing the Clojure compiler so it would produce code with faster load time that would also be slightly faster at runtime.
01:05:04
Speaker
That project is in slumber as well, also for different reasons, but it was also quite fun to investigate — to understand why Clojure starts so slowly. And yeah, I still think there are things to be done in that regard. It just needs some time and effort.
01:05:28
Speaker
So, are there any rules of thumb for people writing Clojure, to make sure that it will still keep going fast?
01:05:38
Speaker
Yeah, well, as with everything in performance, the first rule of it is to use the profiler to see where the biggest problem is. Because the intuition is quite often wrong in that regard. You might be thinking that this part of code is to blame, and you can spend two days just micro-optimizing it, getting the most of it. But then it was in some other place all along.
01:06:07
Speaker
So the first thing is to use the profiler, and most of the time that will tell you a lot about what you have to do. But then, of course, there are different things that can go wrong in a Clojure program in terms of performance. There is
01:06:25
Speaker
the reflection that you probably didn't want to invoke, or the immutability of the data structures that you're using in that particular case getting in the way, or the stuff with boxed math, which is very close to reflection in that regard. But yeah, the profiler will get you half of the way to solving the problem.
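
The two pitfalls named here — reflection and boxed math — can both be made visible at the REPL with compiler flags; a quick sketch:

```clojure
;; Ask the compiler to warn about both problems:
(set! *warn-on-reflection* true)
(set! *unchecked-math* :warn-on-boxed)

;; Reflection: without a hint, .length is resolved at runtime.
(defn len [s] (.length s))          ; => Reflection warning
(defn len2 [^String s] (.length s)) ; resolved statically, no warning

;; Boxed math: untyped arguments force boxed Number arithmetic.
(defn add [a b] (+ a b))              ; => Boxed math warning
(defn add2 [^long a ^long b] (+ a b)) ; primitive long math
```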
01:06:51
Speaker
So before we move on to the next topic: there is a lot of discussion about Clojure 1.10 and spec and error messages and all that stuff.

Clojure 1.10 Error Message Improvements

01:07:02
Speaker
So what is your opinion on the current state of the error messages that are coming in 1.10? I think there are significant improvements made already. Or do you think it is still debatable?
01:07:16
Speaker
Yeah, well, I can tell you that it's certainly better than 1.9, that's for sure, with the introduction of spec. I mean, I never had a problem with Clojure error messages. It took me a bit to get used to them, and then you just visually do the pattern matching. It's like: OK, 'Long cannot be cast to IFn' — OK, that means I tried to call a number or something.
01:07:44
Speaker
So it's not ideal, but it wasn't a problem for me. And with 1.9, all the spec errors blowing up in your face, that became a bit of a problem. More like a mild annoyance or something.
01:08:01
Speaker
Someone already said on your podcast that they just stopped reading the error messages with 1.9: something is broken, the computer told me something is broken, I'm going to figure out what it is. 1.10 puts that back into place. I'm happy about that at least. Nice.
01:08:28
Speaker
So I think we are almost at one hour — or maybe over one hour. We were just thinking, because this is going to be our last episode for this year... I don't even remember when we started this. Maybe two years ago? I think so. It's been too long. 43 episodes. 43 episodes, yeah. So more or less one, one and a half episodes per month.
01:08:58
Speaker
So it's a nice time to wrap up the year by reflecting on what happened in Clojure for 2018.

Clojure's 2018 Developments

01:09:09
Speaker
So Alex, do you want to talk about how was Clojure in 2018?
01:09:16
Speaker
Yeah, for me, it was certainly good. It was a good year for me in terms of using Clojure and just observing the new cool stuff that came up from different people. And yeah, one of the major things is
01:09:32
Speaker
We already mentioned the error messages. I'm happy not so much about the messages themselves, but that Cognitect demonstrated that it's really something that matters to them, and that they listen to community opinion as much as they can.
01:09:57
Speaker
Actually, another big thing that you surely already know happened: those debates on Twitter, the never-ending ones. At some point it was troublesome to just keep up with them. You open Twitter and you see another one. And it was quite painful. Yeah, but I think
01:10:22
Speaker
I'm happy that it came to an end, at least in my Twitter feed. Maybe I just unfollowed enough people.
01:10:33
Speaker
Yeah, but it ended, and I think people made the correct conclusions out of it, and we got a few bigger explanations from Rich Hickey and from Timothy Baldridge as well. I think Zach Tellman also wrote a bit, so I think
01:10:53
Speaker
people in general won. I mean, these are things you can read now, and you can understand it all better. So yeah, everything got better from it, I hope. I think that's an important thing that happened in 2018. Nice. And Ray, your recap of 2018?
01:11:15
Speaker
Yeah, well, I agree with a lot of it, everything that Alex said there. And for me, my highlights were definitely the conferences. I didn't get to the American conferences, but I went to Dutch Clojure Days in Amsterdam, and that was really excellent. Some great speakers.
01:11:35
Speaker
Just a great community again, you know, just lovely people, lots of ideas, lots of positivity. And then myself and Alex met again at ClojureX in London, which is also, you know, a superb conference. Shout out to John who organises that. They do a great job there. It's great fun, very entertaining, lovely venue, and the people at Skills Matter do a great job.
01:12:00
Speaker
So, yeah, it was really nice, and it was nice to see at the end of that conference that Christophe Grand and Bozhidar, who was the most miserable person on our podcast ever, were actually celebrating with a beer the joining of nREPL and unrepl. So the back-end REPL wars seemed to be calming down as well.
01:12:24
Speaker
That was good. So I think, you know, basically the community for me. I know some of the bullshit and the flame wars were annoying, but overall I think the community has been really good this year, coming out with some great ideas and some great tools and some great stuff around documentation.
01:12:41
Speaker
Maria.cloud, and these tools gradually coming out around spec as well to make it more consumable. That's all good. I myself have started making this collaborative REPL, so I've become a tooling guy as well now. That's been a fun time, and I'm working full-time with Clojure, so it's great to be doing it full-time and as a hobby. I've just been loving it really. How about you, Vijay?
01:13:10
Speaker
Yeah, I think obviously, as you said, the conferences were super fun. I mean, that's where you get to talk to the community. And obviously, Alex was there at Dutch Clojure Days opening the conference with a fantastic talk. And clojureD in Berlin, super fun.
01:13:27
Speaker
And yeah, there was the whole discussion on Twitter, and all the people complaining about the community and all that stuff. And I think right before the show, I was just chatting with Alex that, you know,
01:13:44
Speaker
when people are complaining about it, that means we have grown a lot. That means we became mainstream, which is pretty awesome. The more people complain, the bigger we are. But in general, I think it's been a fantastic community. Of course, I'm doing a lot of Clojure stuff as well, under an NDA, something that I can't disclose.
01:14:12
Speaker
And then, of course, we have Dutch Clojure Days coming up next year as well. We just tweeted out, hey, we want to organize the conference, and then people are reaching out to say, hey, we want to sponsor, which is pretty amazing. And we have 150 spots, and almost 90 of the free tickets are gone already. So that means there is a lot of pressure on us to run it.
01:14:35
Speaker
And language-wise, I think a couple of things from Cognitect were really fun, with REBL, the new thing, and the datafy stuff, which is super cool. I think I'm too much used to the open source shit. So every time I see something that is, oh, we only have GitHub for issues, that's a bit of a...
01:14:59
Speaker
kind of a strange thing to hear, because after so many years of working in open source with JBoss tools and all that stuff, suddenly you hear that there is a tool that is available, but it's not open source. Yeah, it's unconventional. Yeah, exactly. But hey, we use cloud shit and nobody complains whether it is open source or not. But otherwise, it's been open source, that's for sure. Yeah, exactly. And Jira and all the shit that we use day in and day out.
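For reference, the datafy stuff Vijay mentioned ships as the clojure.datafy namespace in Clojure 1.10, and REBL browses the data it produces. A minimal sketch of making an opaque object datafiable, with a made-up User record standing in for some real object:

    ;; clojure.datafy ships with Clojure 1.10.
    (require '[clojure.datafy :as d]
             '[clojure.core.protocols :as p])

    ;; A hypothetical record standing in for some opaque object.
    (defrecord User [id name])

    ;; Teach datafy how to turn a User into plain data.
    (extend-protocol p/Datafiable
      User
      (datafy [u]
        {:user/id (:id u) :user/name (:name u)}))

    (d/datafy (->User 1 "alex"))
    ;; => {:user/id 1, :user/name "alex"}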
01:15:28
Speaker
But I think it's been a fantastic year, and I'm looking forward to next year and what the community is going to bring up. And most recently, if you see the amount of people who are doing the Advent of Code in Clojure, with people sharing their knowledge and everything, that shows that, you know, there are a lot of
01:15:51
Speaker
happy people trying to learn Clojure, trying to do amazing things with Clojure. So that's 2018 for me, and I hope we'll continue doing this podcast. And of course, a shout out to the two new podcasts: The REPL by Daniel Compton. Yeah. Yeah, it's really good though.
01:16:10
Speaker
Yeah, and also the functional programming with Clojure podcast, or something like that, which I retweeted at some point. So, you know, we are growing. This is a nice, nice community to be in. I think it's been a fantastic year. Watch out, maybe those also become weekly Clojure podcasts and you will not be number one anymore. We're not aiming for number one, it's okay. I think we are like the
01:16:39
Speaker
We'll acquire them eventually. I think we're still the number one vegetarian Clojure podcast. I think we are probably the only R-rated Clojure podcast, with the amount of fucks and shits in the same episode. That cannot be beaten anyway. Merry fucking Christmas. Happy holidays.
01:17:08
Speaker
That's it from us for this year, I think. So we'll be back next year. And thanks a lot for joining, Alex, taking time from your... I guess it's not Christmas holiday yet there? Yeah, but it's getting closer. Yeah, yeah. And you have a new church now or something like that in Ukraine? And you what?
01:17:30
Speaker
They're forming a new church in Ukraine. Anyway, yeah, that's probably... Let's talk about that afterwards. Yeah, but thanks for having me. It was nice to be here. It was a pleasure. Yeah, yeah. I just saw that you're wearing a Christmassy jumper, which you can't see on the podcast. Yeah, yeah. I enjoy that. You know, snowflakes and all. Very, very Clojure. I mean, it gets you in the right mood. Yeah, yeah. Perfect.
01:17:57
Speaker
Yeah, I just want to say a few last words. Regarding the flame wars and stuff, just to offer a counter opinion, I think we should cherish and protect and respect our open source maintainers. At least
01:18:15
Speaker
for the fact that Clojure has been maintained for 11 years straight by a small bunch of individuals not backed by any corporation or anything, and doing that without burning out, still wanting to keep doing it. I think that's at least something to respect them for and to support them for.
01:18:40
Speaker
That's what I plan to do next year and in the years to come, and I encourage everyone to do that as well. Awesome. Yes. So that's it from us for this year, episode number 43 with Alex Yakushev. And we'll see you all in January, I hope. Yes. Yeah. Bye-bye.