
#68 - Michiel Borkent aka borkdude

defn
47 plays · 4 years ago
We got borkdude back on the show for an epi(c)sode! Diving into all sorts of Clojure stuff!
Transcript

Humorous Introduction and Guest Welcome

00:00:14
Speaker
Welcome to defn, episode number 68, our first... Fuck, hold on. I wanted him on 69. Well, we can have him on 69 as well. God damn it. Finally, we're reaching the bottom of the barrel for jokes now. Completely juvenile humor. We lost all sophistication and it's all downhill from here. When you said we lost all sophistication...
00:00:43
Speaker
That's a bold assumption. Can you remember when, in the previous episodes, we had all this sophistication? Before we started, definitely. Maybe it was the last time Michiel was on, you know? Maybe. All downhill from there.

Clojure Updates and Spec 2 Discussion

00:01:01
Speaker
Episode number 69 minus one. I'll tell David all in that one, Michiel. Off by one.
00:01:08
Speaker
Off by one, exactly. Welcome to defn, Michiel. Well, I would say welcome back to defn, Michiel. It's been some time.
00:01:18
Speaker
Very happy to have you back. So let's get started. So what is happening in the Clojure world? I think the plan is that Ray is going to continue for half an hour now, right? Ranting and bullshitting. So we can just sit back and enjoy our tea. No, I'm not going to do anything like that. I think we did have a few little chit-chats about what's going on with spec and Malli and things like this, and these other things that are
00:01:46
Speaker
popping up out of the woodwork. But other than that, I don't know. I don't think it's particularly relevant for Michiel, that's for sure. Or is it, Michiel? Well, I am still waiting for spec 2 to come out. So in that sense, it's relevant, I guess. But I think I'm not unique in that regard.
00:02:15
Speaker
Yeah, I have a feeling that it's a bit like, you know, I don't know what's going on with the sort of... it's a bit like, I dunno, there's some sort of big brain thing occurring, you know, Rich is in his hammock thinking about shit. And we just got to wait until he, well, until he just lets out some big fart or something, or, you know, just
00:02:37
Speaker
gradually kind of explodes the functional universe with spec 2.

Spec 2 Delays and Community Impact

00:02:45
Speaker
But I think a lot of people have given up because it's just been so long now. How long was it anyway? Is it like six years?
00:02:55
Speaker
Yeah, yeah, six years. Yeah, and I think that that is not so bad. It's just, I think it's been two or three years since you said... I got it wrong. Yeah. Oops. Well, I think we were both at ClojuTRE 2019.
00:03:14
Speaker
Ray, you and me. And then Alex, he had a talk about, well, we're working on spec 2 and hopefully it will come out at the end of the year. So everyone was like, ah, cool. That was in 2019, wasn't it? Yeah. And I think in the last episode you described this as the, was it the Osborne effect?
00:03:42
Speaker
I don't know if it was me, but someone might have been pre-announcing it. What was the laptop that didn't sell because its successor was pre-announced? I think I'm seeing that effect clearly in the ecosystem, at least among several people.
00:04:03
Speaker
But the effect of that one was that they pre-announced that the next version would be better. And then the whole thing went bankrupt. So are you saying that spec is going to go bankrupt like Osborne Computer?

Ecosystem Stagnation and Spec Adoption Challenges

00:04:17
Speaker
Well, not bankrupt, but everyone is waiting for spec 2, and not really a lot is happening around spec 1 because of that reason, because
00:04:31
Speaker
That is my impression. I want to add clojure.spec to Babashka, the scripting tool. But I'm not doing it right now because I'm waiting for spec 2. And the author of re-frame, Mike Thompson, he also said, I want to add something to re-frame for validation.
00:05:01
Speaker
Yeah, and he wasn't sure if he should wait for spec 2. And Malli looked great for this purpose to him. So yeah, I think we're in this transition period where nobody's sure what to do about spec anymore.

Community's Adaptation and Michiel's Experience

00:05:23
Speaker
Yeah. Yeah, but I think it's not like, you know,
00:05:27
Speaker
Rich owes us no spec 2, so if the community decides that, okay, we're all happy with Malli or Schema or whatever, you know, something that was there before, and then the new thing is known, and if there is a critical mass around the new... Well, hang on, I disagree with you there, because I think he actually does owe us, because it's been announced, it's happening, and
00:05:52
Speaker
So I think this idea that he doesn't owe us something is a bit off. I think, no, there has been... you know, it's in the process of being fixed. There is a sort of branch there. Yeah, but there is no commitment, right? It's open source. You do it when you want to do it, and it's not like a marketing deadline or anything.
00:06:17
Speaker
Yeah, you could wonder if it would have been better if they just didn't announce it and then announced it when it was really ready, like as part of Clojure 1.11 or something. Yeah. And then everybody maybe would have invested more into spec as it is right now.
00:06:38
Speaker
I think the fundamental point that everyone's known for a long time is that it has a chilling effect on the ecosystem. I think it had a chilling effect on the ecosystem just being an alpha to begin with, and staying an alpha forever. Everyone's thinking it's going to change.
00:07:02
Speaker
Why is it not? And why is it in alpha for so long? It's a lot of hammock time. So coming out with the details on spec 2 takes longer than the entire design of Clojure itself. But if it's worth the wait...
00:07:24
Speaker
Yeah, maybe it's very good. Well, we'll just have to see, right? Yeah, but I definitely think that in the meantime, it's like, you know, I think the guys at Metosin have just taken a practical perspective, which is, okay, let's just start doing some things with the tech that's out there right now. So, you know, I really like the Malli approach, I've started using it. And
00:07:50
Speaker
I'm not giving up on spec, because I think spec is still very good for certain data interchange. Yeah, definitely. I'm using spec for parsing code as well. I have made a tool to search code through Clojure specs, so you can describe the shape of the code.
00:08:13
Speaker
And that works very well with spec. I haven't tried this with Malli, because Malli didn't have these sequence schemas yet before 0.3.0. Right. But spec works very, very well for these kinds of things. So maybe in the end, the community will see,
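The idea of describing the shape of code with sequence specs can be sketched like this (an illustrative example using clojure.spec's sequence regex ops, not the actual tool's implementation):

```clojure
;; Illustrative sketch: describing the shape of a (defn ...) form
;; with clojure.spec sequence regex ops.
(require '[clojure.spec.alpha :as s])

(s/def ::defn-form
  (s/cat :defn #{'defn}
         :name symbol?
         :args vector?
         :body (s/* any?)))

(s/valid? ::defn-form '(defn add [x y] (+ x y)))   ;; => true
(s/conform ::defn-form '(defn add [x y] (+ x y)))
;; => {:defn defn, :name add, :args [x y], :body [(+ x y)]}
```

Because specs treat code as data, the same machinery that validates maps can match and destructure source forms.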
00:08:36
Speaker
like three years from now, when to choose Malli and when to choose spec, and what are the benefits of both approaches.
00:08:46
Speaker
Yeah, the only thing that's a bit annoying, obviously, is having to learn two tools that are kind of different enough to be, you know, a bit annoying. So yeah, I proposed to Tommi, like, let's make a tool that helps you migrate from specs to Malli schemas, and vice versa. I think that would be cool. Yeah, that would be really cool. Yeah.

Tool Discussion: clj-kondo and clojure-lsp

00:09:15
Speaker
And then I think every project requires three different entities now. Half of the project is in one, half of it is in the other, and then there is glue code somewhere. That's a little bit the sad part, that we don't have one solution that everyone will use. Because Malli, for example, now has a nice tool which lets you emit clj-kondo information. So you get the,
00:09:44
Speaker
the specs that you write for your functions will also be used for linting. And if everyone would invest in these type annotations, we could have a really good experience also using this for linting, which gets you like maybe 50% of the benefits of static typing. I'm not sure, is that science? I think you just came up with that.
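As a hedged sketch of what that looks like with Malli (the `:malli/schema` metadata and the `malli.clj-kondo` namespace exist, though details may differ by version):

```clojure
;; Attach a Malli function schema to a var via metadata...
(require '[malli.core :as m])

(defn plus
  {:malli/schema [:=> [:cat :int :int] :int]}
  [x y]
  (+ x y))

;; ...and malli.clj-kondo can collect such schemas and emit a
;; clj-kondo config, so a call like (plus 1 "2") gets flagged by
;; the linter as a type error without ever running the code.
```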
00:10:11
Speaker
But it surely will give you a lot of benefits if everyone uses the same approach and commits to it. And now we have all of these separate solutions.
00:10:23
Speaker
Yeah. Maybe it's a good... I know clj-kondo is more like a foundational tool now in Clojure development, pretty much in every tool, which is super awesome by the way. Maybe it's a good idea, for people who still don't know about clj-kondo, to tell them what it is. Yeah. So clj-kondo is a Clojure linter and also a static analyzer, which can emit information about your code.
00:10:54
Speaker
But it's primarily known for its use as a linter. It just spits out information about, for example, if you call functions with an invalid number of arguments. It can detect type problems. So if you call the inc function with a keyword instead of a number, for example, it will tell you you've done something wrong.
00:11:19
Speaker
So in general, it prevents you from making silly mistakes while you're typing your code. That is the main idea behind it. Yeah. And it is now basically included in every IDE that we use, right? It's plugged into clojure-lsp, Calva, VS Code.
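For example, these are the kinds of mistakes described above, in illustrative snippets that clj-kondo would flag:

```clojure
;; Illustrative code that a linter like clj-kondo reports on:
(inc :foo)            ;; type error: keyword where a number is expected
(assoc {} :a)         ;; assoc called with 2 args but expects 3 or more
(defn f [x] (inc y))  ;; unresolved symbol: y (and unused binding x)
```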
00:11:41
Speaker
Yeah, so how it works: clj-kondo can emit information. If you define a function, it will emit the namespace, the name of the function, and the arities of the function.
00:12:00
Speaker
So it can be used as a library to get this information, and then you can build tools on top of this. For example, clojure-lsp, that is a tool which implements an LSP server for Clojure. I will explain it. So it uses clj-kondo to do the code analysis, and then it implements the LSP protocol.
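The analysis data looks roughly like this (a simplified sketch; see clj-kondo's analysis documentation for the full shape and key set):

```clojure
;; Simplified sketch of clj-kondo's :analysis output for
;; (defn plus [x y] (+ x y)) defined in namespace foo.bar:
{:var-definitions
 [{:ns            foo.bar
   :name          plus
   :fixed-arities #{2}
   :filename      "src/foo/bar.clj"
   :row           3
   :col           1}]}
```

Tools like clojure-lsp consume maps like this to implement navigation, find-references, and completion without needing a running REPL.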
00:12:29
Speaker
And that stands for Language Server Protocol. This Language Server Protocol was invented by Microsoft. Initially it was for VS Code: you implement a server that's running in the background, and then the editor talks to the server to do things related to your code. And also for linting.
00:12:58
Speaker
But then the protocol got, I think, standardized. I don't know the exact history, but I imagine it went like that. And then other editors also implemented clients for this. So the same server that you're using for VS Code, you can now reuse.
00:13:19
Speaker
For example, for Emacs, if you build a client in Elisp, then you can reuse the exact same server to get linting and code navigation features and completions for Emacs, or for IntelliJ, or for any other editor that can talk to an LSP server. Nice. Yeah, so it's now used in Calva.
00:13:49
Speaker
So before, I had written an LSP server myself, which only exposed clj-kondo information. And this was a separate extension that you had to install alongside Calva in VS Code. But now Calva uses clojure-lsp as a library.
00:14:16
Speaker
And clojure-lsp uses clj-kondo again. So now you don't need to install this separate extension anymore. You get it basically bundled with Calva. I recently started using clojure-lsp in Emacs. It's really cool, because you get the navigation that
00:14:45
Speaker
that you usually need a REPL for, you now get using only static analysis. So even, well, in ClojureScript, I'm usually too lazy to set up something like a REPL. And now I get all this navigation after years of not having this in ClojureScript. I finally get this again. So yeah, I can see the benefits.
00:15:11
Speaker
Yeah, it's been really nice that, as you said, in Emacs, for example, or any other editor, there is now a nice foundation of common functionality that you can get regardless of which editor you use. But obviously you should use Emacs, because the rest of the shit is junk anyway.
00:15:30
Speaker
So, um, I agree. People can finally experience some of the, you know, features across all... Stop breaking up on me. We got you back. Yeah. Certainly there's problems with the internet. Yes. I think this is something that you're doing, right? I think you
00:16:01
Speaker
maxed out, like, somebody's IDE on the internet, let's bring down the internet. But I will get back to that in a minute. But you use IntelliJ, right? Is IntelliJ also using LSP, do you know, or do they have their own? I don't think they use LSP, because I think IntelliJ basically has a very complicated indexer that does all this static analysis. Yeah. That's what I think Cursive
00:16:29
Speaker
relies on as well. I think that is like MPS or something. Yeah, they have a different language protocol, basically, of their own. And then I think he extends the MPS for Clojure. But yeah, I don't know if he uses clj-kondo either, actually, because I always use clj-kondo
00:16:56
Speaker
for myself. What we tend to do is we put clj-kondo in the CI, and we do it on the command line before we check stuff in. But I don't have it as a sort of real-time feedback in the editor. Yeah. Do you have it set up like that?
00:17:15
Speaker
No, I just use Emacs, and then Emacs with LSP recently. Well, I mean, it's always been Emacs with REPLs mostly, but these days with LSP, since it was released, I think a few months ago, maybe a bit... Hello? Yeah. Yeah. I don't know what happened.
00:17:36
Speaker
So we were just talking about whether IntelliJ is using LSP or not. IntelliJ IDEA or Cursive. No, no, no.

IntelliJ's Role in Clojure Development

00:17:44
Speaker
No, no. So Cursive has its own... I think IntelliJ has its own model, and Cursive builds on top of this. Musical instruments that you play by waving your hands in the air near antennas, and it goes... It's pretty neat.
00:18:07
Speaker
Um, so, you know, ghost sounds from... Okay, right, okay, take two. Well, it's just been 30 minutes, so I think we should keep Michiel away from computers from now on. Yeah.
00:18:33
Speaker
Maybe, you know, Zencastr isn't using clj-kondo in their CI. That's the problem. That should have fixed everything. So are you going to start again? What do you think? I mean, do you think it would have kept some of those files? Yeah, it did, because I can see recording one, there is enough content there. So we can continue talking about
00:18:59
Speaker
Emacs LSP and IntelliJ LSP. Exactly. Yeah. So yeah, we were just talking about whether IntelliJ is using LSP or not. And also IntelliJ using... not IntelliJ but Cursive probably, you know. Yeah. Cursive using clj-kondo or not.
00:19:18
Speaker
No, no. So IntelliJ has its own model of the code. I'm not exactly sure how it works, but it's a proprietary thing, or maybe it's open source, I don't know, from JetBrains, and their code analysis all uses that framework.
00:19:39
Speaker
And Cursive, I think, uses that framework. Yeah, that's right. It doesn't use LSP or clj-kondo, but you can install an LSP plug-in in IntelliJ, and then you can also use the clj-kondo LSP server, or you can use clojure-lsp, which is the full clojure-lsp project with navigation and completion
00:20:08
Speaker
and clj-kondo included. Okay. But what's the point, though, because you get all the navigation from Cursive. Yeah, that's true. I guess you have to pay for Cursive. So maybe, you know, that's true. But maybe clojure-lsp has some features that Cursive doesn't have, or at least you get the linting from clj-kondo also.
00:20:33
Speaker
But I'm not sure. I haven't used Cursive that much, so I'm not really able to compare. Yeah. Because you get a lot of, um, I mean, you know, not...
00:20:45
Speaker
I think clj-kondo has got more features than the Cursive thing does. But for instance, when you're editing Clojure in Cursive, it will tell you things that are wrong. It will tell you where things aren't being used. For example, it will gray out parameters that aren't being used, or requires or defs that aren't being used.
00:21:08
Speaker
Fair enough, it doesn't catch everything. I think where clj-kondo works better is if you refer a var, for example a referred function: Cursive doesn't catch that, but obviously clj-kondo catches it. If you don't use the referred function, you mean? Yeah. In that case, Cursive doesn't catch it. What I do is I tend to edit with Cursive, and then
00:21:37
Speaker
before checking the code in, I will run clj-kondo just as a final check. You know, you could just make your life easier by using Emacs and LSP.
00:21:50
Speaker
Well, you know, it's like spec 2, you know. One of these days it will happen, I guess, but it takes a lot of thinking about. Yeah. Anyway, clj-kondo is also in the Vim thingy, right? vim-fireplace, or is there a different plugin for that?
00:22:12
Speaker
I think so. I don't know much about Vim, but I see a lot of mentions of vim-iced these days, which is made by, I think, a Japanese or Chinese...
00:22:26
Speaker
I think a Japanese guy, who also provides some extra features if you have clj-kondo installed: it will call out to this binary to also catch this analysis information and do some stuff. But you can also use LSP from Vim, and then you get exactly the same features as in Emacs or VS Code.
00:22:51
Speaker
So it's nice that you have to just write one thing and it gets used everywhere. That's a big win. So the basic editing experience of these terminals can be similar. Yeah. Is that an insult? Yes, absolutely. Thank you for catching that, Vijay. Of these terminals. Exactly.
00:23:22
Speaker
He was thinking that, you know, like, okay, you know, terminals are inferior. That's basically what he... The mainframe editors can all have the same performance. I think this is coming from a guy who's using Linux on his desktop.

Debate on Development Platforms: Linux vs Mac

00:23:38
Speaker
What's wrong with that? Half the shit on Linux is on terminal, right?
00:23:45
Speaker
You know, that's a totally tendentious argument. No, I mean, the fact is that Linux is really good. You can buy very, very cheap hardware and use it and have a great desktop experience with Linux. Unlike idiots like you buying these Macs, you know, spending three or four thousand dollars for a piece of crap computer.
00:24:10
Speaker
Yes, exactly. And that is also the reason why... I could spend a quarter of that and get something much better. That's also the reason why you spent two hours figuring out why the fuck the headphones are not working. Meanwhile, I'm over here recording some shit. Oh yeah, because this has been such a smooth recording experience for you, Vijay. That's Zencastr. I think we can complain about Zencastr without any issues. Anyway, I don't care what type of computer it is, as long as I don't need to spend time tinkering with this shit.
00:24:41
Speaker
Well, I think the whole point is that everyone's got some kind of cost-benefit analysis for anything. If you just want to get some reliable hardware, Macs are great. If you don't know much about hardware, if you can't put together your own computers,
00:25:00
Speaker
then... Well, that's what I did when I was a kid, and I'm not anymore. Ouch. Ouch. Building those fucking shitty Gigabyte boxes and then trying to figure out all the crap, maybe the turbo LED, and that doesn't do anything.
00:25:16
Speaker
That was all in the past, I know, but these days you can basically go on the internet, get a PC builder; it's pretty trivial. I get my sons to do it. They've not even got much experience, but they can put together a PC in half a day. It all works very, very reliably. Like I said, it's a quarter of the price of a Mac. That's true. It's got twice as much memory and more cores and all this stuff.
00:25:45
Speaker
So it's a bit like people who want to use Emacs. You put the investment up front in terms of learning the technology, then you reap the benefits later. Yeah, but these days all the developer tools are available on pretty much all the platforms. Even Windows has WSL now, basically a whole Linux compatibility layer.
00:26:11
Speaker
I think we'll talk about it a bit; he was already experimenting with WSL. So I'm curious where he is at, because I tried WSL 1 and then WSL 2 a little bit, but I've also built... Go on. Yeah, I did build a PC as well during the last year, in the pandemic.
00:26:32
Speaker
Because a lot of people were talking about it. Well, we're in the house anyway, so let's not use laptops anymore; let's all build a PC. So that's what I did as well. And the first thing I tried was installing Ubuntu. But then my video card drivers didn't work.
00:26:51
Speaker
Which was expected. You should have checked the wiki for support. Eventually I got it working.
00:27:03
Speaker
but it takes some time. But I was also curious about WSL 2. So I just installed Windows as well, and then WSL 2 on top. So WSL 2 is a Linux... kind of a virtual machine, but not really. It's something more integrated into Windows.
00:27:30
Speaker
But it uses, I think, hypervisor technology to run a Linux system. And this works really smoothly. So you just install this and then...
00:27:46
Speaker
Well, you can install Ubuntu and then you can run all your tools, but if you then start VS Code, for example, you can edit natively in Windows, but it will connect to your programs in the WSL2 environment.
00:28:07
Speaker
like as if it's the same system. It's a really smooth experience. Yeah, I think WSL 2 is way more impressive compared to WSL 1. I think WSL 1 did more in terms of emulation; WSL 2 has more kernel-level integration, I think. So the file system access is faster if you are copying between Linux and Windows, and that made the whole experience super smooth, I think.
00:28:33
Speaker
Yeah, I don't think that's the most performant bit, copying from Windows to Linux. I think they improved that as well, but it is still pretty slow compared to copying within the same system.
00:28:49
Speaker
But so in general, I think you should just do all your development stuff in the WSL 2 and maybe the graphical part in Windows itself. Why would you use Windows though? I mean, I'm not joking about that because I think Windows is a really terrible operating system. Yeah, I don't disagree. I mean, I'm not even joking. You know, I think it's awful.
00:29:13
Speaker
Yeah, yeah, I don't disagree. But for me personally, I
00:29:21
Speaker
I make all these binary programs, for clj-kondo; I distribute binaries, and for Babashka as well, and some other tools. And I just want to be able to test this on Windows as well. So I can do the compilation on Windows itself natively, and in WSL 2 in the Linux environment. And I do have a Mac laptop, so I can very easily test on all three main operating systems.
00:29:51
Speaker
Right, right. And the Windows WSL 2 system, it's a very fast system with a Ryzen processor and 128 gigs of memory. So it compiles Babashka twice as fast as my Mac laptop. But maybe with the M1 processor, that will change again.

Hardware Preferences and Remote Development

00:30:17
Speaker
Yeah, I've been hearing some good stuff about M1. I was thinking of getting a Mac Mini, because as you said, everybody is at home these days, so laptops are not really that useful anymore, given lack of travel. But I heard pretty good stuff about M1, especially I know a couple of people who are using M1 MacBooks, and they're like, the battery life is crazy on those things. Yeah. That's what I hear.
00:30:44
Speaker
Hopefully, when all the tools are ported properly, maybe then I'll give it a try. But nowadays, I thought I would work most of the time on my new PC.
00:30:58
Speaker
Yeah, I think it's pretty convenient to just pick the laptop and sit wherever you want and then do your work, even if I'm at my desk most of the time. So I tried to set it up like I can always work from my laptop and then log into my PC using the terminal. Using the terminal. Awesome. Yeah, using the terminal. I actually love the terminal.
00:31:29
Speaker
So I tried all these combinations, like launching Emacs from the terminal on my PC and then working from my laptop, and also tunneling the X server to my laptop. VS Code has a mode to directly edit over an SSH connection. I also tried that. I also tried
00:31:55
Speaker
CIDER connecting to a REPL on my PC from my laptop, while the files are edited locally. So I tried all these combinations. So why aren't you using tmux, Michiel? Come on. I don't know. I just haven't tried it. And you wonder why your Wi-Fi is saturated.
00:32:19
Speaker
All these things stacked on each other, like a bazillion little files floating around from one laptop to the desktop and back. But eventually I only do the heavy stuff now on my PC, and then I just
00:32:34
Speaker
copy the stuff back to my laptop. That's what I ended up with, because there is always a bit of delay and an ergonomics issue. The other thing, I don't know what it's like with Emacs, by the way, but you can tell me: are the chords different on Windows and on Mac, or are they the same? No.
00:33:01
Speaker
Well, the chords are the same. That's what you call it, isn't it? A chord. Yeah, I think so. So the key combinations, the chords, are the same.
00:33:14
Speaker
But the keyboard layout is not always the same for a keyboard which is suited for Windows or a Mac. So one thing that bugged me was the function key: on Mac laptops it's on the far left, and on Windows keyboards usually the control key is on the far left, and then you get the function key. And this is very annoying with Emacs, I find personally.
00:33:45
Speaker
Let's just ruminate on the fact that things were annoying with Emacs. Yeah. Okay. Carry on. Let's move on. We've been around some houses here. Anyway, clj-kondo.

Open Source Support and Michiel's Motivation

00:34:02
Speaker
I think it's really a big success story, actually, Michiel, for you. And I think you're getting a lot of contributions, and
00:34:12
Speaker
I think people in the community are very excited about it, because I think what you did with clj-kondo fundamentally, and I think that was your objective, was to make the linter something which was done in Clojure rather than in Go or some other language. So I think this has been a huge win. And obviously the benefits from GraalVM in terms of speed were the real innovation that you latched onto. So I think it's a great success.
00:34:38
Speaker
We were going to talk a little bit about the fact that, with this success and this fame, you're now the recipient of the beneficence of Cognitect in their largesse, as they give out money to their winners. And you are one of the lucky golden ticket holders. Yeah, I was happy to see this. No, it's really good. It's a good thing by Cognitect for sure.
00:35:07
Speaker
Yeah, I'm really grateful that they are doing this. So yeah, hopefully in the future.
00:35:17
Speaker
can work more on these tools, by having some income from these open source projects. That would be really cool if that would grow in the future. But right now, I'm really grateful for companies like Cognitect, and there's actually one other company that is
00:35:37
Speaker
also sponsoring me with substantial amounts, which is AdGoji from the Netherlands. But they are doing it through Open Collective. So if I would have, let's say, five or ten of these companies, I could make a living off this. So it's a very interesting trend
00:36:05
Speaker
to see where these GitHub donations and also Open Collective are going. Because at the start of 2019, I started with clj-kondo, and not for the money, you know, just because I wanted to build a cool and useful tool.
00:36:26
Speaker
But now it's becoming more serious. And yeah, I'm really happy that it's still going like this. Because it is, as I said, one of the foundational tools now, as we were saying. It's part of practically every Clojure developer's toolkit now. If it is not, I think people should check it out. Stop the podcast and install it right now.
00:36:54
Speaker
Yeah, but I think it's really interesting how you've taken that, though, Michiel. You know, you're looking at low-level tools like the linter, and then you've got these other little low-level tools like the Small Clojure Interpreter as well.
00:37:07
Speaker
You know, all these little tools... maybe you could describe a little bit what your motivation is for some of these other little... not little in the impact sense, but, you know, they're kind of like piecing together little bits and pieces. Maybe I'm underplaying it. Tell me a bit about what your strategy is for a lot of these tools as you bring them together. So
00:37:32
Speaker
it started with clj-kondo, so I built a tool that could analyze arbitrary Clojure code and then spit out some information about that.

Clojure Interpreter Development: SCI

00:37:42
Speaker
And I compiled this using GraalVM native image. So it had instant startup, and you can just invoke it from the terminal and have instant feedback. And I thought this was really cool.
00:38:03
Speaker
And it crossed my mind a couple of times to also build a Clojure interpreter with instant execution instead of only linting. So you can execute arbitrary Clojure expressions from the terminal as a script, you know. But because clj-kondo was already taking
00:38:32
Speaker
so much time... It crossed my mind a few times, but I thought, well, let's not do that, because that would be too big of a project. Until I did try it. So I started very small, like, okay, just for today, let's just try to implement a let expression or something, which you can execute from the terminal. And this is basically how
00:38:59
Speaker
the interpreter, the Small Clojure Interpreter, started. So you can execute a let expression. And then I tried to support loops. And then I tried to support functions. But initially, the only thing that I had in mind was: you pipe some input to this command, and then you can execute an expression, and it will spit out something. That is basically all I had in mind.
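As a small hedged example of where that ended up, SCI can also be used as a library, not just from the terminal (following the API in the borkdude/sci README):

```clojure
;; Evaluating Clojure code with the Small Clojure Interpreter (SCI):
(require '[sci.core :as sci])

(sci/eval-string "(let [x 2 y 3] (* x y))")
;; => 6

;; Evaluation is sandboxed: an options map controls which vars,
;; namespaces, and classes the interpreted code is allowed to see.
(sci/eval-string "(inc 10)")
;; => 11
```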
00:39:27
Speaker
But what is small in Small Clojure Interpreter, then? Is it small? It started really small, like that. But I added more stuff to it, up to the point that it almost supports all of Clojure now.
00:39:43
Speaker
So it became the super Clojure interpreter. Yeah. I think you should call it the Simple Clojure Interpreter, just for the trolling. Yeah, for the trolling, or the Easy Clojure Interpreter. So it now supports functions, vars, namespaces, multimethods, macros, of course. At first it didn't support macros, but after a while it did. And so it went on and on.
00:40:12
Speaker
And so the main differences now between JVM Clojure and the Small Clojure Interpreter are things like where the Clojure compiler emits new classes. That is something I cannot do in the Small Clojure Interpreter, because it runs in... or, you can use it with a GraalVM native image.
00:40:37
Speaker
And once you have your native image, that is a closed world. So you cannot create new classes once it's already compiled. And this is where I need to come up with some workarounds in this interpreter. So things like protocols: they emit new classes in Clojure on the JVM.
00:41:02
Speaker
But in the interpreter, I just have to work my way around this, basically.
00:41:11
Speaker
So is the idea, just to back up a little bit, that SCI is something that's used via GraalVM? So it's precompiled and it's not dynamic like Clojure is. Is that the headline news? Yes. So the use case is: you want to create a GraalVM native image, which has instant startup, but you also want Clojure evaluation.
00:41:42
Speaker
And this is something you cannot do just by calling eval from clojure.core, because eval will use the compiler to emit bytecode, and a native image cannot execute bytecode. It has already compiled this bytecode to native,
00:42:05
Speaker
some object code. So you cannot do this at runtime anymore. So instead of using bytecode compilation, I use an interpreter. So it just parses your string into S-expressions, and then it interprets those S-expressions instead of emitting bytecode. So this is the difference. How does it manage to interpret them rather than evaluate them then?
00:42:35
Speaker
Yeah, so that's basically the goal of the library, to interpret these things. So the interpreter is built from a couple of components. So obviously, you have the parser.
00:42:52
Speaker
which goes from a string to S expressions. And then you have something called the analyzer. And the analyzer tries to optimize the S expressions in such a way that it's fast to execute the forms. So for example, if we have a do expression,
00:43:13
Speaker
So let's say do and then print foo and then print bar. So you have two things in the do block. What you can do is just walk over all the expressions in the do block and then execute them using a loop, for example. But what you can also do, and that's an optimization that I'm doing, is that you analyze each
00:43:41
Speaker
expression beforehand and then save those expressions in a local, and then when you evaluate the entire do block, you just look up the expressions in the local instead of looping over all of them. So you can unroll, let's say, these do blocks. So I have a couple of macros that optimize this for
00:44:12
Speaker
up to 20 expressions or so. And if you have more than that, then it will use a loop, for example. So this is done in the analyzer. What the analyzer also does is look up var references and class references. So if you say inc one, then it will look up the var inc
00:44:36
Speaker
in the analyzer, and then it will emit a form that just has a reference to the var directly, instead of looking it up at evaluation time, let's say. So if you evaluate this form multiple times, you don't have to look up this var all the time. So these are the things in the analyzer, and the analyzer then emits expressions that, let's say, the evaluator, as I call it,
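The analyze-once, evaluate-many split he describes can be illustrated with a toy sketch. This is not SCI's actual code; it only shows the shape of the optimization: each form is analyzed into a function once, so repeated evaluation never re-walks the S-expression.

```clojure
;; Toy analyzer/evaluator split. analyze walks the form once and returns
;; a function of an environment; evaluating twice costs no second analysis.
(defn analyze [expr]
  (cond
    (symbol? expr) (fn [env] (get env expr))        ; looked up per call here;
    (seq? expr)                                     ; SCI resolves vars at analysis time
    (case (first expr)
      do (let [fs (mapv analyze (rest expr))]       ; children analyzed up front
           (fn [env] (reduce (fn [_ f] (f env)) nil fs)))
      ;; default: a function call
      (let [[op & args] expr
            f  (analyze op)
            fs (mapv analyze args)]
        (fn [env] (apply (f env) (map #(% env) fs)))))
    :else (fn [_env] expr)))                        ; self-evaluating literal

(def prog (analyze '(do (inc x) (+ x 2))))
(prog {'inc inc '+ + 'x 40})
;; => 42
```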
00:45:05
Speaker
then executes those expressions. Do you build all this from scratch, or are you using some sort of pre-existing tools? Yeah. So I took care that all of this works with GraalVM. And so if you tried to do this using,
00:45:26
Speaker
at the time at least, tools.reader directly, that is a parser for Clojure, then you would run into some things related to eval, which were not possible. And maybe
00:45:43
Speaker
right now you can work around this. But there was also a bug, or not really a bug, but an issue in Clojure related to the locking macro. So GraalVM did not understand the locking macro in Clojure. And this was also an issue with lots of projects at the time that you would try. They just didn't compile with GraalVM. So I basically rebuilt everything
00:46:12
Speaker
in such a way that it works with GraalVM. So the parser is using basically only the EDN parser from tools.reader, because that ensures that there is no evaluation happening. And then I added some stuff on top to get better location information. So in the interpreter, you have information about all the symbols as well, instead of only the lines.
00:46:43
Speaker
Right. So if you type a top-level symbol in Clojure, you will not get location information about it. But in SCI, you will get: on line 20, column 4, there is an unresolved symbol x, or something. So that is also what this parser does. The parser is called edamame.
00:47:10
Speaker
And it's also configurable. So you can say: I want to parse only EDN, plus this one little extra thing. Or you can say: I want to parse full Clojure syntax. So it has this configuration, but I'm using this parser in the interpreter. And yeah, it's pretty much built from scratch.
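The parser he is describing can be used standalone; a small sketch based on edamame's documented API (parse-string with an options map; the exact option key shown is from its README):

```clojure
;; edamame: a configurable EDN/Clojure parser with location metadata.
(require '[edamame.core :as e])

;; EDN-only parsing by default:
(e/parse-string "{:a 1}")
;; => {:a 1}

;; Full Clojure syntax (deref, syntax-quote, reader conditionals, ...)
;; can be switched on:
(e/parse-string "@(atom 1)" {:all true})

;; Row/column info -- including end positions -- comes back as metadata:
(meta (e/parse-string "(+ 1 2)"))
;; => includes {:row 1, :col 1, :end-row 1, :end-col 8}
```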
00:47:40
Speaker
I don't really use any libraries except tools.reader. Yeah. So obviously the interpreter is more or less, you would say, complete in terms of the features that you want to have, or are there more things that you want to add there? Well, I think it's pretty, uh, pretty much, uh,
00:48:03
Speaker
let's say the first 80% are always the easiest. And so it supports the basic things you will need for scripting, up to defrecord, let's say. But after that, even defrecord is a little bit of an edge case, because in Clojure itself, in the real Clojure, let's say, it will also emit a class. That's already where I have to
00:48:33
Speaker
patch things. If you say defrecord foo, and then you say instance? foo, and then you create a record, then it should return true. So even there, I have a patch in my instance? function, because I don't emit these classes, but I still wanted it to return true. So I have some workarounds there. So this is defrecord.
00:49:03
Speaker
And then there are things like reify. So you want to reify interfaces and protocols. So before yesterday, it was only possible to reify protocols.
00:49:17
Speaker
or interfaces, but not both at the same time. And the reason is, again, that in GraalVM native image, you cannot emit new classes. So what I did before yesterday is
00:49:34
Speaker
I put breaking news on the podcast. So what I did is... You heard it here first. Yeah, it's breaking news. So what I did to support reify before is
00:49:53
Speaker
Is it Ray-ified? Is it because it's me? Yeah, it's Ray-ified. Yeah. Yeah. You were thinking, oh, I'm coming on the podcast, let's do something for Ray. Okay. I appreciate it. Yeah. Yeah.
00:50:08
Speaker
We will get to Ray-ify later. So I think if you do Ray-ify on every function, it keeps returning just one string saying: use fucking Emacs.
00:50:24
Speaker
Nothing else. And it doesn't matter what you write in the body, it always returns just one string that says: use Emacs. Let's say for the last 20%, to get more Clojure compatibility, because that's one of the goals, right? To have a scripting tool that supports most of Clojure. Yeah, and that also can run
00:50:47
Speaker
libraries from the Clojure ecosystem, from source. So you can just say: I want to use this library, and then it hopefully works. Because that is currently not the case with every library. So if your library uses deftype, for example, it won't work today. Because deftype is the most low-level thing you can have in Clojure. So I don't have a good answer to that
00:51:17
Speaker
yet, but it might be coming in a couple of months. I don't know. Hmm. But before you go on, Michiel, I mean, you're talking about this 80%. When you look at the libraries out there, have you got a decent corpus of libraries that are supported?
00:51:38
Speaker
Yeah, I have a link on the... so maybe I should also explain that. So the interpreter is called SCI, S-C-I, but this is a library, and this library is used in Babashka. Babashka is the interpreter compiled with GraalVM, but with a selection of libraries built in, suitable for scripting.
00:52:08
Speaker
So this is basically an application that is built using SCI. But SCI also compiles to JavaScript, so you can also use it from JavaScript. But about library compatibility: so Babashka supports something like a classpath. So you can say, I want this library on my classpath.
00:52:36
Speaker
But not all of the libraries work. So we do have a page with a list of libraries that work with Babashka in the documentation. It's called doc/projects.md. There is an entire list there. Oh, nice.
00:52:53
Speaker
So, because you said clj-kondo, going back to clj-kondo a little bit, what are the next plans

Enhancements in Static Analysis Tools

00:53:01
Speaker
for it? So where do you want to take it? Is it going to be like, you're going to introduce the rest of the 50% of the static typing? Or what is the idea? Yeah, so one relatively new feature in clj-kondo is that it also uses the interpreter now,
00:53:20
Speaker
but for macros. So a user can now write a hook to expand your macro calls, let's say, to get better analysis for macros that clj-kondo doesn't understand out of the box.
00:53:43
Speaker
Yeah. So this was recently added, or maybe half a year ago already, but that is still an area of improvement, to add more abilities to this so people can do more, so they can also emit linting warnings from their hooks, for example. This is already possible, but I think it can be improved.
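To make the shape concrete, here is roughly what such a hook looks like, using clj-kondo's hooks API. The macro my-lib/with-thing and its hook are invented for the example:

```clojure
;; .clj-kondo/hooks/my_lib.clj -- a hypothetical hook for an invented
;; macro (my-lib/with-thing name expr & body), rewritten into a let form
;; so clj-kondo understands the binding the macro introduces.
(ns hooks.my-lib
  (:require [clj-kondo.hooks-api :as api]))

(defn with-thing [{:keys [node]}]
  (let [[binding-node value-node & body] (rest (:children node))
        new-node (api/list-node
                  (list* (api/token-node 'let)
                         (api/vector-node [binding-node value-node])
                         body))]
    ;; clj-kondo lints the rewritten node instead of the raw macro call.
    {:node new-node}))
```

It would then be registered in .clj-kondo/config.edn under {:hooks {:analyze-call {my-lib/with-thing hooks.my-lib/with-thing}}}.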
00:54:13
Speaker
I'm constantly working with the LSP team as well to give them more data to improve LSP. So that is an area that I think I will work on in the future. And then there is a pretty big list of things that are still to do, but it's mostly
00:54:39
Speaker
small stuff, you know, not really any big topics, but more finishing touches, let's say. For example, in clj-kondo, you can lint .cljc files. So if you just type the letter x,
00:55:03
Speaker
you get unresolved symbol, but for both languages, actually, because clj-kondo will lint the .cljc file as if it's Clojure and ClojureScript. So it will lint it actually twice. So you will get the warning twice, but it's deduplicated before it's printed. But if you have a function and then
00:55:32
Speaker
you have a branch, so you have an argument to the function, x, but you're only using it in the Clojure branch, let's say. You will still get a warning that x is unused, because you don't have a ClojureScript branch that uses it.
00:55:50
Speaker
And this is sometimes confusing people. So maybe we will add something to the warning. It's not used in this language, but it is in that other language. This is why you get the warning, for example. So these are the finishing touches, let's say, but there are always, yeah. What were the things that you were contemplating around spec on
00:56:20
Speaker
Yeah. So that's one of the things with spec one and spec two, right? So I'm not really investing a lot in this right now, because I know that spec is going to change. So I'm kind of waiting for that. And then once spec two is there, I think I want to support analyzing specs statically, so you also get
00:56:49
Speaker
type-related lintings, whatever it is. So if you make an fdef that doesn't conform to the actual function that you're trying to fdef, for example. But also if you fdef that the first argument is an int, and then you call that function with a string, that you also get this linting for free, you know? And this already works with Malli.
00:57:17
Speaker
Not because clj-kondo has support for Malli, but because Malli built a tool which can spit out information that clj-kondo understands. So any tool can basically do this. So if you write your own Malli or spec, you can write a plugin for your validation system to spit out this information
00:57:42
Speaker
to make clj-kondo understand the types for your function. But this is also something that can still be improved, I think. So there are all of these areas that are there already, but they can be improved. So yeah, that is my plan for the future, just to make clj-kondo more stable and more of
00:58:06
Speaker
a good out-of-the-box experience for the mainstream Clojure developer, let's say. Yeah. Well, I think it's already pretty awesome. So yeah. Yeah. So one thing that was also really recently added is support for the clojure.core.match macro, which is a pretty complicated macro, because it has
00:58:32
Speaker
a lot of syntactical constructs that introduce new bindings. But it was on the list already for one and a half years. But I finally did that a couple of weeks ago. And there are several of these things that are on the list. So the more, I think,
00:58:55
Speaker
out of the box this works, the better it is, because for beginners, it's not so nice if they install a tool and then it yells at them for something that's not wrong, right? Yeah. You did something else as well, didn't you? Like, you were talking about using specs to look things up. Grab, or something that you did? Yeah, it's called grasp.
00:59:21
Speaker
Grasp, not grab. They mean a little bit the same thing in English. So that's why. Yeah, I think there is a juxt library called grab, operating in the five-character libraries. But, um, yeah, so it's called grasp, which is a pun on grep and, uh,
00:59:48
Speaker
What you can do is specify... So what it does: you can search through your code base, or through your entire Maven repository, for example, for shapes of code, and these shapes you can describe with clojure.spec. For example, I used it today to see which interfaces and protocols people are using with defrecord
01:00:17
Speaker
in general, because that is something I want to support better in the interpreter. So I built a spec that says: it's a list, and it starts with the defrecord symbol, and then you have a name, and then you have a vector, but I'm interested in the rest of that. So it grasps your entire M2 directory, for example, and then it will
01:00:46
Speaker
give you the S-expressions of all the defrecords that you were interested in. And then you can do some post-processing on that. That's basically the idea. So you cloned entire Clojars into your M2? So what, you cloned all of Clojars into your Maven repository? No, no. Yeah. So I just analyzed the M2 that I had. Okay. Yeah. Is that something that could be done now, that you could, like,
01:01:16
Speaker
talk to the Clojars people and get access to the box? I am actually talking to Toby. I think it's Toby. Yeah. About this, because I also made a tool recently which you can throw some Clojure code at, and then it will tell you which dependencies you actually need
01:01:43
Speaker
to execute this code. So if you say, for example: I have medley.core/index-by, I'm using this, but you don't have medley in your deps.edn. Then it will tell you: oh, I found a library in your M2 directory which actually has this function, so you probably need that one.
01:02:14
Speaker
But I actually want to build some integration with Clojars. So I proposed to Toby to make an index of all namespaces to jar files and more information about which file in the jar contains this namespace.
01:02:40
Speaker
So, but he's currently working on something else related to, I think, domain names. So you cannot create organization names that you don't own anymore. So he's first doing this. There was actually a security problem around that recently. Yeah. Yeah. But after that, he might look into it. So then I can integrate that with my tool
01:03:04
Speaker
to look up, by namespace, libraries in Clojars that you might need. But I can also
01:03:13
Speaker
use that index to download all of Clojars and then do the... That's what you built a PC for, right? Yeah, exactly. And then do all the analysis on that. But I'm mostly interested in, let's say, the top 100 things that are occurring in a general Clojure codebase. Yeah, sure. So that is why I built this tool.
01:03:40
Speaker
And I also used this grasp tool to find out... There was a recent discussion in the clojure-dev channel on the Clojurians Slack about arities of the assoc function. Because if you call assoc with more than one key-value pair, then the performance becomes slower, because it
01:04:06
Speaker
does first and rest over the arguments a couple of times. And that could be optimized by introducing more arities in the assoc function. And then there was a discussion: yeah, but people usually don't call it with more than two key-value pairs.
01:04:29
Speaker
And this was an unscientific claim, basically. Yeah, I don't think it's worth it. But then I used grasp to do some research, and I posted this research on the JIRA ticket, to actually see how people are using assoc, with how many key-value pairs. And I think we even found a library where they had 20 or 30 key-value pairs. Wow.
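Searches like the ones he describes look roughly like this with grasp (a sketch based on grasp's README; the grasp.api/grasp path-plus-spec signature and the path are assumptions):

```clojure
;; Searching a codebase (or your whole ~/.m2) for code shapes
;; described with clojure.spec.
(require '[grasp.api :as g]
         '[clojure.spec.alpha :as s])

;; A defrecord form: the symbol, a name, a field vector, and then the
;; part he was interested in -- the protocols/interfaces and impls.
(s/def ::defrecord
  (s/cat :sym    #{'defrecord}
         :name   simple-symbol?
         :fields vector?
         :impls  (s/* any?)))

;; Returns the matching s-expressions, with location metadata attached:
(g/grasp "/home/user/.m2" ::defrecord)
```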
01:04:55
Speaker
But that was definitely not the common use case. So there's always a bell curve here. So there is a way now to actually get your features into Clojure: just go and create open source projects with the features that you want, and then make Michiel index those projects
01:05:14
Speaker
to prove that as a data point: these 200 projects are using this shit. I don't know why, but it must be popular. Yeah. So this tool is kind of a spinoff from doing research for my interpreter, or for clj-kondo. Yeah. So that's really good. Fascinating. So, um, wow, I think we almost have
01:05:43
Speaker
one hour of recording, hopefully, unless we're missing any other topics.

Babashka's Growth and Cross-Platform Functionality

01:05:50
Speaker
You know, obviously we can talk about Babashka a little bit, maybe, you know, give us an update on what is happening in that world and how it is being used.
01:05:59
Speaker
I think what I remember from the last time we spoke, Babashka was just coming out, I think, or just being released. But, you know, I think you've done a lot of work on it since. Yeah. The last time I was on the defn podcast, Babashka wasn't even a thing yet. There was nothing. So I only had clj-kondo then, but I started working on this when I was
01:06:29
Speaker
on a vacation in Switzerland in August 2019. And then, yeah, right now it's being used by quite a lot of people, I think. So it has 1,800 stars on GitHub. And I have a Slack channel with almost 500 people now.
01:06:56
Speaker
So daily I see posts on Twitter of: I made this script, and it does this thing, and now I don't have to write it in bash anymore. And you could already do this a year ago, I think. But it's always the last 20% that gets the most work. And so in the last year I've
01:07:26
Speaker
added several features to Babashka, like better integration with the tools.deps ecosystem. So in Babashka itself, you can say require babashka.deps, and then you can say add-deps, and then you can just put a deps.edn map in there. Then it will download the libraries for you and put those libraries on the classpath. So you can just do this from one script.
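As a script, that looks roughly like this (babashka.deps/add-deps is the documented entry point; the library and version here are illustrative):

```clojure
#!/usr/bin/env bb
;; Pull a dependency from inside a Babashka script itself.
(require '[babashka.deps :as deps])

;; Same map shape as a deps.edn :deps entry; downloads on first run
;; (via tools.deps on the JVM), cached and fast afterwards.
(deps/add-deps '{:deps {medley/medley {:mvn/version "1.3.0"}}})

(require '[medley.core :as m])
(prn (m/index-by :id [{:id 1} {:id 2}]))
;; prints {1 {:id 1}, 2 {:id 2}}
```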
01:07:55
Speaker
And it will use the real tools.deps, actually. So it will require a JVM when you want to add these libraries. But if you invoke it the second time, it's all cached, so then it's very fast. But I'm also experimenting with making tools.deps itself native. So I already have a working version of this.
01:08:25
Speaker
So then you won't even need a JVM for this anymore. So the entire tools.deps is native, and it downloads your dependencies, yeah, without the JVM startup. The other thing I remember that you did, that I thought was very interesting, was take the clojure bash script and make that work on Windows. Because that was a bit of a sore point, wasn't it, for some Windows users of Clojure.
01:08:55
Speaker
That's true. So the current Clojure CLI on Windows is only supported from PowerShell. And it's very awkward if you want to shell out to this thing, because it's wrapped in this PowerShell function or something.
01:09:14
Speaker
And you cannot use this from cmd.exe, so you always have to use PowerShell. And there are some weird edge cases around this. So what I did is port the bash script to Clojure itself. And now you can run this bash script with Babashka. And this works on
01:09:38
Speaker
every operating system, because Babashka works on every operating system. But I also made binaries out of this. So you can also just run it directly as a binary, just using GraalVM. It's called deps.exe on Windows. So you just download the binary and then it works.
01:10:06
Speaker
So this is only a reimplementation of the bash script, and not tools.deps itself. No, sure. Yeah. So it's only, let's say, the front end. But the same front end is now used in Babashka as well to download these deps. That's exactly the same code. Okay. Yeah. Nice. Yeah.
01:10:29
Speaker
So another thing that I added in the last year was pods. And so Babashka pods are basically other... You're ahead of Rich Hickey in this sense, I think. Yeah. Sorry? You're ahead of Rich Hickey in this sense. Yeah. I think he was going to have pods at one point. Maybe I will have spec two in the next... Pods. The next version of Babashka.
01:10:58
Speaker
The Small Clojure Interpreter will have spec two. I think you have the whole ecosystem, right? You have the interpreter, you have the linter, you have the static analyzer, you have a shell, so pretty much everything. Oh, we lost him again. We're almost there. Oh, Jesus. Could be his internet.
01:11:23
Speaker
That's mine. Could be. I think he's downloading cloud drives as we speak. Yeah. Here we go. Are you still here? Yeah. Yes. Okay. So I can just keep on talking. Just keep on talking. We'll just... So what was I saying? The pods, right? Yeah. So the Babashka pods are basically binaries to...
01:11:53
Speaker
Normally, you can shell out to binaries. So for example, you have a script, and you want to shell out to SQLite, for example, to save some data. Then you can shell out using clojure.java.shell, for example, and then send some SQL. And then if you want to select some SQL,
01:12:16
Speaker
or some data from the SQL database, you can shell out, and then you get it in a CSV format. So that's a little bit awkward. It's not nicely integrated. And this is a problem that Babashka pods are solving. So you can, for example, use the Babashka SQLite pod, and then you can call functions
01:12:46
Speaker
that do kind of an RPC call to these other binaries. So a pod is another binary, which Babashka talks to in a more or less RPC-like fashion. But you are basically just calling functions. You cannot really tell the difference as a user.
01:13:12
Speaker
But this is just a way to extend Babashka with stuff that is not supported from source. So you can run libraries from source, but this is not always possible if those libraries have classes that Babashka doesn't know about, because we cannot interpret Java classes, for example. So then you can build a pod, and then this is a way to then extend Babashka.
01:13:42
Speaker
But a pod can be written in any language, not only Clojure, for example, as long as it exposes the right protocol. Yeah. So you've got an FFI now. It's kind of an FFI, yeah. It's a more or less high-level FFI, because the interop happens using
01:14:06
Speaker
JSON or EDN or Transit. So we're just sending EDN over the wire, as if you're talking to a web server. I think that's the future of all FFIs. So it's all good. Yeah. But for typical scripting tasks, it's very fast. So I recently implemented a buddy pod.
01:14:33
Speaker
So buddy is a Clojure library for hashing. Well, it has more, but it has a hashing namespace. So I'm exposing that through the pod. So you can say load-pod org.babashka/buddy, and then you require the buddy pod namespace, and then you can say: the Babashka, sorry, the buddy hash with
01:15:02
Speaker
some weird hash algorithm and then this data, and it will just work from there, without Babashka having to bundle the entire buddy library.
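Put together, the pod example reads like this (the pod version and the exposed namespace name are from memory and may differ; check the pod registry):

```clojure
;; A Babashka pod: an out-of-process binary whose functions are exposed
;; as ordinary-looking Clojure namespaces.
(require '[babashka.pods :as pods])

;; Fetches the pod binary from the pod registry on first use.
(pods/load-pod 'org.babashka/buddy "0.1.0")

;; The pod registers namespaces; this exact name is an assumption:
(require '[pod.babashka.buddy.core.hash :as hash])

;; Behind this call is an RPC over stdin/stdout carrying EDN/JSON/Transit,
;; but to the user it is just a function call:
(hash/sha256 "hello")
```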
01:15:16
Speaker
This would also make Babashka bigger and bigger. So this is a way that we can delay, or not include, things. Yeah, we can compose things without including everything in one big binary. That's perfect. Yeah. Nice. I think all these tools are like, you know, alternative
01:15:41
Speaker
Clojure tooling that you're building, the whole stack, step by step. Yeah. The other thing, I think, with Babashka as well: you have a web server in it now, don't you? Yeah, there is. Emacs is going to come into it next.
01:16:03
Speaker
Well, it will support Elisp maybe. Yeah. But, uh, yeah, there is the http-kit server. Yeah. And the client is also in there, so the HTTP client. But someone recently built a script that reimplements the Python HTTP server command in Babashka. So you will just see all the files on your own file system
01:16:32
Speaker
in the current directory. That's now very easy because of this built-in HTTP server. So there was a lot of attention, or work, done on libraries as well. So we have a babashka.fs library, a file system library, which is a wrapper around the Java NIO stuff, basically, which is not so nice to call from Clojure, because you have to create all these arrays for all the options at the end.
01:17:01
Speaker
And so this is, yeah, all Clojure-ified into a nice library, which is also bundled with Babashka. But the libraries you can also use from the JVM. That's the nice thing, I think, about Babashka: all the code that gets written is also still reusable on the JVM. So it's not an entirely separate ecosystem.
01:17:31
Speaker
The interesting question for you is: obviously, I know bash quite well, I've worked with it for years, so I know bash better than I know Clojure, probably. But the trouble is that there are certain things in bash that are just really horrible to work with, like JSON, actually.
01:17:57
Speaker
You know, JSON is a total nightmare to work with. So you have to use some third-party tool like jq or something like that, which is also, again, a bit of an awkward tool. The syntax isn't exactly perfect. And it's kind of annoying to, again, learn other tools outside of bash itself. So I think in that case, Babashka seems to
01:18:22
Speaker
be fitting very nicely, because, you know, often you do want to have a web client that consumes something and gets back some JSON. And if you want to do that in bash, it's quite annoying. Whereas with Babashka, it becomes trivial. Yeah, that's that. Yeah.
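A sketch of that web-client-plus-JSON case, using the babashka.curl client and the bundled cheshire JSON library (the URL and the response shape are placeholders):

```clojure
#!/usr/bin/env bb
;; The jq-style task that is painful in bash: fetch JSON, then work
;; with it as plain Clojure data.
(require '[babashka.curl :as curl]
         '[cheshire.core :as json])

(def resp (curl/get "https://api.example.com/items"))   ; placeholder URL

(->> (json/parse-string (:body resp) true)              ; true => keywordize keys
     (filter :active)
     (map :name)
     (run! println))
```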
01:18:38
Speaker
I'm trying to sell it to you here. I might try it out. I'm trying to think of other use cases where... because if you're just doing one-liners in bash, then I think it's very hard to make the investment. That's true. But where you've got these use cases... the threshold is around five lines of bash, I think. Right. Okay. I call this, uh,
01:19:06
Speaker
Actually, the number five I use in two ways. So: more than five lines of bash, you use Babashka. But if the Babashka script itself takes longer than five seconds, you use the JVM. Right. So this is kind of the progression. Because the interpreter is not as fast as the JVM, because that is compiled. Yeah.
01:19:33
Speaker
So you start with the bash script. And I think a lot of people, before Babashka, they just stayed in bash, or they used maybe Lumo or Planck, or maybe Joker, which is an interpreter written in Go. But usually, maybe, they stayed in bash.
01:19:55
Speaker
And it didn't even occur to them that they could use Clojure for build scripts, right, because it just takes two seconds to run, and the JVM is too heavy for these simple scripts. Now you can do the simple scripts in Babashka, although it can run quite some complex libraries now, like HoneySQL, for example.
01:20:23
Speaker
It runs like 2,000 lines of Clojure in 100 milliseconds, let's say. So it's not only small scripts. But if you have lots of loops and numerically intensive computations, let's say, then the performance is not as good as on the JVM. So if you have a lot of this complexity, these loops, then the script might take longer. Well, definitely not longer than in bash,
01:20:53
Speaker
but longer than on the JVM. And this is where you, I think, have to draw the line. If your fans are spinning and it takes longer than five seconds, go to the JVM. So if you're parsing very big log files or something, it might not work out so well for Babashka. Yeah, well, it's more like the executions per second that you...
01:21:20
Speaker
Yeah. So log files might be okay, because that is just calling into a natively compiled function. But if your interpreted functions are in a loop, let's say, then it can become slow. But for typical scripts, I've never actually seen this occur as a performance problem. But people might try Babashka from the JVM perspective, like: oh, it's the same, but with better startup.
01:21:49
Speaker
But for some programs... like, don't do machine learning in Babashka. You have Excel for that, so don't worry. Right. So use Excel interop. So this is a rule of thumb: if a script takes longer than five seconds, maybe use the JVM. And then if you're on the JVM, you can, in turn,
01:22:16
Speaker
use GraalVM to compile that to a native image. And then you will, again, get the fast startup time. So, yeah: five lines of bash, then Babashka; longer than five seconds, the JVM; and then GraalVM native image. One of the things, I mean, I don't know, you tell me, actually, one of the things that sort of, again, you're probably deeper into it than I am, but
01:22:44
Speaker
one of the things we talk about here is there's a tool chain now. Obviously, we've got Clojure, and that's relatively simple: you just fire up Clojure, you've got your editor, and you've got your tool chain. As soon as you start using things like Babashka and SCI and all these other things, and then Graal, the tool chain becomes a bit longer.

GraalVM Toolchain and Java Version Discussion

01:23:08
Speaker
Has anything happened in that world to make that a bit smoother?
01:23:14
Speaker
To make that tool chain experience a little bit less kind of, I don't know, a bit less annoying? I'm not sure. I don't know. I mean, maybe it's just not annoying. I just don't use it enough. So from a user perspective, if you want to use Babashka, the only thing you have to do is download the binary, and that's it.
01:23:37
Speaker
Oh, no, no, I get that. I'm just talking about, like, if you want to use GraalVM, and, you know, you've got your... because, like you say, you're looking for the startup time.
01:23:48
Speaker
So, you know, you've written these programs, which are targeted at sort of small, short-running programs. Yeah. Yeah. I get what you mean. So if you want to get started, let's say, with GraalVM, and you want to compile something to native, there are a couple of tools for this. But I generally avoid all these tools. So what I...
01:24:12
Speaker
What I usually do is download GraalVM. It's just a zip file from Oracle, or the Oracle GitHub. You unzip this, and then it's in my downloads folder, and then I set an environment variable, GRAALVM_HOME. This is in my zsh initialization file somewhere.
01:24:36
Speaker
And I change this every season, because every season there is a new release. And then I usually copy scripts from my previous projects, which just have all the options that you typically need for a Clojure project. And then that's basically it. So the main thing you need to do is write a main function, like you're used to with an uberjar.
01:25:04
Speaker
Usually, if you want to invoke that using java -jar, you need a main function. And you can just make the uberjar and then pass the jar to GraalVM, and then it will compile it. Does GraalVM immediately kind of spew if it knows that it won't be able to make a native image out of these things? Yeah.
01:25:28
Speaker
It will complain if you, for example, try to use Clojure's eval, and it will complain about a dynamic class loader not being supported, for example. So you will definitely get errors. And, um, I mean, are there any cases where you can compile it, but then when you run it, it doesn't work?
01:25:51
Speaker
Yeah, it depends on what settings you use. So there is one setting, --report-unsupported-elements-at-runtime. So if you use that, it will try to compile even while there might be things that it doesn't support.
01:26:09
Speaker
And then you will get the error at runtime, but this is a trade-off, because sometimes there is Clojure code which uses eval somewhere, but maybe for a very niche case. And if you don't use this at runtime at all in your GraalVM binary, then you might still want to compile it, at the cost of having this runtime error. But you can turn this on or off. Yeah.
01:26:38
Speaker
Uh, usually there is one thing, um, that you usually only will find out at runtime, which is... so GraalVM does static analysis on your code. But if you use reflection to interop with other classes, so you reflectively look up some class at runtime, then GraalVM cannot see this. So then you will have a runtime, uh, problem.
01:27:07
Speaker
So it will say, I don't know what this class is, or a NullPointerException, or whatever. But you can solve this using a configuration file where you just put a list of classes that you will need at runtime. Then it will also include those. So that will also work. It also has an agent to detect all these reflective usages during a run of a program.
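[Editor's note: a sketch of the configuration-file and agent approach mentioned here. The jar path and output directory are placeholders.]

```
# Record reflective calls during a normal run of the program; the agent
# writes reflect-config.json (and related files) to the output dir.
java -agentlib:native-image-agent=config-output-dir=conf \
     -jar target/app.jar

# Then point native-image at the generated configuration.
native-image -jar target/app.jar \
  -H:ReflectionConfigurationFiles=conf/reflect-config.json
```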
01:27:38
Speaker
So you can also generate these configurations. So in general, do you find, like, you know... using GraalVM, has it improved? Or, like, you know, do you have other tips and tricks somewhere that you've published? And do you think that the GraalVM team themselves are doing a good job? Are they aware of Clojure on GraalVM?
01:28:05
Speaker
Yeah. So, uh, as for tips and tricks, there is one repo called clj-graal-docs, which has a list of tips and tricks, maintained by Lee Read and myself, and other people, everyone can contribute. So we keep a list of, yeah, common issues that you can run into,
01:28:32
Speaker
but also a Hello World getting-started page. I also made a YouTube video about this yesterday. So you can watch that if you want to get started. And as for the improvement over time with GraalVM, it came from both sides, because Clojure itself also had an issue with the locking macro. And this got fixed in 1.10.2.
01:29:02
Speaker
And also some other things have been improved, like avoiding reflection warnings, for example. So if you want to do a GraalVM binary, use Clojure 1.10.2 or newer, and also use GraalVM 21 or newer, because GraalVM 21 added the capability of resolving method handles.
01:29:31
Speaker
And this is one thing that Clojure uses in clojure.lang.Reflector. There is a conditional that checks if you are on Java 8 or later. And if you're on a later version, on 11, let's say, then it will reflectively get a method handle and use that to do
01:30:01
Speaker
some reflection which isn't available in Java 8. This has to do with the whole module stuff. Yeah, right. And this was something that GraalVM could not figure out before version 21. So that also has improved. So this combination is really good. And this
01:30:26
Speaker
This combination has only existed for a couple of months now. So you're, I think if you want to get started, you're now in a good place. And as for, um, what was your last question? I forgot. Yeah. So it died. Yeah. Now the question was like, you know, about the tips and tricks and, you know, making sure that the actual tool kit, the tools themselves were kind of like relatively easy to use. Yeah. Yeah.
01:30:54
Speaker
You know, my question was whether, like... I know that Clojure learned from Graal; I wondered whether Graal had learned from Clojure. Yeah. So, um, yeah, to summarize, you only need the native-image tool and the Clojure compiler, and this is basically all you need to build a binary. But as for knowledge from the GraalVM team about Clojure, um,
01:31:24
Speaker
I think they are aware of Clojure, but it's not a big thing, I think, for Oracle. Because for Oracle, they are mostly interested in the mainstream languages. So they are building... so GraalVM is not only a thing to compile to native, but it actually has a lot more, which is called
01:31:52
Speaker
the polyglot platform. So you can run multiple languages in one JVM. So they have something called a Truffle framework. You can build a Truffle interpreter for a specific language. They have an interpreter for Python, for R, for Ruby, for JavaScript. And so you can combine all of these languages in one runtime. So I think they're
01:32:21
Speaker
more invested in these languages than in Clojure. But I think they are aware of, for example, what I'm doing with Babashka. Because one week ago, there was a conference, a workshop organized by Twitter about the latest developments around GraalVM. It was actually at a compiler conference.
01:32:50
Speaker
So there were people from Twitter, from Facebook, from Shopify, from Alibaba, who all contributed to the GraalVM compiler and made optimizations. And it was deeply technical. But there were also a couple of just developers, let's say, like me, who were showing what they were doing with GraalVM. So there I presented also the work
01:33:17
Speaker
I've been doing with Babashka. So at least some people are aware. I'm also, yeah, visiting the GraalVM Slack; they have a separate Slack community. I'm also in there, and, yeah, there are a couple of developers from the Graal team that know me and I know them. So yeah. Also, issues on GitHub
01:33:47
Speaker
get responded to fairly fast. All in all, it's a pretty positive experience to deal with the GraalVM people. I don't know if this is, we can cut this next question if it's wrong, but there is something about, is it GraalVM or is it the JVM that's doing this?
01:34:11
Speaker
multiple kind of hosting environment where you can have multiple runtimes in the same JVM. Is that the JVM? Or is that GraalVM? Isn't that the Truffle thing? Or is it? Yeah, that's what I'm thinking. Yeah. Yeah, I mean, I don't mean different runtimes. I mean, like, multi-tenant Java, you know. Yeah, you can also... I'm not sure if GraalVM is the only one, but
01:34:36
Speaker
uh, they recently introduced a Truffle interpreter for Java. So then you can run... so GraalVM,
01:34:46
Speaker
let's say GraalVM 8, can run an interpreter that interprets Java 11 code. So you can have a totally isolated JVM inside of a JVM. But that's fairly experimental still, and so the performance is not that good yet. So it is very much research at this point. So the interpreter is called Espresso.
01:35:13
Speaker
But that might open new ways, because using Espresso, we can maybe use the Clojure compiler to compile in this Java bytecode interpreter. And that we can maybe compile to a native image again.
01:35:36
Speaker
And then we can maybe have something like Babashka, but with the full Clojure and even the JIT capabilities. I'm not sure if that will work out, but it's certainly interesting to try. Yeah. I mean, there's also the interesting concept of having essentially two runtimes where you can potentially swap the code between one runtime and another runtime. That's going to work pretty well.
01:36:08
Speaker
Anyway, I think the whole ecosystem around GraalVM and the JVM is still very interesting. I don't know about you guys, but we're on Java 11 mostly. Some bits are still on Java 8. Is it Java 17, which is the next one, which will actually be LTS?
01:36:36
Speaker
Which I think is, I don't know... they're simply releasing every six months or so, so it should be, like, sometime this year when we get it. Because I don't keep up anymore with the Java virtual machines. I remember back in the day, like, everybody was complaining, like, Java is not updated at all. And then the very moment they start releasing, everybody's like, I'm not going to move off of eight, fuck you, you know. I'm just going to stay on eight, you know, you can do whatever you want.
01:37:04
Speaker
But everybody was like, after five there is nothing else. And we got stuck there, and now everybody is moving, and they keep releasing every six months, and they're like, no, I'm not gonna upgrade. I mean, I was just wondering, actually, whether you actually knew, Mikael, whether there wasn't a thing worth thinking about in these JVMs, because honestly, I've never even thought about it. Well, I know about 11, but I don't know about anything else. I'm using 11 now pretty much everywhere.
01:37:33
Speaker
at work recently or maybe a year ago, we migrated everything to 11.
01:37:41
Speaker
By the way, what did you say? Because, I mean, I get this question sometimes at work. It's like, why should we move to 11? Everything works with 8. There used to be an argument that security bugs weren't getting fixed in Java 8, but now they're fixing them all. Oh, such and such a thing, like this garbage collector, it doesn't work in Java 8? We're backporting it. I mean, Oracle are doing a Microsoft, you know? Everything is... like with Windows 8, everything went back to Windows 8, you know?
01:38:06
Speaker
They're back-porting everything. And now this is like Java. It's like the number eight seems to be very sticky with developers, with laptops. I think you're right, right? There is this Shenandoah GC from 11. So there is a bit more functionality there, because in one of the projects I'm using Java 8, obviously, because of the Scala Spark big data stack. And in another project we are on 13 already. And then I already see, oh, okay, you're using some specific GC options that are not going to work
01:38:34
Speaker
going forward, or something. There is GC-level functionality, but unless you're using Java, the language... if you're using a language like Clojure, then you don't see the Java features as in the language features. You're only seeing the JVM features.
01:38:51
Speaker
So that's probably the difference, I think. But I was thinking more about, like... because obviously with Java you got the whole kind of lambda stuff coming in, and people using the streams. Yeah. Um, so I was wondering whether there's anything like that again, that kind of stuff coming. I think there are records and value classes coming in. I think maybe they're already in, I'm not sure. So there are things on the horizon, but I don't think they're there yet.
01:39:21
Speaker
You know, maybe that's making Java more functional, I would say, more like Scala. Yeah. Yes. Meanwhile, Scala is becoming Python, slowly. But what about you, Michiel? I mean, do you notice anything, from, like, the experiments with GraalVM, about the later Java versions? It is something... I am using GraalVM Java 11 mostly, um,
01:39:50
Speaker
but also because in Babashka, I have a couple of libraries around ProcessBuilder, for example, which is supported in Java 8. But in Java 11, they built some extra features into that. So I'm using that in Babashka as well. So this is a reason why I'm using Java 11. And also some classes that are only
01:40:16
Speaker
in Java 11 that can now be used in Babashka. But not really any big things. But yeah, I just have the feeling that GraalVM 11 is maybe better supported because it's more used. So this is basically why I'm using it as the default thing.
01:40:44
Speaker
Okay. Nice. Cool.

Production REPLs and Immutable Architecture Risks

01:40:46
Speaker
The other thing I was going to... so before we wrap up, the other thing I was going to talk to you about, and I don't know if this is a topic that you're kind of pushing a little bit with SCI, but having it as a kind of production REPL. I remember talking to you at one conference or another, we were discussing the idea of production REPLs and
01:41:06
Speaker
we're all getting very scared about them. But you were like, eh, come on, let's do it. I remember I was saying let's do it. I can't remember. I think we were going back and forth. I am pro production REPLs, because production REPLs give me the ability to inspect runtime state in production. And this has helped me more than it
01:41:34
Speaker
didn't help me, let's say. But you have to be careful, of course, to not redefine things. But you can always redeploy the production app when you do that. But you have to be careful, of course. But for inspection, I think, in general, having a REPL in production is very good.
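[Editor's note: one common way to get this kind of inspection REPL in a JVM production process is Clojure's built-in socket REPL server, enabled with a JVM system property. The port number and jar path below are illustrative.]

```
# Start the app with a socket REPL listening on port 5555.
java -Dclojure.server.repl="{:port 5555 :accept clojure.core.server/repl}" \
     -jar target/app.jar

# Connect from the same box, e.g. with netcat, and inspect state.
nc localhost 5555
```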
01:42:03
Speaker
I'm not sure what we discussed at this conference and also. I think it was like the concept of whitelisting certain actions or whatever, you know, this kind of concept where you could.
01:42:16
Speaker
in a production REPL turn off certain... you know, so you could read, but you couldn't... you know, a you-can-look-but-you-can't-touch type thing. No, I don't know if that... I think we were just talking around the idea. I don't know if anything came up. There was just something, because for us, we, you know, we're fans of this, like, immutable architecture stuff. Um, but obviously, with a REPL, you know,
01:42:43
Speaker
yes, it's immutable data and it's all immutable, but you can change things. Oops. I pretty much use a production REPL only for reading things. But I have used it, for example, in projects where I had Datomic running in production and I needed to correct some datoms or something. So I just did that from the production
01:43:13
Speaker
REPL because that only applied to the production database. So yeah, it is a dangerous operation and you need to be careful, but you can mess up a database also not using a REPL. So, I mean... Seems like that's what someone did, you know. It's a convenience, you know. But yeah, I can also see the downside if you have a team of
01:43:40
Speaker
200 developers, and someone you don't know touches the REPL and messes things up. Yeah, I can see that being a problem. Usually I've not been in such big teams yet that this was a problem. So yeah.
01:43:55
Speaker
Maybe at Nubank they are not doing this, I don't know. Maybe everybody has access to production REPLs there, who knows. They're only using production REPLs. Exactly, there's nothing else. They develop on the production REPLs. Exactly.
01:44:14
Speaker
You don't need to have multiple environments or anything. No CI, nothing. Just start coding and then push that too. No, Ctrl C, Ctrl K, and then you're done. Is that the next talk for something? Yeah, yeah. In the terminal, right? Yeah, in the terminal. Anyway, so, wow. Hopefully, I think

Conclusion and Gratitude to Michiel

01:44:40
Speaker
Our previous recording was still reasonable. So we're not starting in the middle of the episode for the people who are listening. But thanks a lot, Michiel. Thanks a lot for joining us again, and for all the great work that you've been doing, which is basically helping the entire spectrum of Clojure developers: the people who are just starting with Clojure, the people who are intermediate, or advanced beginners like me, and also people who are super experienced, and also the rest of the people, like Ray.
01:45:12
Speaker
So I think this is one of the biggest initial complaints about Clojure is this around the developer tooling and ecosystem.
01:45:25
Speaker
Because we all remember the days when we had to use SLIME instead of any Clojure-related stuff. I don't remember those days. Those were the good old days, without even Leiningen being on the horizon. I was using SLIME in Emacs on Windows. Oh, wow. OK.
01:45:48
Speaker
Exactly. So, but, you know, with the tools, the stuff that you're building, like clj-kondo and Babashka, and the whole set of supporting tools which are backing these tools, you know, these are helping, like, significantly, everybody. So thanks a lot for that. And I don't think there is anybody
01:46:08
Speaker
in Clojure development who doesn't use your tools anymore. But as I said, I mean, if you're not, then I hope you already paused and installed them when we said that you were supposed to. So if not, go ahead and do it now. Yeah, thanks again, Michiel. It's a pleasure to have you on the show again. And hopefully we'll have you back again for the third appearance.
01:46:32
Speaker
Yeah, I'm happy to come back anytime. And yeah, thanks for inviting me. It's really great. Thanks for all the compliments. But your brain is extremely fecund. What the fuck does it even mean?
01:46:50
Speaker
It means it's like a fertile, a fertile place. Yeah. Okay. So I just thought I'd put that in there. It sounds very rude, you know, but I think it's very nice. Thank you, Ray. Your brain is fecund too. So thanks, Michiel. It's always a pleasure. I mean, you know, I think you're just such a great developer and write such a great blog. So yeah, thanks very much.
01:47:18
Speaker
Likewise. Thank you. So that's it from us for episode number 69 minus one. And hopefully we'll see you in the episode 69 minus zero soon in a few weeks. Enjoy.
01:47:34
Speaker
Thank you for listening to this episode of DeafN and the awesome vegetarian music on the track is Melon Hamburger by Pizzeri and the show's audio is mixed by Wouter Dullert. I'm pretty sure I butchered his name. Maybe you should insert your own name here, Dullert.
01:47:52
Speaker
If you'd like to support us, please do check out our Patreon page, and you can show your appreciation for all the hard work, or the lack of hard work, that we're doing. You can also catch up with either Ray or with me. If, for some unexplainable reason, you want to interact with us, then do check us out on Slack, the Clojurians Slack, or ClojureVerse, or on Zulip, or just at-us at defnpodcast on Twitter.
01:48:20
Speaker
Enjoy your day and see you in the next episode.
01:48:57
Speaker
we just we just click record and then start talking and then okay if it is bullshit it is bullshit