Episode Introduction
00:00:15
Speaker
Welcome to defn, and luckily this is not a live show, so you don't know that we are horribly delayed because of me. This is episode number 82, I think. It says 82 up there.
00:00:30
Speaker
It says 82 up there because I typed it. So I don't know if I typed it right. It's like one of those Wikipedia articles: you just go and edit it yourself and then claim that that is the truth. So yeah, let's settle on 82. So this is episode number 82, to which I am, as usual, a bit late to the recording.
00:00:51
Speaker
And we have Ray. Ray, are you still in Belgium? There's no news of me moving. Unless you know something I don't. So you got boosted as well. I got boosted. Yeah. Oh, nice. Nice. Yeah, me too.
Meet the Guests
00:01:09
Speaker
Yeah. You've been transposed to another location. Undisclosed. Yes. Especially for the defn recording.
00:01:17
Speaker
Anyway, hey, wait, who's making the noise? Yeah. Hello. My name is Ben. And I love Clojure. Well, that's it. You did everything you were asked to do, and you can take your money, and that's it. That's the podcast. Yeah. I fell backwards into Clojure and, uh,
00:01:47
Speaker
But it was the best of happy accidents, I have to say. Nice. Welcome, Ben. Welcome to the episode. Thank you. Thank you. So yeah, so which undisclosed location are you based in? I am in the undisclosed location of Israel. Yes. And currently located in the small-ish city of Herzliya, right outside of Tel Aviv.
00:02:17
Speaker
And that's where I live. That's where I work. There are plenty of Israelis in the Clojure community now, right?
Clojure's Popularity in Different Cultures
00:02:34
Speaker
Yeah. Yeah. Not in absolute numbers, but proportionally, there seem to be a lot more Israelis than Indians in Clojure. So it's usually a function of, like,
00:02:51
Speaker
the company that is the hub for that. If you look at Google Trends, for example, you'll see that most Google searches for Clojure are centered around Herzliya. Just like in Finland, you'll see that most Clojure Google searches are centered around Tampere.
00:03:08
Speaker
Well, that would mean people in those cities are getting stuck with Clojure a lot. You know, there are no Clojure queries from where I live, right? Because that means everybody is an expert at Clojure. They don't need to Google for this shit anymore. Yeah, they don't need help. They know their shit. Yeah.
00:03:28
Speaker
Also, most Clojure programmers are using DuckDuckGo now. So, you know, it's off the list. Yeah, that's true. Yeah. Yeah. Anywho, so somewhere near Tel Aviv. I'm not sure if I can pronounce your town's name properly. It's okay. It's almost as hard to pronounce as my surname. Well, yeah, but your,
00:03:56
Speaker
from England, I presume. So for you, it's quite easy to pronounce. But the original name,
The Mahabharata's Modern Relevance
00:04:03
Speaker
from which it was simplified, which was Schlesinger, which is even harder to pronounce, especially when you just see it written down. So around three generations ago, it was changed because it was hard to pronounce and to read. These people can't spell our names, so I'm going to make it easier for them.
00:04:26
Speaker
Well, I think, you know, you Indian folk have a common problem. We don't have that problem, because: good evening, my name is Michael. We're all named Michael from Texavort. That's much easier. Yeah. And then you try to read the Mahabharata and
00:04:50
Speaker
It's like a couple of minutes to get through the title. But I got it. I got it. Not bad. Not bad. But usually, I think even Indians don't read the Mahabharata. That is the thing. So it's like an epic. Maybe you've heard of it, right? I mean, it's like the epic of epics.
00:05:13
Speaker
A poem of a generational fight over who gets to be king, and all that shit. A tale as old as time.
00:05:26
Speaker
Yeah, exactly. It's one of those Homeric poems sort of thingy. And I think the tricky part of Mahabharata is that there is no one written source. It's always like people keep adding some local flavors to it. So everybody who starts to tell the story, they're like changing and twisting and adding their own characters to it and extending it. But anyway. And what's nice about it is that unlike the Homeric epics or the sagas
00:05:56
Speaker
it is one of the oldest epic poems which is still being told today. Yeah, yeah, yeah. There's still a continuity of, like, bardic tradition back from those days. This is, I think, probably because our religion is so much tied to these characters, and as long as the religion is there, you know, it's all like,
00:06:21
Speaker
it's kind of inevitable not to bump into this shit, because everything is compared to it. Every morality story, you know, is linked to that story, or the Ramayana or the Mahabharata, one of these things. And the whole of the Bhagavad Gita, which is part of the Mahabharata, not at the end, but near the war,
00:06:39
Speaker
the big war thing. So the central philosophical discussion in that one is still relevant. So you cannot ignore the Mahabharata if you're an Indian. So anyway.
Programming Culture Predictions
00:06:57
Speaker
That concludes the Indian theology part of this podcast. More coming, given there are billions of Indians. We will eventually reach that conclusion that every
00:07:14
Speaker
programmer who is writing Clojure becomes Indian, you know, by virtue of the number of Indians. Yeah, it's gonna happen. There's a joke in Israel: after the 90s, we had a
Cultural Observations in Israel
00:07:28
Speaker
really large immigration wave from Russia. So around, yeah, after the fall of the Soviet Union, plenty of Jews who used to live there and couldn't leave just packed up sticks and came here.
00:07:42
Speaker
And around 10 years later, they started having a joke. They would ask people who were born in Israel: how long have you been living here? Don't you speak Russian? Yeah. So eventually every programmer would have to be up to speed on their Indian. Indian memes and Indian food and everything. Part of the curriculum.
00:08:10
Speaker
Totally, totally. Anyway, so let's talk about your coding journey. We'll get back to all the...
00:08:18
Speaker
the Indian mythological section of it again, I think. Yeah. So these are like new sections of the podcast. Now we've got Indian stories and Emacs, you know. What else did you expect, having an Indian Emacs guy on the podcast? I mean, after 80 episodes, it is going to continue gradually, gradually. Exactly. Now it's converging towards it. Yeah. Yeah. I mean, you yourself are actually bringing this hegemony into reality.
00:08:46
Speaker
The singularity is, you know, writing Sanskrit in Emacs, and that is going to be the final, final singularity. Programming in Sanskrit. Yes, exactly. Let's not get there, because, you know, there are a bunch of Indians who think that some years ago NASA decided Sanskrit is the best language for computing. This bullshit keeps going on and on. Hey, it's just like arguing with Common Lispers.
00:09:12
Speaker
Yeah, pretty much, pretty much. It has been standardized in the 80s. We don't need a new language. We don't need concurrency standards. We don't need functional
Ben's Career Journey
00:09:22
Speaker
data structures. There's a lib. Yeah, there's a library for that. You can use FSet, which has degraded performance characteristics and is only logarithmic. We don't need your fancy newfangled data structures. Common Lisp is good enough for me.
00:09:41
Speaker
Yeah, yeah. So why did you choose Clojure then? So I didn't choose the Clojure lifestyle; the Clojure lifestyle chose me. I didn't start out with computers. Clojure was thrust upon you. Kind of, yeah. I was folded into the shape of a pretzel and then my boss told me.
00:10:08
Speaker
I need to rewind a bit. I didn't start out as a computer programmer. I started out as an electrical engineer.
00:10:18
Speaker
So not quite a real programmer, an adjacent field, but I found myself in the hardware industry. Yeah, it's pretty hard to study, you know, electronics and have no idea how computers work. Yeah. So I studied electronics. I had to build a basic CPU in a hardware description language in school. Oh, yeah. All that jazz. And I wanted to
00:10:47
Speaker
you know, to work with electronics. And I thought it was really interesting. And, A, I kind of sucked at it. And B, as a new grad in electrical engineering, I found myself sucked into something else, which was the VLSI industry in Israel, which sucks in most of the electrical engineering graduates, who find themselves working either at Intel, Apple or Amazon. Yeah. And I was trying to find myself there and,
00:11:19
Speaker
as I was still learning and developing, I started to have fun programming. And I was hacking and fighting with the internal tooling. And since I couldn't install anything, because I had no sudo permissions on the machines we were working on, I had to compile everything.
00:11:46
Speaker
In about a year, I found myself compiling GCC in order to build Common Lisp, to learn how to program in Lisp. And that, combined with teaching myself some Python. I thought I wanted to do machine learning. That didn't turn out. But I was having lots of fun with Lisp and Scheme and SICP. And
00:12:13
Speaker
Then, as I was training in Brazilian Jiu-Jitsu, I met my boss!
00:12:21
Speaker
And that's why I said I was being folded into the shape of a pretzel, because he kept kicking my ass. And then thrown into a hammock. Yeah. You're the second guest who's been into Brazilian jiu-jitsu. Yeah, but Will is way more serious than me. All right, okay. He's a black belt, and I'm just a dabbler. Oh, no shame in that. Because I was just thinking, we should have an episode where you guys fight each other and do jiu-jitsu.
00:12:50
Speaker
I'm tapping out even before that episode begins. So anyway, I got to know Reshef, who's a great guy. And we would chat a bit about technology and stuff. And when I decided I wanted to finally make the change out of electrical engineering, I
00:13:12
Speaker
hit him up on LinkedIn and told him, hey, I heard you were looking for programmers. And he's like, yeah, sure, send me your CV. And then I checked on his LinkedIn, and he's the CTO of the company. He's the co-founder. OK, no pressure. The co-founder is your referrer.
00:13:35
Speaker
So I had to meet those expectations, and I came in pretty excited. I think they're still laughing at me about how excited I was during the interview process. And then I started working at AppsFlyer, and I'm still there. And that's how I fell back into Clojure.
00:13:58
Speaker
And how fortunate is it to go from watching William Byrd lectures in my free time to not even having to cross the street to go to an interview, literally the next building over, and start working in Clojure?
00:14:18
Speaker
That's, yeah, that's super cool. I mean, you literally, actually literally, fell backwards into work. Exactly. So I both fell on my ass physically and metaphorically into that. But before starting with Clojure, you said you were doing some VLSI stuff. Is it Verilog or VHDL? Because those are kind of sort of functional as well, right? I mean, I know they're there because I've
00:14:46
Speaker
been there a little bit. I don't remember much of that shit, but I did some electronics engineering some time ago. I was doing pretty much the same stuff as well. So I did the physical design part, mostly related to timing simulations and analysis of data. Yeah.
00:15:06
Speaker
and even found some opportunities to utilize what I learned about homoiconicity and code generation in a few places, because the hardware industry is still the only industry which uses Tcl, which is the only other homoiconic language out there besides... With the Python
Discovering Java Through Clojure
00:15:28
Speaker
bindings, or Tk and Tcl? No, pure Tcl.
00:15:35
Speaker
Yeah. Nice. Yeah. So that was my life for about three years. Yeah. And then started with Clojure and all the practices and methodologies of software development and stuff like that. And I didn't really do VHDL too much. And it's declarative, but calling it functional or a programming language is a bit of a stretch.
00:16:03
Speaker
Yeah. I mean, it's a very specific domain specific thing anyway. But how did you, because for me it's a really big conceptual jump, right? Because it's like from Mariana Trench to Everest level.
00:16:23
Speaker
Yeah, exactly. Because when you're writing VHDL, you're like really at the guts of the thing and then you're writing in terms of the signals and everything. Yeah, but I wasn't writing VHDL. All the way up. Analyzing tons of data and writing workflows and basically you can think of it as somewhere between scripting and programming.
00:16:49
Speaker
And it also depends which part of the cake you get to work on, because silicon chips are such complex and conceptually big
00:17:04
Speaker
pieces of engineering that you can find yourself working on remotely different things and sitting next to someone who does something else. And you're both working even on the same piece of hardware, the same area in the chip. But you're writing code all day and he's running simulations all day. And how was the
00:17:30
Speaker
So was it because of AppsFlyer that you started learning Clojure, or had you already started experimenting with Clojure? Because you said you experimented with Scheme and Common Lisp and all that stuff. Yeah, so I experimented mostly with Scheme before I got into Clojure, and looking at my 4Clojure solutions from back then, at the beginning they look very Scheme-ish, and it
00:17:58
Speaker
took me some time to learn the Clojure style and idioms. And I remember coming into Clojure, I thought, oh, that's barbaric. You don't need that, you don't need this. And I came down from that tree in about a month, when I realized it was actually useful and usable and readable. You mean things like maps and vectors and stuff like that?
00:18:25
Speaker
Yeah, and even having fewer pairs of parentheses in cond forms and let forms and stuff like that. And since I didn't have any prior knowledge or background in software, I wanted to cover as much ground as I could, to learn all the best practices, and
00:18:52
Speaker
to find all the material I could about software engineering, because I felt like I needed to play catch up. And having done that, I consumed a ton of material that it seems like people just skip, even in the software industry. And very quickly, I found myself becoming the person who answers questions and not the person who
00:19:22
Speaker
wastes your time with questions. And people come to me with a five-minute question and end up with a one-hour answer. What kind of materials are you talking about? Well, I didn't know anything. So it was about Clojure and about Git and unit tests and type systems and the history of programming. And,
00:19:52
Speaker
you know, I ended up consuming the whole Rich Hickey back catalog, and Kevlin Henney and Joe Armstrong talks. And I skipped all the introductory material and went right to the greats: Gerald Sussman and William Byrd and Nada Amin. So that was, like, my baseline
00:20:21
Speaker
and introduction to Clojure and programming. And that was what I was giving myself to think about and process as I was going through everything. Yeah. Because Common Lisp is, well, quote unquote, native, right? You don't have any VMs or anything there. I mean, except for the runtime, obviously. But Clojure
Clojure vs Common Lisp
00:20:44
Speaker
has the JVM, and ClojureScript has JavaScript.
00:20:50
Speaker
We still feel sometimes like, okay, Clojure has this leaky Java thing coming in. Not even leaky, because we embrace the host using Clojure. Did you catch up on Java-related stuff as well? Oh, yeah. I learned Java through Clojure, because
00:21:14
Speaker
it was my habit. I hope not through the Clojure codebase. You mean through Clojure, but not the Clojure codebase. And if you let me write Java now, I would write really weird Java. Yeah. Just like when I was fighting internal tooling back at work at Intel or Apple, I just went and read the source. So, all right, I was digging into Clojure, and digging somewhere into clojure.core,
00:21:43
Speaker
and then you reach the Java event horizon. It's like, okay, what do you do? You just go and read the implementation in Java. Well, obviously no one told me it's weird Java. And so then I started learning Java and getting to know the VM and the fine details regarding that.
00:22:14
Speaker
And I find it, it's not wrong, but it's not quite right; there's something off about saying that, oh, Clojure has the JVM and it's leaky, and Common Lisp is native. Because in Common Lisp, you can always compile against a native C library, and you don't say that leaks.
00:22:43
Speaker
I mean, there is more, that's true, but there is more like, correct me if I'm wrong, because I didn't have any commercial experience with Common Lisp, but the thing is that Common Lisp pretty much stands on its own, right? I mean, there is no, of course you can use foreign, you know, FFI, and so you can use every programming language these days, you know, you want to use C libraries, you know, that's kind of a standard mechanism. But what I meant with Clojure is that Clojure basically
00:23:09
Speaker
it would not exist without the JVM. It's not possible. I think the way of saying it is that the Lisp runtime was made for Lisp, but the Clojure runtime was not made for Clojure. Yeah. Well, that's true. There is the Clojure-in-Clojure project, but it isn't the official Clojure.
00:23:33
Speaker
Yes, but it works. I mean, tools.emitter works. You can use it to compile your own Clojure if you'd like, and you can write extra passes on top of that. No one is stopping you from using that as a compiler instead of Clojure's eval. But if you want to do something useful today,
00:23:57
Speaker
even in Common Lisp, you have to use, like, a foreign library. You have to use the outside world. If you want to use Kafka in Common Lisp, you're going to use the C library. And if you want to use Kafka in Clojure, you're going to use the Java client. So I don't feel bad about having to target the JVM,
00:24:24
Speaker
or having to consume libraries written in Java. And when you get right down to it, the JVM is a pretty good platform. And the fact that the Clojure compiler emits JVM bytecode and not an executable? I don't care. It works, and I have fun every day at work.
00:24:46
Speaker
Uh, shut up, you back there in the crowd. Someone has something nasty to say: but what about the performance? That, uh, that is my, uh, my white whale. We finally reached the whale here. Yeah. I,
00:25:13
Speaker
I don't know how I got into that. I think it was by having continual slap fights with some architect who works with us. And I really went after that guy. He is very, very knowledgeable and professional, and he knows his shit, but the guy probably married Java.
00:25:37
Speaker
That's okay. It's okay. I hope he's listening to the podcast. Thank you for putting Ben through all this pain. So he's working on the performance. I doubt that he is listening to this podcast, but I learned a lot even from his snide remarks and jibes, because each one of those would send me down a wild goose chase and I'd learn something new.
00:26:06
Speaker
Uh, yeah, that. And, well, that is a form of education as well, you know. Yeah, yeah. Or a motivation for education, let's say. He's like the stern Zen master who keeps hitting you or giving you a weird task to do, and suddenly you find you've reached enlightenment. So that, and
00:26:31
Speaker
he sent me to watch a lot of Gil Tene's lectures about performance on the JVM, and hearing how to measure things correctly, and what we're measuring, and how we're testing it. And then, naturally, what do you do? You start profiling and you start measuring, and you find a ton of performance lying on the floor, just waiting to be picked up.
00:26:54
Speaker
I found lots of atrocious bad habits and performance mistakes. Everything from calling satisfies? at runtime. And I found it inside our code base. I found it in open source code bases, like in the lambdaforge datalog parser.
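As a hedged illustration of why `satisfies?` on a hot path is costly (the protocol and function names below are invented for this sketch, not taken from any code base mentioned in the episode): `satisfies?` performs a runtime capability check on every call, while a plain protocol method call is dispatched through a cached path. One common workaround is to give the protocol a default implementation so the check disappears:

```clojure
;; Illustrative protocol; names are made up for this sketch.
(defprotocol Describable
  (describe [this]))

(extend-protocol Describable
  String
  (describe [s] (str "string: " s)))

;; Slow: satisfies? re-checks the type against the protocol on every call.
(defn describe-if-possible-slow [x]
  (when (satisfies? Describable x)
    (describe x)))

;; Faster: extend a default (Object / nil) implementation so dispatch
;; stays a cached protocol call instead of a runtime capability check.
(extend-protocol Describable
  Object
  (describe [_] nil)
  nil
  (describe [_] nil))

(defn describe-if-possible-fast [x]
  (describe x))
```

The trade-off is that the protocol now claims to cover everything, so the "is it supported?" question moves into the default implementation.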
00:27:23
Speaker
And those were the easy pickings I started with. But then you get deeper, into, like, profiling and just-in-time compilation. For the listeners who don't know, the JVM is a bytecode interpreter. It's a stack machine,
00:27:48
Speaker
but it also has a just-in-time compiler, which can compile all of this down to assembly if the stars are aligned correctly and the circumstances are favorable. And for some, it depends on the phase of the moon. Yeah. And for some pull requests I even found myself analyzing JIT logs in some cases. So it's a whole
00:28:18
Speaker
area of research and you need to understand how those things interact with the hardware, not just the JVM and how those things get first compiled into bytecode, how they run on the JVM execution model itself, and then how that maps to hardware.
00:28:42
Speaker
that might behave differently in different JVM versions, and you need to make sure that the JVM is working as you'd expect it to work, and not that someone turned off your JIT to be helpful. And then you ask a question on Ask Clojure and you say, hey, you know that you can go through
00:29:06
Speaker
this API instead of that API and it will be way faster. And they say, well, yeah, theoretically, but why didn't you short-circuit using count? And then you run up against the Clojure process, and you have to curb your enthusiasm a bit.
00:29:28
Speaker
Before we go into a bit more detail: given the fact that you kind of fell into Clojure, what made you focus on performance rather than aesthetics? Why did performance become a focus for you? Because it doesn't seem like it's the most important thing in the Clojure community.
Performance Focus in Clojure
00:29:54
Speaker
I'm not saying performance is like, of course performance is important and in certain situations it's critical, but why are you motivated by that particular aspect of the language?
00:30:08
Speaker
So it's one of those things which you also just find by accident. I read about profiling and things like that, and I was just curious. And I said, okay, let's profile our services, because we process a ton of events every day, and maybe we could do it better. And specifically, we have an event-driven architecture at AppsFlyer and tens of billions of events every day.
00:30:37
Speaker
Shaving even a few percent of performance there does mean real money at the end of the month. Right, if you're hosting on Amazon or something. Yes, we're running on AWS and plenty of instances. And if you manage to save even a few pennies here, a few pennies there, the law of large numbers comes into play and you can save a lot. So, for example, when you
00:31:07
Speaker
cut out reflection from some critical piece of code, you increase your throughput by 25%. Usually, 25 times. Yeah, so that wasn't even, like, the meat of the service. And by just cutting it out, we improved throughput by 25%. And that's real money when you're talking hundreds of instances.
00:31:37
Speaker
Absolutely. And those are fairly simple savings as well, aren't they? It's a fairly simple mechanism to get savings, just avoiding reflection. Yeah, exactly. So that's what I meant about performance lying on the floor. You profile something and then you see this big chunk of satisfies? or merge or flatten, and you ask yourself:
00:32:07
Speaker
What the fuck is that? Why is that here? Who did that? And it's a day of work to get rid of it, and suddenly you increase your throughput or you improve your response time. And for example, if you have to serve billions of requests every day, improving your response time is also something which might be critical.
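The reflection fix mentioned above can be sketched minimally. Without a type hint, the Clojure compiler cannot resolve an interop call at compile time, so every invocation goes through `java.lang.reflect`; a single hint turns it into a direct virtual call:

```clojure
;; Ask the compiler to warn whenever it falls back to reflection.
(set! *warn-on-reflection* true)

;; Reflective: .length can't be resolved at compile time,
;; so each call does a runtime method lookup (and a warning is printed).
(defn len-slow [s]
  (.length s))

;; Hinted: compiles to a direct java.lang.String.length() call.
(defn len-fast [^String s]
  (.length s))
```

Turning on `*warn-on-reflection*` in development is the usual way to find these call sites before a profiler does.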
00:32:30
Speaker
So I was, like, infatuated with the beauty and the libraries, and I was getting into core.async and core.logic, and I was having tons of fun. And then I profiled some service and I looked at the results and was like, guys, what's that? Why are we doing that? And then suddenly, everywhere you look, you see a ton of performance problems, or just things which accumulated
00:32:59
Speaker
just due to negligence and time. So you have a t-shirt saying "I see slow code." Sometimes, sometimes so. So now it's almost a joke at work that I'm the performance guy, and
00:33:18
Speaker
be careful if you come to me with questions about that. But at what point do you think you lose the... Because when you start writing Clojure, you don't consider these kinds of things, like putting type hints to
00:33:36
Speaker
avoid reflection or constants or whatever, all these things, right? Because those are ceremonial shit that don't contribute to the logic or contribute to the thought process when you're writing code.
00:33:51
Speaker
Exactly. You want to write beautiful code eventually and have fun. Yeah, exactly. I mean, beautiful code that does nothing. That's the main goal, so nobody will call me in the middle of the night that my code is broken. It does nothing, so there are no bugs. But I remember a few years ago, when I was working on a different Clojure project than I'm working on right now, at some point we went through a similar kind of exercise, and then
00:34:18
Speaker
we started debating: okay, at this point, it's better to write Java, because it looks like Java anyway. Because I keep throwing in type hints everywhere, and then I don't use any of the dynamic stuff anymore. Like, what is the fucking point? Then I can just switch to Scala or Java. So where is the line for you? Where do you see that trade-off, still keeping the Clojure-ness but squeezing out enough performance? Yeah. So that's one of the
00:34:46
Speaker
important parts of measuring and profiling first, because most of the performance cost isn't actually there. Besides avoiding reflection, which is just bad form in general, most performance is lost on iteration and on dispatching through more dynamic or reflective APIs. And that can actually be avoided,
00:35:14
Speaker
Because if you know at compile time what you're iterating on, and you can actually know that plenty of times, so you don't need to detract from the readability or beauty or legibility of your code. You just need something that will work with you at compile time, at some form, and let you work with that.
00:35:44
Speaker
That's, like, the "just when I thought I was out, they pulled me back in." And Tommi's talk from re:Clojure ended up with me writing clj-fast, which started out as just a collection of heuristics.
00:36:07
Speaker
It's like, okay, I can take get-in, and if I unroll get-in into a series of gets, oh my God, it's two, three, five times faster. And as I was compiling these heuristics and working more on that, I realized I was just doing something a compiler would be doing, because what do compilers do? Constant folding and partial evaluation and tree shaking. Yeah.
00:36:33
Speaker
Tree shaking, and specifically loop unrolling and stuff like that. And avoiding iteration is loop unrolling. And one of the problems is that iteration in Clojure, if it goes through first and rest, isn't just slow. It also generates garbage, because you call rest, rest, rest all the time on a vector, for example, and you keep allocating every time you do. So if you find a way to avoid that, you get faster code, quote unquote, for free.
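The unrolling idea can be sketched with a small macro. This is illustrative only, not clj-fast's actual API: when the key path is a compile-time literal vector, expand `get-in` into nested `get` calls, so no sequence is walked or allocated at runtime.

```clojure
(defmacro fast-get-in
  "Sketch: unroll get-in when ks is a literal vector of keys."
  [m ks]
  (if (vector? ks)
    ;; (fast-get-in m [:a :b]) expands to (get (get m :a) :b)
    (reduce (fn [acc k] `(get ~acc ~k)) m ks)
    ;; keys not known at compile time: fall back to the normal function
    `(get-in ~m ~ks)))
```

For example, `(fast-get-in {:a {:b 1}} [:a :b])` expands to `(get (get {:a {:b 1}} :a) :b)`, while a non-literal key path like `(fast-get-in m ks)` simply delegates to `get-in`.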
Introducing clj-fast
00:37:03
Speaker
That actually took me back to the fun useless shit, which is: okay, how do I write a compiler? Or how do I write compiler passes for that? Because I don't want to just give someone a collection of heuristics. I want to give them something they can apply wherever, on things that the heuristics don't know about yet or don't apply to yet.
00:37:32
Speaker
And how do you write a general-purpose Clojure inliner, or a general-purpose Clojure partial evaluator? And that gave me an excuse to get back into things like core.logic and miniKanren. And now the faraway goal is porting eval to miniKanren,
00:38:01
Speaker
in, like, core.logic. And then maybe I could write, like, a partial evaluator for Clojure code. Let's step back a little bit and unpack a little bit, because, you know, can you talk about what clj-fast is and what it is actually doing? Okay. It says it in the name: it's faster Clojure. Yeah, that's understandable. But without writing Clojure that looks like Java.
00:38:26
Speaker
Yes, exactly. That's the value proposition. How does it do that? The first rule of macro club is: turn everything into a macro. So you take the only tool that Clojure looked at and said, you know what, maybe we shouldn't, and you go all in. Because what do macros do? They let you
00:38:52
Speaker
check things at compile time and just do whatever. So I just check at compile time and do whatever. So my guess is that there is only one macro in clj-fast that says fast, and then I just need to wrap my entire namespace in fast, and that's it, I'm done. So it gives you
00:39:09
Speaker
drop-in replacements for the big offenders, let's say, which are get-in and update-in and assoc and merge and select-keys. These are the most pathological functions in terms of performance in clojure.core.
00:39:31
Speaker
And you can use them like you would the regular functions, but you can't use them as higher-order functions. You can use them at the call site; you can't pass them as arguments. And if you make that compromise, you get all the performance improvements that they promise,
00:39:55
Speaker
which are two-to-five-times speedups, depending on the function implementation. Does that mean they're only usable in my application code? Because usually when you're building Clojure code, well, not usually, but it's kind of a standard thing that you keep pulling code from different libraries, right? Yeah, so you can't install them there; libraries might still be using the underlying functions.
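The call-site-only trade-off described above follows from macros not being values. A stand-in example (`fast-assoc` here is an invented name for illustration, not necessarily the library's API):

```clojure
(defmacro fast-assoc
  "Stand-in macro: expands at the call site into a direct assoc form."
  [m k v]
  `(assoc ~m ~k ~v))

;; Fine: a direct call site, expanded at compile time.
(fast-assoc {} :a 1)

;; Not fine: macros aren't first-class values, so this is a compile
;; error ("Can't take value of a macro"):
;;   (reduce fast-assoc {} [[:a 1] [:b 2]])
```

This is why such replacements only help your own application code: anything that needs a function value, including library internals, keeps using the ordinary `clojure.core` functions.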
00:40:19
Speaker
Yeah, so one thing I want to try is to add some way of installing them on the clojure.core vars as :inline metadata. But one of the problems is that if you
00:40:38
Speaker
can't inline, what do you do then? You can't call yourself because then you'll get stuck in an infinite loop inside the macro expansion phase. So you have to also provide an alternative. But yeah, you can do that. But I need to do that. That's like the logical next step for the project. But I'm kind of afraid of exposing that because either you'll do that in your application
00:41:05
Speaker
and then you'll complain, oh my God, you broke my code because there is some case. I, the programmer, the library developer did not consider, and it was just used willy-nilly. And I don't want that on my conscience. Or the one thing I'm even more afraid of is that a library that you require might install it.
00:41:34
Speaker
And then it's like, yeah, always faster suddenly, which is pretty cool, you know? Yes. But again, it might break your code and you don't need that. So that's something I need to consider and provide a ton of caveats and warnings and maybe like, uh,
00:41:57
Speaker
have really fine-grained control over how those are installed. It's like you install only two specific functions, not all of them at once. But yes, that's the logical next step. I'm just checking.
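The ":inline metadata" idea mentioned above refers to an undocumented compiler hook that clojure.core itself uses (for example on `+`): when a var carries `:inline` metadata, the compiler expands call sites with that function, while higher-order uses still go through the real fn, which is exactly the fallback problem described. A hedged sketch with an invented function:

```clojure
(defn fast-abs
  "Absolute value. Call sites expand via the :inline fn below;
  higher-order uses (map, apply, ...) fall back to this fn body."
  {:inline (fn [x] `(Math/abs (long ~x)))}
  [x]
  (Math/abs (long x)))

;; (fast-abs -3)        ; expands at the call site to (Math/abs (long -3))
;; (map fast-abs [-1 2]); uses the regular function value
```

Because `:inline` is an internal, unsupported mechanism, installing it onto clojure.core vars from a library is exactly the kind of global, surprising change the speaker is wary of.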
00:42:17
Speaker
So let's just walk through this, like, get-in. Why is get-in slow? Because I think that would be a good, relatively straightforward one; everyone knows get-in. So, you know, why is get-in slow? Yeah. It's slow due to two reasons. One of them, because it just iterates over a sequence of keys.
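Both iteration styles can be sketched side by side. The first function below imitates the kind of first/next seq walk that clojure.core's private `reduce1` helper does (imitated here because the real helper is private), allocating a seq step per key; the second shows what `get-in` could do if it were defined after `reduce`, using the fast protocol-based reduce path:

```clojure
;; Imitation of reduce1-style iteration: a first/next walk
;; that realizes a seq cell for every key.
(defn get-in-seq-style [m ks]
  (loop [acc m, s (seq ks)]
    (if s
      (recur (get acc (first s)) (next s))
      acc)))

;; Reduce-based version: over a vector of keys this goes through
;; the fast, protocol-based reduce path with no per-step seq garbage.
(defn get-in-reduce [m ks]
  (reduce get m ks))
```

Both return the same result; the difference only shows up as allocation and speed under a profiler on hot paths.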
00:42:42
Speaker
And if you know the content of the sequence at compile time, you don't need to do the iteration at runtime. You can expand that into a series of calls at compile time. You already know that. But there's another reason get-in specifically is slower than it can be, and that's because it is defined before reduce
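The compile-time expansion being described can be sketched as a macro (illustrative only; clj-fast's actual implementation differs):

```clojure
;; When the key path is a literal vector, unroll get-in into nested get
;; calls at macroexpansion time, so no key sequence is iterated at runtime.
(defmacro fast-get-in [m ks]
  (assert (vector? ks) "key path must be a literal vector")
  (reduce (fn [acc k] `(get ~acc ~k)) m ks))

(fast-get-in {:a {:b {:c 1}}} [:a :b :c]) ;; => 1
;; expands to (get (get (get {:a {:b {:c 1}}} :a) :b) :c)
```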
00:43:08
Speaker
is defined in Clojure. And since Clojure code is loaded sequentially, it has to use an internal implementation of reduce called reduce1, which is slower but still correct, in order to even work. It iterates over the sequence not using the reduce API but using first and rest, which, like I said previously, is a bit slower and generates garbage. And you find that people usually pass
00:43:37
Speaker
vectors as the arguments to get-in. And reducing over a vector via the reduce API: super fast, really good way to go. reduce1 is slower on a vector than it is on a list. So sucks to be you, but yeah. A quick win patch for Clojure core is
00:44:07
Speaker
to change get-in to use reduce and not reduce1. That's a really quick one. But reduce is defined after get-in. Yeah, so either declare it, or rename that one get-in1 and define get-in later, after reduce. Yeah, yeah. And so that's one of the reasons why it's slow. Same for update-in, same for assoc-in; same for places
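The quick win described here amounts to something like the following (a sketch of the idea, not the actual clojure.core patch, and only the two-argument arity):

```clojure
;; get-in via the full reduce API instead of the internal reduce1;
;; reduce dispatches to the fast path for vectors (IReduceInit).
(defn get-in-reduce [m ks]
  (reduce get m ks))

(get-in-reduce {:a {:b 1}} [:a :b]) ;; => 1
```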
00:44:37
Speaker
which take a small number of fixed arities and then do iteration at runtime for the variadic case. For example, comp. If you comp more than three functions, you don't just allocate, you reduce over the sequence of functions every time you call the composed function. So that's slow, that's kind of wasteful. And that also doesn't work well with the just-in-time compiler, because you
Profiling and Optimization
00:45:06
Speaker
end up with, like, one function which is the composition, and that will get optimized numerous times, every time you pass through it. Yeah, okay, so I mean you're really talking about these kinds of functions that are sitting inside of tight loops or very heavy loops. Yeah, but like, what's a tight loop? When you map over a sequence of one million elements, it becomes a tight loop whether you want it to or not. Yeah, exactly, yeah.
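The comp point can be illustrated the same way: with literal functions, the composition can be unrolled at compile time instead of looping over the function seq on every call (an illustrative macro, not clj-fast's API):

```clojure
;; Expands (inline-comp f g h) to (fn [x] (f (g (h x)))),
;; avoiding the per-call iteration that variadic comp performs.
(defmacro inline-comp [& fns]
  (let [x (gensym "x")]
    `(fn [~x]
       ~(reduce (fn [form f] `(~f ~form)) x (reverse fns)))))

((inline-comp str inc inc) 1) ;; => "3", same as ((comp str inc inc) 1)
```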
00:45:34
Speaker
So it's not the sort of thing that you're worried about at the beginning of the program when you're doing configuration reading and that kind of stuff, is what I mean. Or when you know you're not going to be consuming large amounts of data. Unless you know, going in, that, oh, I'm going to be streaming through
00:45:51
Speaker
a log file or some other big stream of data. Or consuming from Kafka forever. At full speed, nonstop. Then yeah, that's where these things become relevant. And to fulfill my contractual obligations, I must emphasize these caveats. Don't just go optimizing and shaving nanoseconds off loading a config file.
00:46:20
Speaker
do it on your hot loops, and after you measure it, or Alex will chase me with a chainsaw or an axe or something. I don't want another talking-to because someone read my advice with too much enthusiasm.
00:46:42
Speaker
But then I think we should talk a little bit about how you should decide to add clj-fast to your project. It's not just, oh, I'm going to start a new project, so just add this one and then everything will be faster for some reason. You shouldn't. Yeah. So what should be the first rule of clj-fast? Don't use it yet. You shouldn't. There are two reasons you should even consider it.
00:47:09
Speaker
And one is looking at your Amazon bill at the end of the month and saying, I can't afford this. And only if you're saying I can't afford this, then you should ask yourself, okay, where am I spending this money? Is it on compute?
00:47:32
Speaker
If I'm spending this money on compute and not on storage, not on S3, on instances. Not on the traffic. Exactly. If you're spending it on instances, then you should profile. Don't add clj-fast. Go profile your code and have results, have actionable results.
00:47:54
Speaker
That's a real nice point. And so what kind of things do you recommend for profiling your stuff, specifically keeping Clojure in mind? Because for Java, there are plenty of tools out there people are using, and there is a bunch of tooling available which is mature enough. But if you're using Clojure, Clojure produces bytecode in a particular way. So what would be your recommendation? Well, first, the Java tools are good.
00:48:23
Speaker
And if you're running on the JVM, take advantage of them. And you can even embed clj-async-profiler in your application, which I've done several times. And you can trigger it externally and just start profiling your application for a minute and get back the best profile you can get, which is of a live application. But, you know, clj-async-profiler,
00:48:53
Speaker
Java Flight Recorder, VisualVM, use them. There's nothing about Clojure which says you can't use everything there is for the JVM, which is one of the best instrumented runtimes ever. Just use it and find where your problems are before you start treating everything like a nail with your happy hammer in hand.
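The embedded-profiler workflow mentioned looks roughly like this with clj-async-profiler (a hedged sketch from memory; check the library's README for the exact coordinates and options):

```clojure
;; Add com.clojure-goes-fast/clj-async-profiler to your deps, then:
(require '[clj-async-profiler.core :as prof])

;; Profile a single expression:
(prof/profile
  (dotimes [_ 1000000]
    (get-in {:a {:b {:c 1}}} [:a :b :c])))

;; Or bracket a window over a live application:
(prof/start)
;; ... exercise the app for a minute ...
(prof/stop) ;; renders a flamegraph under /tmp/clj-async-profiler/results/
```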
00:49:18
Speaker
After you profile, if you find you're wasting a ton of time inside Clojure core functions and you want to keep your code idiomatic and you know it's your application code, not some library or whatever, then maybe consider using clj-fast, but not before. There's a ton of things you can do before you reach for it. It's actually more for,
00:49:45
Speaker
I'd say, library authors or stuff like that, where they want to give themselves a more human-friendly API but still have the best performance they can. Yeah. That's a nice pitch for your library, by the way. I mean, don't use it. You don't need it. And the other case where you might need it is if you're not meeting any of your KPIs, specifically throughput or response time.
00:50:16
Speaker
If you're not meeting those, then again, go profile and figure out why. Yeah. So thinking about, because most of the functions, I'm assuming, because I haven't gone through the namespaces of clj-fast, they are mostly improving upon things that you have in clojure.core, right? Yeah. The functions that we have. So why do you think,
00:50:43
Speaker
or maybe let me put it this way: when do you think clojure.core should change to these versions of these functions? Besides what I said about get-in, which can use reduce, only if better support for inlining is added. Because currently inlining is only predicated on the arity. If
00:51:11
Speaker
inlining could be predicated on the arguments themselves, then it should be considered. But not otherwise.
00:51:27
Speaker
So it's not something where you can just open a PR, and well, not a PR, obviously, send a patch to Clojure core. It's not a matter of, okay, let's improve this to a faster version. It's not just... You could, but you'd be adding inlining to other expressions where there wasn't any before, either. Yeah. And it's like,
00:51:56
Speaker
it's a really big patch, and that's not where the problem is 90% of the time, no, 99% of the time. The problems are usually where people are using flatten or satisfies? where they shouldn't. Not, oh my God, I'm churning get-in like crazy and that's eating all my CPU. No, the problem is not there.
00:52:25
Speaker
But if you're writing a library, for example, and you want to say, oh my God, my library is so competitive, it's faster than everything else under the sun, then that's the place to inject some steroids into your code or something. But not in general, I'd say.
00:52:43
Speaker
But I think one of the things I saw in the get-in case was, for instance, you could use, like, a thread-first instead of get-in. And that seems like quite an easy win and still quite idiomatic. Correct. But are you going to do that under the hood?
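Concretely, the thread-first alternative being discussed:

```clojure
;; With literal keyword paths, -> expands at macroexpansion time into
;; nested keyword lookups, so nothing iterates a key seq at runtime.
(def m {:a {:b {:c 42}}})

(get-in m [:a :b :c]) ;; => 42, walks the key vector at runtime
(-> m :a :b :c)       ;; => 42, expands to (:c (:b (:a m)))
```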
00:53:05
Speaker
Or, you know, to just add that to Clojure. Because if you just add that to Clojure, you add this change for all users. And suddenly... Oh, no, no, no. I'm just saying as a sort of heuristic, like you were talking about before. Oh, yeah. Exactly. And this is also why this was the first use case, because it's so apparent how you can do that and why it works, that it's like the
00:53:32
Speaker
case study to start with. But, you know, one thing to consider, especially when you're starting to look at esoteric stuff like performance: if you add inlining and code expansion to Clojure, suddenly methods are going to have bigger bytecode. And not only
00:53:55
Speaker
can those things, in some cases, be optimized less well than before, although given Clojure's performance profile, that's usually not the case. Some methods won't compile anymore, because you'll get a "method code too large" exception,
00:54:13
Speaker
which you only see in core.async sometimes, when you write, like, a huge, huge, huge go block, you might come across such a thing. But who knows, maybe, you know, if you start mixing all of those and someone had, like, a big function to start with, that function won't compile anymore. And you don't want that on your conscience.
00:54:39
Speaker
That is something I have never seen. No? I've seen it at least a couple of times. Take a form which is huge to begin with and try, for example, instrumenting it with CIDER's debugger, and you might get this exception. And it needs to be a really big function, so it's a pretty high bar.
00:55:07
Speaker
So basically it's Emacs that usually produces these problems. Emacs produces no problems, only solutions. I don't think I need to ask the question, but anyway. If you require assistance, please dial M-x doctor.
00:55:29
Speaker
So, Emacs or some other shit. So I use Emacs for the same reason I
00:55:41
Speaker
fell into programming: because I had nothing better to use. It was either that or an edit. I was like, wow, I can download packages which enhance my user experience. My God, I'm using Emacs. I couldn't install an IDE or something fancy like that, because no sudo permissions and also no CPU cores.
00:56:09
Speaker
It's like I had four gigabytes to work with and one core. So there is a precedent for you getting into making things faster, giving you the slowest possible computer so you can make everything else faster. No, relative to Vim, Emacs was a bit slow. Yeah. But the user experience was an improvement.
00:56:38
Speaker
Yeah, so you merely grew up in darkness; he was born into it. Nice. So I was born into, you know, adverse circumstances, and had to make do. So,
00:57:03
Speaker
this clj-fast thing right now, is it something that you use in most of the critical infrastructure that you have, or wherever you find the performance problems at your work? How is it being used right now?
00:57:21
Speaker
It's usually not, because that's not where the problem is. It's usually people not paying enough money to AWS, that is the problem. Yeah, or you need to tear out some database and replace it with another, and it's a huge refactor, and that's where the effort should go, and not into putting clj-fast in this little place or that little place.
00:57:48
Speaker
I can think of a few use cases, but not plenty. And you could probably offload half of that to transformers in Malli and call it a day. So you usually don't need clj-fast.
00:58:08
Speaker
Because, I like to say, a lot of it is down to other things, things external to Clojure, usually networking or databases or file systems or whatever. Yeah. It's like, oh, you're blocking. Change your application code to be non-blocking. Good luck. See you in two quarters.
00:58:31
Speaker
So that's how things usually go, right? And it's fun to run these experiments and to make a line go up and to get the biggest and most beautiful numbers. And okay, now I've done that.
00:58:49
Speaker
And then in industry, the considerations are different and the emphasis is on other things. But it's fun, and I learned a ton of things, and now I'm really good at finding the problem. But the problem is usually not in your get-in. Yeah. So we're getting out of get-in now. Exactly. But yeah, but that lets me, like,
00:59:19
Speaker
get back into the fun things or the beautiful things and the interesting things because, again, I didn't want it to be a heuristic process. So I went around reading about how you compile. What's the general problem? The general problem is compilation. So I started reading about how you compile functional languages. And then I find myself reading papers about compiling Haskell.
00:59:45
Speaker
And it's actually very interesting. And the Core Haskell compiler is pretty simple. And it's actually something that, if you gave me a month of quiet and intense concentration, I could probably port to run on tools.analyzer. And that would be interesting. And who knows what will come out of that. And there's no rule that says
01:00:15
Speaker
that you can't use, for example, a different compiler for Clojure, right? And the approach has always been: the core is small, the core makes sense, the core promises you stability. But if you want something else, there are libraries for that, community libraries or contrib libraries or whatever. And no one did that yet. But
01:00:43
Speaker
who is to say you can't use a community compiler? And there have been attempts to make, like, forks of Clojure, but they were ambitious in that they wanted to build on top of Clojure and to change Clojure. And I'm saying, let's not change Clojure. I want your Clojure code to stay written exactly the same. I want everything else to change around it.
01:01:08
Speaker
But then I find myself running up against the wall. I'm just not smart enough to understand those things and type theory. So you're building a compiler tower. That's what your goal is at this point. Yeah. So that's what I'm building towards, but it's kind of rickety right now. But I always find myself falling into some other rabbit hole.
01:01:38
Speaker
to also understand this problem better. Because, like I mentioned previously, if I write some form of partial evaluator, like something else with evaluation semantics, maybe that will just let me run what can be run. And that is inlining, essentially: this is the part that you do know; this is the part that you don't.
01:02:09
Speaker
It works great. And there are projects to do that. And there are flashes in the darkness, of people who started things or proofs of concept. Tim Baldridge made some initial port of vau, if you're familiar with it, which implements fexprs in Clojure, which are both macros and functions.
01:02:38
Speaker
But those run at runtime, and I want to do the opposite. I want something that will run only at compile time. And then there's, again, the eval implemented in miniKanren, or something similar. So that's how I keep myself entertained, by looking back into the beautiful, useless stuff, which is very complicated.
01:03:09
Speaker
And I wish, I both had more time and more brain power, more capabilities and capacities to even deal with that and understand it.
01:03:25
Speaker
The idea that you're floating, or that you mentioned, of Clojure the language and then a Clojure compiler, like different types of compilers for the same thing, is something Haskell did as well. There is a Haskell specification.
Exploring Compilation Approaches
01:03:39
Speaker
You have GHC, which is the kind of canonical Haskell compiler, and you have several other Haskell compilers available as well. C is another example; there are several different compilers.
01:03:50
Speaker
This is not really that far-fetched, right? Because, you know, Michiel built sci, which is not a compiler, but a small Clojure interpreter, which covers a kind of subset of the language. So it is similar to that, right? It's not really that blasphemous to say, okay, we're going to create a different compiler
01:04:09
Speaker
for Clojure. Maybe not a fork, maybe a complete rethinking of how the compiler should be built, with different principles. Because Clojure was built, like, 10 or 15 years ago; now there are different techniques available, and the JVM is changing as well. So there might be other things that we could add without breaking the existing one. So if you want stability, you stick with that one, while this new experiment continues. It's not that far-fetched, is it, right? Yeah,
01:04:38
Speaker
there's room for new ideas. And basically, the Clojure compiler has no optimization passes. And that's one of the promises of Clojure, right? It's simple and dynamic. And you need that, especially at development time. But if you're building a single artifact in the end, which is an application that runs somewhere, maybe you don't need that.
01:05:05
Speaker
Yeah. And there's room for those considerations. Yeah. Because I think it's something similar, maybe conceptually similar: you know, there used to be different JVMs as well. Well, there still are, I think. Because, you know, JRockit was supposed to be a super fast JVM compared to the other ones. And there are people innovating by changing the JVM, or the Java runtime, sorry,
01:05:33
Speaker
making it even faster, in different ways. So you could write Java, compile it, and then commercially, when you're running it in production or something, you switch to JRockit to squeeze out every bit of performance. And this isn't the only one, and this is happening today too. Yeah. The GraalVM JVM is a new JVM, which is developed by Oracle.
01:06:01
Speaker
And it has, like, a different compiler, and this compiler should allegedly give you better performance for languages like Clojure and Scala because it handles the dynamic code better. I think Michiel, Michiel Borkent, has actually shown that sci runs faster on GraalVM.
01:06:23
Speaker
Not for all cases, obviously. And that is with ahead-of-time compilation. But even if you use the just-in-time compiler at runtime, you do get better performance,
01:06:38
Speaker
especially for languages like Scala and Clojure, because it's new and it does new things which you can't necessarily backport to the old just-in-time compiler. It just has a different approach, and with a different approach come different results.
01:06:58
Speaker
So is this something that you have on your radar? Like, you're thinking, because you're going deep into how core is built and how it is being compiled. I might even say you're going back to the lower levels of abstraction that you were used to when you started. So is this something that you think is a reasonable goal for the community to rally around? Does it give
01:07:27
Speaker
enough, you know, return on investment, doing this? Because, as you said, we're not changing the language at all, because the language is the language. That's it. It's more of the, you know, how you compile the thing that changes. Yeah. It's possible. Like, I even managed to write about half of a beta-reduction pass on tools.analyzer.
01:07:56
Speaker
So it's completely possible and doable, right? The question is, should we? Yeah, I know it's possible, but is this what the community needs to rally behind right now and to invest resources in and stuff? I don't think so.
01:08:16
Speaker
it's cool. It can be a nice pet project and, you know, something to keep on the back burner and perhaps as a proof of concept and as a beginning of an implementation and to see if it's possible. And if it is, great. And if someone really needs it, then they will invest time in it. But I don't see
01:08:40
Speaker
a burning need for that yet. There are both other problems and other stories. My guess is that these kinds of things, and again, this is my guess, are not really useless. And you might find that, for example, you could come up with some tooling on the basis of these things that could help developers.
01:09:06
Speaker
Not necessarily something at runtime or actually compile time that really affects running code as such, but I can easily imagine having those kind of additional bits of data that you could inject via that extra compilation phase being something useful at development time actually.
01:09:26
Speaker
Even before that, one of the most interesting problems here is type inference. And clj-kondo, for example, does partial type inference. And you can even connect this with tools like Malli. But imagine if you could...
01:09:48
Speaker
Because now, for example, clj-kondo doesn't keep track of the types associated with values, like with keys inside a map. It knows that something returns an associative collection, but it doesn't keep track of the type of each value. But you could.
01:10:11
Speaker
So even without writing a compilation pass, even just a type inference pass could be incredibly useful. And this is something which is worth investing in. But it needs type theory on top of it. It's not trivial at all.
01:10:31
Speaker
No, no, no, no. But it just seems to me that a lot of these things where you kind of invest time and effort into understanding these phases of compilation of how the language is actually generated, these kinds of things will eventually pay you back. And like you say, you might find some bit of infrastructure that suddenly could help plug into linting tools or other kinds of performance tools or code formatting, not formatting, but code.
01:11:00
Speaker
management tools. You know, detecting that this bit of code over here is the same as that bit of code over there, and why have you got it in two places, these kinds of things. Yeah. Yeah. Although there are arguments for why you actually would want to duplicate code in some circumstances. But one of the coolest projects I have seen recently is someone wrote
01:11:30
Speaker
Hindley-Milner type inference on top of Malli. Yeah. And I saw that and it blew my mind. Because if there's, like, any work to throw weight behind, and involvement behind, things which could pay dividends, not in a month, but in a year,
01:11:54
Speaker
it's that, I'd say. That is something which, you know, for my money, and this got me excited. It's one of those things. Yeah, yeah. So what you're saying is that, you know, because there is a long dream, isn't there, of maybe putting, I mean, obviously spec is a kind of layer of types on top of the language, but you're saying you can do it through inference as well, and then you can actually inject real types.
01:12:21
Speaker
Yeah, but even with predicate types, you can do it. I mean, okay, you have a spec, but how do you propagate the data about your spec throughout your code, right? Because, okay, you have some type associated with a key, but three functions down the line, when you access that key, how do you know
01:12:46
Speaker
that you're going to get that type. So you can do it with a spec just as well as you can do it with another system, but propagating this type and now you go back to logical programming and unifying all this data across the code base and also finding where this unification failed and reporting it is like the hard and very interesting and
01:13:16
Speaker
very useful challenge. And we're probably moving in that direction and in the direction of more tools
Distributed Databases and Global Collaboration
01:13:27
Speaker
and things which run on top of the code. Because if we generalize, the idea of a compilation pass is just some pass that runs over our code. But everything is a pass that runs over our code: type inference and linting and
01:13:48
Speaker
compilation itself. We're just passing over our code and we're doing something to it. And how much can we do with that idea, and how far can we take it? And also, if you watched Gerald Sussman's talk from re:Clojure, where he talks about the idea of just overlaying more data on your code,
01:14:17
Speaker
and just another layer of data and another layer of data. And those things can perhaps come into play at runtime or just at compile time or both. And where the code represents more than one thing at a single instant. But it's like an architect's plan where you have different layers.
01:14:45
Speaker
Different views, different elevations. Yeah. So maybe we can start treating our code like that, or build tooling for the code which does that. And spec kind of does that, right? It's a layer of annotation which you can activate at runtime, and you can activate it at compile time.
01:15:10
Speaker
Or at spec time, you add another time. You add another time slice for the development work cycles.
01:15:19
Speaker
Yeah, I mean, like you said, this is one of these fundamentally annoying problems, because it's kind of like we're waiting on spec 2, where the tool-building exercise hasn't really started in earnest yet. So I think there's a lot of incipient work, or a lot of putative work, that could definitely be released on the back of that kind of solid foundation.
01:15:48
Speaker
Yeah, but there's also other tooling which we can build. And that's, like, radically different tooling. And one of the weird ideas I've been playing with, and the blog post I need to finish, at who knows how many words by now. One of the things, for example, which annoys me to no end with Git is that, well, Git is a graph, and that's nice and good,
01:16:16
Speaker
it saves diffs. And oh, great, it saves diffs, it's immutable, it's functional, it's beautiful. But those diffs are in lines of text. But if you treat your code as a graph or as a database,
01:16:36
Speaker
then you can talk about changes in your code as changes in the database. And we already have a plethora of databases for storing graph-like data and treating them as immutable things. And we even have today a language for saving the diffs, which is... What's the name of this project? editscript, I think,
01:17:06
Speaker
which reifies diffs in Clojure data as a sequence of instructions. So fuck it, let's throw all the code
01:17:20
Speaker
into Datomic, and our version control will just be the log, because that's all we need. But isn't that something that Rich Hickey mentioned with codeq, right? I mean, with the codeq project. When Datomic was announced, I think Rich Hickey was building this,
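The library being recalled here is most likely editscript; a hedged sketch of its API (assuming the `diff`/`patch` functions as documented):

```clojure
;; editscript represents the difference between two Clojure values
;; as plain data, and can apply it back with patch.
(require '[editscript.core :as e])

(def a {:users [{:id 1 :name "ray"}]})
(def b {:users [{:id 1 :name "ray"} {:id 2 :name "ben"}]})

(def d (e/diff a b)) ;; an edit script: a sequence of [path op value] steps
(e/patch a d)        ;; => b
```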
01:17:40
Speaker
storing code as codeqs, like smaller structural things, inside Datomic. It's basically replacing the whole... The idea is that you make it based on... No, no, codeq is this, like, didn't-really-happen kind of project. It was based around this idea of
01:18:00
Speaker
using functions or vars as the basic model for structuring everything and not lines of text or even repos in fact, to use the information about the whole project, which is where I think that I'm interested in that kind of space as well.
01:18:22
Speaker
Yeah. And, you know, maybe one day we'll make it back to where Doug Engelbart was.
01:18:35
Speaker
Or maybe one day we'll make it to the 80s, where Smalltalk was, and we'll finally have an image-based Clojure, where the code is saved in a database and everything is just data. Yeah, well, I love that idea.
Blockchain and Version Control Speculation
01:19:00
Speaker
But there are a ton of barriers to making that happen. How do you share that? How do you collaborate on that? Well, I mean, if you worked in the 90s with Visual SourceSafe, that's how you used to share it. You just share it. If you don't check in, nobody can check out. Or you just share it on SharePoint. You just copy the folder somewhere and then...
01:19:27
Speaker
My view of this is quite straightforward, Ben. You can have an image and you can share the image. There's no problem because if two people have the same runtime installed, you can do that. That's fine. Just like merging databases, if you like, or having distributed databases.
01:19:47
Speaker
That's definitely solvable. The biggest problem is, if people want to use Emacs rather than your image editor, then you have to have an externalized, you have to have a serialization format. And that's just great, isn't it? That's the standard file-based serialization format that is Clojure now. So we've already got EDN as a serialization format. So it's relatively straightforward. The problem, though, is you'd lose some of the metadata and some of the image information.
01:20:14
Speaker
No, lose might be a funny word, but you can store that somewhere else, or emit it somewhere else as a sort of configuration data. So I don't think it's insoluble. Theoretically, you could
01:20:29
Speaker
you could emit all of it, both in text files and even because then you can run the diffs on the text files and generate the git diffs yourself. So someone could theoretically just download the diff. You could expose it as git, right? Exactly, yeah.
01:20:57
Speaker
But that is still dead code or stateless code. But imagine if you will. Because for that, you won't only need the database. You would also need an editor. So you have a VM. You have an editor. You have a database which replaces your file system. So you have an operating system.
01:21:21
Speaker
you know, of a database with an editor. That's kind of normal. I mean, these IDEs basically have databases at the backend for indexing and that kind of stuff. That's how they do all this kind of static analysis. Running on a VM too. So the VM...
01:21:46
Speaker
It's like, you have a user interface in terms of the editor, you have the database instead of a file system, and it's epochal, so it's purely functional, and you have the VM, which replaces your operating system. And then all you need is a global namespace.
01:22:04
Speaker
And then you could have, because if, for example, you had a global DNS for everyone, like even if you refer to it through GitHub, reverse DNS, then I could directly clone this code, or even just read the code, because we're all talking the same language, the language of the database. So if we're all talking the same language, I could just query your database for this. Absolutely. Why not? Yeah.
01:22:34
Speaker
Yeah, so that's like the farthest you can take, this idea.
01:22:43
Speaker
Isn't this something similar to, maybe, I haven't gotten into it fully yet, but isn't this something that Rúnar and, the other guy, Paul Chiusano, from the Scala community are building, the Unison language? Because that seems to have similar kinds of ideas around it, right? Content-based addressing for functions. Yeah, yeah, exactly.
01:23:06
Speaker
Yes, but that's only for your code structure. What you talked about is recognizing, for example, that two pieces of code are the same, so you just store them once. But that's, like, an optimization. Whereas this closes the gap between personal development, personal computing on this platform, and collaboration. Yes, it's like what you're
01:23:35
Speaker
lacking here is a way, not just a protocol, for communication. And by the way, we can all just transmit EDN to one another like civilized individuals. So we all communicate data back and forth, and we all exist in one unified namespace, which we can just use DNS for. And
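"Transmitting EDN like civilized individuals" is already supported by the standard library; a small round-trip sketch (`my.ns/render` is a made-up symbol):

```clojure
;; clojure.edn round-trips plain data, including tagged literals like #inst.
(require '[clojure.edn :as edn])

(def msg {:op 'my.ns/render :args [1 2 3] :at #inst "2021-12-01"})

(= msg (edn/read-string (pr-str msg))) ;; => true
```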
01:24:04
Speaker
then you can even think about it as one distributed database where all the data is namespaced. And technically, if you really wanted to get technical, that's also a blockchain.
01:24:24
Speaker
But it's people-focused and not, like, money- or trade-focused or whatever. Because it's still, like, cryptographically verified, because that's how you communicate, via HTTPS, and it's all kept track of, because all the communication will probably be written to your database. But,
01:24:54
Speaker
Yeah, and you can have a Merkle tree for the Git equivalent, to reduce your diffs, et cetera. But the thing that stands at the center of this system is the individual user, and not some distributed transaction log. Yeah. Yeah, yeah, yeah. So this is like an emergent property, or an emergent structure, you can find in this.
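The Merkle-tree idea mentioned here — hash the leaves, then combine hashes pairwise level by level, so a sync can skip any subtree whose hash matches — can be sketched as a toy using Clojure's built-in `hash` (a real system would use a cryptographic hash):

```clojure
(defn merkle-root
  "Fold a sequence of data blocks into a single root hash by hashing
  the leaves and then hashing adjacent pairs level by level. Equal
  inputs yield equal roots, so a diff/sync can compare roots first
  and descend only into subtrees whose hashes differ."
  [blocks]
  (let [h #(hash (pr-str %))]
    (loop [level (mapv h blocks)]
      (if (<= (count level) 1)
        (first level)
        (recur (mapv h (partition-all 2 level)))))))

;; Identical content, identical root — nothing to transfer:
(= (merkle-root ["a" "b" "c" "d"])
   (merkle-root ["a" "b" "c" "d"]))
;; => true
```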
01:25:23
Speaker
But it functions very, very differently than, let's say, existing blockchains, because it doesn't care about trade at all. Trade isn't even a function here. Yeah, yeah. It's just the way it happens. Sharing and globally distributed database, that's it. That's all it is. Exactly.
01:25:41
Speaker
Well, the main reason why blockchains have these characteristics of trading, et cetera, is just because they want to use tokens to allocate resources in a fair way. Because if you've got a distributed environment, you need some mechanism to make sure there's fair play in that environment.
01:26:06
Speaker
You have to think about that. I'm not saying that blockchain is the answer, or that cryptocurrency is the answer, but there is a reason why they exist.
01:26:15
Speaker
Yeah, I'm not saying that you don't need them. You need some form of- What do you mean? Making sure that people are getting rewarded for participating in it. Well, everything costs money to host or to manage. At some point, if you've got a globally distributed environment, someone's got to host that thing. And so costs have to be paid.
01:26:38
Speaker
Or even look at it backwards. You need to throttle the process in some way because, for example, you don't want someone to DDoS you. If you remove the so-called financial aspect of it, I mean, that's basically CPAN, that's basically PyPI, that's basically npm and Maven. Well, but npm and Maven have financial aspects.
01:27:04
Speaker
Someone's paying for it. Yeah, totally. That's what I'm saying. You can't remove the financial aspects. No, no, no. What I meant is that 99% of the people who are using it, participating in it, pushing to it,
01:27:18
Speaker
are not getting rewarded, or not thinking about it. That's what I mean. So if there are enough minds, enough people who can host this, who can build and then support this — because there is no frenzy of "I'm going to mine and get rich quick", that sort of shit is eliminated. Then — since the days of CPAN, which is back in the day, like 20, 30 years ago, since Perl started, we do have that distributed
01:27:47
Speaker
even, you know, SourceForge, and multiple mirrors of Apache software. They're not distributed, that's the problem. They're all central archives. I totally agree. But there is a precedent for removing the financial thing and still, you know, it's totally
01:28:06
Speaker
It's good for the programming community on the whole, right? That's the reason why it is being supported and there are companies behind it. They're not removing the financial thing, Vijay. It's just being hidden underneath some centralized funding mechanism. Of course. So we could use that model, but the technological component can be changed to what we're discussing.
01:28:29
Speaker
Sure, I totally agree with that. And let me throw another wrench in this, because you're still relying on something which is centralized for this to work, which is DNS. Yes, yes. And if you really want to buy addresses — and you're not buying from a reseller — there are like six organizations in the world which you can buy them from. And if they decide you're not buying one, you're not buying one.
01:28:57
Speaker
Yeah, I mean, DNS is a mafia anyway. It's a cartel. There are blockchain-based DNSes. Yeah, and you can all add me on GNUnet.
01:29:14
Speaker
Well, you know, I mean, I think we shouldn't mock these things. You know, people are doing — you know, the open source world and the Linux world, they are doing lots of interesting
Distributed Systems and Code Sharing
01:29:23
Speaker
No, my heart goes out to you. Peer-to-peer systems, IoT stuff, you know, there's a whole bunch of these things. So, you know, there is
01:29:33
Speaker
some potential for that kind of stuff. But I think the main point I'm trying to make anyway is that in order to get it to be distributed, even if you have to have some centralized addressing, which wouldn't be too bad, but then you still have to have this concept of community. Now, whether you do that community in terms of everyone just basically
01:29:56
Speaker
having a kind of social contract, let's say, where there are enough people hosting or providing compute that we don't need to pay for it — which would be, you know, the perfect situation. But it's rare for humans to collaborate at such a scale, you know, that we might dream of. Although there are only like 4,000 Clojure programmers that would ever do it. So maybe you could put it in the terms and conditions. No, it's only 1,000 programmers
01:30:24
Speaker
who are googling for Clojure. There are plenty of people who are experts at Clojure who are not googling for shit. First get them to agree on a dependency injection framework, because we are at about four right now. Let's say we have Component, and then, as if that wasn't fun enough, we have Mount, and then we had Integrant, and then we have Clip. And are there more? I accidentally made one. I made one by accident,
01:30:49
Speaker
for — which was specifically for, like, streaming dataflow applications. So I made one by accident too. But yeah, so we have four and a half. Four and a half, going once, going twice. Yeah, we're gonna run out of time here. This is so—
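For readers unfamiliar with the libraries being counted off here (Component, Mount, Integrant, Clip): they all solve the same problem of wiring up stateful components in dependency order. A toy sketch of that shared idea — this is an illustration of the pattern, not any of those libraries' actual APIs:

```clojure
;; Toy illustration of what these system/lifecycle libraries do:
;; describe the system as data, then start each component in
;; dependency order, passing its already-started dependencies in.
(def system-config
  {:db      {:deps []
             :start (fn [_] {:conn "jdbc:placeholder"})}  ;; stand-in, not a real connection
   :handler {:deps [:db]
             :start (fn [{:keys [db]}]
                      (fn [_request] {:status 200 :db db}))}})

(defn start-system
  "Start components in the given order, injecting started deps."
  [config order]
  (reduce (fn [started k]
            (let [{:keys [deps start]} (config k)]
              (assoc started k (start (select-keys started deps)))))
          {}
          order))

(def system (start-system system-config [:db :handler]))
;; :handler received the started :db without constructing it itself.
```

The real libraries differ mainly in how the configuration is expressed (records, vars, data maps) and in how they compute that start order automatically instead of taking it as an argument.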
01:31:17
Speaker
But this has been quite a journey, starting from Indian mythology and ending up in blockchain. There's probably a rule about that, isn't there? Maybe this will give you the future episodes: you know, Indian mythology, blockchain, and Emacs — all three things have to be in there.
01:31:36
Speaker
We should have our own version of the law that every online discussion results in Nazis — every podcast episode. And now we can tie it back with "I am become Blockchain, the destroyer of the environment."
01:31:53
Speaker
That's exactly what is written in the Bhagavad Gita. I'm the creator. Everything is just me. You're just witnessing this shit. Yeah, so don't feel sad.
01:32:11
Speaker
Don't feel distressed because you have to go to war and kill a bunch of people who happen to be your cousins. Just eliminate the entire family tree. Nothing is on you. So don't feel bad about mining cryptocurrencies on your aunt's computer. She doesn't need a CPU.
01:32:41
Speaker
Anyway. Crime and Punishment all over again here. On that great piece of financial advice, I think we should come to some sort of a conclusion.
Future of clj-fast and Conclusion
01:32:57
Speaker
And you can find all the awesome stuff that Ben is working on on his GitHub — we'll link that. And the only thing, if you
01:33:09
Speaker
zoned out on this one-and-a-half-hour episode, the only thing that you need to remember is: go and replace every core function with a clj-fast function. That's what Ben said. That's exactly what I said. I think maybe you skipped one word there. Yeah, maybe I'm paraphrasing a little bit. "Urgently", I think, was the word. Yeah. Yes, immediately. Do it fast. Do it now. Exactly.
01:33:39
Speaker
Hey, it's been a pleasure, Ben, talking to you. And I apologize for the delay in showing up. It happens. But it's been super fun. I'd love to see all the ideas that you're putting into clj-fast being picked up. The whole compiler idea is super interesting to me as well, as we discussed.
01:33:59
Speaker
Hopefully, I think we'll see more and more of it. And we end up paying less to Bezos. I hope so. I think we can all applaud that one at the end. Yeah, that's good. So you're doing God's work — trying to reduce his net worth, a small dent in his net worth, by making Clojure faster when running on AWS. I try when possible.
01:34:29
Speaker
Yeah, yeah. Thank you. Cool. That's it from us, and we'll be back soon with more defn. Stay tuned. Bye bye. Bye. Thank you, guys. Good night.
01:34:48
Speaker
Thank you for listening to this episode of defn. The awesome vegetarian music on the track is Melon Hamburger by Pizzeri, and the show's audio is mixed by Wouter Dullert. I'm pretty sure I butchered his name — maybe you should insert your own name here, Dullert.
01:35:35
Speaker
and see you in the next episode.