#4 - Immutable Persistent Collections

defn
74 plays · 9 years ago
Overview of Persistent Collections
- Intro | Follow-up | News
  - Discussion on community relations
- Immutable Persistent Collections
  - List vs Vector
    - Linked list vs tree implementation
  - Map vs Set
    - Key can be anything
    - KV | Unique KV
    - Relational operations on sets are outside of core
- Seq library
  - ISeq (first, rest, cons)
  - Interop with Java Iterable
  - Functions are written to work against the seq interface
  - Seq in, seq out
- Immutability and Persistence
  - What is it?
  - Why is it important?
  - Implementation
- Lazy collections
  - What does it mean to be lazy?
  - What does it mean to hold on to the head?
- Eager operations
  - Sometimes you need side effects so you cannot be lazy
  - doall, doseq
- Persistent vs. Transient
  - Performance
- Specter - Nathan Marz
  - Ensure output format of collection operations is controlled
  - Editing operations
- Community contributed collections - shout out to
  - Chris Houser - data.zip
  - Michał Marczyk - Ctries and AVL
  - Mark Engelberg - Priority maps
  - Peter Schuck - Lean Hash Maps

See the podcast web site http://defn.audio for links
Transcript

Introduction and Podcast Updates

00:00:28
Speaker
Welcome to defn, episode 4. This is Vijay from Holland. Hi, yeah, this is Ray from Belgium. Hello Ray, how are you? Pretty good, pretty good. And enjoying this week's wonders of Brexit. But we're staying in Belgium, so I think it's going to be okay.
00:00:49
Speaker
So we'll still continue doing defn. Yeah, we can do it anywhere. Of course, we're online. It's been a pretty strange week for Britain. But anyway, let's get back to our non-political Clojure podcast.
00:01:07
Speaker
So this week we are going to talk about persistent collections, but before we get into the details we would like to do a quick follow-up. First of all we'd like to thank Mike Fikes, who joined us in the last episode to explain the REPLs that he is building, Planck and Replete etc.
00:01:27
Speaker
And we saw that Planck is trending again and there has been a lot of activity, or at least interest, in Planck on GitHub. And we'd certainly like to take the credit for advertising it. Yeah. I mean, Mike was writing great code before and great code after, but obviously the podcast is what did it.
00:01:48
Speaker
Yeah, and in case people didn't notice, then we just need to tell people that Planck is now running on Linux as well, at least the beta of it, and also on Windows. So this is one of the questions that we asked Mike, and in a week you see this happening.

Audience Interaction and Feedback

00:02:05
Speaker
Yeah, that was good, yeah.
00:02:07
Speaker
Yeah. So people who are not on Mac, you can also get Planck up and running pretty quickly, especially on Linux as well. So news to rejoice. And continuing with the same theme, we'll have other guests in the future. So we are very excited to probably have some other interesting guests. We don't want to announce before we get a confirmation, but we want to have different people who are involved in the community join us in the podcast and explain
00:02:36
Speaker
what they're working on, and that'll be a nice thing to do. And if you are one of them, we'd love to hear from you; you can hit us up on Slack or Twitter or all the regular Clojure channels.
00:02:49
Speaker
Yeah, I mean, we're awesome together, but a little bit better with other people as well. Exactly. And one thing that I'd like to point out is that we're already averaging almost 1000 listeners, or at least 1000 plays, on SoundCloud. So it's pretty awesome. I think we're like the second biggest Clojure podcast these days. Definitely, definitely the second biggest. Right after the Cognicast. I haven't seen the Cognicast's numbers though.
00:03:21
Speaker
I don't know. Can they really compete with that number? I don't know. I think, like you were saying before though, it's actually a bit scary, to be honest. It's certainly very humbling, because I never thought that we'd get anywhere near that kind of number, especially on a

Correcting Errors and Listener Engagement

00:03:40
Speaker
regular basis. It's amazing.
00:03:41
Speaker
Okay, three episodes is not that many so far, I guess, so maybe it'll all just tail off after the first 10, I don't know. Yeah, we never know, but we can keep people... Let's stay positive, let's stay positive. Exactly. But the nicest thing about having listeners is that they give us feedback, and we did get a couple of interesting pieces of feedback on the previous episode, one of which was from
00:04:08
Speaker
Jason Gilman. Yes. So in the last episode we incorrectly said that Atom's REPL is called Atom Ink, but he corrected us: it's actually called Proto REPL. The plugin that is used in Atom is essentially Atom Ink, and it probably has a smaller role compared to the REPL. Sorry for that, but thank you Jason for listening to the podcast and pointing that out.
00:04:35
Speaker
I'll take my 50 lashes, Jason, I'm very apologetic about that, yeah. We can all sleep a bit better now, I think, and that's nailed that one, yeah. Also, just by the way, there was not really follow-up as such, but there was a little discussion on the Twitters about whether Ramsey Nasser had done some work around the ClojureScript REPL. There was some guy asking about that because he'd heard on our podcast
00:05:02
Speaker
that Ramsey was doing some things via Mike's work, but Ramsey said, oh no, I just ported it, I just ported it to Node, you know, spent the five minutes and just ported it over. But that's got a lot of portability and obviously he's done great work there.
00:05:20
Speaker
Yeah, it's really awesome to hear all this feedback and I'm very happy that people are listening to it and telling us. And this is something, as I was pointing out to you before the show: we have 900 people, or at least I hope it's 900 people, and not just one person listening to it 900 times. They're really trolling us, that guy's really trolling us. Exactly.
00:05:43
Speaker
But I love that we're getting feedback from people, and this podcast is not only about telling people things but also a learning experience for me, and I'm pretty sure it's the same for you as well. Absolutely. We're like little babies in this pond. Exactly. We're getting our feet wet part-time. There are some people doing this full-time; obviously they've become core committers and everything.
00:06:08
Speaker
So we're very much, we're trying to educate, but also we definitely want to learn. I mean, there's no doubt about that. We're very open to feedback and very open to learning from the skilled people that are contributing a lot. We definitely don't want to put ourselves up on a pedestal, that's for sure.
00:06:29
Speaker
Exactly, but thanks a lot for your feedback, guys and girls, and we'd love to hear more if you have anything interesting, or if you heard anything wrong. We're also getting some positive feedback as well, and that gives us some energy to continue doing

Community Meet-up and Discussions

00:06:48
Speaker
this one.
00:06:48
Speaker
But we want to move to the next section, or whatever we want to call it. We've got sections now, yeah. Yeah, so we're getting professional now. We'll get back to it pretty quickly, I think. Sorry, I heard that you were doing a presentation in Belgium, right?
00:07:13
Speaker
Well, we're going to do a little meet-up this coming week in Brussels about live coding. So it's more of a kind of hack night, hopefully. We're going to get people set up.
00:07:29
Speaker
a reasonable, you know, it's a fairly small audience, but a very highly curated audience, let's say. We're going to come out on a Tuesday in Brussels and we're going to do a little bit of live coding, and actually we're going to do it with Boot and Hoplon, because we think it's a bit underrepresented and it looks very interesting. I think it's going to be a fun evening, you know, and
00:07:59
Speaker
it would be very interesting to get a little bit of feedback from guys and girls and we do actually get a mix of people so I tend to say guys by the way and I mean guys and girls I have to learn a bit because it's a
00:08:16
Speaker
I like "folks" for some reason. Yeah, I think that's much better. Yeah, I'm not a fan of folks actually. Folks sounds like 4Q, you know, but I don't know. I don't know. We're living in a difficult world. Guys and girls and ladies and gentlemen and all people are welcome, for sure. So yeah, we're doing this thing on Tuesday.
00:08:42
Speaker
But yeah, I think if anybody's in... Sorry, go on. Yeah, go on, Vijay. Yeah. So a shout out to anybody who is in or near Brussels: I think they should join and enjoy the company of Ray and the other people who are making this awesome, I suppose. So... You suppose. Yeah.
00:09:01
Speaker
Supposedly, you know, it's Belgium. If you have read The Hitchhiker's Guide to the Galaxy: it's Belgium, man, Belgium. That's awesome. I think it's one of the reasons why Brexit is happening, because of Belgium. I'll try and make it a bit less painful on Tuesday anyway. I'm sure it's a pretty awesome thing. One of the things in terms of, like,
00:09:22
Speaker
pain. I did want to have a little bit of a shout out about this disturbance in the Force that happened a couple of weeks ago, where this Ashton Kemerling guy talked about various complaints and had a whole bunch of issues that he thought were beyond the pale, and, you know, it's a terrible situation. I don't know, what did you think about that whole thing, Vijay?
00:09:49
Speaker
Yeah, I mean, I think people should express themselves. That's pretty fine. I mean, obviously, we can't speak for the people without knowing them in person. There might be a lot of frustrations behind the blog post. I appreciate the blog post, but I think the tone was a bit too harsh, I think.
00:10:07
Speaker
But in my opinion, I can say that for both sides. Well, when I say both sides: he was complaining about the Clojure core committers not responding to the tickets or something. I don't want to go into individual tickets, because Alex Miller did a great job of explaining the issues behind each of these, item by item; he gave the response. So I think he did a pretty good job.
00:10:33
Speaker
But at the same time, from the side of Cognitect slash the Clojure core guys, it could have been a bit more diplomatic. It could have been more like a positive leadership sort of thing. Yeah. That's my personal opinion. I mean, again, you know, I don't know Rich Hickey in person. Obviously, you know, everybody is doing their thing,
00:10:56
Speaker
especially when you're limited in communication on the internet, on Twitter or those things, but in my opinion you should be more careful in terms of what you're going to say. So I was a bit surprised.
00:11:11
Speaker
I think what happens is that people get frustrated and they hear some of these negative comments come out. But I think, honestly, as the community grows a little bit, you're going to get these odd people, well not the odd people, I mean, I think Ashton has got a voice in the community for sure, you know, and he used it. He's entitled to do that, as you say. He's allowed to complain. People are allowed to complain.
00:11:39
Speaker
I think how we respond to that is important. And like you say, Alex did a great job of responding very patiently and very calmly. I wasn't very happy about the response of Rich and Stu about it to be honest. I thought they could have done a more positive leadership role.
00:11:59
Speaker
But, you know, that's their choice, of course. But I thought it was a little bit snarky, which I don't think we need that. I understand that it's frustrating when people complain. And actually, one of the things that Alex said in his responses was that a lot of the things that are coming up
00:12:21
Speaker
through things like spec etc. are actually helping to fix a lot of the concerns that Ashton raised, so I felt like it would have been a bit nicer just to have put a positive spin on that and to say, hey, you know, look,
00:12:37
Speaker
Although, you know, you might think we're wasting our time here, there's a lot of good coming out of this stuff. Yeah, that's true. Because the community is doing great things as well. It's not just the Clojure core. I mean, there are lots of libraries written by the community, you know, coming from Ring all the way to Planck, for example, all these things that we're using day in and day out. Of course, I didn't contribute a lot yet. But
00:13:02
Speaker
I can imagine. We are standing on the shoulders of giants, so to speak. Everybody is doing their part, but I think you see this in every community; they go through this. There was a model like this: forming, storming, norming, performing, you know, in terms of the community. So every community will have their own left-pad situation.
00:13:28
Speaker
The funny thing about it was I saw this thing from Rob Pike; there was a similar kind of thing very recently, even just a week or two ago, with Rob Pike saying, you people that don't know, I don't know, floating point operations or something, if you don't know floating point or some other things, then you're not entitled to program anything. And I was like, oh my God. And apparently he then kind of recanted on this and said, oh well,
00:13:57
Speaker
it came across as negative, and what I really meant to say was, you know, you can always learn more, and foundation stuff helps, and honestly if you do the foundation stuff you'll be better. But I think this is the sort of thing that we can learn from as a community, basically: yeah, we get frustrations, I understand that, but I think what would be nice would be to start looking at

Transparency and Project Management in Clojure

00:14:21
Speaker
I know Alex did something a while ago, when he kind of joined Cognitect, because he's the kind of community outreach guy, basically, isn't he? Yeah, yeah. And I think he's doing a great job in general. But they started to do this thing about, you know, one of the complaints, I think, or one of the issues is: how does Clojure work? And how does the community contribute? And there's some stuff around the documentation, but I checked it again, and it's a little bit untended. It's a little bit kind of, you know,
00:14:49
Speaker
not very well maintained. It's not part of this kind of refresh that Cognitect had recently on their website, and they don't talk about it very much. I don't know what you think; I think to make that a bit more transparent would be quite good.
00:15:04
Speaker
Yeah, there has been a lot of discussion online about the way the project is run. I mean, I don't have any complaints about the way it is run, because I know the amount of effort needed from multiple people to keep a language and the core library up and running. But at least I sympathize with the notion that we need to come to terms with how the project is run, because people say, hey, this is not real open source, this is real open source, or whatever.
00:15:34
Speaker
It is Rich Hickey's project and it is Cognitect's project, and that makes sense, and we need to accept that. It's not a completely community-driven sort of thing yet. Even in community-driven projects, you need a benevolent dictator. That's one of the things that
00:15:52
Speaker
I think is making Clojure great, because there is a lot of thought going into how to introduce new features or how to triage things. But at the same time, it would be much more interesting if the Clojure leadership, the core guys, were much more transparent or explicit about these things. So you were suggesting that there should be a talk about this at one of the conferences, right?
00:16:18
Speaker
Well, that's the sort of thing that I think would be potentially a good olive branch to the community: to say, you know, look, this is how things are organized. Maybe just do a bit more work on these kinds of community-oriented things and explain a bit about how it all works. Because, I mean, you say it's like Rich Hickey's show, and he started it, obviously, but there's more people involved now. Obviously, Cognitect,
00:16:47
Speaker
there's a group of core developers there, a core team, and has been for a long time, so it's definitely not just Rich Hickey's show. He's the editor-in-chief, that's for sure. Things have to get past him, and it's great to have someone with his taste and his experience and all that kind of stuff
00:17:04
Speaker
being the editor-in-chief of the language. I think you need that, and you need a direction, and I'm all for that, and I think in the end we can see that we've got a really great environment there. But I think we could just do a little bit more explanation, more transparency, about that, and I think a conference talk from one of those guys would be really interesting.

Persistent Data Structures in Functional Programming

00:17:29
Speaker
And of course, we're going to EuroClojure in October, and Cognitect are hosting it. So I think as a suggestion, a small suggestion, that would be mine. If they don't do it, I'm not going to cry. I'm not going to write a blog post about it. But if they could do that, that would be really nice.
00:17:52
Speaker
Yeah, I think it's like a community outreach, quote-unquote, community outreach talk or something. That'll be nice. But anyway, I don't want to dwell too much on the news. Let's get to the meat of the show, or, I don't know if it is called the meat of the show, probably. Well, I'm a vegetarian. I don't know about you, Vijay. Well, I'm a vegetarian too, so to speak.
00:18:17
Speaker
So no meat on this show. This is a show without meat. OK, so this is the best vegetarian Clojure podcast. Yes, yes. OK, now we are the best in some category. Excellent, excellent. So let's get to the topic of the week, or the topic of the biweekly podcast. We want to talk about immutable persistent data structures.
00:18:42
Speaker
First, let us see what kinds of collections are available in Clojure, because the bread and butter, or the metaphorical bread and butter, of all these functional programs is basically lists. So, well, list processing, you know. It's where we are, you know. Exactly.
00:18:59
Speaker
So obviously, when you're processing lists, there are, you can say, actually three ways of storing the data, right? You have a data structure optimized for putting data at the front of the list, I'm still calling it a list, and then you need another data structure that is optimized for appending at the end. So yeah, append-optimized.
00:19:23
Speaker
And obviously you want a data structure that is efficient when putting data in at the end and then taking data from the front. So that's basically a queue. So you have three different ways of looking at the data structures that are list-based. Clojure has a list and it has a vector; those are the two important things. And I think everybody who has read a bit about data structures knows what a linked list is: every node has a link to the next node.
00:19:53
Speaker
So a list is essentially a linked list, and a vector is a tree implementation; you can see a vector as indexed elements. So you have really fast lookups and you can go to an item pretty quickly. So those are the kind of workhorses, so to speak.
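The three shapes described here can be seen at a REPL; a minimal sketch (the queue lives in `clojure.lang.PersistentQueue`, which has no literal syntax):

```clojure
;; conj adds wherever the collection is efficient:
;; at the front for lists, at the end for vectors.
(conj '(1 2 3) 0)   ;; => (0 1 2 3)
(conj [1 2 3] 4)    ;; => [1 2 3 4]

;; vectors are indexed, so positional lookup is fast
(nth [:a :b :c] 2)  ;; => :c

;; a queue covers the third case: add at the back, take from the front
(def q (into clojure.lang.PersistentQueue/EMPTY [1 2 3]))
(peek q)            ;; => 1
(seq (pop q))       ;; => (2 3)
```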
00:20:15
Speaker
Yeah, I think what I like about these kinds of things is that when they were introduced, when Rich introduced all these, they kind of explained what the big-O notation was around each of these things you were talking about. So they explained, oh, you know, if you want to access the nth record or append something to the end, these are the kinds of performance characteristics you can get from that.
00:20:41
Speaker
And that was, I don't know about you, but that was kind of the first time that I'd seen that in a language. I knew big-O notation before, obviously, well, not obviously, but I knew big-O notation.
00:20:57
Speaker
I didn't see it very commonly on data structures. So I think that's one of the nice things about Clojure for me, actually: they're very open about that. They're very open about how the data structures have been designed, exactly to give people a lead on the kinds of data structures they should be using and when they should be using them. And that's a great example, actually, to me, of very simple but very powerful documentation.
00:21:27
Speaker
Yeah, of course, because this is like a one-dimensional way of storing the data, not one-dimensional, but essentially a list is: you have a bunch of things and you want to store them. But you also need to store other stuff; especially in Clojure, you have maps and sets. And yes, so a map is a map.
00:21:52
Speaker
It's pretty tricky to explain, right? Okay, you have a key and a value, so you want an associative data structure. Yeah, that's what I was looking for. Well, they call them dictionaries in other languages, don't they? Yeah, I think in Python they're called dictionaries. I think in Ruby as well. Yeah, true.
00:22:06
Speaker
So it's basically an associative data structure. You have a key and then you have a value, and you can have anything you want. Oh, that's another interesting thing in Clojure, for people who are used to static typing: in a list, you can store anything. It's not a list of ints, it's not a list of something. It's basically heterogeneous, all the data structures, Clojure being a dynamic language.
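For example, nothing stops one collection from mixing types; a small REPL sketch:

```clojure
;; a heterogeneous vector: string, number, keyword, map, vector
(def mixed ["hello" 42 :k {:name "Ada"} [1 2]])

(count mixed)  ;; => 5
(nth mixed 3)  ;; => {:name "Ada"}

;; maps are just as relaxed about key and value types
{:a 1, "b" 2, [3 4] :vector-key}
```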
00:22:28
Speaker
In a vector you can store a string and, I don't know, a person record or anything you want. Yeah, that's actually one of the amazing things, isn't it? You have all these generic things, but you don't have to declare them as Any or anything like that. You can be homogeneous or heterogeneous as you see fit.
00:22:49
Speaker
Yeah, exactly. And for the maps, well, the key can be any unique thing, obviously. And all these things are backed by Java interfaces, so if you want to build your own version, there are interfaces behind all these data structures that we have. And then there is the set, which is basically a mathematical set: only the unique elements will be kept.
00:23:17
Speaker
I mean, these things are pretty difficult to explain, right? How do you define a set? A collection of elements where everything is unique, right? Maybe that's it. Yeah, I think so. I think, like you said, it's a mathematical set. I think people will get that. Yeah, I'm pretty sure.
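At the REPL that uniqueness is easy to demonstrate; a minimal sketch:

```clojure
(def s #{:a :b})

(conj s :a)          ;; still #{:a :b} — the duplicate is dropped
(conj s :c)          ;; adds the new element
(contains? s :b)     ;; => true

;; the same idea for map keys: re-adding a key replaces its value
(assoc {:a 1} :a 2)  ;; => {:a 2}
```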
00:23:34
Speaker
The main thing is that you can put the key in as many times as you want, or the key-value pair, and if they match then they will just get dropped, or, you know, they'll be put into the set, so that's really good. The other thing I find as well is that with keys and sets,
00:23:52
Speaker
both the keys and the values can be keywords, and that's also a bit funky, isn't it? It's fine to have keywords as keys, but you can also have them as values as well, so that's kind of funky. I mean, I think the whole keyword concept is a bit Lispy, isn't it? I don't see that in other languages.
00:24:20
Speaker
Yeah, that's true. And also the nicest part in Clojure is that the maps are functions of their keys. So you can just use keys as a function. That makes your code pretty clean, because you don't need to say get everywhere, get this key out. You can just say the key and then the map, and then you get the value out.
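A quick sketch of maps (and keywords) acting as lookup functions:

```clojure
(def person {:name "Rich" :lang :clojure})

(person :name)      ;; => "Rich"    the map is a function of its keys
(:name person)      ;; => "Rich"    keywords look themselves up
(get person :name)  ;; => "Rich"    the more verbose equivalent
(:lang person)      ;; => :clojure  keywords work as values too
```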
00:24:36
Speaker
So that's a nice touch, by the way, if you want to write clean code. Yeah, that's a very good point. And of course, sets have all these relational operations as well. And they're deliberately outside of
00:24:54
Speaker
the core namespace. And that's one of his complaints, actually, from the earlier conversation. We don't want to get into the details of that, but they're very definitely kind of outside of the core namespace. But for the maps, from the implementation point of view, I think they extend the sets, because a map's property is that the combination of key and value is unique.
00:25:24
Speaker
And a set is essentially the same: every element is unique. So if you treat an element as a key-value pair, a map is essentially a set. So I think in Clojure, a map is IPersistentMap, I think, I for the interface, and then that extends the persistent set.
00:25:46
Speaker
Well, these things, I think people can look it up easily and then they'll get it, but mathematically speaking a map essentially feels and behaves like a set, because if you squint and treat a key and value as a single element, then it's basically a set of key-value pairs.
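The relational operations mentioned above live in the separate clojure.set namespace rather than clojure.core; a minimal sketch:

```clojure
(require '[clojure.set :as set])

(set/union        #{1 2} #{2 3})  ;; => #{1 2 3}
(set/intersection #{1 2} #{2 3})  ;; => #{2}
(set/difference   #{1 2} #{2 3})  ;; => #{1}

;; even a relational join over sets of maps, matching on shared keys
(set/join #{{:id 1 :name "ray"}}
          #{{:id 1 :country "BE"}})
;; => #{{:id 1 :name "ray" :country "BE"}}
```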
00:26:06
Speaker
But the interesting point in Clojure is that for all of these data structures there is an underlying abstraction called a sequence, right? Yeah, that's really a foundational thing, isn't it? Yeah.
00:26:24
Speaker
Yeah, so that gives you... you can get a sequence out of any of these data structures and then perform the operations that Lisp people are more familiar with, basically first and rest and next and cons, and
00:26:39
Speaker
those kinds of functions you get for any persistent collection. So that is one of the interesting features of Clojure: you have the sequence library, and you have the same functions working on any persistent collection that is implementing the seq interface, or not even implementing it but working with a similar structure; then you can use first, next,
00:27:04
Speaker
and cons and rest on the same data structure. I think this is what you corrected me on in one of those episodes, when I said the 100-functions-one-data-structure thingy. You bring it up again. Yeah, of course.
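The same handful of seq functions really does work across every collection type; a small sketch:

```clojure
(first [1 2 3])   ;; => 1
(first '(1 2 3))  ;; => 1
(first {:a 1})    ;; => [:a 1]  a map seqs as key-value entries

(rest [1 2 3])    ;; => (2 3)
(cons 0 [1 2 3])  ;; => (0 1 2 3)
(next [])         ;; => nil
```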
00:27:19
Speaker
Hey, I'm wrong, so I need to tell people I'm wrong. Yeah, well, you know, I don't mean to be harsh on you. It's like the internet thing, right? You just say "quoted by Einstein", I don't know, nobody knows. That's true. But to your point, though, this is one of the weaknesses of earlier Lisps, wasn't it: people tended to write functions against their specific data structures.
00:27:46
Speaker
And one of the unifying things in Clojure was to have this common interface to all of the collections, so that, again, if you code against sequences and if you write your libraries to use sequences, then you get much more reuse out of all of those common functions.
00:28:14
Speaker
So it's just one of these design things, isn't it? It's good to be aware of: as soon as you put yourself into, well, I'm dealing with lists, or I'm dealing with vectors, then you're kind of boxing yourself in. So it's much better to have functions that will take a sequence and give back a sequence.
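The flip side is that seq functions return seqs even when fed a vector; a sketch of the usual fix:

```clojure
(map inc [1 2 3])            ;; => (2 3 4)  a seq, not a vector
(vector? (map inc [1 2 3]))  ;; => false

;; pour the result back into the shape you wanted
(into [] (map inc [1 2 3]))  ;; => [2 3 4]
(mapv inc [1 2 3])           ;; => [2 3 4]  the shortcut
```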
00:28:35
Speaker
And also, all these functions are basically: you put a sequence in, you get a sequence out. So you're working on a different abstraction. But there are some fun things people bump into, especially using all these functions and not expecting a different type back. That is one thing I think people need to be a bit more wary about. Yeah, because sometimes you find this, don't you, when you're working at the REPL:
00:29:03
Speaker
you start to do some mapping function or something and then you find that you get back a list where you really wanted a vector, so you've got to pour it into a vector; or you get back three lists and you really want to concatenate them, but you can't concatenate them in this particular instance, you've got to into them, or
00:29:25
Speaker
then you can concat them when you make some changes, and that's a bit annoying, actually, to be honest, but it's something you get used to for sure. That's the thing with REPL development, like we talked about last time: you can do all the interactive stuff, see what the functions give you, see what this mapping function gives you back,
00:29:47
Speaker
adjust accordingly, and then you're off. But actually, one of the things that we will talk about a little bit later is Specter, which looks at trying to unify this a little bit more. Yeah. Do you want to just quickly talk about these other bigger libraries as well, like zip and stuff like that?
00:30:09
Speaker
So there is clojure.zip, and there is clojure.walk, and there is clojure.xml; all these libraries can use the same interface to implement similar behavior. So you can have first, next, and all these functions.
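A tiny clojure.zip example (over a vector tree) of those shared navigation moves:

```clojure
(require '[clojure.zip :as zip])

(def z (zip/vector-zip [1 [2 3] 4]))

;; familiar moves: down, right, and read the node
(-> z zip/down zip/right zip/down zip/node)  ;; => 2

;; edits are persistent as well: root returns a whole new tree
(-> z zip/down (zip/edit inc) zip/root)      ;; => [2 [2 3] 4]
```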
00:30:27
Speaker
So the idea behind it is that you have a smaller number of data structures and multiple functions operating on them, and once you have that baseline, you can implement your own persistent collections on top of other things as well. Yes. So that is an interesting
00:30:43
Speaker
concept in Clojure. So these are essentially the workhorses of Clojure, right? Because Clojure is so data-centric. You have individual atoms, well, I don't want to call them atoms, but atom is a Lispy word. Yeah. Yeah.
00:31:00
Speaker
individual or indivisible things like numbers and characters etc., and the next step you need is collections of these things. So lists and vectors and maps and sets are the workhorses of Clojure, because data is one of the biggest driving forces behind Clojure code.
00:31:22
Speaker
But we kept on saying these are persistent. Maybe, in the Clojure vibe, I actually looked it up, so I need to define what persistent means. You did some homework. A little bit. This is not really allowed, you know. I understand. Because you're going to beat me. No, no, that's the only thing that I did. I got caught off guard by that, actually, Vijay, looking things up. We shouldn't be doing that, really.
00:31:49
Speaker
I know, I promised to be stupid. But remember, you know, one of the running memes in the Clojure community, maybe not anymore: every time Rich Hickey starts a presentation, he'll reach for the definition of the word. Exactly. So persistent, what does it mean? Persistent, adjective: continuing to exist or occur over a prolonged period. So that's what I see from the definition. That sounds right to me. Yeah.
00:32:19
Speaker
Yeah, okay, so why are we calling these data structures persistent? Because they continue to exist. So if you create a list and then you append to it, or conj onto it, then the previous

Concurrency and Immutability

00:32:37
Speaker
So essentially, we're creating a copy of the list, because we are interested in immutability. But the idea is that you still have the older versions of the data lying around. So when you pass the list around, you can still get access to it. I think it's a terrible word, by the way. I mean, I know that it predates database persistence, so people want to use it. But I think it doesn't explain things very well. I think it would be better if it was something like generational.
00:33:04
Speaker
Yeah, that's true. It feels more normal to talk about it like that, like version managed data structures or generational data structures, because persistent data structures just sounds like they're being kept on a disk somewhere. I know they're not, I know that's durable, I get it, but I think it confuses newcomers.
00:33:26
Speaker
this concept that it's persistent, it will stay forever. Because as you say from the dictionary, from that dictionary definition, that's not what this is. Yeah, yeah. But that's something that I think we need to live with. Right, okay. Every time we need to say persistent. And then, okay, when I say persistence, it doesn't mean database. I know, I know. And it's kind of like, to me, I'm not a fan of that really. I know that it's the right word for this context, but it's not very friendly, I think.
00:33:56
Speaker
But anyway, like you said, it is what it is, but it really means generational. It means that if you add something to the collection or delete something from the collection, then what you get back is a new collection, and the previous version of that collection still exists.
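A minimal REPL sketch of that generational behaviour (the names `v1`, `v2`, `m1`, `m2` here are just illustrative):

```clojure
;; "Modifying" a persistent vector returns a new version;
;; the original continues to exist, unchanged.
(def v1 [1 2 3])
(def v2 (conj v1 4))

v1 ;; => [1 2 3]   the old version is still intact
v2 ;; => [1 2 3 4]

;; Maps behave the same way under assoc/dissoc.
(def m1 {:a 1})
(def m2 (assoc m1 :b 2))
m1 ;; => {:a 1}
m2 ;; => {:a 1, :b 2}
```

Both versions share most of their structure internally, which is why this is cheap.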
00:34:16
Speaker
The interesting thing with persistent data structures is that, I'm trying to compose my thoughts here, they don't necessarily mean that they're immutable in that sense, but
00:34:31
Speaker
they become immutable because that is the way you implement them. And also the thing is it depends on what kind of persistent data structure you have because you can have a completely persistent data structure. What that would entail is that you can, because as we're saying, every time you modify, you're generating a new version, but you don't necessarily have access to all these versions of the data.
00:34:55
Speaker
So if, once you create a new version, the older versions are available for reading only and you cannot, quote unquote, modify them, then that's called partially persistent. But if you can also access the older versions and modify them, branching off new versions from any of them, then it is fully persistent, or completely persistent, something like that. Okay, maybe I need to look up the proper words, but...
00:35:21
Speaker
It's not fully persistent, not completely persistent, if you can only access the latest version. So that's the fundamental idea behind it. But yeah, one of the nicest things with persistent data structures is that they enable functional programming, because you
00:35:40
Speaker
are free to send references to a persistent data structure across all the places. So that's why they are pivotal in making immutable functional programming work. But a little bit of history: they're implemented based on the late Phil Bagwell's paper, called, I think, Ideal Hash Trees,
00:36:08
Speaker
and the structure is called a hash array mapped trie. So if you create some sort of a mental model of it, a persistent data structure is essentially like a tree, or at least, if I remember correctly, it's much more efficient to create a new version out of tree-based data structures.
00:36:29
Speaker
So once you have a kind of a binary tree or something, then it is easy to say, okay, there is a new version coming in. So you have one list, you have the second list and you want to concatenate them. And then what happens is that the new list is created and you have the first list and the second list is essentially a pointer to the original list. So you don't need to copy the whole thing.
00:36:51
Speaker
So that's how they're implemented. But I think it was Phil Bagwell who made the original version and then Rich Hickey ported it to Java, or at least to Clojure. So that is one of the biggest things in Clojure, I think. There was a talk by Rich Hickey almost six years ago about how this thing scales up.
00:37:16
Speaker
Yeah, and one of the advantages is that, if you imagine a binary tree, you can scale it up depending on the branching factor, the number of children each node can have, and in Clojure there are basically up to 32 children per node. So that gives you a large scalability.
00:37:37
Speaker
And what that means is that your performance is going to be effectively log base 32 of n. I think it's log 32, but it becomes practically constant. By the time you get to a million records it's nothing, is it, basically.
00:37:56
Speaker
Exactly. And you need to have on the order of a billion entries to actually get near the 2 to the power 32 range, where it is going to hit the performance. But usually, you're not going to have a billion things in one array or in one list, I hope.
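As a back-of-the-envelope check on that, 32-way branching keeps the tree very shallow (the `depth` function here is just an illustrative calculation, not anything from clojure.core):

```clojure
;; Depth of a 32-way tree holding n elements is roughly ceil(log32 n).
(defn depth [n]
  (long (Math/ceil (/ (Math/log n) (Math/log 32)))))

(depth 1e6) ;; => 4   a million elements, four levels deep
(depth 1e9) ;; => 6   a billion elements, only six levels
```

Six pointer hops for a billion elements is why "effectively constant" is a fair description in practice.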
00:38:13
Speaker
I think the point was, wasn't it, like you said before, with these persistent data structures where you have all these generations, people knew that it was very inefficient in terms of memory use. So it was impractical, essentially, to have fully persistent data structures like we have in Clojure, and
00:38:38
Speaker
so people made all these countermeasures, like, oh, I'll just keep the previous version and then drop all the old ones. So having fully persistent collections that are actually efficient as well, that's the secret sauce, isn't it, of Phil Bagwell's paper. Well, not a secret sauce, okay, he published it.
00:38:59
Speaker
Yeah, of course. But then, not every other language, but I think Scala has persistent collections as well, thanks to Phil Bagwell, and Rich Hickey took it over, and then we have these 32-way persistent data structures that form the foundation of Clojure's data structures.
00:39:20
Speaker
And essentially they're immutable, and the immutability is the thing that brings in Clojure's other, I don't know, sales pitch, which is that Clojure is very concurrency friendly. You can write concurrent programs pretty easily because, you know, everything is immutable, so you're free to pass the stuff around, and that makes things much, much easier. So that's, I think, my understanding of immutability and persistence.
00:39:50
Speaker
And yeah, I think that the other key nature of these collections is that they're lazy. So you probably don't get up in the morning quickly or something. I'm just going to say one other thing about these immutable data structures that maybe is a bit kind of subtle. Maybe it's not subtle. Maybe I'm just a bit dumb, but I think it was a bit subtle to me anyway, was that
00:40:14
Speaker
The fact is, if you run an algorithm over a version of the persistent data structure, you can still keep on adding to it outside of that run. And that's what you avoid in Clojure, this ConcurrentModificationException. You can modify that data structure
00:40:39
Speaker
While the algorithm is going over the previous version of the data structure, that's the magic to me. That is where you win because you see the version that you've got is completely stable. It's very much like Git in that sense, isn't it? You check out the code and you can do what you want with it. It's only when you put it back that it's not quite like Git. Okay, forget it.
00:41:06
Speaker
Yeah, but you have this freedom to pass the data around without worrying about it. Otherwise, if you're writing in Java, then you're locking all the time, or you're synchronizing all the crap, and you're worrying about, oh, damn it, I'm modifying this one here. I need to wait in the other thread, and it becomes much more complex. The more mutability comes into the picture, then
00:41:29
Speaker
it becomes much more difficult to form the model of the program in your mind and to make sure that the program is robust and resilient for multi-threaded execution. So that is one thing. To be perfectly honest, I think I have...
00:41:44
Speaker
Didn't Rich say this, that he read the book by Brian Goetz about concurrency and parallelism in Java, Java Concurrency in Practice, and then he said, okay, I need to write code? But yeah, it is like that, because I remember writing a program many years ago in Java that required synchronization. I was using some sort of TreeMap.
00:42:12
Speaker
It doesn't really matter, it was to do with PDF generation, but I'll give you the skinny on it. What I wanted to do was basically, say I had 10 pages, I wanted to have each of those pages rendered on the server side individually, and then I wanted them sorted on the way back.
00:42:32
Speaker
But it was very difficult with Java to have lots of operations all going on at different levels all the time. In the end I had to go very coarse-grained on the locks, just because I needed to deliver something. It was very inefficient in the end, I know that, and I would have been able to get a lot better performance out of it, but I didn't have time to do what you said, I didn't have time to analyze
00:42:58
Speaker
all of the possible cock-ups that could have gone wrong. I did find a few cock-ups in testing, which is why I ended up eventually making the locking coarser and coarser grained, to the point where I just locked everything. Honestly, I'm sorry everyone about that. It was pretty dumb, but I had a deadline to meet, and it worked, but it was very inefficient. Whereas with this kind of stuff,
00:43:25
Speaker
You know, you're on safe ground, you know, you're putting that stuff in there, you throw it out, it all comes back. It's all lock free, it's very stable, and it gives you those kind of opportunities for optimizing your performance without worrying about those locks, because I think that's...
00:43:43
Speaker
It's a lot of headache to think about all those kind of things and to test them and as you say to really guarantee it and formally prove it is almost impossible and certainly no normal working programmer would do that.
00:43:57
Speaker
I wrote some network programming in my previous incarnation, or I don't know, previous life. Do you believe in that as well, by the way, the reincarnation? Well, I'm persistent, so there was a version before me. You know, remember how Rich Hickey was introducing the concept of the identity and the value? Oh, yeah. So, you know, there is a person, and the person has an age, some age, and
00:44:23
Speaker
it's changing all the time. But at that time it is a fact. So that is the core of Datomic. Let's not get there. Well, you're right, though. I mean, honestly, the whole point about this is that it translates very well into the Datomic world.

Lazy Evaluation Challenges and Solutions

00:44:39
Speaker
We're not going to go there now, obviously. But in general, that's what Datomic is, isn't it? It's basically this whole thing, but on a disk somewhere, or on cloud storage somewhere. So you have
00:44:51
Speaker
truly persistent data structures.
00:44:54
Speaker
Okay, they're durable, in Datomic's terms, but yeah, they add durability to this whole collection thing, which I think makes it super appealing as well, because then the notions scale out, don't they? You work in one thread or one virtual machine, and with Datomic it then suddenly scales out across very many virtual machines.
00:45:23
Speaker
And also, you get the immutability at the data level in the database as well. You can say, OK, because that's how the world operates. So at some point, those are facts. I mean, they don't just change. You just add more information that makes the older information obsolete. I mean, you don't just replace. You don't just change it in place, but there is a new version of it being available.
00:45:45
Speaker
But anyway, so we'll do a talk about that another time, I think. Yeah. Yeah. Yeah. But it's great stuff. So, I mean, but it's good to know that I think just just to round it off is that, you know, a lot of this stuff comes from, you know, a lot of the further work comes from these these foundational things.
00:46:04
Speaker
Yeah, then of course, you know, these collections are lazy. By lazy, we mean it's not realizing the entire collection, because of course you can read the entire Wikipedia into memory and then put it into a collection line by line. But if you want to process it lazily, then you need the lazy thing. So the idea behind laziness is that you only realize, or reify, which is the proper word, I think,
00:46:32
Speaker
to make it come into existence, that's what I mean, I guess. That's one you should have looked up in the dictionary. I promised that on this show my stupidity is going to stay at a constant level, so I don't want to cross it, but...
00:46:54
Speaker
Anyway, what I was trying to express is that if you have a collection and you don't necessarily need to realize all the values, you don't need to compute them. So you compute them as and when needed. And that makes it the whole thing lazy. So then also the advantage there is that now you can have infinite collections. You can have all the numbers. OK, give me a range from 1 to infinity. And then because you actually don't need all those things.
00:47:22
Speaker
you can ask it, okay, give me the first 10, give me the first 20, whatever. So that makes things lazy. It's an interesting comparison, isn't it, because the question then is, what's a collection versus a stream of data values?
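The infinite-range idea looks like this at the REPL (the name `naturals` is just illustrative):

```clojure
;; (range) with no arguments is an infinite lazy sequence.
;; Nothing is computed until something consumes it.
(def naturals (range))

(take 5 naturals)                 ;; => (0 1 2 3 4)
(take 3 (map #(* % %) naturals))  ;; => (0 1 4)

;; Only the elements actually asked for are ever realized,
;; which is why an "infinite" collection is perfectly usable.
```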
00:47:37
Speaker
Yeah, but I think we are now crossing the boundary and then going into the transducer view of the world. I suppose if you say, yeah, it's just a collection and it is supposed to just give you the next one when you ask for it, then it could be a channel, it could be data coming over the network, it could be anything.
00:47:55
Speaker
so it's not a collection per se. Yeah but I think what you're hinting at is that or what you're talking about it's like this whole lazy evaluation thing is that it can be a collection that you're reading from but that collection can be generated through some function.
00:48:11
Speaker
Exactly. It's the lazy, what should we say? Lazy evaluation. Lazy evaluation, yeah, because what you're really doing is if you make a collection, if I make a collection of a hundred records, then that's not lazy, is it? I mean, it really has got a hundred records in it. Yeah. It's when you want to put a function over that particular
00:48:39
Speaker
finite or infinite collection, isn't it? That's the lazy part. Yeah, that's true. But one of the trickiest parts with lazy things, I remember when somebody asked Simon Peyton Jones, one of the authors of Haskell, how he would change things if he had to redesign Haskell now. He said, okay, I would remove the laziness. Really? Because the laziness gives you
00:49:06
Speaker
kind of weird surprises because Haskell is completely lazy at every point, even the function evaluation and everything. And that makes debugging painful because you have no idea when things are going to be called. So if you have a network of functions and you have no control on when something, yeah, what is the execution model of the program? So he said, okay, I would avoid laziness and I'd go for a strict by default.
00:49:34
Speaker
then introduce the laziness whenever you want. So that's an interesting viewpoint of the laziness though. I think you've read SICP as well or looked at, because I remember in the SICP thing they were talking about laziness, weren't they? And they were saying the problem with laziness is that
00:49:55
Speaker
you don't know if something's going to come, actually. When you're trying to unify two streams, for example, you don't know what the order of the arrival is. And we know from the event stream processing world that stragglers are a kind of a problem.
00:50:16
Speaker
Anyway, we're getting a bit advanced there, I think. Or out in the woods, let's say. One of the examples we were talking about before, though, was comparing lazy evaluation versus a SQL cursor on a database. Yeah, database, yeah. Like cursors. Like cursors, yeah. Because oftentimes people would compare it like that.
00:50:39
Speaker
but my view of it is that the cursors are kind of like half lazy because you as a program you can say okay give me the next value next value next value and sometimes it does a prefetch and it brings back in a megabyte or 32k or whatever over the wire and it will
00:50:59
Speaker
It seems lazy. It seems like it's evaluating that lazily. But actually it really isn't, because if you go to the database server side, it has evaluated it. And that's a huge difference to me between this model in Clojure and the database cursor model.
00:51:21
Speaker
Because if I have two infinite sequences in a database, that doesn't exist. I can't generate that at all. The classic one is the Cartesian product, isn't it? If I do a query, select star from emp and personnel, and I just do a Cartesian product across the two tables, then, you know, if those two tables happen to be a decent size, the database is going to blow up.
00:51:52
Speaker
Yeah, of course. Then it's going to put everything into memory. But from the reading point of view, over the network, you will still think there is a cursor available. So I'm going to ask for the next one, next one, next one. Well, the funny thing about that is it will never come back, because it has to realize the Cartesian product first. So that's the opposite of lazy, isn't it? Whereas with lazy, you're computing the next value
00:52:18
Speaker
on demand whereas with the Cartesian product you generate it all and then you give back the first record and you consume it lazily and I think that's the difference it's like the difference between like lazy consumption versus lazy generation
00:52:35
Speaker
Yeah, yeah, that's true. But I remember somewhere, maybe it's MongoDB or something, you can have server-side cursors. But yeah, let's not talk about MongoDB, because it's not fun. I mean, somebody was saying that, hey, it's Snapchat for your data. Well, it's not written in Clojure, so it doesn't exist, as far as I'm concerned. We're using it at work. I do know about it, yeah. It's an interesting database. That's for a Mongo podcast, that is.
00:53:05
Speaker
Yeah. But one of the tricky parts with having laziness is this whole concept of holding on to the head, right? Yeah, yeah, yeah. Because you're trying to realize something and then if you... The head is the front, isn't it? Yeah, exactly. The front of the data structure. Well, front.
00:53:22
Speaker
Yeah, top, first, I don't know. First, yeah, exactly. So when you see the implementation of this, what it means is that you have a list, and we were talking about the sequence abstraction on top of it. So you have the first element, and then the rest of it is again another sequence, and then you ask for the next element, then you have the first element, second element, and then the rest is a sequence.
00:53:46
Speaker
Now, especially if you are iterating, which is probably a bad word in functional programming, if you are going over a collection, then if you hold on to the head, what that means is that you are keeping a reference to the data while you are going through the collection. It will accumulate in memory, and you will end up getting an out of memory error or something.
00:54:08
Speaker
So that is something that you need to watch out for. How do you prevent not making too much, not accumulating too much into the memory. So that is an interesting thing to watch out. How are we going to avoid that though?
00:54:34
Speaker
Well, there are multiple ways you can identify it. I remember there was a Stack Overflow post by Michał Marczyk explaining that you need to monitor it. You can monitor it by using, so again, I think it's called... sorry, it's not persistent queue, but the JVM has a couple of APIs that you can look out for, and you can ask, has it been realized? Then you can write tests for it.
00:55:04
Speaker
But it is pretty tricky. But whenever you're writing code, you need to make sure that when you assign a variable, not a variable, but assign a var, you need to know that you're going through the list and be careful in terms of how you're consuming from it.
00:55:20
Speaker
So there are lots of examples on the net. It's very difficult to read out code in the podcast. I think the difficult thing is when you're doing some of these operations, it can sneak up on you, can't it? That's part of the problem. You have to be careful that you don't introduce some function or some operation that accidentally makes you hold on to it.
00:55:48
Speaker
But okay, like you say, look it up on Stack Overflow, people. We're punting on that one. It's an issue. Be aware of it and look it up. We're not being very helpful here, are we? We can't solve all the problems.
00:56:09
Speaker
Yeah, yeah. So don't hold on to the head. Yeah, don't hold on to the head. But actually, talking about that kind of thing, there are definitely some occasions, aren't there, where, like Simon Peyton Jones said, you want to be eager sometimes. So, you know, if you're going to do some side-effecting things, or you're going through some list where you want to print some stuff out.
00:56:29
Speaker
The classic example is you get this list and you think you're going over a database, you're going over some set of operations on this thing, and then you run the function and nothing happens. It's like, oh, what happened here? But actually it's all because it's being evaluated lazily, so it can be a surprise when you first
00:56:50
Speaker
when you first come to the language. Then you need to know, okay, actually, this is the benefit of being lazy, but I don't want it on this occasion. So what do you do? Then you have to use this doall or doseq thing,
00:57:06
Speaker
where you want to force the generation of the list of the sequence operation. So there's just a little comment there. Typically, you don't want to do that, but if you do want to do that, then you have the options to be able to do it as well.
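The classic surprise and both fixes, side by side (the `notify!` function is just an illustrative stand-in for any side effect):

```clojure
(defn notify! [x] (println "sending" x))

;; Surprise: map is lazy, so if nothing consumes the result,
;; nothing is printed at all.
(let [_ (map notify! [1 2 3])] :done)   ;; prints nothing

;; Fix 1: doall forces the whole lazy seq and keeps the results.
(doall (map notify! [1 2 3]))           ;; prints three lines

;; Fix 2: doseq is the idiomatic choice for pure side effects;
;; it is eager, holds nothing, and returns nil.
(doseq [x [1 2 3]]
  (notify! x))
```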

Performance Optimization Techniques

00:57:22
Speaker
But you were telling me something beforehand about some performance trick that you've been using recently with these persistent data structures.
00:57:33
Speaker
Exactly. So there is an interesting thing there. As you're pointing out, you're using lazy, and then sometimes you want to be eager, so you have functions for it, like doall and doseq, you know, to realize the whole thing. You have an option of going to the other side. And in the persistent data structure world, as we were trying to explain, there is a copy made every time you want to create a new version of the data. So every time you're appending, every time you're concatenating or conjoining,
00:58:00
Speaker
you're creating a new version of it. But there are situations where you don't want to create a new copy because of the performance reasons. So that's where you use a transient data structure or the transient function. So what you can do is create a transient version of the data structure that you want to use. And you have operations that take advantage of the mutability
00:58:25
Speaker
And then once you are done, you can convert it back into a persistent data structure and give it back to the world. So one example, I think it's in the Clojure Applied book by Alex Miller as well, is when you want to load a large amount of data.
00:58:45
Speaker
Then essentially, if you are doing this conj, or I don't know, concat or assoc or whatever, it is going to be a big performance hit, because you are loading bulk data and you don't want to keep creating copies; you want to take advantage of the performance.
00:59:01
Speaker
So it's a pretty nice use case for transients. That is the choice that you have. You can make the program faster, I think at least four to six times, by switching to a transient locally within that function to load the data. And then once you're done, you just give a persistent data structure back. So that's an interesting performance trick.
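A sketch of that local-transient pattern (the `load-all` name is just illustrative; `into` in clojure.core uses essentially this trick under the hood):

```clojure
;; Build a large vector with transients: mutate locally, then freeze.
(defn load-all [xs]
  (persistent!
    (reduce conj! (transient []) xs)))

(load-all (range 5)) ;; => [0 1 2 3 4]

;; The transient never escapes the function, so callers only ever
;; see an ordinary, immutable persistent vector.
```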
00:59:26
Speaker
So that's cases where you basically want to front-load some data. Obviously the performance is predictable anyway, but what you're talking about here is, you want to front-load some data, like from a log or something like that, pull it all into memory,
00:59:42
Speaker
you want to do some small operations on it, to maybe parse it or to filter some stuff on it, but you don't want to pay the cost of a new generation every time you're doing an assoc or a conj. It makes a lot of sense, actually, but I'm obviously nervous of saying
01:00:00
Speaker
that the performance is bad. It's just that you're tuning for a particular use case. Exactly. But that is the choice that you're making when you're using Clojure. The thing is, the defaults are pretty awesome, but at the same time, whenever you have to reach out to the other tools, there are
01:00:19
Speaker
workarounds available. Yeah. And another example is the type hints, right? I mean, type hints help you to prevent reflection. You don't need to use them, but when you need them, they're there. So you can make it use primitives, you can make it use the real types, so it doesn't need to do reflection. So if you're going towards, okay, I want to make it really performant, then you have access to these kinds of tricks. Yeah, you can take away some of the defaults to get some big benefit.
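For instance (a small sketch; `*warn-on-reflection*` is the standard way to find the call sites that need hints, and `len-slow`/`len-fast` are illustrative names):

```clojure
(set! *warn-on-reflection* true)

;; Without a hint, .length is resolved by reflection at runtime,
;; and the compiler prints a reflection warning here.
(defn len-slow [s] (.length s))

;; With a type hint, the compiler emits a direct method call.
(defn len-fast [^String s] (.length s))

(len-fast "hello") ;; => 5
```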
01:00:49
Speaker
But like you said, I think what you're saying is a nice trick, though, where you keep it very local to one function, and it just does its work, and it knows that it's safe to do it. Yeah. Because obviously, you don't want to be giving back transient collections, because that would be a horror show, because you lose all of this wonders. No, no, no, no. It should always be local and controlled, so that's the idea. Cool, okay. Yeah.
01:01:15
Speaker
Right, well, we probably should move on finally to some general things from the community, because I think we've talked a lot about the core stuff, haven't we? Just a quick shout out, there's a lot of publicity around it because he made the keynote at Clojure/West, but
01:01:35
Speaker
Nathan Marz did a great job with this Specter library. We were talking earlier on about how sometimes you get back different collection types from conj or concat or assoc or whatever, and he made this Specter thing which gives you a lot more control over that. You have much more ability to specify, consistently and easily, what the output data structure is going to look like.
01:02:04
Speaker
And I think that's really, it's going to be very nice where you've got big data problems, I think.

Contributions to Clojure's Data Structures

01:02:10
Speaker
And I like the fact that he's taking it to the next level as well and making it open so that you can extend his types also, you can extend the library. So to apply all of the kind of navigational and editing stuff that he's put out there.
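To give a flavour of that output-type control (this assumes the com.rpl.specter library on the classpath; a quick sketch, not a tour):

```clojure
(require '[com.rpl.specter :as sp])

;; Increment every :a value, getting back the *same* collection
;; type you put in: a vector of maps stays a vector of maps,
;; rather than degrading to a lazy seq the way map/filter would.
(sp/transform [sp/ALL :a]
              inc
              [{:a 1} {:a 2}])
;; => [{:a 2} {:a 3}]
```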
01:02:29
Speaker
But I think also we had a look at some other community-contributed collections as well, didn't we? Yeah, yeah, of course. There was one by, obviously, Michał Marczyk, who has been a prolific contributor to Clojure and ClojureScript. Big shout out to Michał, yeah. Michał, yeah. He is pretty awesome, and he was very kind to come to Dutch Clojure Days, and he gave a talk at one of the previous Dutch Clojure Days as well.
01:02:53
Speaker
And so he's one of the guys who is working on the data structures a lot, and he made his own version of the AVL trees, which is basically a tree data structure, but much more what they call like a balanced tree, so the height is optimized, so you have access to the nodes pretty, with less number of hops, less number of average hops, it's basically very balanced things.
01:03:22
Speaker
I wrote down the names of the AVL Russian scientists. So AVL is Georgy Adelson-Velsky, that's the A and the V, and the L is Evgenii Landis. I apologize to all the Russian listeners.
01:03:39
Speaker
That's what I... you know, I tried my best, and I probably pronounced the names wrong. But anyway, so that's the AVL trees. And he also did concurrent lock-free data structures, based on an idea that was implemented in Scala before, called Ctries. I think they're still pronounced as "tries", but written T-R-I-E-S. That's very confusing. Yeah.
01:04:05
Speaker
Yeah, I think it's a tree, but with IE. Yeah. The AVL thing, by the way, I need to use that one of these days for stream processing stuff, because I think he has these time rank queries which
01:04:22
Speaker
which I haven't used yet, but I did a blog post about time processing with core.async, and somebody suggested to me that if you put them into these AVL data structures, you can get this time ranking.
01:04:42
Speaker
It wasn't really relevant for my use case, but I think it looks very interesting for a lot of this event stream processing, where you have these time-windowed series and stuff like that. So yeah, great job by Michał Marczyk there. Definitely a great guy. He often does presentations at the conferences as well. So yeah, very good.
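For the curious, those rank queries look roughly like this (assumes the clojure.data.avl library; function names are from its documented API, so treat this as a sketch):

```clojure
(require '[clojure.data.avl :as avl])

;; An AVL sorted map supports the normal sorted-map operations
;; plus logarithmic-time rank queries.
(def m (avl/sorted-map 1 :a 3 :b 5 :c 7 :d))

(nth m 2)          ;; => [5 :c]   the entry at rank 2
(avl/rank-of m 5)  ;; => 2        the rank of key 5

;; Slice out a key range without walking the whole map.
(avl/subrange m >= 3 <= 5) ;; => {3 :b, 5 :c}
```

For time-windowed data, using timestamps as keys makes "everything between t1 and t2" a cheap `subrange` call.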
01:05:08
Speaker
And Chris Houser, of course. In the early days he did a lot of stuff, and he still has this data.zip for filtering trees. He's also got a lot of XML support in there that's really good, and
01:05:26
Speaker
the other thing I noticed at the last Clojure/West was this guy Peter Schuck. He did the lean hash maps, actually taking this whole HAMT thing we talked about earlier on, the hash array mapped trie. Again, trie with an i-e.
01:05:52
Speaker
There's been some research, a paper by Steindorfer and Vinju last year, talking about how to do these lean hash maps, and they've got some claims, at least in terms of iteration and equality checking, which are quite impressive.
01:06:10
Speaker
And I know that he's talked to David Nolen about performance, checking these things. He wrote them in ClojureScript, by the way. And they're looking at seeing whether these improvements in performance for certain operations actually hold for all the operations. So obviously you don't want, you know,
01:06:33
Speaker
you don't want a swings-and-roundabouts thing. You want to try and win everywhere, or at least keep everything kind of stable but have some improvements in some areas. That's what we're looking for, isn't it? Actually, the thing he mentioned which was quite interesting was that it was a lot less code, so he thought that it was,
01:06:52
Speaker
apart from anything else, you know, was a cleaner implementation. But it's a very good presentation, I think, very well, well worth the watch. So shout out to Peter as well. Yeah, I mean, I didn't see that, but I'll probably watch it now. Because you get a quick summary or a TLDR for that one.
01:07:11
Speaker
And of course, there is also another one, data.priority-map, I think it's a Clojure contrib library. So you can have a sorted map, but you can associate some sort of a priority with it.

Episode Conclusion and Listener Engagement

01:07:25
Speaker
So the entries are sorted based on the priority. So there are lots of nice libraries available out there. I think we'll put links on our blog.
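A quick taste of that one (assumes the clojure.data.priority-map library; the `tasks` example is illustrative):

```clojure
(require '[clojure.data.priority-map :refer [priority-map]])

;; A map from item -> priority, kept sorted by priority,
;; so it doubles as a priority queue.
(def tasks (priority-map :write-docs 3 :fix-bug 1 :ship 2))

(peek tasks)             ;; => [:fix-bug 1]   lowest priority first
(pop tasks)              ;; drops :fix-bug; :ship is next
(assoc tasks :fix-bug 5) ;; reprioritize by assoc'ing a new priority
```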
01:07:35
Speaker
So people can look it up. Right. I think the priority maps are by Mark Engelberg. Yeah. It's a Clojure family, isn't it? Him and his son seem to do incredible work around Clojure. The Engelberg troop. Right. Okay. So, I don't know, I think we've actually just passed the hour mark, so I think we should probably stop now.
01:08:02
Speaker
I suppose so, because otherwise the people who are using our podcast as a workout counter... They're in the shower. It might be underwater. It's going to ruin their MP3 player. What are we playing for that?
01:08:18
Speaker
Yeah. All right. So, any final comments, Vijay? I'm still wondering about what we're going to talk about next time. I know we talked about the asynchronous models and stuff, so probably we'll pick up on stream programming, concurrency and asynchronicity a bit. But yeah, we'll explain it in the next podcast, I think. We'll come up with that.
01:08:47
Speaker
All right. So obviously the podcast is available on the regular outlets. It'll be available soon... well, if you are listening to this, it should be available now. It's kind of a self-fulfilling thing.
01:09:05
Speaker
Yeah, so we'll post the notes to defn.audio, and obviously it will be on SoundCloud as well, and we'd really appreciate it if you give us feedback on Twitter or via Reddit or Google Groups. I'll post the links. We are also available on the Clojure Slack, and I'm also occasionally idling on IRC, for people who are still using it... or not, I don't know. I'm never on there, actually. But you're more social than I am anyway.
01:09:36
Speaker
Maybe online, I think. In person, probably I'm not that social. Oh, come on. Anyway, okay, so that's good. Thank you very much. I think we just have one last credit, actually. We have to do these things and it's an awesome bit of work by Pizzeri. He did the intro-outro music for us. I really like that, actually. I think it's a very nice tune for our podcast.
01:10:01
Speaker
I hope that people enjoy that music when they get the podcast downloaded and get sat in their chairs or their cars or whatever. So thanks to Pizzieri for that. Melon Hamburger is the title of that track. So go and check him out on SoundCloud because he's doing a great job there. So that's it. So DJ, thanks very much.
01:10:27
Speaker
Thanks Ray, it's been fun again, and we're going to come up with episode 5 pretty soon. So see you next time. Yeah, see ya. Bye. Bye.