
Advanced Memory Management in Vale (with Evan Ovadia)

Developer Voices

Rust changed the discussion around memory management - this week's guest hopes to push that discussion even further.

This week we're joined by Evan Ovadia, creator of the Vale programming language and collector of memory management techniques from far and wide. He takes us through his most important ones, including linear types, generational references, and regions, as we explore what Evan hopes the future of memory management will look like.

If you've been interested in Rust's borrow checker and want more (or want different!), then Evan has some big ideas for you to sink your teeth into.

Vale: https://vale.dev/

The Vale Discord: https://discord.com/invite/SNB8yGH

Evan’s Blog: https://verdagon.dev/home

Evan’s 7DRL Entry: https://verdagon.dev/blog/higher-raii-7drl

7DRL: https://7drl.com/

Evan's Grimoire: https://verdagon.dev/grimoire/grimoire

What Colour Is Your Function?: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

42, the language: https://forty2.is/

Verona Language: https://www.microsoft.com/en-us/research/project/project-verona/

Austral language: https://austral-lang.org/

Surely You’re Joking, Mr Feynman! (book): https://www.goodreads.com/book/show/35167685-surely-you-re-joking-mr-feynman


Evan on Twitter: https://twitter.com/verdagon

Find Evan in the Vale Discord: https://discord.com/invite/SNB8yGH


Kris on Mastodon: http://mastodon.social/@krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Kris on Twitter: https://twitter.com/krisajenkins

#software #programming #podcast #valelang

Transcript

The Importance of Memory Management

00:00:00
Speaker
Memory management is one of those things in programming that we often want to ignore, we want to try and automate it away. And if you've got a garbage collected language, it usually works most of the time, until you become concerned with things like really high performance or really low latency, or really high concurrency.
00:00:21
Speaker
or really long running processes, or really predictable performance, or really constrained hardware. We're now basically covering most servers, most apps, most embedded devices, most games.

Tools for Memory Management

00:00:35
Speaker
There are still a surprisingly large number of areas in programming where memory management is very much our problem to deal with.
00:00:43
Speaker
And even more surprisingly, there are very few tools to deal with it. What have you got?

Introduction to Vale and Innovative Concepts

00:00:48
Speaker
Tool zero is doing it manually: malloc and free. Tool one, I guess, is reference counting. There wasn't much else until fairly recently, when Rust came along with its borrow checker and said, hey, here's a completely different way to deal with managing memory at programming time.
00:01:09
Speaker
Well my guest this week would tell you that as great as Rust has been, it shouldn't have just been a new tool. It should have been the start of an avalanche of new ways of dealing with memory and resource management in general.
00:01:23
Speaker
I'm joined this week by Evan Ovadia, and he's built the programming language Vale as a way of road-testing his three favorite techniques, which you may never have heard of.

Understanding Linear Types

00:01:33
Speaker
Generational references, linear types, and regions. And he's convinced that at least one of those is going to form the future of programming. Maybe he can convince you too. He's certainly going to try as we explore what they are, how they work,
00:01:48
Speaker
what they offer us as programmers, and, if they're so great, why we don't have them already. I'm your host, Kris Jenkins. This is Developer Voices, and today's voice is Evan Ovadia. I'm joined today by Evan Ovadia. Evan, how are you out there?
00:02:17
Speaker
I'm doing pretty good. How are you? I'm very well, very well. I'm looking forward to being taken to a certain level of hardware and memory management school by you. Yeah, that's really fun. But before we get straight into that:

Potential of Linear Types and Regions

00:02:34
Speaker
The reason we got started talking about the topic of memory management is a language you've been designing called Vale.
00:02:40
Speaker
My opening question with new languages is always: why do we need a new language? What do you feel is missing from the language world? Oh, what do I feel is missing from the language world? There's a bunch of things that I think the world needs. A reasonable person might disagree.
00:02:59
Speaker
Yeah, the thing I really think the world needs is someone to explore things like linear types and regions. I think those are a huge missed opportunity in today's languages. It's the kind of thing where you can't really imagine how awesome it would be until you use it.
00:03:17
Speaker
And even I couldn't. I was like, I wonder if this would be cool. I don't know. I should build it and find out. And so I did, and I found out, and it's pretty cool. OK, the classic hacker mentality. Yeah, absolutely. So I know about linear types a bit from some excitement in the Haskell world, and if there's some excitement in the Haskell world, it means it's about as far from mainstream as we can go. So unpack that for us: what are linear types?
00:03:46
Speaker
Yeah, linear types. I'll explain the high-level academic definition first, and then I'll talk about why it's cool. So linear types are
00:04:01
Speaker
objects that you can't just drop on the ground. By that I mean: every function might have variables x, y, and z, and if you say return 42, then x, y, and z just kind of go away, right? They go out of scope and garbage collection takes care of them. That's how normal things work. With a linear type, let's say x was linear, the compiler would look at the return 42 line and say: hey, you didn't do anything with x. You can't just drop it on the ground.
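Rust can't fully express this, since its ownership model is affine (use at most once) rather than linear (exactly once), but the idea can be sketched at runtime with a "drop bomb": a value that panics if it is dropped without being explicitly consumed. All names here are invented for illustration; Vale would reject the forgotten use at compile time instead.

```rust
// A "drop bomb": a runtime approximation of a linear type. The compiler
// won't catch an unused value, but dropping it unconsumed fails loudly.
pub struct LinearToken {
    consumed: bool,
}

impl LinearToken {
    pub fn new() -> Self {
        LinearToken { consumed: false }
    }

    // The one sanctioned way to dispose of the token: use it exactly once.
    pub fn consume(mut self) -> i32 {
        self.consumed = true;
        42
    }
}

impl Drop for LinearToken {
    fn drop(&mut self) {
        // This is the "dropped on the ground" case from the discussion.
        if !self.consumed {
            panic!("LinearToken dropped without being consumed");
        }
    }
}
```

Consuming the token defuses it; letting it fall out of scope untouched panics, which is the runtime stand-in for the compiler error a true linear type would give.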
00:04:29
Speaker
So x is a linear type, and the reason it's called linear is that you have to use it exactly once, and by just dropping it at the end of the function you'd be using it zero times. So the question is: why is that useful? The way I've described it, it sounds like a headache. You just got a compiler error, and nobody wants a compiler error. But it would be really cool for things like, let's say,
00:04:56
Speaker
a handle for an active thread that's running in the background. If you've got a thread running in the background, you probably want the result of its computations, right? But if you just drop the handle, the most common reaction to you, the coder, dropping a thread handle is to shut down the thread.
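In Rust, for instance, dropping a `std::thread::JoinHandle` silently detaches the thread. A hedged sketch (wrapper names invented) of a handle that insists on being joined, again using the panic-on-drop approximation of linearity:

```rust
use std::thread;

// A thread handle that must be joined. Dropping it unjoined panics,
// a runtime stand-in for the compile-time error a linear type gives.
struct MustJoin {
    inner: Option<thread::JoinHandle<i32>>,
}

impl MustJoin {
    fn spawn(f: impl FnOnce() -> i32 + Send + 'static) -> Self {
        MustJoin { inner: Some(thread::spawn(f)) }
    }

    // Consuming the wrapper by joining is the only sanctioned disposal.
    fn join(mut self) -> i32 {
        self.inner.take().unwrap().join().unwrap()
    }
}

impl Drop for MustJoin {
    fn drop(&mut self) {
        if self.inner.is_some() {
            panic!("thread handle dropped without joining");
        }
    }
}
```

In most real APIs, though, dropping a handle silently detaches or shuts down the thread, as the speaker describes next.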
00:05:15
Speaker
And that's not always useful, and not always what you want to do. Sometimes it's an accident. So it would be nice if the compiler told you: hey, you just dropped this thread handle on the floor. Do you want to instead, I don't know, check its return value, maybe join it, in other words wait on its return? So linear types are good for things like that. The way I like to think of it is tracking your
00:05:40
Speaker
responsibility, tracking things that you hoped you would eventually do. The last example I'll use is from the 7DRL, which is a seven-day, really intense hackathon I did about three years ago. I was making a little roguelike game; roguelikes are little terminal games, really low graphics. And so in the seven days, you really get to explore
00:06:08
Speaker
your skills in algorithms and gameplay. And in that game I had a cache, a hash map, of

Linear Types in Practice

00:06:15
Speaker
all the goblins on the level. Or rather: for every location on the level, is there a goblin there? So it's a hash map of location to an optional goblin reference. But
00:06:33
Speaker
in past years, I'd had all these bugs where I would forget to remove things from the cache. Because, as you know, the two hardest problems in computer science are naming and caching. Luckily other things solve the first one, but linear types really help with the second one. I made a linear type attached to the original goblin itself
00:06:51
Speaker
that represented: hey, this goblin is probably still in the cache. Inside my goblin, I called it a goblin-in-cache token. It's just an empty object. And whenever I tried to drop the goblin, the compiler would tell me: you've got this goblin-in-cache

00:07:07
Speaker
token and you haven't done anything with it. And I'm like, oh, that's right, I need to remove it from the cache. And then the only way I can remove it from the cache is to trade in this token. That's the cool thing about linear types: they make sure that you remember to do things in the future which you hoped you would do. This is a bit like defer, isn't it? It's trying to solve the same problem, in that: do this thing in the future, because if I don't remember to, I'll have problems.
00:07:37
Speaker
Absolutely. And the difference between this and defer is that defer works really well for a specific scope, and it's really readable; I really like how languages have pulled off defer. The one thing I wish defer could do, which it can't really, is make sure you do things in some scope other than your own. For example, when I made this goblin, that was, you know, 30 seconds ago, in some other deep call stack, and we have long since returned from that.
00:08:05
Speaker
But we still want a way to track and uphold the promise that we'd remove it from the cache. And that's what defer can't do: it can't influence the future past the end of its current scope.
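The goblin-in-cache token described above might be sketched like this in Rust (all names invented; Rust can only enforce the discipline at runtime with a panic-on-drop guard, where Vale's compiler would catch it statically):

```rust
use std::collections::HashMap;

// The token is the linear value: the only way to defuse it is to trade
// it back in to Cache::remove, so you can't forget to clean up the entry.
struct CacheToken {
    key: (i32, i32),
    live: bool,
}

impl Drop for CacheToken {
    fn drop(&mut self) {
        if self.live {
            panic!("goblin dropped while its cache entry still exists");
        }
    }
}

struct Cache {
    map: HashMap<(i32, i32), String>,
}

impl Cache {
    fn new() -> Self {
        Cache { map: HashMap::new() }
    }

    // Inserting hands back a token that must eventually be traded in.
    fn insert(&mut self, key: (i32, i32), name: &str) -> CacheToken {
        self.map.insert(key, name.to_string());
        CacheToken { key, live: true }
    }

    // Removing requires the token, which defuses it.
    fn remove(&mut self, mut token: CacheToken) {
        self.map.remove(&token.key);
        token.live = false;
    }
}
```

Because `remove` is the only function that defuses the token, forgetting the cleanup becomes a loud failure instead of a silent stale-cache bug.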
00:08:17
Speaker
OK, that's a very nice way of looking at it, to influence the future, not knowing where the future will take place.

Vale's Opt-in Model for Linear Types

00:08:25
Speaker
Exactly. Yeah, yeah, OK. So is the idea then that you create a language that does this with all memory, all resources, or just opt in for certain kinds of resources?
00:08:38
Speaker
For Vale, it's opt-in. You could make a language that does this for all kinds of resources. In fact, you could make a programming language that literally just uses linear types for everything. It's weird, and it requires some mental acrobatics, but it is possible. But no, I think the power comes when you can opt in for certain resources. Threads, caching, sometimes file handles, things like that really shine with linear types.
00:09:04
Speaker
Yeah, I can think of a couple of places in my recent programming work where it would have helped with closing connections to databases. Yeah, okay. So why do you think it hasn't gone mainstream, if it's cool and useful? That's a really good question. It definitely hasn't gone mainstream yet.
00:09:29
Speaker
I think probably the biggest reason, OK, here's the exact reason it hasn't taken off: it can only really work in languages with single ownership. And I mean that in the C++ sense, not necessarily the Rust sense: single ownership where one reference has responsibility for the object. For C++ people, that would be unique_ptr. For Rust people, that's just kind of how things work in Rust in general, but also the Box type.
00:10:00
Speaker
For Haskell folks, that would be the linear type itself. I don't actually know what it's called in Haskell, but the linear value is kind of like the one reference that's responsible for the object, where the object won't go away until this reference does, and once this reference goes out of scope, the object is unreachable. That's kind of necessary for linear types. And the reason I say that is because if you have,
00:10:25
Speaker
let's say you're using Swift or some other reference-counted language, and you've got two or maybe three equal references to one object. It's kind of hard in a reference-counted language to do something special every time you drop a reference, since that might not be the last one. But maybe it is the last one, in which case you need to check: am I the last reference? And if so,
00:10:51
Speaker
please handle this linear type that's contained in the object. You would have to do that every time you drop a reference, and that's really hard. So any language with multiple equal references to an object would not be a great case for linear types. But if you have one privileged reference, one owning reference,
00:11:13
Speaker
then it works a lot better. What I mean by that is: in Rust, you might have a Box, which is the main privileged owning reference
00:11:24
Speaker
to an object, and you might have a bunch of borrow references temporarily pointing to it. When those go out of scope, you don't have to do anything; you're not responsible for the object. But when that Box goes out of scope, you really do need to handle the linear type in the object. I'm talking about a theoretical Rust that has linear types here. And in Vale it's kind of the same way: you can have an owning reference, and you can have a bunch of non-owning references, which in Vale are generational references.
00:11:48
Speaker
Those are super cool. But you really do need one reference to be the main owning reference. So all that's to say, this kind of feature could only survive in a language like C++, Vale, Rust, Austral, maybe Inko. And the reason it hasn't taken off in Rust and C++ is the destructor, the way we think of destructors.
00:12:18
Speaker
For people who haven't used a lot of destructors (I do Java in my day job): a destructor is something that is run when an object is destroyed. For example, let's say you have x, y, and z variables in a function, and x isn't just an integer; let's say it's a spaceship.
00:12:37
Speaker
And when you destruct a spaceship, you want to safely turn off its engine in a very specific sequence, or else things will explode. So in the destructor for this spaceship x, you have very specific tasks you want to do. However, there was what I believe was a misstep decades ago, when they decided that
00:13:01
Speaker
the destructor would have to take zero arguments. The reason they did this was actually pretty good: if someone throws an exception or a panic in the middle of this function, after you've created x and before the return 42, it's going to blast its way up through the call stack, destroying everything in its path. How can it safely destroy the spaceship if it doesn't know what arguments to pass to its destructor?
00:13:31
Speaker
And so that's why a destructor has to have zero arguments. However, I think that was a misstep. I think there should have been a zero-argument on-panic or on-exception function instead of the destructor. If they'd had a specific function like that, which could be called by panics and exceptions, then there would be no reason your destructor couldn't take two, three, four, five arguments.
00:14:00
Speaker
And that would get you the ability to, for example... OK, there's a nuance here. You can do linear types without that, but you can't do the thing I really wanted to talk about: higher RAII. Higher RAII is a special flavor of linear types.
00:14:29
Speaker
And this is where all the coolness of linear types comes in; this is what makes linear types really useful in my mind. Going back to my earlier example: you have a goblin in your level, and with it a goblin-in-cache token, which is also a linear type. You want to be able to destroy these two linear types at exactly the same time.
00:14:58
Speaker
The way you can do that is by passing one into the other's destructor. Let's say you pass the goblin-in-cache token into the goblin's destructor, or vice versa. This is a long-winded way of saying that it's extremely useful to be able to pass two linear types into the same function. And that gets you a pattern called higher RAII, which helps you maintain a bunch of invariants: that these two things are destroyed at the same time, or these three things, or four things
00:15:26
Speaker
at the same time, or that you do all these operations at the same time, or that some past operation returned a linear type to make sure you'd perform some future operation, and that token can be taken in at the same time as other linear types. That's higher RAII. That's what really prevents the bugs, like with the goblin cache and so on. OK, so I think I'm going to need another concrete example to help with this.
00:15:55
Speaker
Yeah, yeah. So let's say that you have a spaceship, right? And you want to make sure that it is eventually returned to the shipyard in a very specific way. I'll explain what that means later.
00:16:20
Speaker
In that case, you want to make the spaceship a linear type, right? You can't accidentally just drop it on the floor. So how else would you destroy it? You look around and you're like, how do I get rid of this spaceship? It's like it's stuck to my hand. How do I drop this? You look in the documentation, and it says: the only way to destroy this spaceship is to return it to the shipyard. And the shipyard
00:16:48
Speaker
has a function called return_spaceship, or take_spaceship, I suppose. But that function also takes some other arguments. If a shipyard is taking in its spaceship, maybe it wants to take in the cargo the ship got from mining the asteroid, or maybe it expects
00:17:10
Speaker
the satellite it just retrieved. If you want to do a future operation, that future operation is probably a function call, and that function call is probably going to take multiple arguments. So we can't just say this future operation is going to have zero arguments, and that if an exception or a panic comes in, it should just call this operation over here. We can't do that, because that panic
00:17:40
Speaker
or that exception doesn't know the other arguments that are needed to safely dispose of, for example, this spaceship. This seems like it would be quite nice for library authors, right? Because as a library author, I can say: you've got to give me these things before I can create you a database handle.
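The shipyard idea can be sketched in Rust (names invented, panic-on-drop again standing in for a compile-time check): the only sanctioned way to dispose of the ship is a function that takes the ship and its extra arguments together, i.e. a multi-argument "destructor".

```rust
// A spaceship that refuses to be dropped: its only disposal path is
// Shipyard::take_spaceship, which demands extra arguments (the cargo)
// alongside the linear value itself.
struct Spaceship {
    returned: bool,
}

impl Drop for Spaceship {
    fn drop(&mut self) {
        if !self.returned {
            panic!("spaceship dropped instead of being returned to the shipyard");
        }
    }
}

struct Shipyard {
    cargo_received: u32,
}

impl Shipyard {
    fn launch(&self) -> Spaceship {
        Spaceship { returned: false }
    }

    // A multi-argument "destructor": takes the ship and the cargo together.
    fn take_spaceship(&mut self, mut ship: Spaceship, cargo: u32) {
        self.cargo_received += cargo;
        ship.returned = true;
    }
}
```

The caller cannot get rid of the ship without also supplying the cargo, which is exactly the "future operation with arguments" being described.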
00:18:01
Speaker
And you'd also be able to say, along with that: you're not going to be able to get this thing unstuck from your hand until you make sure you give me these things. I can control my caller's lifecycle. Exactly, exactly. It's a really good way to uphold invariants. I saw some really insightful comments on Hacker News about a year ago. They said that
00:18:26
Speaker
we talk about correctness all the time in programming languages, and correctness is a really good thing, but there are two halves to correctness. There's safety, which we all know about, and then there's something called liveness. Safety is the idea that
00:18:41
Speaker
something bad won't happen. If you can think of a bad thing, you can think of a way to prevent it. For example, Haskell and Rust prevent use-after-free. That's a bad thing; we don't want use-after-free, and both of those languages prevent it from happening. Liveness is the concept that something good will happen. And so
00:19:01
Speaker
you can think of memory safety as the safety half of things, and linear types as the liveness half of things. Once you start programming with them, you start seeing uses all over the place.
00:19:18
Speaker
Any time you're writing a comment like "make sure to remember to do this later", you start to recognize the pattern and think: oh, I need a linear type here. So you remove that comment, and you feel good about it, because you've replaced the comment with something the compiler enforces.
00:19:34
Speaker
And you get this good feeling that you know this is going to happen. You can't mess it up now; you don't have to rely on your memory. So I see linear types as the other half, next to safety, of this correctness thing we're all trying to get towards.
00:19:53
Speaker
Yeah. I'm willing to bet there's someone out there training a large language model that spent 17 hours chewing through the data and then forgot the call that writes the results to a file. Right. Absolutely. So these things could be very, very nice; save a lot of pain. Absolutely. So it seems like we could take this in two directions. Maybe we should start in user land first: how much does this change the way you have to program?
00:20:24
Speaker
It's a super good question, because there are a lot of interesting consequences. You can see how it could make some things easier, like I just described. There are also some things it makes a little harder. Let's see. Some of you may have read the
00:20:46
Speaker
article called What Color Is Your Function? It's about async and await, and I guess I can TL;DR it. When you use async/await, if you have a function somewhere deep in your call stack
00:21:04
Speaker
that wants to pause execution while it waits for some network response to come back, or for some thread in the background to keep running, you use a keyword called await, which suspends the current thread or coroutine in a specific way, so that the main event loop or runtime can reschedule your current coroutine and run another one. This is slightly different from threads; this is assuming you don't have threads, or have a good reason for not using them. The
00:21:35
Speaker
problem with that is that this await keyword, and the async keyword that often comes with it, are kind of viral. If a function has it, its parent function will probably have to have it, and then its parent function, and then its parent function. And this is all over the... Sorry, go on. No, I've run into exactly this when I first started using it in JavaScript. It's like, how do I get out of this loop where I'm just tagging everything as async? How do I actually run this thing? Yeah, yeah. And
00:22:05
Speaker
a lot of people see that as inherent complexity, like you can't really avoid it. That's wrong; you could totally avoid it. This is something I call an infectious viral constraint.

Impact of Programming Features on Code Design

00:22:20
Speaker
Basically, it's a constraint that spreads throughout your code base, even to parts of your code base that don't care. Say I have a helper function that
00:22:34
Speaker
takes in a list and calls a given function on each element: your map function. Suddenly your map function has to have an async variant, because what happens if the callback you take in is async? You're calling the callback, so you have to be async too. This kind of viral thing spreads all throughout your code base. And unfortunately, linear types have the same kind of behavior, but not in the function call stack:
00:23:02
Speaker
they have this behavior in your data. For example, if you have a list of
00:23:17
Speaker
these linear types that represent your threads running in the background, this list is now linear. If this list lives inside some thread manager object, that thread manager object is now linear too, because you can't accidentally drop anything on the floor that would indirectly drop a contained linear type on the floor. So suddenly, if you imagine your entire program's data hierarchy, everything up to the top
00:23:46
Speaker
has to be a linear type if it contains a linear type. Another example of this kind of infectious constraint is Rust's borrow checker. If you have an &mut, a mutable unique reference, every parent in your call stack has to have these mutable unique references. And that's the one downside, because we don't like infectious constraints like this. We don't like it when
00:24:15
Speaker
one feature spreads throughout the entire code base. Now it's rippled through my entire code base; what have you sold me? Exactly. And a lot of people are thinking: well, aren't you talking about static typing in general? These types ripple throughout your code base too, right? If you have a function that takes in a spaceship and you don't have a spaceship, that requirement has to ripple upwards until you find some caller that has a spaceship.
00:24:43
Speaker
The difference is that the first three things I described, async/await, the unique reference in Rust, and these linear types, can potentially spread throughout your whole program. But with static types, you can cut off the spread. You can limit the damage by, for example, taking an interface,
00:25:12
Speaker
or by finding the needed object inside another object you already have access to, instead of taking it as a parameter. In static typing there are a lot of escape hatches, ways you can contain the spread. So that's the downside of linear types: they're one of these infectious constraints. But didn't you say that in Vale it's opt-in? It is opt-in, yeah. So for example,
00:25:38
Speaker
if you have a spaceship, it doesn't have to be linear. But if it is linear, then the constraint spreads. And this is exactly why it's opt-in, by the way. This is the reason I didn't make everything linear. Because I wanted to, believe me, I really wanted to. I'm like, this is so cool. And I'm an engineer, so when I see a cool idea, I want to apply it everywhere, even where it shouldn't be applied.
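The data-side virality described above can be sketched the same way (invented names, runtime approximation of what Vale checks statically): a struct containing a must-consume value becomes must-consume itself, because dropping the container would silently drop what it contains.

```rust
// Linearity is viral through data: the manager inherits the token's
// discipline, so it too must be consumed rather than silently dropped.
struct ShutdownToken {
    live: bool,
}

impl Drop for ShutdownToken {
    fn drop(&mut self) {
        if self.live {
            panic!("shutdown token dropped without being used");
        }
    }
}

struct ThreadManager {
    token: ShutdownToken,
}

impl ThreadManager {
    fn new() -> Self {
        ThreadManager { token: ShutdownToken { live: true } }
    }

    // The only sanctioned way to dispose of the manager.
    fn shutdown(mut self) {
        self.token.live = false;
    }
}
```

Nothing in `ThreadManager` itself panics; it "becomes linear" purely by containing a linear field, which is the spreading effect being discussed.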
00:26:01
Speaker
So that's the reason everything isn't linear: I don't like infectious viral constraints. That's also the reason Vale doesn't have a traditional borrow checker like Rust. Yeah. OK. So I'm dying to get into the under-the-hood part, but I've got to ask you on a personal level: given your day job is Java programming, do you think there's any chance of retrofitting these kinds of linear type systems onto existing languages like Java? That's a super good question. I haven't thought about this in a Java context.
00:26:33
Speaker
I think you would have some success up to a certain point. You'd be able to annotate certain types as linear, if the compiler also came with the constraint that you had to choose which reference was the primary, owning reference. And then you could see making your way towards this higher-RAII linear type nirvana.
00:27:02
Speaker
But you'd run into the same problem that C++ and Rust run into, which is that they expect a zero-argument... no, wait, they don't. Java doesn't have destructors. OK, so that obstacle goes away; it might be possible. I'd have to think a little further about it. There might be a reason it wouldn't work that I can't think of off the top of my head. OK, we'll leave that as a research project. Perfect.
00:27:30
Speaker
So we've talked about how this changes things in user space, as a developer. But what does it mean for the compiler writer? Does it change how easy or hard it is to build the language? Yes, it does change how we need to build the language, and it makes some things a little more difficult. I think the biggest example was writing the list class, the hash map class, and the set class in the standard library.
00:28:00
Speaker
We needed to write them so that they could gracefully handle containing linear types. And there was this very specific line in the standard library, in the list class, where I had to express that a list is a linear type if it contains a linear type. And that
00:28:21
Speaker
hurt my brain so much. I remember slamming my head against the table at this one coffee shop, thinking: I know this is possible; there's definitely a way to make this work. And it finally came to me, after about my fifth dose of caffeine: we don't annotate on the list class that it's a linear type if T is a linear type.
00:28:48
Speaker
The struct for the list doesn't know anything about that. What we do instead is enable a zero-argument drop function only if the contained type T has a zero-argument drop function. And that was what made it click for me: we don't really
00:29:16
Speaker
tell a type that it's linear. What makes a type linear or not in Vale is whether a zero-argument drop function exists for it. There were a few other cases like that, but that's the one I most remember, and the one I have the most brain damage from. OK. Is that something that then gets exposed in some syntactic way to the developer?
00:29:41
Speaker
I mean, are you saying you just worry about writing drop functions, or is there some kind of... you're talking about predicates, right? You could have done it with a predicate that says: if I contain things with this trait, then I have this trait. Is there some kind of constrained-predicate thing leaking into user space in Vale?
00:30:05
Speaker
Not into user space, luckily. Well, it depends on what you mean by user space. If you, the library writer, put this line above your drop function, that's pretty much all you have to do. If you're using the list class, you don't need any special syntax to handle the list; you don't have to do anything differently. The only thing you have to think about differently is when you get that compiler error that says: hey, you just dropped this list of spaceships on the floor, what are you going to do about it? That's the only thing the user really has to think about.
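A loose Rust analogue of that conditional rule (invented trait and names; Vale expresses this as an annotation on the drop function): the zero-argument disposal on the container is a conditional impl, available only when the element type has one.

```rust
// "List is linear iff T is linear", modeled as a conditional impl.
trait ZeroArgDrop {
    fn dispose(self);
}

struct List<T> {
    items: Vec<T>,
}

impl<T: ZeroArgDrop> List<T> {
    // Available only when every element can be disposed with no
    // arguments; for element types without ZeroArgDrop, callers must
    // empty the list some other way, supplying whatever arguments
    // those elements demand. Returns the number of elements disposed.
    fn dispose(self) -> usize {
        let n = self.items.len();
        for item in self.items {
            item.dispose();
        }
        n
    }
}

// Plain integers are trivially droppable.
impl ZeroArgDrop for i32 {
    fn dispose(self) {}
}
```

The container never declares itself linear; linearity simply falls out of whether the zero-argument disposal exists, mirroring the insight described above.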
00:30:34
Speaker
But did you mean user space as including the standard library and so on? Yeah, I think so. You put on two different hats when you're writing the compiler versus writing your standard library. Have you given yourself another burden along with another set of possibilities? Yeah, it is kind of a nuisance to
00:31:00
Speaker
think about, for some of your types, whether they need to be linear. There's this upfront complexity, and this is part of the viral spread problem I was talking about, where you have to think, when you're writing a struct: is this going to contain a linear type? Actually, I take that back. You don't have to think about it when you're writing the struct; you have to think about it when you're writing your drop function. Is this ever going to take in a linear type?
00:31:26
Speaker
And if it does, will I eventually be able to just add this annotation to this drop function or not? It hasn't been that much of a problem in practice, but then I've only written a total of about 30,000 lines of Vale, and that's really not enough to get a good feel for the extent of the problem. So far it seems like a good trade-off; I don't know how it would work out in real life. Yeah.
00:31:47
Speaker
I can certainly see the upside. I don't know how programmers will take to it in the large, but you can almost never predict that. You just have to do your best with what tools you think are good. I think that programmers will like the trade-off.
00:32:03
Speaker
because it's really nice to know that you are not going to forget to remove this from the cache. Because if you've ever debugged cache problems, they're the worst. They're awful, right? But if someone's writing something simple, like a command line tool that just transforms this chunk of data to this chunk of data, yeah, this could be a nuisance if they come across it. I see this quite often working with Kafka: if you don't remember to disconnect from it cleanly,
00:32:32
Speaker
It works almost exactly the same, except the next time you run the program, you'll have to wait two or three minutes while it figures out if you're the second person connecting on the same group, or if you're entirely new and the old guy died. It's one of those problems, it's not the end of the world, but if you solved it cleanly for everyone, it would make everyone's life just that bit easier. And that's important stuff too.
00:33:01
Speaker
I don't feel we fully unpacked your other favorite cool thing, which is higher RAII. I want you to give me some more on that, starting with reminding me what the acronym is for. Yeah, yeah. So sorry, I wasn't very clear at all. This usage of linear types to track future responsibility, that's what I call higher RAII. So
00:33:26
Speaker
For like, you know, when you had that shipyard that was taking in the spaceship linear type, and also a few extra arguments, that was a certain pattern of using linear types to remember what you had to do in the future, to make sure the compiler enforced that you tore down this spaceship in the way you originally expected to. Okay, I've misunderstood. So the "higher" is referring to the fact that you can pass several linear types at once into the drop command.
00:33:54
Speaker
Higher, I don't really know why. Oh, yeah. OK, so the reason I called it higher RAII is because it's kind of a super powered RAII. RAII comes from C++ as far as I know. I've done a lot of research trying to figure this one out. I have no idea where the term came from, but the first use I saw was C++.
00:34:14
Speaker
It stands for resource acquisition is initialization, which is a horrible acronym, by the way, that means nothing. It's gobbledygook, right? Like I love the concept of RAII, and I cringe every time I have to explain the acronym to someone. So if you see me just dying inside, that's why. The acronym means nothing. The closest I can warp my mind to try and make it make sense is that I try and flip it: resource initialization
00:34:42
Speaker
is acquisition. I can't, it's so hard. Anyway, the concept is that you have a constructor where you take in some sort of arguments, and your object holds on to those arguments so that in your destructor you can use those arguments to tear down something. For example, a really common use case is a file handle.
00:35:06
Speaker
Back in C land, we've just got an integer file descriptor. And unfortunately, if you've got an integer file descriptor x, you can just drop it on the floor and you didn't close your file. And that's unfortunate. And then C++ came out with RAII, which is
00:35:25
Speaker
if you wrap it in a class which has a destructor that's automatically run, it can make the correct call out to close the file. But that doesn't quite capture the beauty of this linear types thing that I've been talking about, where it can take in extra arguments, and it can be called with other linear types handed in. And you can be sure.
00:35:55
Speaker
somewhere else in your program, they can be sure that you will eventually call this destructor with the right arguments. That's higher RAII. It's RAII plus linear types, I guess you could say. And I didn't know what to call it. I'm like, I can't call it super powered RAII, because, well, I could. The marketing department could. Yeah, exactly. It's just higher. I don't know if that was a good call or not. OK, fair enough. It's definitely better than RAII plus plus.
00:36:23
Speaker
Yeah, I did consider that. Yeah. And you're right. Okay. Where should we go next in the world of Vale? Because I know you're excited about regions. Do we want to go there next?

The Future of Regions in Programming

00:36:37
Speaker
Oh yeah, I love regions. So I think regions are, I hesitate to say this with too much confidence, but I really do think
00:36:49
Speaker
in 20, 30 years, and not because of Vale at all, but because of everyone's efforts on regions, I think in 20 or 30 years regions will be commonplace, and they'll be just part of the main paradigm of software, because of all the benefits they have. So let's see where to start with this one. Let me start in a reference counting world.
00:37:15
Speaker
I was just kind of playing with this the other day. Now, Vale isn't reference counted, but when I was thinking of regions, I'm like, hey, this could help reference counted people too. And I'm like, I should tell all my reference counting buddies about this. So here's what happens.
00:37:31
Speaker
Let's say that you have a function where you're writing a little roguelike game and you got your goblins running around and on every single turn, a goblin wants to figure out like, what do I do? And they usually have logic like, is a player nearby? OK, if so, I want to run at the player and be scary. Is the player not around? OK, maybe I want to walk around in a random direction. Is there
00:37:54
Speaker
Is there a garden at my feet? Because goblins love gardening. Everyone knows that, right? And if there is, you might want to do some weeding, like plant a few seeds. Just basically, you want to figure out what the goblin wants to do. And that is, just by its very nature, a read-only kind of operation. You read the world, and you figure out what you want to do. And you do that by comparing weights. You might do some pathfinding. There's a very interesting opportunity in there, since it's read-only.
00:38:23
Speaker
If the compiler and the language know that this operation is read-only, then you can mark it as a pure function. And I can feel the functional programming people all across the world being like, yes! We love pure functions. And I'll tell you why they're so cool and why they work really well with regions and reference counting and garbage collection. And that's because if you can mark this figure-out-what-I-want-to-do function as pure, and the compiler can track that sufficiently well,
00:38:52
Speaker
then it can know that during the function's execution, you might make a few more references to the level or other goblins or the gardens. But by the end of the function, these references will mostly go away. And so in the outside world outside of this function and all the data that existed before this function, their reference counts. This is the weird part.
00:39:19
Speaker
The reference count integers in those objects don't have to change. Since you know you're not changing anything in the world, they don't have to change. And you're probably thinking, well, wait a minute, what if you made a new reference inside your pure function to this other goblin, and then you return that reference? And it's like, OK, that's the spot where you'd want to dig through the hierarchy and then go reach into those pre-existing objects and increment the reference counts.
00:39:49
Speaker
But there are much fewer of those than the temporary objects you created during the pure function's execution. And I do have numbers for this now, actually. In just a regular sample program I ran, at least 65% of the references were these temporary kinds of references that didn't survive the pure function call.
00:40:18
Speaker
And those didn't need to, if you think about it in hindsight, those didn't need to do the increment and decrement on the objects they were pointing to, if they were pointing into the pre-existing data. This is making me think of garbage collection, where we say, OK, well, there's this huge sweep of things where we don't need to worry about any garbage that may be created. Yes, this is very similar to, I forget what it's called, the generational hypothesis.
00:40:48
Speaker
Yeah, it's something like that. But it's a very well-established observation in the garbage-collected world that if you have a function, well, no, just in general: most objects are short-lived. You use them once, maybe twice, a tiny little bit, and then they're just gone. And very few objects need to survive any longer than a few function calls. Regions are kind of a way to tell the compiler
00:41:19
Speaker
which objects are gonna survive and which objects are not gonna survive. And that's really nice because for reference counting, you can eliminate the reference count operations on the ones that don't survive. For garbage collection, I'm glad you brought garbage collection up, regions help garbage collection in that regard in that they can, in theory, I think, right, I've got no proof for this, sorry, they can, in theory,
00:41:46
Speaker
give the garbage collector a precise hint about where to do its collection. You can tell the garbage collector, hey, this function is a pure function. And garbage collector, you should copy the returned object out of that pure function's own private heap and then just blast away the heap. And garbage collection is really good at doing that.
00:42:16
Speaker
But this particular hint could make it even better. With careful use of this hint, you could remove these giant spikes.
00:42:30
Speaker
that build up over time. For example, without this, you might have a latency spike every five seconds, I don't know. But with this, you can smooth out those spikes by giving the garbage collector more targeted information about what regions no longer exist and what objects should be copied.
00:42:51
Speaker
Okay, so the upshot of this then is that we reduce the amount of time spent garbage collecting and the amount to kind of stop the world pausing. Yes. For something that sounds like it's not going to leak into programmer space. This is entirely under the hood in the language. Is that fair? Unfortunately, no. Yeah, this would require you annotating the function as pure and then
00:43:18
Speaker
Depending on who you ask, it'll also require, in the function signature... no, sorry, the function signature won't change except for that pure. Inside this specific function, nothing will change either. But let's say that you then call another function
00:43:38
Speaker
which is handling both, I'll call it pure data from the outside world, and also some temporary data that was created inside your pure function. It has to keep those separate, and has to know which is from which, quote unquote, region. And so those functions that you can call might have to have region annotations, or some other way to conceptually keep those data separate, so that the compiler can know, at the end of your pure function,
00:44:09
Speaker
the exact boundaries of the data that it can just throw away. Yeah, so this could require some annotations. There are some languages out there, I think 42, and that's spelled F-O-R-T-Y, and then the number two. Okay, that counts as a number of languages that support this.
00:44:32
Speaker
Yeah, no, no. Yeah, the 42 creator reached out and he's like, hey, Evan, I love Vale, except I don't love the annotations that you said you might have to have. And so we debated for a long time and he made some really good points. I still believe that annotations are a low cost way to get a lot more flexibility out of the system. But 42 values syntactic simplicity, and if you value simplicity, then I can definitely see why 42's approach could be really nice. Yeah.
00:45:04
Speaker
I'm okay with the syntax for it. The thing that worries me, if you say that this model is going to come in to be one of the standard paradigms in the next 20 or 30 years. Oh, yes. I can complete that argument. Okay. I'll give you my question and then please do. So it's like, how are you going to teach people both how this works? They've got a good mental model and the benefits of doing so, because there's a cost to learning more than there's a cost to typing annotations.
00:45:33
Speaker
Yes. And I can imagine, well, I've experienced, a lot of us kind of hear, like, wait, annotations? Uh-oh. Is this like borrow checking? Because that's really hard, right? We don't want to learn borrow checking. Well, a lot of us don't want to learn it. I love borrow checking. But its annotations are really hard, because whenever you find that you have to have annotations, you're in a case where the compiler can't figure it out for you.
00:46:00
Speaker
And the answer to that is that region annotations are actually really easy and really simple, because they don't come with this extra rule about, they don't come with the extra aliasability-xor-mutability rule. They don't come with the rule that every object has to have one writable reference or multiple reader references.
00:46:27
Speaker
So really, regions are just like any other static typing concern. If your function takes in a marine and a spaceship, this is equivalent to the difference between those two types. Consider a function that takes in an A apostrophe spaceship versus a B apostrophe spaceship.
00:46:56
Speaker
Those are just one extra minor step in the world of static typing. Those aren't a whole paradigm shift. And another way to answer that would be that this is inherent to what we're doing already. We kind of already mentally track what objects are in what regions. If you look at a pure function, or just look at a function,
00:47:25
Speaker
And someone asks you, is that pure? In other words, does that function modify anything from the outside world? You'd be like, no, no, obviously, from the name. It just says get. And if you're in Haskell, the answer is always yes. And then you can read the function and just look at any particular parameter and know
00:47:48
Speaker
Sorry, you can look at the function and any of its local variables, and you can kind of reason out: is this from before the function existed, or is this a temporary variable, or is this something that's gonna get returned? It's very easy to answer locally what those are, and there's no viral spread of complexity. There's no extra rules that come along. It's really just annotating, you know, what era this data comes from. Okay.
00:48:12
Speaker
And I'm getting the impression from this that if I add pure and it's right, then I just get some free memory performance boosts. If I add it and it's wrong, the compiler tells me I'm wrong. If I don't add it at all, it still works, I just don't get the memory boost. Yes, that's specifically how Vale works. Yeah. That's a particular property of Vale that I've been chasing since the very beginning.
00:48:41
Speaker
I believe that 90% of your program doesn't need to be optimized that hard. And in that 90%, you want to prioritize things like flexibility, simplicity, stable APIs, and so on. Maintainability, you want to make it map onto the real world and how humans conceptualize as much as possible because that's part of simplicity. But for that 10%, that's where we'd want to add annotations like pure. We'd want to use a few more linear types for performance reasons that I haven't really gotten into.
00:49:10
Speaker
That's when you want to be able to opt in to this extra 5% of performance, or sometimes more than that. Yeah, I really believe in opt-in. I believe in compiler hints like that to make certain areas much faster.
00:49:29
Speaker
Yeah, I can totally see the appeal of that. So perhaps we should talk about what your motivations are with Vale, because on the one hand, you make it sound like you want to be the guiding light for the next 20 or 30 years of programming. On the other, it sounds like this is just your memory research playground. Maybe it's both and more, but tell me.
00:49:55
Speaker
It's not much of that first one. I want to be one more hint to the world. There's a lot of people working on regions. Microsoft has a really cool project called Verona, which is blending regions and bump allocators and garbage collection in a really cool way that's going to have a lot of performance. There's that 42 language, like I mentioned. There's a bunch of these languages that, working together, will be the guiding light, I hope.
00:50:21
Speaker
And there's other languages like Austral, which is pioneering mixing borrow checking, like from Rust, and linear types. I'm really looking forward to that. So I'd say it's more of the latter, that this is just kind of my research project, and it gives me something to blog about. Because as you've seen, I love blogging, mostly as just a vehicle for delivering my snarky side notes into the world. I've seen that in your blog.
00:50:49
Speaker
Yeah, if anyone's read my blog posts, they can tell I have so much fun writing these little side notes, these footnotes. Yeah, so I think you asked, why Vale? The reason I'm working on Vale is because there's so many things that people don't realize are possible in the world today, and we kind of tend to get stuck in this mindset where, you know,
00:51:14
Speaker
like, you know, before Rust came along, we were in the mindset of: there's garbage collection, there's reference counting, and then there's things like C++, which is just unsafe with some other features, but unsafe. And then Rust came along and it kind of proved us all wrong, right? There's now this third memory safety approach.
00:51:31
Speaker
But then we kind of just fell back into the same mindset, right? When did Rust come out? I think 1.0 was 2015, and it was in beta for long before that, so I'm just going to say 10 to 15 years. For 10 to 15 years we've been in this mindset that there's just nothing else out there, there's just three memory safety approaches. And that's kind of sad to me, because I've known that that's been false for a long time. And also it kind of,
00:52:01
Speaker
it kind of takes all the fun out of the exploration when everyone believes that the problem is solved. And so I want to show people with Vale that there are a lot of other ways to do it. And that's why Vale's three main ways of doing things are generational references, which are the fourth memory safety model; it uses linear types under the hood, which are arguably the fifth memory safety model; and it uses regions, which are arguably the sixth memory safety model.
00:52:31
Speaker
Anytime someone's like, there's no fourth memory safety model, I'm like, well, here's three more. So now there's six. That's one of the big motivations for writing Vale, because I wanted to have something that I can blow people's minds with on the memory safety front. And then it turns out, in exploring that, there's 10 more past that, and I've just been collecting them over time. And I'm like, oh, this goes much further than I thought. That's the main reason I'm working on Vale. The second reason is that
00:53:00
Speaker
There's other features besides memory safety that are really cool that I think people will really like if they knew they were possible, like perfect replayability.
00:53:09
Speaker
take non-determinism

Deterministic Execution and Debugging in Vale

00:53:10
Speaker
out of your entire language, and make it so the entire language is predictable, then you can get really cool benefits, like the ability to have a beta test where you hand a specially compiled program out to your beta testers who have opted into this particular feature, where it records their inputs.
00:53:33
Speaker
And if you have a recording of their inputs, and you know the entire language is deterministic, in other words there's no randomness creeping in from the language, then that means that they can just send you that .log file, and then you can just hit play, and you can reproduce their program. So if you've ever had some frustration dealing with repro steps at work, that problem is just gone. They send you the .log file, and you just hit play, and you're done.
00:54:01
Speaker
And that was actually kind of tricky, because non-determinism can creep in in so many places. The worst place is just memory unsafety, right? So suddenly you have to have a memory safe language. The second hardest part is threading. And so now you have to figure out a way: how do I make this program run deterministically, but also have threading, right?
00:54:24
Speaker
And there are ways, and I won't go too deeply into them, but I'll leave that as an exercise to the reader. OK, I think I think I want to unpack a bit of that, because I've had that kind of experience in the early days of Elm, where it had a kind of time traveling debugger that worked by recording all the messages that made a state change on the system.
00:54:47
Speaker
Yes. It kind of falls down as soon as you start doing networking stuff. And because it's compiled to JavaScript, they didn't really worry about threading stuff. How do you actually get rid of enough non-determinism for this to work? Yes. Okay. So I'm glad you mentioned JavaScript because that is the one language that has any hope of pulling this off in today's ecosystem.
00:55:18
Speaker
Yeah, I actually thought there were other languages that could do this, because if you have a memory safe language that has no threading, you're pretty much like 90% of the way there. Python would also be close, but it turns out a lot of languages like Python and C sharp let non-determinism creep in in very specific places, like you can count them on your hand, and you're like, oh no, you're so close. For example, C sharp: the string class's hash code function
00:55:45
Speaker
is non-deterministic between runs. If you've got a string, "hello, my name is Evan", and you get its hash code and you just print it out, you get back, I don't know, 123 million and four, right? And then you run the program again, and you get a different number. And I'm like, why? What could be non-deterministic about this? Turns out it's on purpose, for security reasons. And there's ways to do this perfect replayability and determinism thing without compromising security, which is also cool.
00:56:12
Speaker
Oh, yeah, so the threading thing. The trick to making this work with threading is that you would need to record the, not the timings, you would need to record the sequence numbers of every message you pass in between threads. And what do I mean by that?
00:56:32
Speaker
So think about Golang for a second. Golang is just a regular garbage collected language where you have channels to communicate between two threads. I won't talk about mutexes yet, let's pretend there's no mutexes yet. You just send a "hey, this spaceship fired this missile" message from thread A to thread B.
00:56:58
Speaker
So if you want to keep things deterministic in the presence of threading, you would attach a counter, an integer to that message. We sometimes call them a sequence number in the networking world. And it's basically the source thread saying, hey, this is the 15th message I have sent on this particular channel. And
00:57:24
Speaker
If you are in recording mode for this program, when you receive that message, you record into your recording file that, hey, the next thing I'm doing is receiving message number 15 from this channel. Right. And if you are in replaying mode, and you're listening on a bunch of channels at once,
00:57:54
Speaker
to keep things deterministic, you would just look at your log file from the previous run and be like, oh, okay, I should be paying attention to this channel over here, waiting for message number 15 specifically. And you might be wondering, why do we even have the integer, right? The integer is for
00:58:13
Speaker
when you have a many-to-one channel, where you have multiple people writing into the channel. The integer helps you know who it's coming from, and what to wait on, and what particular message to wait for the arrival of. So that's kind of how you would have perfect replayability in a threading world. Okay, that sounds good, but it sounds like you've also just created a headache for anyone writing a network library. Now they have to
00:58:43
Speaker
write it in two modes, whether they're faking sequence numbers or actually doing the job in hand. Is that true? No, actually. Library authors don't really have to do anything to support this. The way we deal with things like networking, and reading and writing files, or reading and writing configs, or user input, is that
00:59:07
Speaker
the language itself will intercept the data coming in from the FFI boundary. So what I mean by that is, let's say that you are opening a file. Under the hood, that's going to be a call out to the OS. Well, it's going to be a call out to a wrapper, as we do in any language: we make a little wrapper library around C's fopen and fread and fclose and fwrite. And that wrapper library
00:59:34
Speaker
is so that you can FFI out to C, you can call out to unsafe C code, you can call into other languages' code, right? The language knows about that FFI boundary. It has to know about it, because it has to be specially crafted. Sorry, the language has to support talking to other languages, and the language knows very well what's going on there. So in Vale, I forget the keyword for that, I think it's just extern, like C's.
01:00:01
Speaker
So when it sees extern on a function, it knows that's going to be out in C or Rust or Zig or something like that. And it also knows that if you are making a special build for this replaying and recording, it should also insert extra assembly instructions that will copy any data that is coming in from FFI.
01:00:22
Speaker
So if you're doing a function call out to C, then there will be extra instructions that will copy the return data that's coming from C. If C is calling into you, into Vale, then these extra instructions will copy all the parameters. And that means that the language knows the specific, well, yeah, any language will know the specific format of any data coming across the wire.
01:00:48
Speaker
And for the case of fread, it's just going to be a buffer of bytes, and so the language will literally just copy the buffer of bytes straight into the recording file. And sometimes that can get big, so I'm kind of tossing around the idea of enabling ignoring files which are just constant, static. A lot of files aren't going to change between runs of your program, so there could be whitelists like that. But the short answer is that
01:01:14
Speaker
the language can specifically automatically record what comes over the FFI boundary.
01:01:21
Speaker
OK, and then at debug time, you flip another switch that says, OK, replace all those external calls with a replay function. Exactly. Yeah, yeah. OK. You know what this reminds me of? I see a lot of people in the kind of Kafka stream processing world and the actor model world. And they're both trying to do this thing where
01:01:46
Speaker
They've got a network of data coming in from the outside world, transforming, maybe having concurrency issues, and figuring out how to debug that without completely rerunning the whole pipeline. Do you think you could, does Vail have anything to teach people debugging that world? Give me a second, because that's kind of blowing my mind. Hold on.
01:02:13
Speaker
I think it could. I don't know much about the Kafka world because we use different technologies at Google. But it seems to me that recording messages and their sequence numbers is probably a solved problem in the Kafka world. Whatever solutions you currently use probably involve recording the messages and putting them on a file somewhere.
01:02:37
Speaker
And I wouldn't be surprised if the missing piece in that world was the non-determinism of languages. Like, if Vale didn't exist and you asked me that question, knowing nothing about Kafka, how would you design perfect replayability in a Kafka world? I'd be like, you can't, it's impossible, languages suck because they're all non-deterministic. And like C sharp's, you know, C sharp's string's
01:03:04
Speaker
hash code function just returns a random thing every run, and like, why? You can tell I'm scarred by that. And you know what, side rant here, they do it on purpose. Go's hash map, or dictionary I mean, it intentionally returns a random sequence every time you iterate. And I'm just like, please.
01:03:24
Speaker
Anyway, I should be kind. They do that for a good reason: it's to protect against hash attacks, because there's certain exploits where if an attacker crafts a certain input that they know will end up in the same bucket in the hash map, then they can just run memory out and reduce your runtime. Sorry, maybe not run memory out, but make it so your application slows to a crawl, because they just turned your constant time hash map lookup into a linear time linked list search.
01:03:54
Speaker
Anyway, but that's not a problem. You can still have that, as long as you record all of the inputs to your program. And you don't, well, I'll leave it at that. Anyway, so back to this Kafka thing. If I didn't know about Vale, I would say you can't, because languages are non-deterministic. You can't guarantee that this Java program or this C sharp or Go program will return the same thing every time.
01:04:21
Speaker
You could get close, and you could have some success, if you have really strict guidelines on not using certain non-deterministic things. Like you'd have to use a specialized map function, sorry, a specialized map class in Go. You'd have to somehow avoid string hash codes in C sharp, and that's really hard, I've tried it. JavaScript would have some hope here. So if you had Kafka with JavaScript, you could probably get pretty close with that.
01:04:53
Speaker
But Vale's contribution to this area and this question is that it shows that languages can be deterministic. And so I guess if you somehow note, for any given operation, what were all the packets that went through, what are all the machines that they went through, and you collect the recordings for all those machines,
01:05:19
Speaker
then you could perfectly replay anything even in a distributed server world. There's hope for a better world of debugging. I think I've heard you use this term, Heisenbugs. Bugs that aren't there when you're looking.
01:05:35
Speaker
Oh, I hate those. Okay, Heisenbugs. So anyone who's done enough multi-threading stuff has dealt with Heisenbugs. Multi-threading is just hard to debug, because the CPU just decides, based on the phase of the moon and what day it is and butterfly effects in Japan, whether or not it's going to schedule this thread at the same time as this thread.
01:05:58
Speaker
And it's only when this thread is scheduled at the same time as this thread that this bug appears. And so when you're trying to reproduce a problem that the user sees, and if you can even figure out that it's because these two specific threads run at the same time, that's when the bug happens. How do you reproduce that? You can't control the CPU and what threads are put onto what cores.
01:06:20
Speaker
So there's kind of two hard parts there: even identifying that that's the problem, and then reproducing it once you know that's the problem. That's a Heisenbug, because you think the bug is there, you know it's there, but every time you look at it and try to debug it, it just disappears, because you have no control and it depends on the phase of the moon. Yeah. Yeah. I'll add in there part three to that, which is: when you are pretty sure you've solved it, are you absolutely sure you've solved it?
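Why only some schedules trigger the bug can be shown without any real threads. The sketch below (a simplified model, not Vale code) enumerates every legal interleaving of two "threads" each performing a non-atomic increment, which is really a read step followed by a write step. Some orderings produce the correct count of 2; others lose an update and produce 1, which is exactly the scheduler-dependent behaviour that makes Heisenbugs so hard to reproduce.

```python
from itertools import permutations

def run_schedule(order):
    """Execute one interleaving of read/write steps against a shared counter."""
    counter = 0
    local = {}  # each thread's privately-read copy of the counter
    for op, tid in order:
        if op == "read":
            local[tid] = counter          # thread reads the shared value
        else:
            counter = local[tid] + 1      # thread writes back read value + 1
    return counter

# Two threads, each doing read-then-write (a non-atomic `counter += 1`).
steps = [("read", 0), ("write", 0), ("read", 1), ("write", 1)]

# Keep only schedules where each thread reads before it writes.
valid = [p for p in permutations(steps)
         if p.index(("read", 0)) < p.index(("write", 0))
         and p.index(("read", 1)) < p.index(("write", 1))]

results = {run_schedule(s) for s in valid}
print(results)  # {1, 2}: only some interleavings lose an update
```

Out of the six legal interleavings, only the two where one thread fully finishes before the other starts give the correct answer; the rest silently drop an increment. A deterministic replay of the failing schedule is what turns this from a Heisenbug into an ordinary bug.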
01:06:45
Speaker
Yeah, but if you have the recording from the user and you can use that to reproduce it, then you can see if your fix worked.
01:06:54
Speaker
And modulo large file sizes, you could keep that in your test suite to prove that scenario. Yeah, yeah, that would be good. So it seems like there is a lot in your language research playground of Vale. I think we should wrap up. So should I be pointing people to Vale to play with, learn from, co-develop with you?
01:07:19
Speaker
Um, so I wouldn't suggest that anyone use it for anything serious, because, you know, it's still very much a prototype and the compile times are unfortunately still pretty slow. I would point them at my blog, because I like writing stuff and I've heard people like reading it. That's where I report the results of everything I'm doing and the directions I'm going to explore. So yeah, I'd point them at the blog.
01:07:43
Speaker
OK, yeah. There aren't that many people working in language research that are that readable. There are a lot of academics writing academic papers which aren't famed for their readability. Your blog is a nice way to get into this stuff, so I shall link to that in the show notes. Evan, thank you very much for joining us. It's been fascinating. Thanks for having me.
01:08:06
Speaker
Thank you, Evan. Now, I love it when you can't quite tell if someone's trying to push the boundaries of what's possible or if they're just playing with ideas, often because the two go hand in hand so well. I'm thinking of a great book about Richard Feynman, who was a physicist and an inveterate tinkerer. I think that's a spirit we could all learn from. I'll put a link to a book about him in the show notes, because I really enjoyed that.
01:08:33
Speaker
Also in the show notes, of course, you will find links to Vale, some of the other languages we mentioned, and some of the other sites that came up in that discussion. If you scroll all the way through the show notes to find those, you will also pass links to the like, rate and share buttons. If you've enjoyed this conversation, please take a moment to click them, and make sure you're subscribed, because we'll be back next week with another journey into programming at play and the future of software development.
01:09:01
Speaker
Until then, I've been your host, Kris Jenkins. This has been Developer Voices with Evan Ovadia. Thanks for listening.