
Taking Erlang to OCaml 5 (with Leandro Ostera)

Developer Voices
2.2k Plays · 1 year ago

Erlang wears three hats - it’s a language, it’s a platform, and it’s an approach to making software run reliably once it’s in production. Those last two are so interesting I sometimes wonder why those ideas haven’t been ported to every language going.  How much work would it be?

This week we're going to dig right down into that question with Leandro Ostera. He's been working on Riot - a project to bring the best of Erlang's runtime system and philosophy to OCaml. But why OCaml? Is it possible to marry OCaml's type system with Erlang's dynamic dispatch? And what is it about the recent release of OCaml 5 that makes the whole project easier?

Leandro’s Blog: https://www.abstractmachines.dev/

Why Typing Erlang is Hard: https://www.abstractmachines.dev/posts/am012-why-typing-erlang-is-hard/

Riot: https://riot.ml/

Riot source: https://github.com/riot-ml/riot

ReasonML: https://reasonml.github.io/

ReScript: https://rescript-lang.org/

Leandro on Twitter: https://twitter.com/leostera

Kris on Mastodon: http://mastodon.social/@krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Kris on Twitter: https://twitter.com/krisajenkins

--

#podcast #softwaredevelopment #erlang #ocaml #softwaredesign

Transcript

Software Reliability: Two Philosophies

00:00:00
Speaker
Once we've shipped our software, we want it to keep working. That's a given. But there are two main schools of thought on how you guarantee the software is going to work once it's in production. The first is get it right before you ship it. That's professionalism. You add tests, you add types, you add a QA department, and you make sure it's rock solid before you press deploy.
00:00:25
Speaker
The other approach, very different, is the Erlang philosophy. You just say, it's going to crash sometimes, so let it crash, and then put your engineering effort into dealing with crashes so gracefully that nobody notices. Now there are a number of projects springing up these days looking at that choice and saying, why not a bit of both?
00:00:48
Speaker
So this week, we're going to turn our attention to Riot, which is the brainchild of Leandro Ostera. And it's his attempt to bring the runtime system of Erlang to the compile-time system of OCaml. How do you do that? How do you take the best ideas from Erlang's BEAM and make them work with one of functional programming's favorite compilers? How do you make it perform? And what is it about OCaml 5 that makes that approach more feasible than ever?
00:01:18
Speaker
I can promise you that Leandro is going to make it enormous fun to find out. So let's get

Meet Leandro Astara

00:01:23
Speaker
going. I'm your host, Kris Jenkins. This is Developer Voices, and today's voice is Leandro Ostera. My guest today is Leandro Ostera. Leandro, how are you?
00:01:47
Speaker
Hi, Kris. Happy to be here. Thanks for having me. Glad to have you. I gather you're house-sitting in Stockholm this week. Correct, correct. I am cat-sitting for a friend, actually. So, Leia the cat, a little white cat with, like, heterochromia, two different colors. Oh, two different colored eyes. Yeah, like David Bowie's syndrome. Exactly. Kind of like a Bowie cat. She might swing by here and there. Hope that's okay. Awesome. Yeah. Yeah, we love guest visits by cats. Nice.

Experience in OCaml and Erlang

00:02:14
Speaker
So I've got your, God, you're doing a lot of interesting stuff in the world of OCaml and Erlang and the thing. Thanks. Yeah. I mean, there's quite a few projects that have been happening in that space. I've been working in both ecosystems for nearly a decade in different areas, but open source stuff more recently, right? Yeah. The latest one is Riot, which I think is one of the things we'll be touching on today.
00:02:42
Speaker
Absolutely. That's a long time to be in that space. You must be an early adopter of, kind of, OCaml, definitely Erlang, right?
00:02:50
Speaker
Oh, I don't know if it's in that order, because I feel like Erlang has been much more stable for building systems for longer. Not to say that OCaml isn't, but for example, I remember back in 2014, 2013, I was sort of stuck in that rut of web developers realizing for the first time: everything I do is move JSON in and out of a database into a form on a web page.

Erlang vs OCaml: Stability and Practices

00:03:14
Speaker
It's like, do I really want to do that?
00:03:16
Speaker
There has to be more to this. My problems maybe are not big enough, right? And I told myself Erlang because I thought that...
00:03:23
Speaker
not having proper academic qualifications as a software engineer or a computer scientist, it would be harder to get access to bigger problems unless I taught myself the tools that are normally used to solve bigger problems. So I went through the Learn You Some Erlang book and it was pretty obvious to me that this was definitely a tool for larger problems than I had dealt with before, compared to Ruby.
00:03:50
Speaker
Yeah, and I think even then, there were already quite a few Erlang jobs for people to pick up. It's still a niche. We're not talking Go here, we're not talking Java, right? But it was already pretty well established in the sense that it had practices around how to start a project, how to grow it, how to ship it, how to maintain it.
00:04:11
Speaker
like Erlang coming from the telecom industry, where they had very strict requirements from the industry to be able to ship something far away into a switch and be able to reprogram it remotely. They needed all of this sort of dogma around how to build something.
00:04:33
Speaker
Established practices, let's say. Let's say established practices, BCPs, best current practices. Yeah, exactly. So I feel like Erlang has been more established from that angle than OCaml has. That's interesting. We kind of associate Erlang with the whole scalability fault tolerance thing, but the idea that it's established a lot of productionization

OCaml 5: Concurrency Enhancements

00:04:55
Speaker
stuff. Yeah. I'd not thought of that, yeah.
00:04:58
Speaker
I'm not going to lie, it's not the kind of productionization. Did I say that word right? Yeah, you said it right, but I think I might have made it up. No worries. It's not a very pretty story, to be honest. It's kind of like an engineer kind of story. You have the tools, you have to figure out to assemble them together. But just having
00:05:21
Speaker
OTP, right, which used to stand for Open Telecom Platform, which is like the big framework of philosophies and actual code, right, for designing Erlang systems. It's something that most programming languages don't have, right? Most languages evolve that organically. But Erlang was like, hey, you want to use the language? Here's a big book of how to build systems in Erlang. Yeah, yeah. So we're going to get into exactly that, the fact that most languages don't have Erlang.
00:05:49
Speaker
and what you've been doing about that. But in order to get there, I think we need to start with OCaml. What tempted you away from Erlang into OCaml? Ooh, that's a good question.
00:06:02
Speaker
I think one of the reasons that I jumped ship, or let's say I had one foot on each ship, was that Erlang, even though it's super scalable in terms of runtime performance or how many

Why OCaml Over Erlang?

00:06:15
Speaker
concurrent users you want to have, I think it has a problem, and it's a problem that every dynamic language has.
00:06:21
Speaker
And that is, the more code you have, the more complexity you have, the more cognitive load your developers need to be able to juggle. So as your system grows in terms of complexity, like features you have there and the interactions between them, but also the amount of code you have to deal with, like different dimensions of scalability, you need more and more tests, for example, to ensure that some basic things are not going to blow up in your face.
00:06:49
Speaker
It does have a lot of tools, and I guess we'll talk about them at some point, to prevent those failures from taking the entire system down, right? But that still doesn't mean that a single person can onboard into a large code base, let's say 500,000 to a million lines of Erlang, and comfortably make a small change, like confidently make that change, knowing that, oh, okay, everything's gonna be okay, right? You might have to run batteries of tests. But they don't show the absence of bugs, they just show the presence of bugs, right?
00:07:17
Speaker
So, from that point of view, at some point, when I was working at Klarna as an Erlang programmer, a colleague of mine, Daniel McCain, excellent engineer, by the way, he kept pestering me to learn Haskell. He said, you should be learning Haskell. I said, why? What is so good about Haskell? And we even had a couple of sessions where we chatted about Haskell and we wrote a little basic interpreter in Haskell. And I was like, yeah, I guess I can see how this works, but it didn't quite click for me.
00:07:47
Speaker
And through that conversation, I started looking into other programming languages that were typed, right? And obviously, you know, you have Haskell, you have the Scalas of the world, right? And I found Idris, which was maybe... Idris.
00:08:02
Speaker
Yeah. Wow. Not many people go on that journey. Yeah. Okay. Especially not straight from dynamic to something like that, right? Yeah. Like I had experience with typed languages such as like C before, but nothing like Haskell or Idris. So I wrote a little bit of Idris and I thought this is really cool, right? Like the ability to have data that carries proofs, right? Like the auto-proof feature that they had was like mind-blowing. To this day, I don't think I fully comprehend it, right? But
00:08:32
Speaker
Well, it's kind of a research language, so you're allowed not to understand the whole thing because the author doesn't. Exactly. They're still researching a lot of things. I think the main idea I got from that was that instead of
00:08:46
Speaker
building reliability in your systems the Erlang way, which is having supervisor trees that make sure that the application is reliable or resilient to failures that will happen. The Idris approach says, we're going to build an application that by construction, by design, can never fail.
00:09:03
Speaker
or, you know, handles all of the possible errors. So when I took that idea, I was like, oh, I love that. How can I use that? What else is out there? And so I found Real World OCaml. Just dialing down the type safety to something that I could understand. Yeah. Yeah. Idris might have a fair claim to be the peak of type-system astronautiness.
00:09:27
Speaker
Yeah, yeah, I think so. And as you come slightly back down the mountain to breathable air, you probably do get to OCaml. Yeah, it's like, in that spectrum, OCaml is the Go of functional programming languages. And I'm like, yeah, I'm going to stay there. So what happened next with your OCaml journey?
00:09:48
Speaker
So I started learning OCaml, and I started doing some interviews in OCaml. And I had one interview that went very poorly, because I didn't know as much OCaml as I thought I did. So they asked me, OK, let's just build a transaction lock between these processes. And I had to Google how to open a file. I know it's embarrassing.
00:10:10
Speaker
Yeah, but over the years, I've been trying to use more and more OCaml in different places. Years later, I joined a startup and we had to build tooling internally. We built a system in Erlang and we had to build some tooling to be able to synchronize data, essentially a model, across the backend and the clients. We had to do a lot of
00:10:35
Speaker
let's say, mix and matching of external data sources into our own data model. And since I had been working before that at Spotify, and I have been exposed to ontologies, I thought we should build an ontology for this. And we built a code generation tool in ReasonML, which is just a new syntax, or I guess not so new anymore. But it was a new syntax for OCaml that was more approachable. Yes, it's just syntactic sugar on top of OCaml, ReasonML. Correct, yes. Essentially, it's just like, what if OCaml had semicolons?
00:11:05
Speaker
The most important question in programming. What if we added semicolons? Exactly. Would semicolons fix everything? And it turns out it kind of did for a lot of people, right? OK. So we built these tools there, and we were able to maintain sort of the generation of Erlang code and also JavaScript code, right? All from that tiny little project. And that was really nice. And since then, I've been on and off in different companies using either ReasonML or OCaml, and more recently ReScript, right?

Rescript for Web Projects

00:11:35
Speaker
What's ReScript? I've not heard of that one. Glad you asked. So OCaml eventually got this new syntax. It was called Reason. But the community in Reason had sort of two purposes. One part of the community really wanted to be closer to the web. They wanted just to bring the OCaml type system to the web.
00:11:54
Speaker
And the other part wanted to use the Reason syntax to write native programs. And the tool chains kind of started diverging. And at some point, there was this hard split, a whole rebrand, where people said, you know what? We're going to grab whatever we have right now, and we're going to start building exclusively for the web. And we're going to hyper-optimize the language, the syntax, the tooling, so that adopting the language in a web project is just, mwah.
00:12:21
Speaker
Chef's kiss, right? And they did. It's just a blast of a language. Like today, if you ask me, hey, Leandro, let's build a new startup and we'll do web things. I'll be like, yeah, we're going to use ReScript for that, for sure. OK, cool. And that presumably just compiles to JavaScript?
00:12:39
Speaker
Correct. Yes. So there's this whole compiler that used to be called BuckleScript. It's called Melange these days. It's a source-to-source translator and it compiles to super idiomatic JavaScript.
00:12:53
Speaker
It's kind of scary. When people look at it they go, whoa, how did it do that? Because it even includes support for JSX syntax. So you can do your typical React-style components and everything. And it translates those correctly. And you go, holy crap, this is excellent. That sounds like a lot of fun. That sounds like something we could dedicate a whole podcast to. Yeah, absolutely. I will give you directions of who to talk to for that. Cool. But you stayed more in the back end world, right?
00:13:22
Speaker
A little bit. To me, the distinction, it's more about what kind of domain problems you want to solve and what customer problems we want to actually get into. Because for a lot of customer problems, you say, no, I don't want to solve that as a company. So most of my career, I've worked with startups. So I've gotten used to having to move around and say, OK, now we need you to do infra or DevOps or back-end or data engineering or this and that.
00:13:49
Speaker
In fact, today, my day job is being a product manager. So I literally don't write any code today. No. Yeah, exactly. No. I'm actually very happy. But anyway, so I think I did a lot of back-end work because it's a place where you have the most freedom to make bad decisions, right?
00:14:09
Speaker
In the front end, there's a lot of constraints from expectations from the rest of the web community. You want to use React, for example. It comes with a bunch of different... Everybody still does things their way. But there's a bunch of ideas and frameworks and sub-frameworks and libraries that just put a lot of constraints on you. You have a lot of expectations about the right way things should be. Exactly.
00:14:38
Speaker
Yeah. In that sense, I suppose the backend has fewer stakeholders to worry about the way you're solving the problem. Yeah. Yeah. It's like, as long as you give me an API and the API is doing the thing, then I don't really care how you do it underneath for the most part, right? Yeah. Yeah.

OCaml's Multi-threading and Concurrency

00:14:54
Speaker
Okay. So we're going to get into the Erlangness of OCaml that you've been working on.
00:15:01
Speaker
There's one more step I think we need, which is you told me that this was changed significantly by the introduction of OCaml 5. Correct. So tell me what's interesting about OCaml 5. Right. So prior to OCaml 5, OCaml 4 and below have been historically single-threaded, right? If you wanted to run an application, it would run on a single OS thread, even though there is something called Thread in OCaml, which is not really a thread, it's just another thing. And please don't use that.
00:15:33
Speaker
OCaml 5 introduces two things. The first one is what we call domains, which are actually OS threads. I guess they call them something different because there's some more magic to them, right? That allows the garbage collection to work well across them and so on. But basically, you can say, hey, OS, I need a thread. And then you get a second thread. And now you can actually run parallel code between those two cores. And the second thing is that it introduces
00:16:02
Speaker
effect handlers. These used to be called algebraic effects, but apparently, I've been corrected since the presentation I gave a couple of days ago at BobKonf. They're not called that anymore. Now they're called effect handlers. The algebraic part just was dropped. Is that pure marketing because the word algebraic is scary?
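As an aside, the domains mentioned a moment ago are exposed in the OCaml 5 standard library as the Domain module. A minimal sketch of spawning real parallel work (the arithmetic is just a placeholder):

```ocaml
(* OCaml 5 domains: spawn a real OS-level unit of parallelism,
   do work on it, then join to collect the result. *)
let () =
  let d = Domain.spawn (fun () -> 21 * 2) in
  (* the main domain could do other work here, in parallel *)
  let result = Domain.join d in
  Printf.printf "%d\n" result  (* prints 42 *)
```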
00:16:21
Speaker
I don't think anything about OCaml is pure marketing to be honest. Fair enough. Yeah. OK. Effect handlers. Tell me what an effect handler is. All right. So an effect handler is basically
00:16:34
Speaker
A try-catch, right? That's the easiest way I can put it, so everybody in every language can be like, oh yeah, I grasp what they do. You have exceptions in most languages; most of the time you can do raise or throw or something like that, and then you raise that exception, and as long as you have a try-catch somewhere above that, like in the hierarchy of the program, then you'll be able to say, oh, I will catch that exception, and then I will do something.
00:16:58
Speaker
Now, the difference is that once you catch that exception, you can do something and then you can say, let's continue the code exactly where the exception was raised. Now, when you do that, if you
00:17:12
Speaker
generalize it, or change the name from exceptions to effects, then you can perform an effect anywhere in your code. As long as you have the try-catch, which we call handlers, then you'll be able to intercept that effect, do some work, and then eventually, if you want, resume the code exactly where it was left.
00:17:30
Speaker
OK, give me an example of that that isn't exceptions. Yeah, absolutely. So for example, I think the example I want to give you is how we use handlers in Riot. But we'll get there. Let's say you have a function called random. And it will give you a number between 0 and 1. You can implement that as an actual function, or you can implement it as an effect.
00:17:55
Speaker
So if you implement it as a function, if you want to replace what that function does, you kind of have to have something like a mock or inject a function into your work, right? So you can replace it. And so if you want to write a test where you want to see what happens when the random returns zero, right?
00:18:12
Speaker
then you can do that. Now, let's say we don't have a function, but we have an effect. So we call it perform random number, right? Now, that code without a handler doesn't do anything. In fact, it fails in the same way that raising an exception without a try-catch does, right? Unhandled exception, unhandled effect. Compile time error.
00:18:32
Speaker
No. Oh, OK. No. Not compile time. Even in OCaml, exceptions are not typed. Obviously, there's a type of exceptions. You can pattern match them with types. But they're not tracked in the signature of the functions, right? OK. Which is different from other languages, like even Java, where you say, this has checked exceptions, right? Or Swift. I think Swift does it super nicely, where you can say, this function throws. And then if you call that function inside of a function that doesn't throw, then you get a compile time error saying, hey, you need to tell it that this function also throws, and so on.
00:19:02
Speaker
Anyways, now we want to grab that code that has an effect and we want to run it for a few different tests. In one test, we just wrap it with an effect handler that when it intercepts the random effect, it returns the value zero.
00:19:23
Speaker
OK, yeah. And now we continue the test, the code. So now, random, that performing of that effect gets replaced by the value 0, and you continue the execution of the program.
00:19:35
Speaker
You can see how this can be used to mock things in some ways. It can also be used, for example, to replace the kind of random number you have. You could go from something that's pretty naive to something that is cryptographically safe without changing your program at all. It's something that needs an external key to be plugged into the system.
00:19:58
Speaker
I think it's a super interesting way of detaching how the effects are performed from where the effects are needed. And that lets you customize your program in interesting ways. Yeah, there's a hint of dependency injection in this. But it plays out differently in code.
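The random-number example just described can be sketched with OCaml 5's Effect module. The effect name and the run_with_zero harness here are hypothetical, invented for illustration:

```ocaml
open Effect
open Effect.Deep

(* Declare an effect: "give me a random float". *)
type _ Effect.t += Random : float Effect.t

(* Client code just performs the effect; it has no idea how it's handled. *)
let roll () = perform Random

(* A test harness installs a handler that answers 0.0 and then resumes
   the computation exactly where the effect was performed. *)
let run_with_zero (f : unit -> 'a) : 'a =
  match_with f ()
    { retc = (fun x -> x);
      exnc = raise;
      effc = (fun (type b) (eff : b Effect.t) ->
        match eff with
        | Random -> Some (fun (k : (b, _) continuation) -> continue k 0.0)
        | _ -> None) }

let () = Printf.printf "%f\n" (run_with_zero (fun () -> roll () +. 1.0))
```

Without any handler installed, calling roll () raises Effect.Unhandled at runtime, which matches the "unhandled exception, unhandled effect" point made above.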
00:20:16
Speaker
Yeah, it doesn't look as much as, well, I will declare my dependency. It's more like you will have a function called random that under the hood just calls perform this effect, right? Yeah. So to you, the user of this library, the random library, you're just calling random, right? But the provider of the library gives you this effect handler that you wrap your program around. And this is why you'll see a lot of the newer concurrency libraries for OCaml coming with a top-level run function.
00:20:46
Speaker
Because that function installs the handler. It's like saying, you know, we have that try catch around your entire program, right? Right. So it's like building up the program and then when you're ready to run it, saying what context it runs. Yeah, exactly. Yeah. OK. OK. I'm with you. I'm with you. So those two things, why do they? So threads and effect handlers.
00:21:11
Speaker
Why does that lead you to think now is the time to start porting my favorite bits of Erlang to OCaml?

Riot's Competitive Edge

00:21:19
Speaker
Right. Yeah. So the main reason is that we didn't need the multi-threaded part to do this. But now that we have it, we can do it really well. We can actually, for some use cases, maybe even be competitive with the Erlang VM, which I think is really interesting.
00:21:40
Speaker
Effect handlers allow us to say you have a function, which will be your process. And when you want that function to receive a message, for example, we will intercept that effect and we will suspend your process.
00:21:57
Speaker
which means we will not immediately continue the execution of your process, your function. And that allows us to say, as a function executes, and a function corresponds to a process, so it could be like a recursive function, for example, right? As it executes, eventually, we will know when it's time to say, all right, enough, suspend it, let's let a different function run now.
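That suspend-and-resume mechanism is exactly what effect handlers make possible. A toy round-robin scheduler along these lines, a sketch in plain OCaml 5 and not Riot's actual implementation, fits in a few lines:

```ocaml
open Effect
open Effect.Deep

(* The one effect our "processes" can perform: give up the CPU. *)
type _ Effect.t += Yield : unit Effect.t

let yield () = perform Yield

(* A round-robin scheduler: run each process until it yields, park its
   continuation at the back of the queue, and run whoever is next. *)
let run (procs : (unit -> unit) list) : unit =
  let queue : (unit -> unit) Queue.t = Queue.create () in
  List.iter (fun p -> Queue.add p queue) procs;
  let rec next () =
    match Queue.take_opt queue with
    | None -> ()
    | Some p ->
        match_with p ()
          { retc = (fun () -> next ());
            exnc = raise;
            effc = (fun (type a) (eff : a Effect.t) ->
              match eff with
              | Yield ->
                  Some (fun (k : (a, _) continuation) ->
                      (* park the suspended process, run someone else *)
                      Queue.add (fun () -> continue k ()) queue;
                      next ())
              | _ -> None) }
  in
  next ()

let () =
  run
    [ (fun () -> print_string "a"; yield (); print_string "c");
      (fun () -> print_string "b"; yield (); print_string "d") ];
  print_newline ()  (* prints abcd *)
```

Each yield hands control back to the scheduler without the process knowing anything about the other processes, which is the "await keyword as an effect" idea raised next.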
00:22:22
Speaker
Oh, so effect handlers allow you a way to write something like an await keyword as an effect. And that cracks open the let's-make-a-nice-actor-model inside of OCaml. I'm with you. Yeah. Yeah. Yeah. Okay. So take me through what you're doing with that. How much of Erlang are you trying to build? That's a,
00:22:50
Speaker
Yeah. So I can tell you what we're not trying to do. That would be a shorter list.
00:22:57
Speaker
So we started Riot maybe last September. And it was like, OK, let's see if we can make a process, the core abstraction of Erlang. If it suspends, we can resume it. Great. Let's see if we can send it messages. And we implemented the message signal. So now two processes. We have a function called spawn. You call that with another function, which is the body of your process. And when you call that function, you get back an identifier for the process.
00:23:25
Speaker
So you can use that to send messages to it, right? And we managed to make that pretty fast as well.
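In Riot, that spawn-and-send flow looks roughly like the following. This is the shape of the API from Riot's README around that time (spawn, send, and an extensible Message.t); exact function names may have changed since, so treat it as a sketch:

```ocaml
open Riot

(* Messages are constructors added to Riot's extensible message type. *)
type Message.t += Hello

let () =
  Riot.run @@ fun () ->
  (* spawn takes the body of the process and returns its Pid. *)
  let pid =
    spawn (fun () ->
        match receive_any () with
        | Hello -> print_endline "got Hello"
        | _ -> ())
  in
  (* Fire-and-forget message send, Erlang-style. *)
  send pid Hello
```

As in Erlang, send is asynchronous: the sender carries on immediately, and the message lands in the receiving process's mailbox.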

Riot's Messaging System

00:23:32
Speaker
Spinning up a process is just allocating about 140 words, a tiny data structure, right? It's even smaller than in Erlang, if I can brag about it. Oh, really? Because the actors are famously low overhead in Erlang. Yes. Well done.
00:23:46
Speaker
Thank you. I mean, it's still very early stages, and it doesn't do even a third or a tenth of what Erlang does. But you know, it will be interesting to see if a year from now, two years from now, it's still that small. But this is where we are now, right? Yeah. Being real. So we got messages between processes. And we want to type messages. So we made some design decisions there, right? Messages are also just data. And because data, by default, is immutable in OCaml, sharing that data is pretty cheap.
00:24:16
Speaker
There are edge cases like, what happens if I make a message where I have a mutable value inside of the message? You can do that. Don't do it, but you can do it. Otherwise, you'll send a message and then by the time it arrives, it would have changed. That sounds like a terrible idea, right? Yes. Exactly. This stuff gets much easier when everything's immutable. Exactly. Since you cannot change stuff in Erlang, it becomes much easier.
00:24:39
Speaker
We made that and we made that fast. Then we said, okay, now we actually want to know when two processes are linked together. If one process needs to know that another process died, how can we do that? We implemented monitors, which is a feature of the Erlang VM as well, that allows one process to receive a message whenever a process it monitors dies.
00:25:01
Speaker
Like it's terminated. And then we also implemented links, which is like an extension of that: if you link two processes together and either of them goes down, the other one goes down as well. Right. Yep. So now you can start to build up this supervisor tree thing that Erlang has. Yeah.
00:25:19
Speaker
So now we said, okay, we have monitors, we have links. Let's build supervisors. What's a supervisor? It's just a process that creates more processes and monitors them and then has a strategy for deciding how to keep the other processes, its child processes alive, depending on how they die.
00:25:39
Speaker
There are many strategies; I think the only one we've implemented right now is one-for-one. So if you have a supervisor that has, like, four children and one of them stops, then it will just restart that one child. But we can implement other strategies such as one-for-all, where if one of them goes down, everything goes down. The musketeer pattern. Exactly, yes. Exactly. Literally. We might call it that. That's a good name.
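The one-for-one strategy can be illustrated without any actor machinery at all. Here is a toy sketch in plain OCaml, with hypothetical names rather than Riot's supervisor API: restart a failing child up to a budget.

```ocaml
(* One-for-one, reduced to its essence: if the child raises,
   restart just that child, up to max_restarts attempts. *)
let rec supervise ?(max_restarts = 5) (child : unit -> unit) : unit =
  try child () with
  | _exn when max_restarts > 0 ->
      supervise ~max_restarts:(max_restarts - 1) child

(* A flaky child that fails twice before succeeding. *)
let attempts = ref 0

let flaky () =
  incr attempts;
  if !attempts < 3 then failwith "boom"

let () =
  supervise flaky;
  Printf.printf "child succeeded after %d attempts\n" !attempts
  (* prints: child succeeded after 3 attempts *)
```

A real supervisor does this per child over monitors and links rather than exceptions, and a one-for-all strategy would restart every child on any failure, but the restart-with-a-budget shape is the same.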
00:26:02
Speaker
So we went there, and then once we had supervisors, we said, okay, so what else can we build here? Can we build GenServers? The famed Erlang abstraction for building services inside of the Erlang VM. And it's a little harder. Slow down a bit, because GenServers, I'm not 100% sure I know what that is.
00:26:19
Speaker
So GenServer stands for generic server, and it's a pattern inside of Erlang. Remember we talked about the OTP, the Open Telecom Platform, right? It has a bunch of things that you can just reuse to say, oh, well, I'm going to need in my application, which is going to be like a microservice, like a tiny universe, right, one server for users that handles user requests, and one server for, I don't know, orders or things like that.
00:26:46
Speaker
These are not like HTTP servers, they're just Erlang servers, right? And the pattern of saying the server needs a queue of requests that come in, and it may receive requests asynchronously or synchronously, such as send, wait, respond, or just send and forget.
00:27:02
Speaker
That pattern, with an internal state that is carried around in the server, is just very, very common in these systems. So eventually, the Erlang OTP library included this thing called GenServer, which allows you with very little overhead to build one of these servers, and it manages state for you, and it manages all of the timeouts for the different calls. It has a bunch of logic implemented for you.
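The GenServer pattern just described, a loop carrying state and folding each incoming request into the next state, can be sketched in a few lines of plain OCaml. This is an illustration of the pattern, not Riot's or OTP's API:

```ocaml
(* A "generic server" is a loop over a mailbox: take a request,
   compute a new state, recurse with that state. *)
type ('req, 'state) server = { handle : 'req -> 'state -> 'state }

let rec loop (s : ('req, 'state) server) (state : 'state) = function
  | [] -> state
  | req :: rest -> loop s (s.handle req state) rest

(* A counter server: each request adds its payload to the state. *)
let counter = { handle = (fun n state -> state + n) }

let () = Printf.printf "%d\n" (loop counter 0 [ 1; 2; 3 ])  (* prints 6 *)
```

The real thing replaces the list with a live mailbox, adds synchronous call/asynchronous cast variants and timeouts, and runs under a supervisor, but the handle-request-return-new-state core is the whole idea.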
00:27:27
Speaker
Right, yeah, with you. So you just implement like, oh, what do I want to do when this specific request comes? And then you put your logic there, and that's it, right? Right, yeah. Just super powerful. Elixir also uses this a lot, right? But it works. I think it's powerful and easy to use on their end because it's not typed. And when we started typing it, it was like, holy crap, this is hard. There's a lot of tiny little things we need to do here.
00:27:55
Speaker
some maybe even unsafe code inside of OCaml, right? To make sure that we can provide type-safe APIs to a user while still being flexible. Take me into that, because from what you described, it doesn't immediately sound like typing is going to make that hard.
00:28:11
Speaker
So there are a couple of design decisions that we're making. One of them is that there is a single message type that works for the entire system. For the entire system. So there's literally a Riot Message.t. That's the type of all messages in Riot. And it's an extensible type.
00:28:33
Speaker
And this is one of the trade-offs that some people really don't like. And some people go like, oh yeah, that sounds like we can run with this. In my head, it's like the right amount of type safety we need right now. In the future, we may change that. Right now, we're OK with this. Basically, if you need to create a new message or message type, you can add a new constructor to that type. So then the entire system knows how to handle your messages now.
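OCaml's extensible variants are what make this single-message-type design possible. A sketch with invented constructor names (Riot's real type lives in its Message module):

```ocaml
(* An extensible variant: modules anywhere can add constructors later. *)
type message = ..

(* The "users" module adds its own messages... *)
type message += Create_user of string | Delete_user of int

(* ...and a receive loop pattern-matches only the constructors it cares
   about, with a catch-all for everything else in the system. *)
let handle (msg : message) : string =
  match msg with
  | Create_user name -> "create " ^ name
  | Delete_user id -> "delete " ^ string_of_int id
  | _ -> "ignored"

let () = print_endline (handle (Create_user "ada"))  (* prints: create ada *)
```

The trade-off is visible in that final wildcard branch: the compiler cannot prove a match over an extensible type is exhaustive, which is exactly the "right amount of type safety for now" compromise being described.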
00:29:01
Speaker
and anybody that has access to that constructor is able to create new messages of that kind and receive a pattern match against messages of that kind. In your example earlier, you were saying you've got one server for users and one server for orders. Are you saying that the order server will now have to know how to handle user messages? It doesn't have to.
00:29:29
Speaker
by the definition of the receive function that we use to receive messages in a process, you may receive any message of the message type, right? So you just don't know what message you get unless you pattern match the constructors. So when you build your order server and you call receive,
00:29:49
Speaker
and match on that, you do the pattern match there. You could pattern match on users' messages, right? I don't think you'll get them. I don't think you'll be sending those messages to the order server, right? This happens also in Erlang and Elixir, where you can send whatever message to whatever process, right? Yeah, yeah. And we have two ways to deal with this right now. The first one is that the constructors themselves have visibility rules in OCaml. Okay.
00:30:16
Speaker
So when you build your module for users, you can say, oh, these are the, you know, create-user or delete-user messages, right? And they're private to this module, so nobody else can create them. That's a way of guaranteeing at development time that user messages will be scoped to the user services.
00:30:37
Speaker
So you may still receive any message, right? But you can say, I only care about these ones. And done. One thing that we're doing with Riot that goes a little bit against the ethos of OCaml is that with Riot, you can let your program crash. And that's OK. Yeah, because you've got this other mechanism for that. Exactly. OK.
00:31:01
Speaker
But the second thing we're building, and we had it in an earlier version, but we removed it and we're going to introduce it again, is the ability to write message selectors. So while the message type is large and has many constructors, right?
00:31:16
Speaker
you could specify a selecting function that says, whatever messages we have in the mailbox, only give me the ones that fit this pattern or this function, like a filter, like a predicate. And that would allow us, and this is a pattern that I thought was a bad idea, and then I saw Gleam is doing it, and I was like, okay, maybe it's a good idea. So, Louis, thank you for that.
00:31:40
Speaker
So we're going to bring it back. And it allows you to say, from this massive type of messages, you constrain down to a very, very small sort of space. Maybe even a single message that you really, really care about. So these are sort of the ways in which we handle messages. That makes sense. So you can narrow it down to just the domain. You just do it a slightly different way than having several message types. Exactly, exactly. Yeah, OK.
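A selector of this kind might look like the following sketch. The `receive ~selector` call is hypothetical, since the feature is described as removed and coming back; only the predicate itself is plain OCaml.

```ocaml
type message = ..
type message += Order_placed of int | User_created of string

(* A predicate that narrows the big system-wide message type down to
   just the order events this process cares about. *)
let only_orders = function
  | Order_placed _ -> true
  | _ -> false

(* Hypothetical API, as described in the conversation: block until a
   message satisfying the predicate arrives, setting aside the ones
   that don't match so they can be read later.

   let msg = receive ~selector:only_orders () *)
```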
00:32:08
Speaker
I've always wondered with this, I didn't know it was called Gen Server,

Mailboxes and State Management in Riot

00:32:12
Speaker
but I've always wondered inside of Erlang, so in this case, inside of OCaml, how you are managing
00:32:20
Speaker
the under the hood, the mailbox thing, and the statefulness of the server and persistence thereof. Tell me some more about that. I wish I could just share some of the slides I had for the presentation. No slides. No, no slides. So can I draw in the air, and then users pretend that they see what I'm drawing? Some people are listening to this on Spotify, so you just have to paint pictures like radio. Paint the pictures. So imagine a square from coordinate 0 to 100.
00:32:51
Speaker
Realistically, a process in Riot is a data structure. It's essentially a record or a struct that has a couple of interesting pieces of data. It knows its identity, its identifier; it knows the scheduler it currently runs on; and it also contains two different mailboxes.
00:33:12
Speaker
For all intents and purposes, they're the same. They're just used for when you're selecting messages. Sometimes we put some of the messages that didn't get selected into a separate mailbox, so we can read them later. But the mailbox is a lock-free queue. When you are going to send a message to a process, you only have access to the identifier, the PID.
00:33:37
Speaker
The identifier is essentially an opaque integer. You don't really know that it's an integer, but it's unique for the longevity of the entire system. If you restart the server again, like the whole application, then another process may be given that identifier, that PID. So they're not unique across program executions. They're just unique during a program execution. Right, yeah. Yeah, that makes sense. And that PID is used as a lookup.
00:34:05
Speaker
So there's a small lookup into a big hash table, right? Yeah. Where we say, find me the process that belongs to this PID. And then we get the structure, which is the process. And we say, all right, let's access the mailbox of that process. And we'll put the pointer to the message. I say put the pointer; you don't do that yourself, but OCaml does that, right? Into that queue, at the end of the queue.
00:34:29
Speaker
Yeah. There have been thoughts. I've been thinking about how to make this even more straightforward. So maybe the PID actually is a pointer to the mailbox directly. So you just put things in there, which could be faster. But that is not a concern right now. Message passing is not really the bottleneck right now. OK.
00:34:52
Speaker
So that's a little bit how it works. So you make any data structure, any piece of data that fits the message type, and then you call the send function with the process ID and the value, the message, and then we put that at the end of the queue of the mailbox. Okay, that seems straightforward. What about the state? Is that just a chunk of memory?
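The send path just described can be sketched in a few lines. All the names and the concrete layout here are illustrative assumptions, not Riot's real internals; in particular, Riot's mailbox is a lock-free queue, where this sketch uses the plain stdlib `Queue` for clarity.

```ocaml
type message = ..

(* A process record: the PID is opaque to users (an int underneath),
   and the mailbox is a FIFO queue of messages. *)
type process = {
  pid : int;
  mailbox : message Queue.t;  (* Riot's real one is lock-free *)
}

(* The global PID -> process lookup table described above. *)
let processes : (int, process) Hashtbl.t = Hashtbl.create 1024

(* Send: look the process up by PID and append the message (really a
   pointer to it) at the tail of its mailbox. *)
let send pid msg =
  match Hashtbl.find_opt processes pid with
  | Some proc -> Queue.push msg proc.mailbox
  | None -> ()  (* dead PID: the message is silently dropped *)
```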
00:35:15
Speaker
Well, we don't really handle that in Riot. Riot doesn't have access to the current state of a process.
00:35:23
Speaker
The way that we do that is that when you spin up a new process and we allocate that data structure, we allocate a continuation, right? We allocate essentially a stopped function, like a whole stack that OCaml gives us the ability to stop and resume, right? And that's completely opaque to us. So if you want to manage state within your process, you need to build a recursion, for example.
00:35:48
Speaker
So it's not uncommon to see that you have a spawn function calling to a function init. And that init function set up some initial state, and then it calls something like loop with a state. And then that state in that loop just continuously loops forever, essentially. And in its body, the loop function may do some things like
00:36:11
Speaker
For example, it starts by saying, I need to read messages first. So I'm going to call receive. And then I'm just going to wait there until a message arrives. And then when we get a message like add user, then I will access the state, which is just a parameter to the loop function. And do something with that. Maybe update that state. And then call loop again to go back.
00:36:35
Speaker
Correct, and this is also the Erlang way, the Elixir way, and the Gleam way of handling state.
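The init/loop recursion described above looks roughly like this. Here `receive` is passed in as a parameter standing in for Riot's blocking receive effect, so the sketch is self-contained; the message constructors are invented for the example.

```ocaml
type message = ..
type message += Add_user of string | List_users of (string list -> unit)

(* State is just the parameter of the loop: each iteration blocks on
   receive, handles one message, and recurses with the updated state. *)
let rec loop ~receive users =
  match receive () with
  | Add_user u -> loop ~receive (u :: users)       (* state threads through *)
  | List_users reply -> reply users; loop ~receive users
  | _ -> loop ~receive users                       (* ignore everything else *)

(* The spawned entry point: set up the initial state, then loop forever. *)
let init ~receive () = loop ~receive []
```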
00:36:41
Speaker
Then what do you do about persistence between runs? Is there any notion of, I mean, Erlang has this built-in database system, doesn't it? Yes. You can save your state too, if you need to. Correct. Erlang has three databases. You have ETS, Erlang Term Storage, which is in memory. You have DETS, which is disk-based Erlang Term Storage. And then you have Mnesia.

Riot's New Store and Type Management

00:37:08
Speaker
and Mnesia builds on top of DETS and adds a lot of things on top, like the ability to do queries, do transactions and stuff like that.
00:37:17
Speaker
OK. We have started building something that we're just calling Store, right? And that is kind of like an OCaml term storage, right? Right. We do have a problem, which is that we don't have one type for all OCaml values, right? So we have to resort to a common pattern in OCaml, which is functors, which are different from the typical functors people talk about from category theory in Haskell.
00:37:42
Speaker
The typical functors that people talk about from category theory, they're in coffee shops, talking about categories. These are not free-range functors. These are like lab-grown functors. What is an OCaml functor? So an OCaml functor is a very interesting thing. It's a function from a module to a module. So it takes a module as an input and produces a new module as an output. OK.
00:38:11
Speaker
And it's kind of crazy. It's a great way of doing sort of...
00:38:17
Speaker
abstractions and patterns, because you can say, I need a store. And I need the store to be a module that handles some kind of key type, so you can put strings there, for example, and some kind of value type, like, for example, integers. So now we have this table, the store from strings to ints. And you need it to always have, for example, a get function and a put function and some other functions in there.
00:38:43
Speaker
But you would like the store to be customizable a little bit. For example, change the type of the value or change the type of the key. You can try to build this such that the module is generic, and then that means you get these two type parameters that are generic over a larger type, which is the type of the whole store.
00:39:06
Speaker
But that means you maybe cannot specialize certain things. For example, how do you do the lookup of a key? Because now you don't really know what the type is. It's going to be parameterized when someone creates a new store.
00:39:22
Speaker
And if you use a functor, you can say, I will give you a store if you give me a key type and a hashing function that I will use to find values of that key type, match them up together. And then you create a store that is very optimized specifically for that specific key type you give me.
00:39:44
Speaker
And this is kind of weird to explain without code, I think, but it allows you to say, I'm going to give you a specialized version of a module, right? Given some inputs, and the input happens to be another module with some functions or type definitions.
00:39:58
Speaker
Yeah, okay, I can see why you want to take a more generic package, module, set of functions and make a more specific one. I'd not considered modeling that as a function from module to module, but sure. Neither did I. Yeah, okay.
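A minimal functor in the spirit of the store being described might look like this. The module and function names are assumptions for illustration, not Riot's actual Store API; the `with type key = K.t` sharing constraint is the kind of annotation that lets a type equality escape the functor, which comes up again below.

```ocaml
(* The input module: a key type plus the functions needed to hash it. *)
module type KEY = sig
  type t
  val equal : t -> t -> bool
  val hash : t -> int
end

(* The output signature: a store specialised to one key type. *)
module type STORE = sig
  type key
  type 'v t
  val create : unit -> 'v t
  val put : 'v t -> key -> 'v -> unit
  val get : 'v t -> key -> 'v option
end

(* The functor: give me a key module, I give you a specialised store.
   [with type key = K.t] tells the type system that the store's key
   type really is the caller's K.t. *)
module Make_store (K : KEY) : STORE with type key = K.t = struct
  module H = Hashtbl.Make (K)
  type key = K.t
  type 'v t = 'v H.t
  let create () = H.create 16
  let put t k v = H.replace t k v
  let get t k = H.find_opt t k
end

(* Usage: a store specialised for string keys. *)
module String_store = Make_store (struct
  type t = string
  let equal = String.equal
  let hash = Hashtbl.hash
end)
```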
00:40:15
Speaker
Yeah, this is just something I learned, and I was very mad at them at the beginning because I didn't understand them. I think I understand them better now, but I also try to avoid them, because functors are powerful but they're one of the type-system and language features that are hardest to,
00:40:33
Speaker
let's say, get up and running with and really grasp. They have a lot of gotchas; they have a lot of tiny things that are complex. If you have a type inside of the input module, and you want to have the same type in the output module, and you want the type system to know that these are the same, you also have to have an annotation that tells the type system this type equality escapes the module boundary.
00:41:02
Speaker
And it's not the best developer experience, to be honest. Okay. Yeah. Yeah. Anyways, so we're using one of those to be able to create these stores so that you can say, I want to save users. Okay. We give you a very easy way to create a user store, right? Yeah. And then that means you don't need Redis. You can just use that thing, right? Okay. Yeah. Okay. That makes sense.
00:41:25
Speaker
While we're talking about persistence then, I have often wondered, and this is my background in Kafka coming through, I think, that you have mailboxes in this system, and the system is expected to crash. If it crashes, it loses the whole mailbox. And I keep wondering, can we not build a version of Erlang or the Beam where the mailbox is more persistent?
00:41:51
Speaker
And if I crash, I can just replay the messages I lost since the last time I saved my state. Yes, I think we could. The volume of messages, though, can be pretty big because the messages may be very tiny and sometimes it may be very large, right? Not necessarily the size of the message, but let's say the meaning of the message.
00:42:14
Speaker
For example, if I'm going to send a string in Erlang, the string may be a really large piece of data, potentially even gigabytes. But because Erlang is super optimized for this use case, you're just really passing a pointer around, which is nice. If you were to persist that, then you would need,
00:42:33
Speaker
Let's say a persistent transaction log for the message, but you will also need to be able to persist all the data that's associated with a message, including potentially four gigabytes of XML.
00:42:48
Speaker
And that is like, we wouldn't do that in Kafka either. We would not send a four gigabyte message. You would put that somewhere else first, and then you will send a message. And then Kafka will persist the message. And when you get it, then you can go to the persistent store for the rest of the data and get it, right? Yeah, yeah, yeah. So I think that it would be hard to build a one-size-fits-all solution here. It would be possible to do if you have a very strict check on the message size.
00:43:15
Speaker
Like if you say, we will build a version of Riot that is meant to be persistent, but also running with like tiny messages, right? So if you try to send a message that's big, we go like, nope, you cannot send that. So there is a future where you could have like an opt-in persistent message queue, provided you knew that it came with caveats.
00:43:35
Speaker
Yeah, exactly. I've been thinking about what it would take to make Riot a little more... We could build soft real-time systems in the same way that we could with Erlang, but we have a lot of things that Erlang doesn't have, which I think makes us a little more flexible here. For example, CLI tools in Erlang and Elixir have a problem: they're slow to start.
00:43:57
Speaker
because you need to boot the entire virtual machine and you have all of the setup for the schedulers and everything. And this is a system that's expected to run for 10 years at a time. So it's completely fine, right? If it's going to run 10 years, 500 milliseconds of boot time, nobody cares, right?
00:44:15
Speaker
But I've gotten used to having fast startup in the OCaml ecosystem and its tools. So Riot ends up having a really fast startup time, right? Within seven milliseconds or so on a modern MacBook, you have the tool running, and you can have something running across all your cores already. So that enables some things that Erlang just cannot do right now. And by design, maybe doesn't even care to do, which is fine.
00:44:40
Speaker
But it also means that we have less overhead in some things, like if you were to do more, let's say, CPU-bound work. I think OCaml is better suited for that than Erlang, right?
00:44:55
Speaker
primarily because we compile to native code directly and ahead of time, so we can do a lot more optimizations there. The Erlang JIT is fantastic, and the work Lukas Larsson has done with it is just incredible. But it's still not designed for CPU-bound workloads. It's more IO-bound. And while Riot is IO-bound as well, I think it will be interesting to see how far we can push it down to the real-time, like actual real-time
00:45:24
Speaker
workload space that maybe is more CPU-bound, because the overhead of running processes is smaller and the startup time is also smaller and things like that. Are we talking like hard real time there? Like, you've got 15 milliseconds to answer this question. I think to do that, we would need to change something in OCaml itself. Oh, okay. Because right now, the only point at which we can jump between processes, for example, is when we have an effect.
00:45:55
Speaker
Right. Yeah. So without any change under the hood, you'd have to guarantee that any actor did actually yield in a reasonable amount of time. Exactly. Yeah. Yeah. So yeah. We have an effect for that that we call yield. Yep. And if you write an actor in Erlang and you do an infinite loop, like F calls F infinitely. Yep.
00:46:22
Speaker
Erlang doesn't care. It will just run, and you can run anything else, and it will schedule it properly. It's just a little bit of dead CPU time there. That's it. For us, if you spin up an actor and you do while true, then you have bricked that scheduler. So we have this effect that is just a function called yield.
00:46:45
Speaker
that you can call anywhere, right? And ideally you can call in your hot loop, right? So that you tell the scheduler, hey, I'm doing a little bit of work at a time. Like, please consider de-scheduling me. Consider context switching to somebody else. Ah, so you imbue your function with politeness.
00:47:02
Speaker
Correct. Some people call it cooperative. I think polite scheduling is a better term for that. They're not so eager or greedy, right? Before I get back into my while loop for the billionth time, do you want to do anything? Yes.
00:47:17
Speaker
Exactly. Would you maybe run for a millisecond before I take over again? Yeah. And I think that works OK. It would be much better if we could have preemptive scheduling, so that even if you have a while true, then we eventually de-schedule you, right? You run for more than 15 milliseconds or 15 microseconds or whatever, then go away. But that definitely seems like it would need language changes.
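The polite hot loop described above might be sketched like this. The `yield` function is passed in as a parameter standing in for Riot's yield effect, so the sketch is self-contained and the exact API is not assumed.

```ocaml
(* A CPU-bound loop that cooperates with the scheduler: every so many
   iterations it calls yield, giving the scheduler a chance to context
   switch to another process instead of bricking the core. *)
let rec crunch ~yield n acc =
  if n = 0 then acc
  else begin
    if n mod 1_000 = 0 then yield ();  (* be polite every 1000 iterations *)
    crunch ~yield (n - 1) (acc + n)
  end
```

Without the `yield` call, this loop never performs an effect, so under the cooperative model described here the scheduler would have no point at which to de-schedule it.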
00:47:45
Speaker
I think the main language change we would need is not that big. It's almost like a compiler plug-in. If we could inject a call to yield for you at, let's say, the boundaries between function calls, that would be all that we need to do.
00:48:07
Speaker
Yeah, we haven't explored much what that would take because we haven't needed it yet. But eventually, one day in the future, Riot becomes more stable and gets closer to a version one, right? Then we would definitely want to have something like that, so people don't have to worry about, oh no, I call this recursive function through these other three functions that I didn't realize, right? You don't have to think about that. It just runs well.
00:48:32
Speaker
Yeah, infinite while loops are easy to spot, but when it happens that A equals B equals C equals A, that can be harder. Exactly, yeah. Well, it sounds like you've got plenty of time to campaign for it to be in OCaml 6.
00:48:45
Speaker
Probably, yeah. That's a very good idea, actually. I hadn't thought of that, so thank you. OK, cool. So we've talked about multiple CPUs. I am wondering, because I think this is a thing that Erlang does, and I don't know if you do it on your project. I don't know if you're planning to. Can't you direct messages to an actor that might be running on a different server?
00:49:11
Speaker
Ooh, good question. So one of the non-goals of Riot is distribution. Oh, okay. And in Erlang, distribution means you have different nodes running and you connect them together, and they get connected essentially by a third-party program to the VM, right? That's called EPMD, the Erlang Port Mapper Daemon,
00:49:30
Speaker
which sort of keeps track of where all the Erlang nodes are running. And it tells the VM essentially, do you have any messages for this port, for this other node? Yeah, OK. I'll send a message through. That is not something that we couldn't do. We could do it. But to guarantee type safety across the nodes,
00:49:53
Speaker
That's a problem, right? I think this is where I would say we don't want to do that, and we want to stick a queue in the middle. Just use Kafka, right? That sort of thing. That makes sense. So you've just raised another problem then, because type safety across nodes has almost the same problem as type safety between runs on the same node.

Hot Deployment Challenges

00:50:18
Speaker
Yeah. If I bring my whole system down and bring it back up,
00:50:24
Speaker
What mechanisms have you got to deal with that? Because I need to deploy new code, but the assumption of the whole Erlang way of writing things is that the master process will run for a very very long time.
00:50:38
Speaker
I think this is a more general problem. It would happen in Java; it would happen in JavaScript. If you have an external queue or a database and you change the data model there, but you didn't change your program to deal with that, or vice versa, then you have the problem. At least in OCaml and typed languages, we have the ability to say,
00:50:59
Speaker
the user password changed from, I don't know, an integer to a string. That's a terrible way of saying it. The ID changed from an integer to a UUID, right? Like a string. Also, probably bad to store this UUID as a string. But anyway, as a toy example, it would be good, right? If we were to change that,
00:51:20
Speaker
you would have a way of knowing how your application reacts to that change, right, before running it. So I think we have an advantage in that sense over untyped languages, where you would have to try and have tests for these things, right? Whereas in OCaml, you just change the type, recompile, and it shows you everything that broke, right? So you already know what to do. But I guess what I'm getting at, though, is do you have any plans for hot deployment?
00:51:47
Speaker
Ooh, okay, yes. No, that's also a non-goal. Okay. Yeah, so definitely a non-goal. It would be really hard to do. I think it would be fun to do, and we might be able to do some of it.
00:52:01
Speaker
Like there were discussions at some point in the Erlang Ecosystem Foundation working groups, right, around type safety, of how could we guarantee some of this. And I remember I used to work at Erlang Solutions, and I had chats with Robert Virding about this as well. And how would you do some of this?
00:52:19
Speaker
You probably want to have, let's say, a hash of the information of the type, right? So you can compare whether this thing has changed or not, and you would want to have a transition function from the old type to the new type, right? Yeah. To sort of guarantee that that's there. But it just sounds like more trouble than it's worth. In the same sense that I don't know a lot of people that do hot code reloads in Erlang or Elixir.
00:52:45
Speaker
Oh, really? Is that an oversold and underused feature, are you saying?
00:52:50
Speaker
No, those are not my words. Sorry, yes. No, no worries. I think that it is a great feature, and if you need it, then it's just fabulous that it's there. At Klarna, I remember actually copying bytecode, like a .beam file, onto a machine and reloading that manually just to be able to debug something, and it was just fabulous.
00:53:17
Speaker
I guess for most systems, we have most of the state outside of the program; we try to build them stateless in some way, right? So it's easy to say, well, if it's saved in a database, then you just ship a new container, right? And it will be picked up.
00:53:33
Speaker
Because it does have the same problem. It doesn't have the type-level problem, but you still have the problem that the type-level problem surfaces early. When you have the types, it's like: you change the type, and now you need this transitioning function between type version 1 and type version 2, right? Like two versions of the same thing.
00:53:50
Speaker
Erlang, and this is also encoded in OTP because clearly they have had this problem, has the use case where you couldn't bring down the box and reprogram it. You had to update the code there while it was running. So you have this idea of a code change.
00:54:07
Speaker
So you get a function that has the information about the old code and the old state. And this is a function the new code must implement. So you get a chance to do the handover between the two versions of the same module on a single process.
00:54:30
Speaker
We're talking, you're in space and you need to update a module, and here's the new copy of it. And the program kind of knows how to morph the data from one version to the other. It's just fantastic, right? It's like sci-fi almost, right? Yeah. It's making me think of a scene in The Martian. Yeah. Yeah. Yeah. But yeah, I guess it's the same problem Kafka's wrestling with. You get it any way you're doing schema migration, right? Exactly. You're going to have to deal with that. Yeah. Yeah.
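In OCaml terms, the OTP-style code change idea sketched above amounts to the new code shipping a transition function from the old state type to the new one. The types and field names here are invented for illustration; this is the shape of the idea, not an API from Riot or OTP.

```ocaml
(* Version 1 of a process's state stored the id as an int. *)
type state_v1 = { id : int }

(* Version 2 stores a string id plus a timestamp. *)
type state_v2 = { uid : string; created_at : float }

(* The handover function the new code must provide: morph the old
   state into the new shape while the process keeps running. *)
let code_change (old : state_v1) : state_v2 =
  { uid = string_of_int old.id;
    created_at = 0.0 (* unknown for migrated state; backfilled *) }
```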
00:55:00
Speaker
Okay, so I've challenged you on two non-goals. Tell me where Riot is at right now and where you're taking it next.

Vertical Integration in Riot

00:55:10
Speaker
So one of the things that we've been diligently, almost obsessively, doing with Riot is building vertically. So we started building with the idea that we want to have an entire vertical stack, from the bottom, from conditional compilation in OCaml that is done nicely, all the way up to a web server that does something like LiveView from Elixir. So we have essentially socket-driven or socket-based
00:55:40
Speaker
DOM patches sent to a website. Okay, yeah. And to do that, we built literally everything in the middle, right? We built new versions of strings for OCaml that have pattern matching like you would have in Elixir and Erlang. We built socket pools, an HTTP server, a WebSocket server, so new protocol implementations for those things.
00:56:02
Speaker
And right now, we have this phase one of the project, which is meant to have a stack that can do that, right? It might not be the most efficient. It might not be the most complete, right? It will support HTTP/1 and WebSockets, right? It will cover some use cases. So if you want to build your next toy app side project with it, you can do it. That will be phase one.
00:56:26
Speaker
In the next phases, hopefully we'll be able to cover more ground, support maybe HTTP/2, give you an actual web server that has all of the bells and whistles that you expect out of something like Plug or Phoenix on Elixir, right? And at a lower level of the runtime, we're hoping to get some things like process stealing working correctly.
00:56:49
Speaker
What's that? I love that feature. Process stealing was originally built for Riot a couple of months ago, but we had a couple of bugs. The idea is that once you spin up a process on a scheduler, other schedulers that don't have anything to do can actually steal it, so you distribute the load across all your cores automatically.
00:57:14
Speaker
OK. And it's that way round. It's not the busy scheduler saying, I'm overloaded, you take this. It's the quiet CPU saying, well, I don't have enough to do. I'm taking that task from you. Exactly. Like, if you're busy doing something right now, you probably don't even know how many processes you have to run, right? And you don't want to say, hey, does anybody want a process? Please take it from me.
00:57:39
Speaker
It's the other schedulers that are just idling that can say, well, you seem to have a lot on your plate, so I'll help with that. It has a couple of subtle bugs in it, so we had to roll that feature back. The nice thing about it is that it
00:57:55
Speaker
helps you say: it doesn't matter how many cores you have, it doesn't matter how many processes you have, right? If you just keep spawning them to do work, then automatically, of course, the process stealing will kick in. Eventually, you will distribute them across all of your cores, and the load will just be as fast as we can possibly make it.

Enhancing Riot's Features

00:58:14
Speaker
Amdahl's law in the middle, of course. But that's nice that it would be self-load-balancing in that way.
00:58:22
Speaker
Yeah, yeah, yeah. And I think you can start to see, with this couple of features, like processes that are suspendable, message passing, links, supervisors, right, process stealing across all the different schedulers, how we are really bringing the Erlang VM to OCaml. Yeah. And keeping the type system while you're there. Exactly.
00:58:48
Speaker
That'd be nice. Yeah. Yeah. Yeah. I tried to do the opposite before. I tried writing Caramel and bringing the OCaml type system to Erlang, and that was hard. So I'm actually really happy that Louis just charged forward, and Gleam v1 was released recently. I was so excited about that. That's the way it should be, right? It's all shipping ideas from one language to another until we're all in the right place.
00:59:12
Speaker
Until we all write, you know, this mix of OCaml and Erlang, and we're like... Yeah, yeah. With some Haskell and Idris on the side, please. Maybe sometimes it's really necessary. Okay. Cool. So where are you at for the future? Is this something where we should be encouraging people who are interested to try it out, to come and join the team and contribute?
00:59:33
Speaker
Absolutely. We have three ways in which you, watching on YouTube, can contribute. The first one is: use it. Build stuff with it; grab Riot. The OCaml ecosystem has a little bit of a rough start for newcomers, so be patient with that. We have Discord forums where we can help you with anything that you're experiencing there, but build things.
00:59:55
Speaker
Because when you build things, bugs come out, and we get to fix them and make Riot better. Now, if you already write a little OCaml, or are enthusiastic about learning more OCaml or writing things, then you can also join some of the projects. We have about five different GitHub orgs for the different parts of the stack,
01:00:16
Speaker
with some repos in each of them. And we have about 25 to 30 collaborators at the moment across all of them, which I am super excited about, by the way. That's a decent number, yeah. I'm more excited about the community we're building around Riot than about Riot itself. It's like, yeah. Yeah, I know what you mean.
01:00:33
Speaker
There are a lot of good first issues there that we are trying to put in. We put a lot of care into that, to make it easier for you to pick up and say, okay, this is the problem, these are some implementation notes, this is maybe how you could test it, right? So we can give you that package, and you can just run away and implement it and come back to us.
01:00:54
Speaker
contribute in that way. And then, of course, the last way of contributing to Riot right now is to sponsor us, either on the streams, where we're essentially doing subscriptions, I believe that's the subs, because we do a lot of live-stream development for OCaml and Riot, and also GitHub Sponsors, where we recently crossed the 31st, 32nd sponsor. A power of two, love it. Yeah, really good number, right? We're hoping for that 64 soon.
01:01:22
Speaker
And actually, last week, we had one sponsor drop a thousand bucks a month. And I was like, holy crap, someone is really betting on this, right? We even have one user of Riot that replaced a small Erlang service with Riot.
01:01:38
Speaker
And we're waiting for approval from the company to be able to share more information about that. But I mean, I'm terrified because it's so early. It's like, no, please, what are you doing? But also excited because it shows that maybe there is a subset of Riot that is stable enough for some workloads right now. Which I think is interesting because I don't know what subset is just yet. That's a really interesting time for a project.

Community Contributions to Riot

01:02:01
Speaker
I think so. I'm really excited about it.
01:02:03
Speaker
Well, I probably should leave you to get back on with it. Get coding. And feed your cat. Sounds about right. I do need to feed Leia, yes. She's there somewhere. So, Leandro, thank you very much for talking to us. And I wish you the best of luck on the road to Riot version 1.
01:02:21
Speaker
Thank you so much, Kris, for having me. I will send you emails with people that you should definitely bring on board to talk about Melange or OCaml and things like that. Right. But again, thank you so much for having me, and I hope you have a great weekend. You too. Cheers. Cheers. Do you know, I'm kind of jealous. That is a great time to be at the heart of a project. It's almost better than when it's successful. It's that period before, when it's really starting to take off. Thank you, Leandro. And I wish you the very best of luck for what comes next.
01:02:50
Speaker
As we said, if you want to find out more, if you want to try Riot, if you want to support Riot, or if you want to take a look at the other languages we mentioned, like ReasonML and Rescript, all the links are in the show notes. Before you go and check there, if you've enjoyed this episode, please take a moment to like it, share it, and rate it. I always appreciate the feedback, of course, and it helps other people discover this episode too.
01:03:16
Speaker
And if you're not already subscribed, consider subscribing now, because we'll be back soon with another great mind from the software world, and possibly another guest cat. I can't guarantee it, but we'll find out soon. And until we do, I've been your host, Kris Jenkins. This has been Developer Voices with Leandro Ostera. Thanks for listening.