
The Inko Programming Language, and Life as a Language Designer (with Yorick Peterse)

Developer Voices

This week we take a close look at the language Inko from two perspectives: The language design features that make it special, and the realities of being a language developer.

Yorick Peterse joins us to discuss why he’s building Inko, and which design sweet spots he’s looking for. We begin with memory management, aiming for the kind of developer who wants control, but without the complexities of Rust. Then we look at designing for concurrency with typed channels, and handling exceptions by removing them and leaning heavily into ADTs and pattern matching.

Mixed in with all that is a discussion on the realities of being a programming language developer. How do you figure out how to implement your ideas? What tradeoffs do you make and what kind of programmer do you want to be most useful to? How do you teach people new ideas in programming, and how “different” can you make a language before it feels weird? And perhaps the hardest question of all: How do you fund a new programming language in 2024?

Inko’s Homepage: https://inko-lang.org/

Yorick’s Homepage: https://yorickpeterse.com/

Ownership You Can Count On (paper): https://inko-lang.org/papers/ownership.pdf

“The Error Model”: https://joeduffyblog.com/2016/02/07/the-error-model/

Kris on Mastodon: http://mastodon.social/@krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Kris on Twitter: https://twitter.com/krisajenkins

Transcript

Why Create New Programming Languages?

00:00:00
Speaker
Why do we create new programming languages? It can't just be to give instructions to computers. If it were just that, we could have stopped at C, I think. We don't create new languages because computers are changing. We're creating them because we're changing. Our expectations of what should be easy, of what's normal, are changing. Take, for example, asynchronous function dispatch. It's been around for decades. It was always awkward, and you didn't use it unless you really had to, until JavaScript came along with its single-threaded execution model and its massive popularity. We were kind of all forced to learn about callbacks, which were powerful, but painful. And eventually we got promises and generators and async/await, and the language evolved so that the technique became a fairly pleasant,
00:00:54
Speaker
fairly natural way of expressing certain tasks and solving certain problems. But in all that time it wasn't the computer that changed; it was our ability to express ideas through language that changed, to make something more usable and more pleasant.

Evolution of Programming Languages and User Needs

00:01:12
Speaker
And that's what interests me about new programming languages. We're evolving as computer users. Since the 1950s, our needs have been changing and our base level of knowledge has been changing. How are our languages and tools changing to support that?

Introducing Yorick Peterse and the Inko Language

00:01:30
Speaker
Joining me to discuss that and to throw his hat in the ring is Yorick Peterse. He's the creator of Inko. Inko is a language that expresses his ideas about how memory management can hit a kind of power and control sweet spot that Rust is missing. How can we make concurrency easier in a typed programming language? How should exceptions work? Should they exist at all? And mixed in with those specific topics, we ask some broader questions about language design, such as, if you're interested in getting involved, how do you learn this stuff? And how do you fund it?
00:02:08
Speaker
And what's the future of open source funding anyway? Let's find out. I'm your host, Kris Jenkins. This is Developer Voices, and today's voice is Yorick Peterse.
00:02:30
Speaker
Yorick, thank you very much for joining us. Yes, thank you for having me, Kris. It's a pleasure to have you here as one of the world's increasing number of fascinating programming language designers. Thought I'd get you in and pick your brains about it. So I think the starting point for this is: what's your target audience? What kind of language are you trying to build, and for whom? So the target audience is kind of the people that are currently using languages like, say, Go, Erlang, Java. I would say mid-level-ish languages, where they give you more control compared to, say, Python, Ruby and such, but they don't go all the way to Rust or C, that sort of level.
00:03:26
Speaker
But then people who are using those languages are looking for something that doesn't have, say, the pauses that you typically get with a garbage collector, and they're looking for better concurrency support. But they're not willing to go as far as, say, Rust. Rust is an excellent language, but it does take a lot of mental cycles to work with. Yeah, it gives you complete control, but expects you to join it in its model of the world, right? Yes. I would say, ultimately, with that sort of gap, it's very unbalanced,
00:04:09
Speaker
in that you jump to a low-level language and the sort of scale of, I would say, ease of use, productivity, et cetera, versus the performance, the trade-offs, et cetera, is quite unbalanced. And so Inko sort of sits in the middle of that, where it tries to give you a language with automatic memory management, but without a garbage

Inko's Approach to Concurrency and Memory Management

00:04:38
Speaker
collector. So you don't have to spend, you know, days, weeks, whatever tuning it. It gives you various built-in concurrency mechanisms, so that you get memory safety as part of your concurrency.
00:04:55
Speaker
And so it ends up kind of borrowing things from different languages such as, you know, Rust, Pony, Erlang. And ultimately the idea is that if you are looking for something a little more, but you're not willing to go all the way to Rust, Zig, et cetera, I think Inko would be a very interesting choice there. Okay, I see. I'll nail my, um... mast to the fence? I'm not even sure what the metaphor is. I'll show you my cards. I really like Rust and I can see that the cost of learning it is worth it. But I can also appreciate that not everyone wants to go all the way there. So I definitely see a market for something that gives you some of that control, but not all of it.
00:05:48
Speaker
The question is how, so maybe we should go through your list one by one, right? How do you do it, what's your memory management approach if it's not garbage collection? So it is single ownership. It is, in that sense, similar to Rust. The big difference there is whereas Rust uses lifetimes and a borrow checker, Inko doesn't. The core principle is that at its foundation, it essentially uses reference counting. But then, using single ownership, the reference counting is only used for borrows. And instead of the traditional mechanism where you just release when the count reaches zero,
00:06:40
Speaker
the way it works in Inko is that when the owned value is dropped, it will check: hey, is the reference count zero? If so, we're good. If not, it will terminate the program with an error. So it's not quite borrow checking at runtime, in that Inko allows you to have, say, multiple mutable borrows. It's more like: hey, are there still any dangling references? And then the long-term idea there is to, through compiler optimizations, remove as many of those actual reference counts as possible, so that it doesn't affect the performance, or at least as little as possible. And thereby presumably stop so much of the semantics of that leaking into programmer space.
00:07:35
Speaker
So it doesn't, it's not really leaking in terms of how you write the code. Like, if you're writing code, you're not really thinking, oh, this is reference counted. It's very similar to Rust in that, you know, you have your owned values, your borrows, et cetera. The difference is that Rust checks everything at compile time. And that's, on one hand, a big benefit, in that if it compiles, to a certain degree you know it will work. But that's also where all the mental cycles go, and that's where certain patterns are very difficult or simply not possible in Rust. And Inko's approach of deferring some of that to runtime is that
00:08:21
Speaker
the goal there is to provide a better balance. You sacrifice some compile-time guarantees for an increase in productivity, ease of use, et cetera.
00:08:34
Speaker
And then, you know, through optimizations, reduce the cost. And perhaps through additional compiler analysis, I think you can get, let's say, 70, 80% of the cases where you'd have some sort of borrow error, and detect those at compile time. So you end up with a language that, at least in my mind, is more approachable compared to Rust, for example. Of course, not without its own trade-offs. Is there, for that guarantee that you're giving up, any sort of language-based support to try and nudge you back onto the safe path?
00:09:18
Speaker
Um, so in what way do you mean, in terms of... It sounds to me like you're saying if I have a value and someone else mutably borrows it and I drop it in my local code, it's going to blow up at runtime. Is there anything like a compiler warning that says, you might want to check that you got that borrowed thing released before this point? So, not in its current form.
00:09:55
Speaker
Essentially, that doesn't exist. The thing there is, unlike Rust, for example, it is perfectly valid to have multiple mutable borrows, or mutable and immutable ones. There's no, what's called the XOR rule in Rust; it doesn't exist. And the way this is safe in Inko is that values are heap allocated by default, so they are in a stable location, so you can move them around while borrows exist. And for concurrency, it takes a different approach, in that the way values are moved between tasks means that when you move it,
00:10:41
Speaker
there's a guarantee that whoever is moving it, the task it's moved from, cannot have any borrows to it. So you basically know, as this value moves between the boundaries of different tasks, the compiler guarantees statically that that is safe. Okay.
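To make that concrete, here is a minimal sketch of the idea in Inko-style code. The syntax is approximated from memory of Inko's documentation and may differ between Inko versions (some releases spell keywords differently, e.g. class versus type), and the `User` type is invented for illustration. The point is the runtime check at the moment the owned value is dropped.

```inko
# A regular type; instances are heap allocated and owned by default.
class User {
  let @name: String
}

class async Main {
  fn async main {
    let alice = User(name: 'Alice') # `alice` owns this value.

    let a = ref alice # An immutable borrow: it just increments a counter.
    let b = mut alice # Mutable borrows may coexist with other borrows;
    let c = mut alice # there is no Rust-style XOR rule here.

    # The borrows above are plain locals, so they are released again by
    # the time the owned value is dropped and its count is back to zero.
    # If a borrow outlived `alice` (say it was stashed in some longer
    # lived structure), dropping `alice` would abort the program with a
    # "dropped while still borrowed" style error, rather than leaving a
    # dangling pointer behind.
  }
}
```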

Ensuring Safe Data Transfers in Inko

00:10:59
Speaker
So we're getting into the concurrency thing. You've got this mechanism, which seemed to me to be a bit like, say, goroutines, or almost a bit like the actor model, with channels sending data to places, and you're guaranteeing that something can go down that pipe safely, via the compiler. Yes, it's a mixture of Erlang and Go to some extent, and Pony. Pony is perhaps a lesser-known language, which also does actors, inspired also by Erlang.
00:11:37
Speaker
What Pony did is it put a whole bunch of different reference types on top of them. Capabilities, as they call them. And they describe what you can do with values, how they can move between tasks, et cetera. Inko borrows from that to some degree, where it has this notion of recovery. Essentially, what it means is you have a sort of block of code, and the values created inside of it do not have, or have very restricted, access to values created outside of it. Right. And so, when you return a value from that block,
00:12:28
Speaker
the compiler can guarantee, like, hey, there are no outside references to this thing, so it's safe to then move it around between threads. So it's this mixture of the actor model, where processes, as they're called in Inko, are isolated, there's no implicit memory sharing, no global state. And then it uses this recovery mechanism from Pony to guarantee that it's safe to move values between those processes, rather than, say, what Erlang might do, which is copying them. Or, in the case of Rust, where you'd use, say, synchronization, or what they call scoped threads that have essentially read-only access to certain data.
00:13:15
Speaker
Right. So you're building a system where it's safe to throw things, to pass the data logically from one process to another, and yet still reuse exactly the same piece of data in the same memory space. Yes. So, uh,
00:13:36
Speaker
at some point during Inko's development, it was using a model closer to Erlang, where values were basically deep copied as they were being sent between processes. But this poses a problem where, for one, it's quite expensive, depending on how big the data structure is. And certain values you cannot copy safely, or copying isn't right, because they have side effects. So if I have a file descriptor, for example, and I want to copy that between processes,
00:14:08
Speaker
that copy process can fail if you run out of file descriptors. And so this poses a question like, okay, how, at the language level, would you handle this? Is this something that users should be concerned with, knowing that for most of the data they'll be sending it will never fail? So you end up with this pattern where you have to handle errors that realistically will probably never happen. And that, over time, pushed me towards this model of trying to figure out, okay, how can we move this data around without copying? And that way, we sidestep that entire problem. And that's how it ended up with the current approach. But you can, of course, still copy. Like, if you want to, you can explicitly clone an object and then send it over. OK. That makes me wonder. Like, there you are,
00:15:02
Speaker
with the previous model, thinking, I prefer the way that... Did you say it was Pony that gave you that idea? Yes. I don't quite remember how I ran into it. I think I started looking for alternative approaches and I knew about Pony already, and yeah, somehow ended up looking into it. I read this paper that it is based on; unfortunately I don't remember the title. That's what I was going to ask you. I was wondering how it is you learn this stuff, never mind deciding which thing to do, but how do you learn to then implement it? So in the case of Pony, there's Pony as the language, and there's the paper it's based on. It's, um, I think a paper from 2012.
00:15:54
Speaker
I honestly don't remember the title, but I'm fairly sure the Pony website links to it. And I think Inko's documentation links to it from somewhere as well. But basically, that paper describes this model where you have, you know, this recovery mechanism along with some other extra reference constraints, capabilities, whatever you call them. And then Pony took that and added additional stuff on top. And so Inko took a similar approach, where I took some things from that paper, but then kind of did the opposite, where I said we're not going to add all these capabilities. Because I think one of the problems with Pony is that because it has so many capabilities, it's quite a tricky language to use. At least that's the feedback I've seen over the years, of people saying, oh hey, this looks interesting, but I can't wrap my head around it.
00:16:52
Speaker
And so I think I was like, no, I'm only going to take the essential bits needed for its concurrency system. As far as learning,
00:17:07
Speaker
for some people, what they'll do is they'll read the paper and then maybe try to implement it. I tend to struggle reading scientific papers. That's fair. That's an honest confession that a lot of us would struggle to confess to. And it's in part because, even though, you know, English is my second language, it is still not my native language, and academic English is a whole different level. Yeah. And a lot of papers are very dense, in that you read a sentence, but it's actually five sentences condensed into a single one. Yeah. So it's a very
00:17:48
Speaker
time-consuming process. And I can make it through like a small portion and then I just lose interest, basically. So what do you do? Um, I basically bang my head against the wall for a very long time. The old programmer's solution to everything, yes. I don't have any secret tricks there. It's a lot of frustration, a lot of trial and error, and trying it over and over and over and over. I think in this particular case, because Pony existed, there was a sort of reference that is a bit easier to digest. Like, you can actually play with it and see, oh, okay, I can see what they're doing here. And I don't
00:18:38
Speaker
quite remember how I made my way through it. I do remember that in the end, you look at it like, oh, okay, actually, this is not that difficult. You spend, you know, five days on something and the end solution is, let's say, ten lines of code, and you're left thinking, wait, did I spend all this time on that? Yeah. And I think that's the beauty of this particular recovery mechanism, is that it's so deceptively simple. You look at it and you're like, okay, it took a lot of effort to understand the paper describing it and to go through it, but in the end, it's actually fairly straightforward.
00:19:20
Speaker
And that's, with building Inko and a lot of projects in general, sort of my way of going through it: I try to find, you know, existing references, papers, tools that implement things, et cetera. But a lot of it comes down to just banging your head against the wall over and over, sort of brute-forcing your way through it, and I'm sure there are more efficient ways of going about it. And I do sometimes wish that I had more academic knowledge.
00:19:57
Speaker
But I've also kind of accepted, you know, this is the way it is. This is a big part of programming, unfortunately. Yeah. Often if I want to understand something, I find the only way to get it into my head is by bashing it through the keyboard. Right. And it's like these things where programming is both the art of constructing working software and the process that helps you think more clearly about a domain. Yeah. I think a lot of people, when they
00:20:31
Speaker
are getting started with programming, they might think that it's this rigorous, well-defined process, like, oh, you know, you have your hypothesis and then you do your research and then you build. It's like, no, more often than not it's like banging two rocks together until you get sparks, and it's like, oh, I made a fire. Yeah. I often... sometimes I speak to junior developers and they say, I feel stupid, and it's like, congratulations, you're doing it right. That's the process. It's funny, because my partner is going through this, where she's getting more and more into programming. She has a data science background. And she'll, from time to time, come over with these problems. And she's like, hey, I don't understand this. Am I just not getting it right? Or, I feel stupid, or whatever. I'm like, congratulations, welcome to the club. This is what it's like. This is your career for the next, I don't know how many years. Yeah.
00:21:26
Speaker
Look out for those brief moments where you feel like a genius before the rollercoaster begins again. That's where you have to be careful. That's where it's like, okay, wait, am I a genius? Or is this my ego getting the better of me? Yeah. It'll be around soon enough. Okay. So this could be a tricky question, it could be a test, but you say this recovery algorithm.

Inko's Recovery Algorithm and Memory Safety

00:21:51
Speaker
Can you explain it to me? Yes. So at the base level, in Inko's memory management, we have owned values and borrows. Owned means, you know, you move it around, you're done with it, it gets dropped. Borrows don't; they just temporarily give you access. Is this like... I mean, I would imagine that a local variable is definitely an owned variable.
00:22:20
Speaker
So for example, if you do something like, hey, create a new instance of a user, that will typically default to owned. If it's just new, just creating the instance of the object, yes, it's owned. If you then assign it to a variable and then assign it to a new variable, it will get moved, so the old variable is no longer accessible. Okay. So in that sense, that part is exactly like Rust. The way recovery builds on top of that is there is, in the language, a recover keyword. You give it basically a block, a scope, whatever you like to call it. Within that block, values defined outside of it, as sort of a basic implementation, are not available. So imagine you have a variable called user, and you assign it an instance of a user type, and then you do, you know, recover.
00:23:19
Speaker
And inside that recover, if you refer to the user variable, the compiler will say, you can't access it. It's not, in this case, called, quote unquote, sendable. Sendable being a term like, hey, it is safe to move between the boundaries of tasks. Right. Okay. Within that recover block, you can just do things as usual. You can create new instances, call methods, yada yada yada. Provided, you know, if you call, say, a method on a value, if that value is defined outside of the recover, it will say, I can't do that. Yeah. That means that when you exit that scope, the only data you have access to is defined within it, which then also means that if you return something, everything that's not part of that is discarded.
00:24:12
Speaker
So if I have a recover block and I create, say, users Alice and Bob, assign them to variables, and I return Bob, then the compiler will know, hey, Alice is no longer in use. OK. What that means is that at the point of returning a value from that recover block, the compiler knows, because of that, that there can be no outside references to that value, because they will be discarded as you return from the scope. Now, there may be references, sorry, borrows inside of that value that you are returning, pointing towards the value itself. That's perfectly fine. And if there are borrows in that value pointing to other values in the recover block, that will trigger an error at runtime in this case, because it will say, hey, you're maintaining a borrow to a value that's being dropped. So in other words, if
00:25:13
Speaker
let's say you have, you know, Alice and Bob, and you return an array of, let's say, borrows to Alice and Bob, it will say, hey, you can't do that, because while the array will survive the recover block, because we're returning it, Alice and Bob will not, because they've been discarded. You've got the equivalent of dangling pointers. Yeah. And so you end up with this guarantee that at the end, because there can be no outside references to it, and because interior references are contained within the value, it is safe to move it around. And so what the compiler does at that point, at the end of the recover block, is it will take that value and type it as unique. So instead of a user, it will be a unique user, for example.
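Here is a rough sketch of that Alice-and-Bob walkthrough, with the same caveat that the syntax is approximated and the `User` type is invented; the important parts are the `recover` expression and the `uni` (unique) type it produces.

```inko
class User {
  let @name: String
}

class async Main {
  fn async main {
    let outside = User(name: 'Carol')

    # The value returned from a `recover` block is re-typed as `uni User`:
    # a unique value that no outside borrow can point at.
    let bob: uni User = recover {
      # Values defined outside the block, such as `outside`, are either
      # unavailable here or usable only in very restricted ways.
      let alice = User(name: 'Alice')
      let bob = User(name: 'Bob')

      # Returning an array of borrows to Alice and Bob instead would blow
      # up at runtime: the borrowed values are discarded when the block
      # ends, leaving the equivalent of dangling pointers.
      bob # Only Bob escapes; Alice is dropped at the end of the block.
    }

    # `bob` can now be moved to another process, or recovered back into a
    # plain owned `User` when full access is needed again.
  }
}
```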
00:26:08
Speaker
OK. And that basically means, hey, this value has that guarantee: no outside values can point to it. Inko then builds on top of that by, under certain conditions, loosening up those restrictions a bit. So for example, if you have an owned value defined outside of it, it will actually let you access it, but in very restrictive forms. So you cannot mutate it. You cannot borrow it in ways that would allow the borrow to be passed around. Right. And there are some rules that allow you to call methods on those values. There are basically a set of exceptions where, if we know, hey, it is safe to allow x, y, z, we do that. And the idea there is that if you have, say, a unique value,
00:27:09
Speaker
without those relaxed rules, you couldn't really do much with it. You could just pass it around and you could turn it back into an owned value, but that's about it. So by relaxing the rules in certain cases, you can still call methods on it, and to a certain degree, you work with it as if it was just a regular owned value.
00:27:32
Speaker
And that's sort of the basic gist of it: you have a scope, and you cannot access data outside of it. At the end, that just means the only data you return was created inside of it. And then, through its borrow guarantees, the compiler knows, hey, it is safe to do this, and thus we can pass this value between tasks. The exceptions there are a little more tricky; happy to discuss them, but that's where things get a little more difficult. We'll definitely get into that, but let me just check. So it sounds like this recovery area seems naturally suited to things like constructors, right? That seems like a natural fit. Where else is it useful?
00:28:25
Speaker
So the way you would use this is typically only if you intend to send values between processes. Right. So if you have, essentially, a single-threaded program, you would pretty much never use this. And that's also part of the beauty: this recovery mechanism is fairly simple to understand, but it does impose certain restrictions, and the nice thing about it is if you have no need for it, you never have to pay that cost. Right. Yeah. It's only when you need this that we have to say, okay, wait, hold on, and we have to deal with this. So is this when, like, I'm going to get into concurrency, I think, when I want to set up a channel between two processes, I'm about to send a message across the wire, now I will go into this recovery area to create a message I know is isolated from everything else? Right.
00:29:22
Speaker
Yeah, so that's where you basically use it. The way you'd typically go about that is, at least in my experience, it's quite rare that you create a value in some random place, and you use it through all sorts of parts of your program, and then somewhere deep down you're like, oh, we have to send this to a process, without you really knowing about that upfront. So what you do is either you create this unique value early on, and you pass it around, you can use it because of how the compiler relaxes certain rules. And then when you say, hey, I want to have full access to this thing again, you can recover it back into an owned value. The other approach is you can clone it. So if you have an owned value, the way recovery works is that
00:30:21
Speaker
because of these relaxed rules, you can do something like, hey, take this user, clone it, but do that inside a recover. And then the compiler knows, okay, because of these restrictions we apply, that copy is now safe to move between different processes. Okay. It can get tricky where, if you have a value, you use it in a bunch of places, and then somewhere in the middle you have to turn it into a unique value, and you cannot go back and do it earlier. That's sort of a challenge, where if you have to pass values between processes, you do need to think about that closer to where you create them.
00:31:12
Speaker
But I found there are not that many cases, if any, where you cannot say, hey, we're creating this value, oh, we have to share it or move it between processes later on, let's create it as a unique value. And the nice thing there is that you, as an author of a library, do not have to think about this. So if you define a type, and it has a new method, for example, you don't have to think, oh, I want to allow people to move this between processes, so I'm going to return a unique instance of this type. You don't have to do that. You just return your normal thing. And that's because those methods, if you use them inside of a recover, the compiler will do that for you. Right. And so you can write entire libraries that just do not care about any of this, and you can still use them in a concurrent setting.
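A sketch of that library-author point, under the same syntax caveats; `Logger` and its constructor are invented for illustration. The library type knows nothing about processes or uniqueness, and the caller gets a `uni` value simply by calling the constructor inside a `recover`.

```inko
# A library type written with no knowledge of concurrency at all.
class Logger {
  let @prefix: String

  fn static new(prefix: String) -> Logger {
    Logger(prefix: prefix) # Returns a plain owned value, as usual.
  }
}

class async Main {
  fn async main {
    # Because the constructor is called inside `recover`, the compiler
    # re-types the result as `uni Logger`, which is safe to move to
    # another process. The library never had to opt in to any of this.
    let log: uni Logger = recover Logger.new('worker')
  }
}
```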
00:32:08
Speaker
Right. And to reconnect this, the point of all this is that we can get something like a garbage-collected experience, but without the performance hit of garbage collection at runtime. Yes. So the goal there is to provide a balance: the ease of use that you get from a garbage-collected language, but without the penalty you typically pay. Now, of course, that comes with its own trade-offs. It's not something where, if you're used to, say, Python, you can just jump in and immediately start working with it and not have to learn anything new. There's a bit of learning involved.
00:32:53
Speaker
Um, but overall it should be steady. It shouldn't be that you sort of hit a wall and have to really start thinking, oh, how am I going to handle this? This is really difficult. Yeah. You didn't want to put someone through those days of bashing against the wall that maybe you faced. Yes. It's more about balancing productivity, performance, et cetera, in a better way, instead of, hey, we're going to favor performance completely over everything else, or we're going to favor usability, et cetera. Yeah, I think this is one of the essential hidden tasks of a language designer: to say, how much will I ask of the user and how much power do I give them in return? What's the trade-off of learning versus the long-term payoff?

Balancing User Control and Language Complexity in Inko

00:33:44
Speaker
Yeah, exactly. And that's probably one of the most difficult parts of building a language: figuring that out is difficult in itself, but you can ask ten different people their opinion on it, and you'll get ten different opinions back. Possibly even more, depending on how many times you ask them in a certain time span. Sometimes people wonder why there are so many programming languages, but given the number of opinions about how they should work, I'm sort of surprised there are as few as there are. Yes.
00:34:22
Speaker
I'm not surprised there are as few as there are, because it is, you know, having gone through it myself, a difficult process. And from what I've seen, there are a lot of people who build a, quote unquote, programming language, but what they do is they build a parser and then they sort of give up, because then they realize, oh, everything that comes after is actually the hard part. Yeah. And it's also very time-consuming. It's not particularly rewarding in that sense, because you can spend five, ten years building on it and have, you know, five people using it.
00:34:59
Speaker
Yeah. So I feel for a lot of people, they'll play with it for like a day or two, like, oh yeah, I'm going to write a language, and then they give up and never look at it again. I think also a big part is just that a lot of the literature is not particularly accessible. An example there is I spent, I think, two, three, four months maybe working on Inko's pattern matching implementation. Okay. Yeah. And most of that time was basically spent trying to digest the existing literature and trying to figure out, okay, how does this actually work? What does this paper actually say? Because a lot of these papers, they'll throw formulas at you and lots of basically Egyptian hieroglyphs, and they're like, this is easy, and you're like, I don't understand any of this.
00:35:56
Speaker
Yeah, there's this whole sub-language that you have to learn before you can understand the papers. And it's very hard to learn that sub-language, because everyone writing those papers assumes you already know it. Yes, the papers are written for PhDs, basically. They're not written for, you know, the average developer that's banging rocks together. So after that particular experience, I ended up setting up this GitHub repository that implements two particular algorithms and tries to explain how they work. One of those is the one Inko uses. And that proved useful because, for example, the language Gleam based their pattern matching implementation on that particular implementation as well. Oh, interesting.
00:36:46
Speaker
And it's a bit funny because a while back I went through their issue tracker and I saw a bunch of bug reports for their pattern matching implementation, and I had to double-check and say, oh damn, Inko also has these, because they're based on the same algorithm and the algorithm has the bug. But so that's where, sort of, one thing I've been doing as part of building a language is to try and figure out how I can share this knowledge, so that hopefully other people have it a little easier. Yeah. Because there are a couple of artifacts that can come out of building a programming language, right? There's the language itself, but there's also the knowledge to be shared. Yeah. I think the most frustrating thing there is that a lot of the time there are certain topics that are generally difficult. Like pattern matching is one of those things where,
00:37:43
Speaker
from a technical perspective, the code is not that complex, but it's difficult to understand, because it's a recursive algorithm, and those always get you. But then other parts are very straightforward, and you look at it like, why is this not explained better? So this thing I recently did, for example, was implementing a code formatter. And there's some existing literature on it, but it's basically some papers and one or two blog posts. So
00:38:21
Speaker
there I also went through the process of implementing it, you know, wrote an article on it, and in the end you look back at it and it's like, okay, that was actually quite simple. Why is this not explained better? Yeah. One thing I've learned from reading, not a huge number of papers, but enough, is that the other thing is not all paper writers are created equal. Some of them are much better writers than others. Yes. The paper Inko's memory management model is based on is from 2006. Interestingly, another language that was discussed here a while back based theirs on the same paper; they essentially just went in a different direction than Inko did, so it sort of branched off in different parts. That paper was written, as far as I understand, not so much by an
00:39:13
Speaker
academic, but just by a developer, essentially. The author is Adam Dingle, and he co-authored it with David Bacon, known from various garbage collector papers. And that paper is very accessible. It's very straightforward to read, and it's almost more like a blog post in that sense. But then you read papers about pattern matching, and they're written by PhDs with a mathematics background, and on the first page they bombard you with funny symbols, and you're like, oh, god. Which is absolutely un-Googleable, right? Yes. I have a book that's supposed to help explain them, because at some point I got so fed up with it. So it's this little book where it explains, hey, these symbols mean this or that. That works. But the problem is, sometimes people decide to change the meaning of those symbols in papers.
00:40:08
Speaker
And sometimes what happens is they'll write paper A, they'll use a symbol with a particular meaning, and then write paper B, and the symbol is used again but with a different meaning. So there's no sort of reusable syntax. There are conventions, but they're not always followed. And yeah, the symbols are not Googleable, because you enter them in a search engine and you get some page about, you know, Unicode symbols or something. Yeah. Maybe a Wikipedia page if the symbol is frequently used, but yeah, it's a painful experience, unfortunately. Would you have any tips for someone that wanted to write their own language? Oh, gosh.
00:40:50
Speaker
I can think of a few. So one thing I did, for example, early in the development of Inko is that I completely set aside the syntax. That's something where every paper on compilers, every course on compilers, they start with parsers. I have a couple of books, they're like 800 pages, and the first 300 are all about parsing. And parsing is the most boring topic you can start with. That's how you burn out when building a language. So what I did with Inko, in the very early days, the syntax was a Lisp, technically S-expressions. Yeah. At the time, the compiler was just in Ruby, and I just used some library that supported parsing S-expressions. And I used that to basically prototype what was at that time the interpreter,
00:41:44
Speaker
as well as the standard library, so I could get a feel for, like, okay, now how do I want to implement these various bits and pieces, without having to argue over, oh, am I going to use fn or fun as the function keyword, or am I going to use def? Yeah, death by bike shed. I didn't want to care about that at that point. So I think that's a definite thing to look into. It doesn't necessarily have to be an S-expression syntax. It can be any existing thing. But the benefit of S-expressions is that they're trivial to parse and quite extendable. Just not very pleasant to read.
00:42:24
Speaker
Yeah, for other parts it's more difficult, because it kind of depends on the subject. So for example, if you want to implement pattern matching, I would suggest people look at the material that I published. But outside of that, it's basically like, sorry, this is going to be difficult. I think in the context of, say, a compiler, Inko currently uses LLVM. I'm not a huge fan of LLVM for various reasons. But the alternative is basically that you write your own code generator, your own linker, et cetera.
00:43:05
Speaker
That can be very interesting if that's what you want to do. If that's what you want to innovate upon, go for it. But I think for many people, if you're more interested in building a language as a whole, I would just stick with, say, LLVM or some existing library, rather than try to build everything on your own. I guess, in other words, what I'm saying is to incrementally make things your own, and to know where to choose that. Instead of saying, hey, I want to write my own compiler, my own standard library, my own linker, et cetera, et cetera, it's like, no, just pick one thing, start there, and then gradually work your way through it, because otherwise it's going to take 20 years to finish.
00:43:51
Speaker
Yeah. So it's just like regular programming. You choose your version-one features and save other stuff till later. Right. Yes. Knowing which battles to pick, basically. Which battles to pick is half the job of any creative act. So speaking of battles to pick then, this is another thing that comes back to this idea of how much you ask of your users versus how much you give them. Because I think there are some things in programming that would have been a real mental leap a while back, and they're now so much more familiar that they're almost expected, like pattern matching. So why don't we talk about channels and your implementation of

Concurrency Solutions in Inko vs. Older Languages

00:44:33
Speaker
channels? Because that seems like an idea that's mostly mainstream now.
00:44:37
Speaker
It's not asking people too much to use channels, but what do you add that's new? So in the case of Inko, the channels themselves are as you would expect. They're bounded channels. That's it. There's not really anything you could necessarily, I think, innovate there specifically. Prior to that, Inko didn't always have channels. In the past, the implementation was more like Erlang, where you would send messages directly to processes, and they would have a mailbox, which is effectively a channel with just a single consumer.
00:45:17
Speaker
And so the implementation itself is similar to what you'd find in Go or Rust, where it's a multiple-publisher, multiple-consumer channel. Okay. I think if you look at it like, okay, we're building a language for concurrency, what primitives should it have? There indeed is a standard set that you have to have, which is either channels or basically mailboxes. I would go as far as to say channels and not mailboxes, because there's a problem with that sort of Erlang model, where you have a process and everybody can send it messages,
00:45:58
Speaker
and at any time it can receive them: it's very difficult to statically type this. I spent quite a bit of time on this and ultimately gave up. I think Gleam is trying to do it. They have it with certain restrictions, but, as far as I understand, they have parts where you have to escape to, I don't know if it's fully dynamic typing or gradual typing, but either way there's an escape hatch you have to make use of. Right. And that is because if you can receive at any arbitrary point in your code base, the compiler cannot know what the types are that it can receive at that point. Yes. There are some mechanisms like session typing, where essentially you describe a state machine in your type system. So you say, hey, we start in state A, we can then receive values of these types, and then we transition to state B, et cetera.
00:46:56
Speaker
But as far as I know, that hasn't really left the academic playground, if you will. I don't know of any serious languages that use it. I looked into it and concluded that it's just too difficult to wrap your head around. Okay. So that's where, I think, as far as concurrency primitives go, channels are an absolute requirement. And that's because they are a first-class type, and because you can pass them around, you can solve the whole problem of what types we can receive by just saying, well, you have a channel of an int, so you can receive an int. Right. So the context is known at all times.
00:47:43
Speaker
Other than that, there are various other primitives, like atomic reference counting of immutable data structures, for example, or futures, or promises as they're sometimes called. These depend a bit more on what sort of language you're building, I guess. Right. So for example, atomic reference counting of immutable data is quite useful, something where, in the case of Inko, I'm occasionally contemplating adding it.
00:48:15
Speaker
It's usually if you have this big data structure that you want to have available to multiple processes, but you don't want to move it around or copy portions of it. You just want to read it. It doesn't matter. Right. A global, static database. Yeah, essentially. Futures and promises: Inko used to have futures in its model. Essentially, in Inko you have, you know, your normal types, and you have processes. Processes are first-class types. You define them like a regular class; you just stick an extra keyword on it. They have state. And then the way you control a process is similar to an iterator, but instead of, you know, you start it and it starts doing stuff, you have to tell it, hey, run this thing.
00:49:11
Speaker
So the way that works is you have methods defined on the process, and you can call them. And what that actually does is it sends a message to say, hey, start running this at some point, and it internally uses a queue for that. Now, what it used to be is that you would send this message, and you'd get back a future, and you could then resolve that into whatever the underlying method would return. That was replaced, basically, by channels. So now the methods defined on the process that you can call cannot return a value. The compiler will tell you, like, hey, you can't do this. Yeah. And that's mainly because futures are quite nice if you have this case of: run this single thing, give me a value back. But they are kind of annoying to work with if you have, let's say, ten futures
00:50:09
Speaker
and you just want to get data as it comes in and present it. Because you need, essentially, a polling mechanism, and you end up with kind of the same problems that the poll system call has, where as the number of futures increases, so does the time to get the data. Because for ten futures, that means you have to check ten; for a hundred, that means you have to check a hundred, right? Yeah. In contrast, with a channel you just check one thing and you sit there and wait until something comes in. So it scales much better as the number of producers and consumers and such goes up. Yes. And if you can cope with the fact that you lose the direct connection between the thing you send and the response you receive,
00:51:05
Speaker
then channels are basically a generalization of futures, right? So essentially, futures are a channel with a size of one. Yeah. Right. One-shot channels. Yeah. And so in its current implementation, that's essentially what you do. If you have a case where a process needs to communicate a result back, you just say, well, here's a channel, send it over that channel, right? And maybe at some point I'll introduce a dedicated future type for that particular use case, but it's not really necessary. So I think as far as concurrency goes, you absolutely need to have something like channels. And within the type system, you need to have something that you can use to guarantee that it's safe to move data between your tasks, and possibly share it, if that's what you want to allow. Because channels potentially could be an excellent place to create data races.
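A sketch of that "hand the process a channel" pattern: a process is a regular class definition plus an `async` keyword, its async methods cannot return values, so the result comes back over a channel. Both the syntax and the `Channel` constructor are recalled from memory of Inko's standard library and may not match a given release exactly.

```inko
# A process: a regular class definition with an extra `async` keyword.
class async Squarer {
  # A message: an async method. It cannot return a value to the caller,
  # so the caller passes in a channel to receive the result on.
  fn async square(number: Int, output: Channel[Int]) {
    output.send(number * number)
  }
}

class async Main {
  fn async main {
    # A bounded channel with room for one value: effectively the one-shot
    # "future" discussed above. (Constructor signature assumed.)
    let output: Channel[Int] = Channel.new(size: 1)
    let worker = Squarer() # Creating the instance spawns the process.

    worker.square(number: 7, output: output) # Enqueues a message, returns.

    # Blocks until the worker sends the result (49) back.
    let result = output.receive
  }
}
```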
00:52:04
Speaker
Yeah, so in Go, for example, channels themselves are thread-safe, in that, you know, if multiple threads put something in the channel, it's not going to blow up. But the data that is sent over is unsynchronized. So you have to remember, like, oh wait, we can't just send an array over it. Well, you can, if you can somehow guarantee it's only used by one goroutine at a time. You have to think about, okay, wait, do I have to use a mutex here, or do I have to do something else? And that's where, if you bake that into the type system, that's not something you have to think about at runtime. Yeah. That's the kind of thing where I definitely see a case for the language designer adding support to stop you shooting yourself in the foot. Yes.
00:53:03
Speaker
And that's what I think you see with a lot of new languages, fortunately: they are focusing more heavily on that. Whereas if you look at many of the existing languages, Python, Ruby, Java, Go, et cetera, they basically say, hey, we have threads, we have some primitives, figure the rest out yourself. And in the case of, you know, Python and Ruby, that kind of works because they have a global interpreter lock, although Python is working on getting rid of it. In the case of Java, they don't have a global interpreter lock, but that does mean you have to keep in mind, oh, this might be used by multiple threads, how are we going to deal with it? Yeah. Go there is interesting because
00:53:57
Speaker
it has this heavy emphasis on concurrency, but then, at least in my opinion, it feels a bit like they kind of dropped the ball, because they didn't do anything to really make that actually memory safe.
00:54:11
Speaker
But yeah, a lot of new languages today seem, fortunately, to be focusing more heavily on that, and trying to bake it into the language so that you can't really work around it. Yeah. But again, it comes back to this idea that if you give people more control, but you want to keep the memory safe, how much do you have to teach them? Do you have personal opinions and an instinct for how different Inko is allowed to be?
00:54:44
Speaker
That's a tough one. There's definitely something commonly referred to as a strangeness budget. Yeah, this idea that there's a certain amount of weird things you can put in your language before people kind of throw up their hands, like, you know, this is too much. Yeah. I don't know if there really are any universal guidelines for that, because it's also kind of influenced by what way the wind is blowing, so to speak. So,
00:55:21
Speaker
because of historical reasons, for example, Lisp-like languages are kind of out of favor. So you can build the best programming language in the world, but if it's a Lisp, people are like, eh, you know, not really sure about this, all these parentheses, this looks weird. Yeah. Not necessarily agreeing, but I think that's an accurate representation of the state of the industry. Yeah. And so it kind of varies, you know, by time and various other factors, I think.
00:55:51
Speaker
Uh, yeah, I actually don't have good tips. I would say, you know, it's a matter of taste and guesswork, but that's terrible advice because it's very abstract. Yeah, it's a hard one. Well, let me put it in a more concrete way. So is this one reason why, it seems to me, Inko has very ordinary syntax? I mean, it's exceptionally familiar to anyone that's written Java or JavaScript with classes. Yes, so I think that stems from the original setup where I had an S-expression syntax. For me, syntax is not really an interesting topic, in that there are certain things you can explore, but I think for the most part we've sort of exhausted the
00:56:45
Speaker
things you can try out. And the industry as a whole has kind of settled on curly braces as the main thing instead of, say, indentation, and a sort of class-structured setup for types and such. So because of that, one of the things I wanted was a syntax that is just really simple to parse and really simple to wrap your head around. Partly because I don't like writing parsers, so I just wanted to spend as little time as possible on that. Getting that sense from you. Right. Partly because they are written by hand; I'm not using parser generators. But part of this is also, I think, fueled by my experience working with Ruby for many years, where the syntax is this weird monstrosity that
00:57:38
Speaker
kind of, like a Hydra, keeps growing a new head every year. Every year it's like, oh, there's this new syntax feature, this new this, that. And for a while, I was helping out with this library that exposed a parser of Ruby for the various different versions. Because Ruby's standard library has a parser available, but it's directly tied to the version of Ruby that you're running. So you can't say, hey, we want to parse syntax for this particular Ruby version up to whatever. It's like, no, it's always the latest. And the API is just a pain in the butt to work with. And so there's this library, and I helped out with it. And the main developer was basically just having an increasing number of mental breakdowns over time, just because the syntax kept changing, and they were like,
00:58:34
Speaker
you know, we just fixed this and now we have to deal with this other thing, when does it ever end? So you've been bitten so badly, you want to go to the opposite end of the syntax argument. Yeah. And the result of that, basically, is that in Ruby it's very difficult to derive proper tooling that acts upon the syntax, because it's so dynamic and ever-changing.
00:59:03
Speaker
And so with Inko, that's exactly what I don't want. That's something I want to avoid. I want people to be able to write different tools and focus on what the tool actually does, rather than how we are going to parse this syntax. And I think also part of it is just a byproduct of trying to not put too many things in it.
00:59:37
Speaker
I think I'm very good at saying no. Like, if you look at a lot of languages, it seems the answer to a lot of proposals is basically, oh yeah, sure. Whereas my standard response is, no, we're not doing this, right? So if you look at Rust, for example, when I started using it in 2015, it was, I wouldn't say simple, but it was definitely less difficult than it is today. And for example, I recently learned that there is this proposal so that you can embed a Cargo manifest inside a Rust file. So, kind of similar to Markdown, there's this idea of having a sort of front-matter section in your file.
01:00:25
Speaker
Yeah. And I kind of looked at it, and, you know, I understood why people were pushing for that. But my response was, why are we embedding, in this case, TOML inside Rust? To me, that seems absurd.
01:00:42
Speaker
And similarly, there was a proposal to have essentially generic keywords or something, where you could have functions that are generic over being async or not, things like that. Right. And they're all features that, when I look at them, they make sense. I understand why people reach those decisions. But my response is, no, I want something simpler. And I'm willing to basically say, look, we're just not going to do x, y, z because of that.
01:01:16
Speaker
It's worth keeping the footprint of the language small, even if you sacrifice the chance of pleasing everyone. Yes. So I think my approach there is: I know you cannot please everyone. Like, even if you have the perfect language, somebody will be like, actually it doesn't do this one thing, so I don't like it. So my approach is, I'm not interested in pleasing everyone. I want something that's more approachable and easier to use, even if that means that certain things are perhaps not as efficient as they could be, or as generic as they could be, or whatever it is. Yeah. I can see plenty of people responding to that. Plenty of people who feel that some languages have too large a footprint,
01:02:11
Speaker
and liking that. So before we get off the technical side of this, there is one language feature which I'm reminded of by talking about problems and how difficult life is or not, which is error handling.

Error Handling Evolution in Inko

01:02:28
Speaker
Because I know that's a battle you have chosen to take on. What's Inko's perspective on error handling?
01:02:38
Speaker
Let me see how I break this down. So, one, there's an article written by Joe Duffy. It's called The Error Model. Okay. And it's this article in which they basically break down how they did error handling with this project called Midori, a language they were developing at Microsoft.
01:03:02
Speaker
And it breaks down some of the different approaches they tried, the trade-offs, et cetera.
01:03:09
Speaker
Inko's model is inspired by that, although it has changed a little bit. Essentially, the way it used to be is that a function had a return type and an error type. And you could throw it, then it would be an error. You could return it, then it would be a normal value. And then the compiler would ensure that, hey, if a method has a type that it throws, you are forced to handle it at the call site. So there's no implicit unwinding across function calls. And you could only specify a single type to throw. So you couldn't have a method that throws, you know, an int, a string, an array, all these different things. It's just one thing. Okay. And there were a couple of other restrictions. But it basically meant that
01:04:04
Speaker
you wouldn't have this problem in typical languages with exceptions, where you call a method and it might throw 25 different errors, some it might throw directly, some might come from ten function calls deep. That model I've since moved away from, and now it's using basically algebraic data types: enums, result types, et cetera. Okay. Yeah. And that's because, although that first model is very interesting, it's quite efficient if you don't support implicit stack unwinding, because it means you always only have to basically return. In that setup, producing an error is basically: you set an error flag and you return the value. That's it. There were no stack traces built or anything like that.
01:05:00
Speaker
The problem with that approach is that it doesn't compose very well. Take a case I ran into quite frequently: you have an iterator and some sort of map operation.
01:05:17
Speaker
The code running in that map might want to throw an error, but that poses the question: when you define that map function, do you allow it to throw errors? How are you going to propagate those back up? You basically end up with a pattern where a lot of functions have a generic throw type just so that they can rethrow whatever the closure they're given may throw.
01:05:46
Speaker
And it's essentially because you now have two output streams: your return output and your throw output.
01:05:56
Speaker
Whereas if you return, say, a result type that can be either OK or an error, that automatically composes with everything that handles return values: your map functions, your folds, et cetera. So that's what Inko uses now, because it composes more easily and because you can pattern match against it. It's more expressive. So your perspective is that algebraic data types are better than two separate channels, one for results and one for errors; they subsume them. Algebraic data types subsume that specific syntax. Yes. I am willing to die on the hill of saying that algebraic data types are better than exceptions in any form.
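A minimal sketch of that composition point, in Rust rather than Inko, with invented values: when the error is just data inside a Result, it flows through ordinary map/collect machinery and can be pattern matched, with no separate throw channel to thread through every signature.

```rust
fn main() {
    let inputs = ["1", "2", "oops", "4"];

    // The closure handed to map returns a Result like any other value, so
    // map itself needs no error-aware variant; collect() stops at the first
    // Err and surfaces it.
    let parsed: Result<Vec<i64>, std::num::ParseIntError> =
        inputs.iter().map(|s| s.parse::<i64>()).collect();

    // The error is ordinary data, so it can be pattern matched directly.
    match parsed {
        Ok(numbers) => println!("sum = {}", numbers.iter().sum::<i64>()),
        Err(e) => println!("one input was not a number: {e}"),
    }
}
```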
01:06:50
Speaker
Because they compose better, and you can pattern match against them. Now, there is the concern that it has a performance impact, in that you have these values you have to unpack: every time you handle one, it's "are we dealing with an OK or an error? OK, then do this; otherwise do that," et cetera. But that cost exists anyway, even if you use some form of exceptions, because when you call a method you need to check: did this throw a value or not? That can be an explicit check, or it can be some sort of unwinding magic that's potentially expensive. So there's always a cost there.
01:07:29
Speaker
But that cost you can optimize away: instead of heap-allocating these result types, you can put them on the stack, and maybe through clever inlining optimizations you can get rid of most of it in the first place. So in the long term of a language, I think that's not that big of a problem.
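As a small illustration of why result-style values need not be expensive, here is a Rust sketch (not Inko): a Result is a plain stack value with no heap allocation, and in favourable cases the compiler can fold the tag away entirely. The exact sizes printed are an implementation detail and may vary between compiler versions.

```rust
use std::mem::size_of;

fn main() {
    // A Result is just a tagged value on the stack.
    println!("Result<u64, u64>: {} bytes", size_of::<Result<u64, u64>>());

    // Where one payload has a spare bit pattern (a "niche", like a non-null
    // reference), the tag can be folded into it, so the Result costs no
    // more space than the payload itself.
    println!("&u64:             {} bytes", size_of::<&u64>());
    println!("Result<&u64, ()>: {} bytes", size_of::<Result<&u64, ()>>());
}
```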
01:07:51
Speaker
Okay, then a related question. I think I've almost got you to say that channels should be part of any language. Would you say that algebraic data types should be part of any language? I think, yeah, for pretty much most, yes. Though I will admit that if you have a dynamically typed language, their value is a bit questionable, because you can use dynamic typing in that case: you can say, oh, this function returns an array or nil or something like that. You should have had it anyway, right? Yeah. And also, in a dynamically typed language there may be less opportunity to optimize the cost of algebraic data types away, depending on whether or not you have a JIT compiler, et cetera. But at least for a statically typed language, I would say yes.
01:08:47
Speaker
I think you can get away with not having them, but you'll make a lot of people happy by including them. Yeah, it's definitely one of those things I look for when looking at a language. I'm not saying I rule a language out if it doesn't have them, but it raises a question mark for me if it doesn't. Yeah. I think it's interesting, for example, there's Zig. They have an interesting error handling approach where, if I remember correctly, you annotate a function with "hey, it might throw a value of a certain type," and then they generate error codes.
01:09:26
Speaker
But the problem there is, as far as I know, you cannot attach additional data to the error. So you can say, hey, this will throw 12345, but you can't attach, say, some information describing the location at which the error occurred, or any additional metadata. And this is something I've seen people bring up: hey, I like Zig, but it's kind of annoying that I can't attach extra data to my errors.
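A hedged sketch of the capability being contrasted here, written in Rust rather than Zig or Inko: when errors are ordinary data types, each case can carry extra context such as the offending input or a position, rather than being a bare code. The ConfigError type and its fields are invented for illustration.

```rust
#[derive(Debug)]
enum ConfigError {
    MissingKey { key: String },
    BadValue { key: String, line: usize, found: String },
}

fn lookup_port(source: &str) -> Result<u16, ConfigError> {
    for (i, line) in source.lines().enumerate() {
        if let Some(value) = line.strip_prefix("port=") {
            // The error carries the key, line number, and the bad value.
            return value.trim().parse().map_err(|_| ConfigError::BadValue {
                key: "port".into(),
                line: i + 1,
                found: value.trim().into(),
            });
        }
    }
    Err(ConfigError::MissingKey { key: "port".into() })
}

fn main() {
    match lookup_port("host=localhost\nport=eighty") {
        Ok(port) => println!("port {port}"),
        // The extra data travels with the error and is available here.
        Err(e) => println!("config problem: {e:?}"),
    }
}
```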
01:10:00
Speaker
And so I think they're not required, but definitely very nice.
01:10:10
Speaker
Yeah, I would totally agree with that. Okay, so moving on from the technical side, because there is one other big question I want to ask you. We've talked a lot about what it's like if someone wants to write their own programming language.

Funding Challenges for Inko and Open-Source Projects

01:10:22
Speaker
There is a big practical question if you want to create a programming language, which is: how do you fund it? There are a number of funding models, and I know yours is interesting, so I wanted you to tell me about that. Funding is difficult. At the foundational level, it's basically going to be donations, because whereas in the 80s and 90s people might have been willing to pay for programming languages, nowadays that's not true anymore. If you produce a language and you say, hey, this is going to cost you a hundred bucks a month or whatever, people are like, nah, I'm just going to use Python instead. So that means you basically have
01:11:09
Speaker
as your options: donations, a side hustle of some sort, or sponsorship. Sponsorship typically only becomes relevant if you're big enough, because some random company isn't going to say, hey, we're going to fund a bunch of people to work on some random language. So in most cases it's donations or a side business. That side business can be some sort of support contract, as you see with a lot of open source, or it can be a business that makes use of the language.
01:11:46
Speaker
The challenge there is that if you do something on the side that makes you money, I suspect that will become your primary interest, because that's what's paying your bills. And I suspect it's going to be quite difficult to balance working on the commercial thing against the language it's built on top of. It might be more feasible if you start hiring people, but then you need more funding, which probably means venture capital, which completely changes the trajectory you'll be on. Yeah, very much so. Like, for example, Deno is doing this, where they have a language
01:12:30
Speaker
and then they have a whole business built around it: a key-value store, hosting, et cetera. That can work quite well, but I don't think you can do that as a solo developer; it's just too much work. So you have to be willing to say, okay, we're going to hire people, we're going to take venture capital. That means we're going to have to go public at some point, and that means a lot of the decisions are going to be influenced by shareholders, because they want their money back eventually. So your language goes a completely different way. Donations are very, very difficult. You can build the best thing ever and people will still not pay you money. Sometimes that makes total sense, because they don't have it.
01:13:21
Speaker
Perfectly reasonable, right? Other times it's just, well, I already use Python. Right, the reasons can vary. Yeah. What I've noticed so far is that the traditional way of going about it is you publish stuff: articles, papers, whatever, and through that you try to constantly remind people, hey, this thing exists, please take a look. But what I've noticed myself is that you can publish something and, let's say, get 100,000 views. Of those 100,000, maybe 1,000 will actually click through to the website of your project. Of those 1,000, maybe 10 will actually try it out. Of those 10, maybe one or two will join your Discord, IRC, whatever channel. And of those, maybe one will donate.
01:14:18
Speaker
So the conversion ratio from people seeing it to people donating is very, very low. And that means it can take many, many years before you get to $100 a month, let alone enough to pay your bills. I don't have any good way of dealing with that; I'm still trying to figure that out myself.
01:14:51
Speaker
There are organizations you can apply to for grants. Inko, for example, gets a grant from a foundation called NLnet, a foundation here in the Netherlands. They're annual grants dedicated primarily to projects that are internet-focused; Inko, in that sense, was a bit of an exception. Essentially what happened there is I asked for about 8,000 euros, the idea being that was something like 1,200 or 1,500 a month, to take into account my bills, taxes, et cetera.
01:15:31
Speaker
And I thought that was a lot of money; I was like, ooh, they're probably going to reject it. So I had a call with them, and they basically said, look, normally we don't really do programming languages because there are already so many of them, but the amount you requested was so little that we figured, hey, you know what, we'll give it a try. And I was like, okay, what do people normally ask for? And they're like, oh, 50,000 euros. And I was like, oh, okay. Right. But the thing with those grants is they are typically very specific to certain topics. The European Union has a couple of grants you can apply to, but they're usually focused on things like security or dealing with artificial intelligence. They're very specific, and if you fit within that profile it's probably great, but I suspect most people will fall outside of it.
01:16:26
Speaker
And this is something I'm still trying to figure out. Inko's funding has gone up and down a bit over the years; I think at its peak the donations were like $40 a month before taxes. Currently it's at zero, because a bunch of people stopped their donations. Yeah. I think an ideal solution would be some sort of system where governments dedicate a certain amount of their annual income to
01:17:07
Speaker
funding such organizations, a sort of open-source tax, if you will, so that there's a constant stream of money that isn't purely tied to the goodwill of companies and organizations. But I think it will be many more years before we see something like that, if at all. And even if we do, it's probably going to be one or two countries; I doubt it's going to be widespread. Yeah, there are certain countries I certainly can't see that happening in, despite the fact that I do think if companies gave back, in cash, 1% of the value they get from open source, that would make a huge difference.
01:17:56
Speaker
Yes. That's the sort of paradox, if I can call it that: because it's open source, people don't pay for it, but if you charge people for it, they won't use it, because they don't want to pay for it. No matter how valuable it is. Right. And you end up with a state where a lot of companies are built on top of open source software, but that software is receiving little or no funding. OpenSSL is probably the best example. When Heartbleed happened, people realized, oh, it's basically one maintainer, and they've been getting basically no money, or very little, for many, many years.
01:18:42
Speaker
And now we have this big problem, shoot, maybe we should send some money that way. I believe nowadays they have a decent amount of funding as a response to that. But there are a lot of projects where that doesn't happen, simply because they don't have that sort of big exposure event where people wake up and go, oh, shoot, right? Yeah. And it wouldn't be ideal to manufacture that, right? Yeah, the ideal would be that companies have to set aside some money, but it's very difficult; I don't think that's going to happen at a global level. And there's the potential problem that if it did, companies might say, well, we're just not going to use open source anymore, we're just going to build it all ourselves, right?
01:19:37
Speaker
I think if they did that, they'd very quickly find out that paying a bit for open source is the cheaper route. Right, probably. So I would like there to be more billionaires donating money to open source, similar to how they all set up their own trust funds and whatever.

Current Limitations and Usability of Inko

01:19:58
Speaker
They could just throw some money towards things like the Apache Foundation or whatever exists. Right, yeah. The unfortunate reality, I think, is that, especially if you build a language,
01:20:12
Speaker
you have to be willing to accept that for at least five, maybe ten years, your income is basically going to be zero, or at least very close to it. Which means you need savings, a partner or a family that can support you, et cetera. I was very fortunate that I can afford this, in that I set aside money to fund it and made various decisions keeping in mind that for the next however many years my income is going to be very bad. But that's very much an exception. And there's still that thought of, okay, at some point I have to admit I have to start making money again, because money is not infinite.
01:21:03
Speaker
yeah
01:21:07
Speaker
It's a struggle, that's basically the gist of it. As an independent podcast producer, that's a familiar story. But until the day the billionaires start reaching into their pockets, let's leave the utopian future for the present and try to end on a happy note. If someone wants to play with Inko, where do they go, and how ready is it to play with? They can go to the website. There are also links there to the GitHub repository, the Discord, et cetera. In its current state it's very much usable; you can write stuff with it. I have a couple of projects written with it. For example, it controls my central ventilation system. Cool.
01:21:57
Speaker
I would recommend people check out the Git repository rather than the current stable release, because there are a couple of changes there. Yeah, so it is very much accessible. I think the main limitation currently is that SSL sockets are not supported, so if you want to do HTTP connections and such, you have to use curl, for example. Okay. But beyond that it's very much accessible. Worth taking for a spin. Yeah, and seeing if it is that friendlier step away from Rust, but towards the metal, right? Yes, I certainly hope so. And the feedback so far has been very positive, so the inner critic in me is being kept in the corner, like, no, no, we're doing all right.
01:22:53
Speaker
But we'll leave people to go away and check it out. And if they like it and happen to be billionaires, get in touch with the show and we'll get Yorick's bank details for you. I mean, there's a GitHub Sponsors account set up, so you can donate straight through GitHub. Oh, even easier. Even easier. How am I going to get my 5%?
01:23:13
Speaker
Yorick, thanks very much for taking us through it. Yes, pleasure being here. Cheers. Thanks, Yorick. So if you're listening to this from your computer or your phone, you'll find links to Inko and some of the writing that inspired it in the show notes. And if you're listening to this from your yacht, you can contact us directly with funding opportunities; you'll find our contact details in the show notes. Of course, for the rest of us, the first currency of the internet is interaction. So if you've enjoyed this episode, please hit like, share it with a friend, leave a review on Apple or Spotify. And of course, if you're not already subscribed, consider clicking the subscribe button, because we'll be back next week with another great guest. Until then, I've been your host, Kris Jenkins. This has been Developer Voices with Yorick Peterse. Thanks for listening.