
Writing a CAD Language in Rust (with Adam Chalmers)

Developer Voices

Given how many languages have been written in C over the years, it’s not surprising to see new languages being written in Rust. What is surprising about this week’s guest is the domain he’s writing for: Computer Aided Design (CAD). Could Rust be sneaking its way into the CAD world too?

Joining me to discuss the design and implementation of a CAD programming language is Adam Chalmers. He works at Zoo, developing KCL - a language that looks like JavaScript, runs on Rust, and offers users a seamless hybrid experience of both coding and point-and-click modelling. So, how does that all fit together?

In this episode we look at the design and implementation of a programming language in Rust; how KittyCAD creates that hybrid environment for text-based programming and point-and-click modelling; and how we can learn to write our own Rust-interpreted languages.

Adam’s Blog: https://adamchalmers.com/

Adam’s Guide To Writing Parsers: https://www.youtube.com/watch?v=QF3kMyzMC40

Zoo’s Modelling App: https://zoo.dev/modeling-app

Mechanical CAD: https://zoo.dev/blog/mechanical-cad-yesterday-today-and-tomorrow

A Lego brick in KCL: https://zoo.dev/docs/kcl-samples/lego

Winnow: https://docs.rs/winnow/latest/winnow/

Nom: https://docs.rs/nom/latest/nom/

Factorio: https://www.factorio.com/

Satisfactory: https://store.steampowered.com/app/526870/Satisfactory/

Crafting Interpreters: https://craftinginterpreters.com/

Coding in Antarctica: https://brr.fyi/


Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices

Support Developer Voices on YouTube: https://www.youtube.com/@developervoices/join


Adam on Mastodon: https://mastodon.social/@[email protected]

Kris on Mastodon: http://mastodon.social/@krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Kris on Twitter: https://twitter.com/krisajenkins

Transcript
00:00:00
Speaker
Given how many programming languages have been written in C over the years, it's not really surprising that we're starting to see the rise of programming languages being coded in Rust. That seems like a natural progression for the industry. So when this week's guest, Adam Chalmers, got in touch with me to say his day job was writing a programming language in Rust, I was curious and a bit envious.
00:00:23
Speaker
And I thought by talking to him, we might be able to get some tips on how you implement a programming language and how you do it in Rust specifically. But it turns out his story goes a lot deeper than that. Because the first question you ask anyone when they're creating a new language is: why? What's the point? What's it going to do that all the other languages in the world aren't already doing?
00:00:46
Speaker
And Adam's answer is actually very compelling. It's computer-aided design: CAD. And CAD itself isn't new. Mechanical engineers have been clicking and dragging new car designs into being for decades. But it's always been a very mouse-driven pursuit, right? You draw 3D models. You don't code them. You don't express them in a programming language. But you could.
00:01:12
Speaker
You could write code that defines physical things, and that's what KCL is attempting to do. And as we unpack the story of KCL, we get into tips and tricks for writing a language in Rust. We also get into dual presentation user interfaces,
00:01:29
Speaker
where writing some code shows you a 3D model, but dragging the 3D model around rewrites the code to match. And having those two things work together seamlessly is both an interesting UI design challenge and a really great example of the power of a good abstract syntax tree.
00:01:50
Speaker
In the end, programming languages, CAD, all these things are just different ways of playing with data. So let's explore the data structure. I'm your host, Kris Jenkins. This is Developer Voices, and today's voice is Adam Chalmers.
00:02:17
Speaker
Joining me this week is Adam Chalmers. Adam, how are you? I'm good. I'm good. How are you? I'm good. You're joining us as an Australian living in New Orleans, right? Yes, that's right. That's a big culture change.
00:02:32
Speaker
I mean, there's crocodiles back home and there's alligators over here, so yeah, it's been quite an adjustment. Very difficult. As an Australian, it's very important to have animals that could kill you nearby, right? I just don't feel safe without them, you know? It's very Australian.
00:02:49
Speaker
Okay, so whilst we could talk about wildlife all day long, and New Orleans music culture, we're here to talk about what you've been doing with computer-aided design, right? Yeah. I didn't expect to be working in an industry that had so much overlap with the hardware world, but for the last couple of years I've been working with a company making tools for computer-aided design: tools for 3D software where you can design parts to get manufactured in the real world, not just software that makes bits and moves them around the internet. I mean, this is obviously kind of cool in the age where you can not only design something in a computer, but eventually, presumably, you could have someone print it and post it to you,
00:03:36
Speaker
all in one go. And I was looking at the CAD world. It seems to me a fair contender for the oldest non-military use of computers, certainly in the top five, right? It's been around a long time. I think so, yeah. The chief research guy at the company, Alan, was telling me at an all-hands last year that CAD software was basically started by car manufacturers
00:04:05
Speaker
in France, trying to actually model the curves that they wanted to have in the car bodies. They used to do all this stuff by hand with a big pencil attached to a string to get a perfect arc, but they were pretty early adopters of computer software to be able to mathematically model the kind of shapes they wanted to carve out.
00:04:26
Speaker
It does seem like something where, especially if you're like modeling curves, Bezier splines, presumably they came up at a similar time. It makes sense, right? But you've got the enviable position of now translating this into code. Give me your exact role.
00:04:43
Speaker
at your company? The company is called Zoo. When I first joined, I joined to basically be a normal back-end programmer. I just saw a tweet from the CEO saying, we need someone to help us build HTTP servers in Rust, and I thought, well, that's what I've been doing for the last couple of years, so I threw my hat in the ring. But then when I got here, this programmer, Kurt, showed me a demo of something he'd been working on that he was really excited to show the leadership, which was a kind of programming environment where you write code, and instead of getting software, you get a model of your eventual hardware.
00:05:20
Speaker
And I thought that was really, really cool. And as we as a company decided to really embrace that project and put a lot of investment into it, I found myself talking about the programming language side a lot with Kurt.
00:05:34
Speaker
And as Kurt wanted to move on from writing parsers in JavaScript and tree-walking executors and everything, and get back to building the UIs for sketching out different parts with the mouse, I found myself pretty naturally taking over a lot of the programming language stuff. And you've ended up at this point where you're now professionally writing programming languages in Rust.
00:06:00
Speaker
Yes. I think I have a very cool job. I'm always very excited to wake up and get to do this. I think a lot of people would envy that position. Okay, so tell me about... I want to get into the guts of how you write a custom programming language in Rust. But before we get there,
00:06:20
Speaker
what is it about? I think about computer-aided design packages and they're very mouse-driven: create a square, extrude it, now you've got a cube, clicking around. When I try and think of that as a programming language instead of a mouse-driven application, I can't quite get there. So what does a language for designing these things look like?
00:06:45
Speaker
So I think you're absolutely right in that most people, when they think about geometry and their CAD designs, they think visually and they kind of think with their hands to some degree. You might have an image in your head, and depending on how good of an artist you are (I'm certainly not a very good artist), I need to just sketch it out with my hands before I really know what I'm picturing in my head sometimes.
00:07:06
Speaker
So there have been other attempts to make CAD driven by programming. There's a piece of software called OpenSCAD where you write code and you get a model. It's a one-way process. You don't use the mouse at all; you write the code and it outputs a model. And I think the weakness of approaches like that is that people want to use the mouse. There are a lot of tasks that are better done visually with a mouse. And so Kurt's big idea when he made this first prototype was that you'd be able to use both. So you can write code, and you can also use the mouse to draw whatever paths you want and to select faces by clicking them, not by storing a reference to them somewhere and having to imagine all the 3D output in your head while you're writing code. So I think the key is to make it
00:07:55
Speaker
both code driven and visually driven. Right. I'm trying to imagine how that's done. And my guess would be you write a programming language that converts to some kind of abstract syntax tree, then you build a visual editor of that syntax tree. And that's the way you can get both going at once. You nailed it completely. Okay. So.
00:08:21
Speaker
The code is the primary representation of your model. Code is what really matters here. But then the UI, when you press a button to do something, is really a macro for updating the AST, for changing the code. So when you say,
00:08:40
Speaker
click a couple of points in space to draw a line that connects point A to point B to point C to point A, and we've got a triangle, what it's doing is parsing the code, and it's got that, as you said, abstract syntax tree. And then when you click to add a point, it's updating the abstract syntax tree, adding a new item to it that represents the start of a path. And when you click the next point, it adds a new line to that code, a new node to the AST representing the edge from A to B, and so on and so forth.
00:09:09
Speaker
And then at every point, it kind of takes an AST and un-parses it or reverse-parses it back up into source code. So as you're clicking and you're adding new points, you see the code updating simultaneously with your clicks.
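To make that flow concrete, here is a rough sketch in plain Rust of "clicks edit the AST, then the AST is un-parsed back to text". None of these types come from the real modeling app, and the emitted text only gestures at what KCL source looks like; it's purely an illustration.

```rust
// Hypothetical sketch of "the UI is a macro for updating the AST".
// None of these types are the real app's; they just illustrate the flow.

#[derive(Debug, Clone, Copy)]
struct Point {
    x: f64,
    y: f64,
}

#[derive(Debug)]
enum PathSegment {
    Start(Point),
    LineTo(Point),
    Close,
}

#[derive(Debug, Default)]
struct SketchAst {
    segments: Vec<PathSegment>,
}

impl SketchAst {
    /// What a mouse click might do: append a node to the AST rather than
    /// mutating any geometry directly.
    fn click(&mut self, p: Point) {
        if self.segments.is_empty() {
            self.segments.push(PathSegment::Start(p));
        } else {
            self.segments.push(PathSegment::LineTo(p));
        }
    }

    /// "Un-parse" the AST back into source text, so the code pane can
    /// update as you click.
    fn emit_source(&self) -> String {
        self.segments
            .iter()
            .map(|seg| match seg {
                PathSegment::Start(p) => format!("startProfileAt({}, {})", p.x, p.y),
                PathSegment::LineTo(p) => format!("  |> lineTo({}, {})", p.x, p.y),
                PathSegment::Close => "  |> close()".to_string(),
            })
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let mut ast = SketchAst::default();
    ast.click(Point { x: 0.0, y: 0.0 });
    ast.click(Point { x: 10.0, y: 0.0 });
    ast.click(Point { x: 5.0, y: 8.0 });
    ast.segments.push(PathSegment::Close);
    println!("{}", ast.emit_source());
}
```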
00:09:22
Speaker
Right, yeah, so it's all about having a parser, a pretty printer, and a UI that are all dealing with the same core data structure. Yeah, exactly. And then we also store relationships between the AST nodes and the visuals. So when you mouse over the code,
00:09:42
Speaker
and you get your little syntax highlighting, and you click into a function, it shows you visually, by highlighting in the video feed, what that corresponds to. So when you mouse over the draw-line call from point A to point B, the line from point A to point B also gets highlighted visually, and vice versa. So we store this mapping between the AST and the visuals so you can understand your code a lot better.
00:10:09
Speaker
Oh, right. So you're keeping hold of cursor positions as well along the way. By cursor, do you mean the hardware mouse cursor, or a cursor into a data structure? I was actually thinking about the cursor in the text editor, like line numbers for the source code. Yeah, so we have what we call an artifact map that links every AST node to the CAD geometry that it produces, and then we also link AST nodes back to source offsets, yes.
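A minimal sketch of what an artifact map like that could look like, using invented types; the real app's data structures are certainly richer than this.

```rust
// Hypothetical shape of an "artifact map": linking AST nodes to source
// spans and to the geometry they produced, so hovering code can highlight
// geometry and vice versa. Not the app's real types.
use std::collections::HashMap;

/// Byte offsets into the source text, so the editor knows what to highlight.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct SourceRange {
    start: usize,
    end: usize,
}

/// An opaque id for a piece of geometry the rendering engine created.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct GeometryId(u64);

/// An id for an AST node.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct AstNodeId(u64);

#[derive(Debug, Default)]
struct ArtifactMap {
    node_to_geometry: HashMap<AstNodeId, Vec<GeometryId>>,
    node_to_source: HashMap<AstNodeId, SourceRange>,
    geometry_to_node: HashMap<GeometryId, AstNodeId>,
}

impl ArtifactMap {
    /// Mouse-over in the 3D view: which source range should light up?
    fn source_for_geometry(&self, g: GeometryId) -> Option<SourceRange> {
        let node = self.geometry_to_node.get(&g)?;
        self.node_to_source.get(node).copied()
    }

    /// Cursor in the code editor: which geometry should light up?
    fn geometry_for_node(&self, n: AstNodeId) -> &[GeometryId] {
        self.node_to_geometry.get(&n).map(Vec::as_slice).unwrap_or(&[])
    }
}

fn main() {
    let map = ArtifactMap::default();
    assert!(map.source_for_geometry(GeometryId(1)).is_none());
    assert!(map.geometry_for_node(AstNodeId(1)).is_empty());
}
```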
00:10:44
Speaker
Right, yeah. I mean, the editor itself can associate the code offsets with the AST. Yeah. That makes sense. That sounds familiar from languages that want to give good error messages and stuff. Absolutely. Do you also store things like the camera angle? Because when you're looking at the model from a certain direction...
00:11:10
Speaker
That is not stored in the client. So one little wrinkle here is that this is not rendering locally. We're an API-driven company, and so when you actually open up our app, the UI and everything is running locally, but it's connecting to a remote rendering and geometry engine in the cloud.
00:11:39
Speaker
And then we stream the video feed of your model over WebRTC back down to the client. Oh, so what looks like a renderer is actually just a video playing of the renderer you've got in your server farm. Yes, exactly. Why? That's a strange architectural choice.
00:11:57
Speaker
So it definitely has some challenges, but I think it's worth it. Firstly, as a CAD programmer you don't have to buy this big beefy gamer laptop that's got a big GPU and 14 fans and is probably covered with garish RGB, because there's no such thing as a professional graphics laptop. They're all just gamer laptops rebranded.
00:12:19
Speaker
This means that when you're rendering the model, your fans don't have to spin up, you don't have to kick into video card mode, your battery doesn't get drained. For the people editing it locally, it's a very lightweight experience to render. I don't know how much CAD programming you've done, but these tools often tend to be really resource-heavy. So the nice thing about offloading that to the cloud is that your machine doesn't have to render it; it just has to render a 2D video feed, which is much lighter weight.
00:12:46
Speaker
It also means that you don't have to buy a new laptop when you want to upgrade your graphics card because your CAD model is getting slower. You'll wake up one day and find that we've deployed you to a faster GPU whenever the cloud providers offer new GPUs. And so I think that's a much smoother upgrade path.
00:13:05
Speaker
So I'm going to be able to design my jet engine on a Chromebook one day. Basically, yes. And for professionals, this isn't quite as big of a problem. If you're working at a big industrial design company, your employer can probably pay to get you whatever kind of good hardware you want.
00:13:23
Speaker
Hopefully. I mean, I think we've all had employers who are a bit stingy and won't spend the extra $1,000 to give you $20,000 worth of productivity improvements. I think this is really important for hobbyists and for students, especially. If you want to use CAD to do a little around-the-house thing, make a hook for your desk or something like that and 3D print it somewhere,
00:13:45
Speaker
it means you won't have to buy a very heavyweight machine to do that kind of rendering. And so I think hobbyists and students, who don't want to invest thousands and thousands of dollars into a top-of-the-line rendering workstation, will be very happy about it. Okay, so you're doing it for those cosplay people that are making their own Iron Man suit, that kind of thing. Yeah, maybe. I mean, the other thing is that when we started the company, the founders envisioned the company as really being an API company. We realized that if we're going to be making this API, we really have to have an app that consumes the API. Otherwise, we don't know if we're going to be building the right thing. We have to build an app alongside the API to make sure that the API is actually useful.
00:14:34
Speaker
So the idea is that if you don't like using our modeling app, you don't want the code-driven CAD thing, or even if you just want to make something really specific, you can use our API to replace having to make your own CAD app. So I met a guy at RustConf a few weeks ago who was telling me he worked at a sheet metal foundry.
00:14:57
Speaker
And they're getting orders from customers like, I'd like to get this metal cast with these dimensions, curved to this degree, in this kind of alloy. And they'd have all these back-and-forth conversations with customers: we can't do that kind of curve. Or we can, but it'll increase the cost by 10x, but if you're willing to make the curve a little bit smaller, you could get this much, much cheaper thing. We have maximum thicknesses. There are all these kinds of constraints
00:15:27
Speaker
on what fabrication places can actually make. And instead of having a back-and-forth conversation where you have to schedule a call and you call each other back and everything, they ended up making their own CAD app just for the sheet metal foundry, which encoded all of the constraints of that particular foundry into the app. So customers would design their thing and they'd get little warnings if they were trying to do something that would be impossible.
00:15:51
Speaker
And that's an incredible amount of work, to make your own CAD app with your own 3D rendering and geometry engine. And so we thought, OK, there are probably a lot of shops like that that really want their designers to get feedback in the design step, not when they export a model and email it to someone, and the person looks over it and talks to them and says, OK, well, here's the feedback.
00:16:13
Speaker
So I think if companies can design their own CAD apps much more easily than having to make their own full backend rendering and everything, it could be very, very productive. Okay, that speaks to some other things we have to talk about down the line, about how you're processing this AST. But let's get there via the details.
00:16:36
Speaker
We've got an AST. What makes it up? I assume there are the standard things you would find in any programming language, like I'm a variable, I'm a function. Is there anything novel in the AST at the core of a CAD language? Not really, I think. The goal is to make this approachable for non-programmers. So we're trying to make it,
00:17:02
Speaker
the pithy summary is, it's Haskell that looks like JavaScript. And so I think you might be surprised by what there isn't. We don't have loops. We don't have variable mutation. It's more of a functional-style language. And this is really because it's not managing a changing set of things. It's not reacting to new information or state transitions. You're really just declaratively defining a model.
00:17:29
Speaker
So there's no real need for mutation or loops, at least not yet. I have a couple of open avenues of research about where people might want them. Say you're trying to calculate something. You shouldn't have to do all your calculations by hand and then put the final number into the code at the end. So say you're trying to calculate the right curvature:
00:17:53
Speaker
maybe you'll just want to code the algorithm in the same file in which you're defining the CAD model. And so maybe people will want to be able to do general-purpose math, and having loops or if statements in there would be the right way to do that. But I think you can do that all functionally as well. It's really just a question of the UX, because the end users of this are mechanical engineers, so we want to make sure they don't need a huge amount of functional programming background or anything. We'll have to see how we go. Yeah, the design of the surface language must be quite challenging, because it's a very unusual audience for a programming language. Yeah, although I think the visual editor will help with that, because as you use the visuals, you'll be able to see the code being generated in real time. My hope is that that will give people a really good intuition, because
00:18:45
Speaker
they'll be able to see the translation between the 3D side and the coding side as they're doing it. Yeah, you see someone draw a curve and then you see the exact syntax of the curve-drawing function. Yeah. The thing you really wouldn't expect to see in other languages is the standard library. Standard libraries in other languages have tools like TCP streams, byte-shuffling operations, bit-shifting operations. Instead of those, we have a standard library of functions like circle and tangential arc and Bezier curves and things like that. Do you have to include something like a constraint-solving library in the language as well?
00:19:32
Speaker
We have some constraints in the library. You can constrain a line to be parallel to other lines, or perpendicular to other lines, or have the same angle or length, or something like that. I wouldn't go so far as to say it's a constraint-solving library yet. It isn't, I forget the phrase, integer programming basically, where you say, take this function and
00:20:01
Speaker
minimize this variable by permuting all these other variables. Right, yeah, goal seeking. But I do think that over time, the goal is to include more and more of these constraint solvers and higher-level tools. I had a really interesting conversation when I joined the company with one of the mechanical engineers on staff, Josh, while we were still designing the programming language. I opened up a text editor and said, OK, pretend this is a code editor and you can see the code changing, but it's just a plain text editor. And I designed a function to sketch out a gear. And the function was parametric in the number of teeth of the gear, and the radius of the overall shape, and then the height. So it's this fully general-purpose gear function.
00:20:53
Speaker
And he was kind of blown away by that, because it's pretty difficult to make CAD models really properly parametric in visual editors. You can't generally say, I'll just update the number of teeth here, and have it smoothly reshuffle everything to accommodate the new number of teeth,
00:21:10
Speaker
or say which sizes are relative to which other sizes. But anyway, I sketched up this outline for a parametric gear and he said, great. As an engineer, now what I want, now that I have my fully general gear function, is to be able to say, okay, what parameters should I choose? I can put in any parameter, but which parameters should I choose? I want to say,
00:21:34
Speaker
get a gear that achieves a certain level of torque with a certain amount of cost. And so you'd have some kind of function you can put in here which relates these variables to each other.
00:21:49
Speaker
And then I guess we'd have a constraint solver in it. I don't know if this is what a constraint solver is, so correct me if I'm confusing two things. But you'd have something in there which can tell you: I want these parameters, or this output value, to be minimized or maximized, so give me the right parameters for that. And so I think once we're very happy with the state of our app, we're going to build in these kinds of engineering handbooks, one of those big books where they have the coefficients of every different metal alloy, exactly how elastic they are, how conductive they are, the shearing force of each metal. We'd be able to have a library of these calculations available for you, so you can easily check the model and then say, okay, well, now I know what the values of all these parameters should be,
00:22:42
Speaker
but I don't want to choose them all myself, because you have all the data already. So we could build something, and probably plug into an existing solver, or make our own if we had to, that wouldn't be too bad. But the point is to be able to say what values your parameters should take to achieve your goals, your engineering constraints.
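As a toy illustration of "give me the parameter values that meet my engineering constraints", here is a brute-force search in plain Rust. It is nothing like a real constraint solver, and the torque and cost formulas are made up; it just shows the shape of the idea.

```rust
// Toy illustration of picking parameters to meet a constraint. A real
// constraint solver would be far smarter than this grid search, and the
// torque/cost models below are invented for the example.

#[derive(Debug, Clone, Copy)]
struct GearParams {
    teeth: u32,
    radius_mm: f64,
}

// Made-up physical models, purely for illustration.
fn torque_capacity(p: GearParams) -> f64 {
    p.radius_mm * p.teeth as f64 * 0.1
}

fn cost(p: GearParams) -> f64 {
    p.radius_mm * p.radius_mm * 0.05 + p.teeth as f64 * 0.2
}

/// Find the cheapest gear that still meets a minimum torque requirement.
fn cheapest_gear(min_torque: f64) -> Option<GearParams> {
    let mut best: Option<(f64, GearParams)> = None;
    for teeth in 8..=64 {
        for radius_tenths in 50..=500 {
            let p = GearParams {
                teeth,
                radius_mm: radius_tenths as f64 / 10.0,
            };
            if torque_capacity(p) < min_torque {
                continue; // violates the constraint, skip it
            }
            let c = cost(p);
            if best.map_or(true, |(best_cost, _)| c < best_cost) {
                best = Some((c, p));
            }
        }
    }
    best.map(|(_, p)| p)
}

fn main() {
    match cheapest_gear(60.0) {
        Some(p) => println!("cheapest gear meeting the torque target: {:?}", p),
        None => println!("no parameters satisfy the constraint"),
    }
}
```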
00:22:58
Speaker
Yeah, because this gets into... there's a lot you can do with a good physical model. With an object as code, you should be able to say things like, calculate the volume of this and how much it would cost me in platinum versus tungsten. Platinum's a terrible choice.
00:23:15
Speaker
Yeah, exactly. So we've already got some of this in our API. We have some pretty basic analyses. Whether you're using the API or using the app, you can take your model and you can ask queries like its mass, its weight, its... center of gravity is the one that I was a little surprised by. The goal is to build these up more and more. Before we built the sketching part of the company, where you can actually design files and everything,
00:23:52
Speaker
we still had the foundation of the API, and it was pretty limited when I joined the company. What it could do is accept files in various formats, convert or export them into other kinds of CAD formats, and analyze them. So you could upload a file you wrote in SOLIDWORKS, you could analyze it to check the weight, and then you could convert it into a file that was appropriate for 3D printing.
00:24:13
Speaker
So then you could have a CI script that checked in changes to your SOLIDWORKS files, recalculated the weight, and would maybe fail CI if the weight increased too much. And then it would be able to convert them into the right format and send them to a 3D printer for prototyping. So every time you save your SOLIDWORKS file, you get the CI checks and then you get your deployment, by which I mean 3D printing.
00:24:37
Speaker
Right, yeah. So the goal was to build these tools and workflows for CAD, and we could use that to get the basic infrastructure and service set up, and then we're like, okay, right, now we can start also doing the design steps.
00:24:52
Speaker
But this is getting to a very different vision, isn't it? Because if that was API based, this is more saying there is a core data structure that describes objects and we can turn it into code and back. We can turn it into a UI and back. We can turn it into 3D printer instructions. We can turn it into an export file. Is the API going to evolve to the point where it looks almost like a refactoring tool for your AST?
00:25:19
Speaker
That's an interesting question. I think for now, the representation of the model in our app is just one possible input to our API. The goal is that you will be able to use our API for whatever purposes you deem fit.
00:25:40
Speaker
And if you want to use our app with it, that's great. Our app will have really good support for the API, and it'll be built into the UI to have easy buttons to use our API queries and everything. But there are going to be users who don't want to use our app, and there are going to be users who have their own really specific business needs around CAD. And they don't have the expertise in-house to bring it all in-house. And they don't want to use a general-purpose tool like AutoCAD or something; they want to be able to customize it and do just the things their business needs, or bring in whatever kind of novel calculations are really important for their field.
00:26:14
Speaker
And so we really are committed to building an API, because that way users can plug in their specific needs and have the core heavy-lifting logic taken care of on our end, and they can focus on their business and their domain of design.
00:26:32
Speaker
Yeah, that makes some sense. I'm still wondering what I can do with this core data structure that you've been making in Rust. But maybe we should dig into the Rust part, right? Because I was going to say, why choose Rust? But it sounds like you'd already chosen Rust before you chose to make a programming language.
00:26:50
Speaker
I think so, yeah. The founder of the company, Jess, was previously one of the founders of Oxide, Oxide Computer, which is a big Rust shop. They're really programming their own hardware, making their own firmware for microcontrollers,
00:27:12
Speaker
memory modules and everything. And so for them, it was an obvious choice to use Rust because there were only three games in town, C, C++, or Rust, and only one of them had any kind of memory safety. And Jess fell in love with Rust when she was there, and had gotten over the initial hump of learning it and wrapping her head around the borrow checker. And so we knew that this company was going to involve C++, because if you want to use graphics,
00:27:41
Speaker
and you're using Nvidia GPUs and you want to be doing fast GPU operations, then you probably do want to use C++, because the Rust 3D libraries are there, but they're not really mature enough, in my opinion, to build a productive company on. They're probably still at the hobbyist level. You're the second person I've talked to this month who's dreaming of better Rust support on graphics cards.
00:28:06
Speaker
I actually listened to that episode a few days ago when I was walking my daughter around the park, and I said, see Eden, other people also agree that we need good Rust-level support for CUDA. I'm not the only one. How old's your daughter? Five months.
00:28:22
Speaker
Five months, okay, so that's probably the ideal age to say that sentence to a child. Well, I could tell she agreed. She didn't say anything, but she's the quiet type. But anyway, we would be interacting a lot with C++, since we needed it, but we didn't want to have to write our API in C++. I mean, that just seems like a bunch of
00:28:42
Speaker
foot guns ready to go off. So the goal was that the C++ code would be in charge of doing the graphics and geometry work, and then we could link that quite nicely into Rust. Rust and C++ have a pretty good interop story. There's a great library called CXX that can link your Rust and your C++ code with no need to copy data; it just makes sure your data has the same representation across both languages.
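For a flavor of what that looks like, here is a minimal sketch of a cxx bridge. The header path, the Engine type, and both functions are invented for illustration, and the snippet needs a matching C++ implementation and a build script before it links; only the #[cxx::bridge] mechanism itself comes from the cxx crate.

```rust
// Rust side of a hypothetical cxx bridge. Everything named here (header,
// Engine, functions) is made up; only the bridge machinery is real.
#[cxx::bridge]
mod ffi {
    // A shared struct: cxx guarantees Rust and C++ agree on its layout,
    // so it can cross the language boundary without copying or serializing.
    struct Point3 {
        x: f64,
        y: f64,
        z: f64,
    }

    unsafe extern "C++" {
        include!("geometry_engine/include/engine.h"); // hypothetical header

        // An opaque C++ type; Rust only ever holds it behind a pointer.
        type Engine;

        fn new_engine() -> UniquePtr<Engine>;
        fn extrude_profile(profile: &[Point3], height: f64) -> u64;
    }
}

fn main() {
    // With the C++ side in place, you would call ffi::new_engine() and
    // ffi::extrude_profile(...) from here.
}
```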
00:29:12
Speaker
Oh, yeah. You tell Rust to lay things out in memory the same way C++ would, and then magically they can talk to each other without copying data. Yes, yeah. So either way, we were going to write the API
00:29:27
Speaker
in Rust, because it could link nicely with C++, but it's also one of the few languages that can run the full gamut from the low-level C++ code up to the high level of: I want to export the JSON API schema so that our customers always know that the API schema docs we give them are actually accurate and do reflect the servers running in production. At my previous job at Cloudflare, I found myself one of the maintainers of the Cloudflare Rust API client, and it was a constant struggle to make sure it was accurate whenever teams would change something. We'd have to open up PRs and chase people to keep the API client and the API servers in roughly compatible states. And so when I joined this company, I was very glad to see that API schema generation was a first-class concern. Because that makes sense if your business model is selling API usage.
00:30:23
Speaker
You've got to make sure your API docs are really accurate. Yeah. It's funny, it's exactly the same as having a code editor and a mouse-based object editor. The same story, right? Get a core data structure and generate things from that. Yeah. I'd say this is the job where I've become most comfortable with code generation.
00:30:46
Speaker
So how does that relate to writing this new language? It's called KCL, right? How does that relate to writing KCL in Rust? So KCL, the KittyCAD language. What's the relationship there? We're getting into the realms of code generation, as you mentioned, but also I'm thinking: teach me how to write a programming language in Rust. Give me the basics of it. What tools do I want?
00:31:13
Speaker
So the way I see programming languages, there are kind of four steps. You take in source code, and step one is you break it into tokens, which are a higher level than just the letters C, H, A, R. You have, you know, the let keyword, the equals operator. So source code to tokens, step one. Tokens to an AST is step two.
00:31:38
Speaker
Then you want to maybe transform the AST in some way. Maybe you're optimizing it. Maybe you're type checking it. Maybe you're eliminating dead code. That AST-to-AST step is stage three, optimization. And then stage four is actually running it. In an interpreter, you actually execute the AST. Or in a compiler, you take the AST and you generate whatever lower-level representation you have, whether it's machine code or LLVM. So we're an interpreter.
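A rough sketch of those four stages as Rust signatures, just for orientation; these are not KCL's real types.

```rust
// The four stages as rough function signatures. Not KCL's actual code,
// just the shape of an interpreter pipeline.

struct Token; // e.g. the `let` keyword, an identifier, `=`, a number literal
struct Ast;   // tree of expressions, declarations, function calls
struct Model; // whatever executing the program ultimately produces
struct Error;

// Stage 1: source text -> tokens
fn tokenize(source: &str) -> Result<Vec<Token>, Error> {
    unimplemented!()
}

// Stage 2: tokens -> abstract syntax tree
fn parse(tokens: &[Token]) -> Result<Ast, Error> {
    unimplemented!()
}

// Stage 3: AST -> AST (type checking, dead-code elimination, optimization)
fn optimize(ast: Ast) -> Ast {
    unimplemented!()
}

// Stage 4: walk the AST and execute it (an interpreter), instead of
// emitting machine code or LLVM IR (a compiler).
fn interpret(ast: &Ast) -> Result<Model, Error> {
    unimplemented!()
}
```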
00:32:06
Speaker
So, roughly, those steps are: we tokenize. We take the source code and break it into a vector of tokens. And we use a great library called Winnow for this. Winnow. Yes, W-I-N-N-O-W. There's a very much beloved and somewhat venerable Rust library called Nom, which is a parser combinator library. Have you ever worked with parser combinators?
00:32:33
Speaker
I love parser combinators. Sometimes I do Advent of Code just because it's a great excuse to write parser combinators. I don't even always solve the problem; I just like writing the parsers. There are two kinds of Advent of Code people: people who do Advent of Code in spite of the parsing, and people who do it because of the parsing. Yeah, I believe that. So Nom is a parser combinator library that's been around, I think, since very close to the start of Rust. But the maintainer of it has another job and has gone on to do other things, so I think Nom has become a bit of a low priority, and it hasn't really received updates in the last couple of years. So Winnow is a fork of Nom that I think brings a lot of really nice improvements,
00:33:21
Speaker
and it's battle-tested enough that it's being used in core Rust crates, like the TOML parsing crate, which is used in Cargo, the Rust build system tool. So yeah, I'm using this Winnow crate for parser combinators. Sorry, you had a question.
00:33:37
Speaker
I know, I was just... the parser combinator libraries I've used don't really make a distinction between tokenization and parsing. Not in the way that something like Lex and Yacc back in the day did; those were literally two separate tools for those tasks. Yeah, it's the same thing with Winnow. So in the tokenization step, I'm using Winnow to parse UTF-8 encoded bytes into my own custom token type.
00:34:07
Speaker
And then in the parsing step, I'm parsing my vector of tokens into an AST. So is that something that Winnow imposes on you, or just the way you've chosen to deal with the problem? It's the way I've chosen to deal with it. In previous parsers I've written with Winnow, and I've written a couple of other parsers here and there, I did them both in one step. But when I got the KCL prototype from Kurt, it was, OK, look, here it is in JavaScript, I've translated it into Rust. It's now Rust written in the style of a JavaScript programmer.
00:34:40
Speaker
Good luck, go ahead and rewrite it in whatever way you see fit, now that there are unit tests and everything. He'd written the tokenizer and the parser separately, and I actually did quite like keeping them separate. Previously I've done them together, but as a programmer inheriting code bases, I'd always rather have two simple things than one complex thing.
00:35:03
Speaker
I can see that. Yeah. Persuade me, because I always do it in one pass. Can you persuade me to switch over to that style? Yeah. So when you are writing your tokenizer, it's a pretty simple process. You generally don't have very much conditional logic. It's pretty straightforward. I think I wrote the tokenizer for KCL in one, maybe two days. But when you're writing a parser, it's so much nicer to be consuming a vector of tokens, because when you're debugging something,
00:35:33
Speaker
the debug output is just so much more concise. You have debug output saying, I'm trying to parse "let x equals four", rather than debug output saying, I'm trying to... Oh, yeah. I have hit that, where it says "encountered the letter F when I was expecting something else". It's like, the letter F isn't that useful to me. Yeah. And so the other thing is these tokens. When I say token, I mean a struct with fields like the source code start and end offset, so you know where in the code editor to highlight this token if there's an error about it; the token type, so something like keyword or identifier or operator or literal, like a number;
00:36:26
Speaker
and then it also has the underlying string that it was pointing to. And so when you're writing these parsers, it's really nice to have already got this high-level construct to work with.
00:36:41
Speaker
Your parser can say, I encountered an error between characters 82 and 84, and you can print out what was there and highlight it quite nicely. And you would have to do that anyway if you were building the tokenizer and parser together. But it just means all of your parsing functions become quite a lot simpler. They don't have to be tracking the source code offsets themselves. There's a whole bunch of things that have already been done for you. And sure, you're the one who did them, but they're already done by you in a separate module.
00:37:11
Speaker
But for me, it's been easier to focus on just the parsing logic of: given that I've encountered a variable declaration keyword like let, what do I expect next? I don't have to think about what character I'm expecting next. I know, OK, after the word let should probably come an identifier, like lengthOfCircle or something like that.
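Here is a small sketch of that two-pass idea in plain Rust (no Winnow here, although the real KCL tokenizer uses it). The token kinds, fields, and the example parser step are illustrative, not KCL's actual ones.

```rust
// Sketch of a token type with source offsets, and a parser step that only
// ever looks at tokens, never at raw characters.

#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Keyword,    // e.g. `let`
    Identifier, // e.g. `lengthOfCircle`
    Operator,   // e.g. `=`
    Number,     // e.g. `4`
}

#[derive(Debug, Clone)]
struct Token {
    kind: TokenKind,
    /// Start/end byte offsets into the source, so errors can point at
    /// exactly the right spot in the editor.
    start: usize,
    end: usize,
    /// The underlying text this token covers.
    text: String,
}

/// The parser never thinks about characters again: it only asks questions
/// like "is the next token an identifier?".
fn expect_identifier(tokens: &[Token], pos: usize) -> Result<&Token, String> {
    match tokens.get(pos) {
        Some(t) if t.kind == TokenKind::Identifier => Ok(t),
        Some(t) => Err(format!(
            "expected an identifier but found `{}` at {}..{}",
            t.text, t.start, t.end
        )),
        None => Err("expected an identifier but reached end of input".to_string()),
    }
}

fn main() {
    // Pretend the tokenizer has already processed `let x = 4`.
    let tokens = vec![
        Token { kind: TokenKind::Keyword, start: 0, end: 3, text: "let".into() },
        Token { kind: TokenKind::Identifier, start: 4, end: 5, text: "x".into() },
        Token { kind: TokenKind::Operator, start: 6, end: 7, text: "=".into() },
        Token { kind: TokenKind::Number, start: 8, end: 9, text: "4".into() },
    ];
    // After the `let` keyword, an identifier should come next.
    println!("{:?}", expect_identifier(&tokens, 1));
}
```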
00:37:35
Speaker
Yeah, okay. Okay, next time I'm writing a parser, which will probably be before Advent of Code 24, I'll try it that way. Yeah, I mean, I don't think it's necessary for a simple parser. If I was just parsing an Advent of Code input, I wouldn't break them up. But for something more complex, like trying to parse an actual programming language,
00:37:57
Speaker
At the point where your functions are getting long enough, you need to be thinking about how do I break them into smaller pieces. It's helpful to have all this stuff broken up. And especially for performance analysis, it's really nice to be able to benchmark my tokenizing functions, my parsing functions separately. Now my parser benchmarks can just take a pre-tokenized input. And I know that any performance slowdown I see is therefore in the parser, not my tokenizing step.
00:38:24
Speaker
That's interesting. For a young language, you're getting into the performance already. Yeah, we're kind of getting into it as we need to. We recently hired a guy to come work for us, and before we hired him, he was kind of our number one fan in our little Discord. It was great because he was very passionate, and maybe a little bit annoying as well, because he'd say, hey, I'm trying to render this huge complicated SVG from one of my previous CAD files in your app and it just completely freezes up. And it's like, okay, well, we haven't really been using it for very realistic use cases yet, and I'm not expecting it to be very performant yet, but okay.
00:39:13
Speaker
But it would be a bad look if this was still crashing in a week's time, so let me take a look at it. So he sent me this big file where he'd written a program to generate KCL, given an SVG file.
00:39:32
Speaker
So his program parsed the SVG file, translated all of its lines into how KCL draws lines, and then we tried to run the KCL, and stuff was crashing. At that point, I was like, okay, well, it's taking 30 seconds to parse and that is clearly insane. This is when I actually rewrote the parser. The parser was JavaScript, and then Kurt translated the JavaScript into Rust and did an admirable job for someone who has not had a lot of Rust experience. But there are also a couple of common foot guns in string processing in Rust, like doing something like
00:40:11
Speaker
calling .nth() to get the nth character of a string, which has a linear cost to traverse the string because you have to do the UTF-8 decoding every time. It smells of quadratic behavior somewhere. So it was about time that I wrote a tokenizer myself.
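The foot gun in miniature: calling .nth() on a fresh chars() iterator inside a loop re-decodes the string from the start each time, so the loop goes quadratic, while a single pass stays linear.

```rust
// Accidentally quadratic: chars().nth(i) walks the UTF-8 string from the
// beginning on every call, so using it inside an indexed loop is O(n^2).
fn count_digits_quadratic(src: &str) -> usize {
    let mut count = 0;
    for i in 0..src.chars().count() {
        // O(i) work per iteration: re-decodes the string up to position i.
        if src.chars().nth(i).map_or(false, |c| c.is_ascii_digit()) {
            count += 1;
        }
    }
    count
}

// One pass over the string, O(n) total.
fn count_digits_linear(src: &str) -> usize {
    src.chars().filter(|c| c.is_ascii_digit()).count()
}

fn main() {
    let src = "let x = 4; let y = 42;";
    assert_eq!(count_digits_quadratic(src), count_digits_linear(src));
    println!("both count {} digits", count_digits_linear(src));
}
```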
00:40:28
Speaker
And so, yeah, this has happened several times: hey Adam, this model is really slow and I need to show it to an investor; or, I got really embarrassed while pitching the app to an investor because it was slow, so here's the file, let's fix this. So we've been trying to take performance very seriously from the start, because one of the reasons the company was started was that existing CAD software can be really, really slow.
00:40:57
Speaker
That will hopefully be one of the big selling points, that ours is very fast. And so I really don't want to be in a situation where we've written a CAD kernel on the backend that is built from the ground up for CUDA and NVIDIA GPUs, doing everything massively in parallel where existing CAD kernels have to be single-threaded, and then have that sitting around
00:41:18
Speaker
at ridiculously low levels of utilization because my little Rust code that's running in Wasm in the browser is stuck doing all the string processing to process the programs and know what commands to send. That would be a little embarrassing for me, so I've tried to ensure that doesn't happen.
00:41:35
Speaker
Yeah. And you must be under quite serious time constraints, if you're expecting people to write code, you send it to the backend, render it, turn it into video, and get it back to the front end in, presumably, tens of milliseconds. Yep. But parsers are pretty fast. It's one reason I think using Rust is a great choice for the company. If you care about performance, it's really nice to have Rust across the whole stack.
00:42:05
Speaker
I think I was watching an interview with Richard Feldman, possibly on this podcast actually, where he was saying, well, I wanted to write Roc in Rust because I didn't want to write it in a nice, easy language, hit the performance ceiling, and have to rewrite it in Rust anyway; just use the fast language from the start. I remember him saying that. Yeah.
00:42:24
Speaker
Okay. So I'm wondering if I can get any other Rust tips out of you here. We got tokenizing, and then parsing is pretty similar. We can keep going through the programming language stack, or we can switch into more of the Rust stuff. I was going to ask you next: any quick tips for... because you've got to pretty-print that AST back out into source. Are you using anything interesting for that?
00:42:53
Speaker
Not yet. We're currently just using...
00:42:58
Speaker
Basically, the AST is pretty naive. It's what you would expect from your first interpreter. We have AST nodes, and they store pointers into the heap to other AST nodes. To traverse the tree, you have to follow pointers around the heap, and that's not great for cache locality.
00:43:19
Speaker
But again, given what you said about everything being sent over the network to the backend, where it's rendered in 3D, and then we send the video back: luckily, by the time we're talking about overall latency for the app, we're talking about the order of milliseconds for transmitting video over the internet. And so I haven't needed to go so far as to really optimize the cache locality and make sure I'm stack-allocating everything yet. I do expect there will come a point, as people do more and more calculations within KCL itself. Say you want to calculate the curvature of your shapes
00:43:56
Speaker
in the math that KCL gives you. At that point, that will be an entirely local calculation, and we want to make sure that if the bulk of your app is doing local calculations, it'll be fast enough. But I'm trying not to get too seduced by interesting-looking performance problems sitting out in the ocean on a nice big rock, calling out to me.
00:44:16
Speaker
Just focus on: currently, what is the slowest thing? Let's attack that. We try not to make dumb performance mistakes. If there's an easy way to make it faster at the moment, I'm not going to wait until it's a problem later down the line to fix it.
00:44:34
Speaker
The AST, basically, and the un-parse and everything, is pretty straightforward. We have the AST, we traverse it, doing a standard tree traversal, and we're building up the string as we go, keeping track of the indentation depth of each node. Each node can output some strings, and it knows how indented it should be. And at the end it goes through and produces one nice big string. Right, okay.
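A sketch of that kind of naive boxed AST plus a recursive un-parse, in plain Rust. The node shapes and the output syntax are invented; it just shows the traversal-plus-indentation idea.

```rust
// A naive heap-allocated AST and a straightforward "un-parse" back to text.
// Not KCL's real AST; purely illustrative.

enum Expr {
    Number(f64),
    Ident(String),
    Call { name: String, args: Vec<Expr> },
}

enum Stmt {
    // Boxed children: simple, but every hop chases a pointer on the heap.
    Let { name: String, value: Box<Expr> },
}

fn emit_expr(e: &Expr) -> String {
    match e {
        Expr::Number(n) => n.to_string(),
        Expr::Ident(name) => name.clone(),
        Expr::Call { name, args } => {
            let args: Vec<String> = args.iter().map(emit_expr).collect();
            format!("{}({})", name, args.join(", "))
        }
    }
}

fn emit_stmt(s: &Stmt, indent: usize) -> String {
    // Each node contributes its own text and knows how indented it should be.
    let pad = "  ".repeat(indent);
    match s {
        Stmt::Let { name, value } => format!("{pad}let {name} = {}", emit_expr(value)),
    }
}

fn main() {
    let program = vec![Stmt::Let {
        name: "radius".to_string(),
        value: Box::new(Expr::Call {
            name: "max".to_string(),
            args: vec![Expr::Number(4.0), Expr::Ident("minRadius".to_string())],
        }),
    }];
    let source: Vec<String> = program.iter().map(|s| emit_stmt(s, 0)).collect();
    println!("{}", source.join("\n"));
}
```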
00:45:06
Speaker
Before we leave that specific part, do you optimize, like, diffs? Or do you re-parse the entire file every time someone changes it in the editor? Great question. Right now, we just re-parse the entire file, because the parser is fast enough. You can just do that on every keystroke, and it's not too big of a problem. But again, I expect that as people build larger and larger programs in KCL, it will eventually get to be a bit of a problem. But honestly, the parser is very fast. The parser combinators really do compile down to quite efficient code. When you're using these higher-level Rust constructs like iterators,
00:45:51
Speaker
I find that the compiler can often give you really nice optimized code because it knows this isn't just a general purpose for loop that could be jumping anywhere and doing anything inside it. I know that you know I'm traversing every single item in this vector exactly once.
00:46:06
Speaker
And therefore it can do nice things like elide the bounds checks it would otherwise have to do to make sure that when you're indexing into the array, it's always a valid element of the array. It knows at the start that you're going to visit every element of the array exactly once, so it doesn't have to do bounds checks. So yeah, Rust makes this kind of list processing stuff quite efficient.
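A tiny example of the bounds-check point: explicit indexing can leave a bounds check per access, while the iterator version gives the compiler enough information to usually elide them. Whether it actually does depends on the optimizer, so treat this as an illustration rather than a guarantee.

```rust
// Indexing versus iterating over the same slice.

fn sum_indexed(values: &[f64]) -> f64 {
    let mut total = 0.0;
    for i in 0..values.len() {
        total += values[i]; // indexing: a candidate for a bounds check
    }
    total
}

fn sum_iterated(values: &[f64]) -> f64 {
    values.iter().sum() // iterator: no index, nothing to bounds-check
}

fn main() {
    let v = vec![1.0, 2.0, 3.5];
    assert_eq!(sum_indexed(&v), sum_iterated(&v));
    println!("{}", sum_iterated(&v));
}
```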
00:46:27
Speaker
Yeah, as you would hope, right? Luckily, there's a lot of really good Rust stuff out there for building programming languages, because there's a really successful programming language written in Rust that I really admire, and so I can borrow a lot of the tips from that team. Which one's that? It's Rust. Rust is written in Rust, how did you not know that? So Rust is written in Rust. And so when I go to RustConf,
00:46:57
Speaker
I go and get coffee with people who work on Rust's own parsing tools and everything. And they'll say things like, oh, you really shouldn't be doing all these heap interactions; we built this library for Rust Analyzer called Rowan, and it takes your tree structure that's nested on the heap and packs it up nicely and efficiently into one flat thing, and then you can traverse it nicely. So I get these kinds of nice performance tips from really big language projects within Rust.
00:47:25
Speaker
Right, so Rust itself has been optimised for writing languages? I guess so, yeah. That's interesting. That makes me wonder why it doesn't have its own built-in parser combinator library, or an official one at least. I think Rust is one of those languages where they try to have a relatively sparse standard library, because they take their Rust 1.0 promise, that we will never break code when you upgrade Rust, seriously, and so the standard library can never change its API.
00:48:01
Speaker
So if they include something like a parser combinator library in the standard library, then they can't ever change the API for it. And I guess the people making Rust just didn't feel like they were expert enough with parser combinators to be able to say, for the next 20 years, this API will remain stable. OK, yeah. Fair enough. OK, in that case, don't put it in until you're making a release version 2, or 3, or 4, or 5. Different set of promises, right? Yeah.
00:48:29
Speaker
Okay. So take me down the stack a little further then. So we get to your evaluator, and you're interpreting, you're not compiling at this point, right? Yeah. It doesn't really make sense for KCL to be compiled, because it would be "compiling", with massive scare quotes there, into API calls to our API.
00:48:53
Speaker
And so we might compile it down to bytecode for our own custom-purpose little bytecode machine, which I did try to write at some point; I spent a month or so trying that. And I was like, this is really proving to be more work than I think it's going to be worth, and I should really just cut my losses here and make the existing interpreter faster rather than having to rewrite everything.
00:49:15
Speaker
So yeah, it's an interpreted language, and I think if anyone's going to write their own language, I would say start with an interpreter, because you don't yet know what kind of instruction set to target. Firstly, we run KCL in this app I've described,
00:49:34
Speaker
which is a web app. You can either run it in a browser, or you can download our Electron app, which can do things like read files, save them to your file system locally, and save your preferences. But you can also use KCL via a CLI,
00:49:49
Speaker
if you don't want to open up a web browser or GUI. The downside is you don't get this nice bidirectional visual stuff that I've been mentioning; there are no curves you can move around. But if you're more comfortable writing all your code in Vim and having a little CLI to output it, which I often do for my own little testing purposes to test the language, I'll just have a little text editor in the left half of my terminal. On the right half, I'll have a little watcher script so that when I save the file, it runs it through our CLI, which parses it, executes it, and outputs a PNG file to disk, and then displays the PNG in my terminal using my terminal's little image capabilities. Okay, yeah.
00:50:37
Speaker
So it's an interpreter. This means that we can take our Rust code and compile it into x86 for me to run locally on my laptop. Well, actually, it's ARM; it's compiled into ARM64 because I'm on one of these new shiny MacBooks. Then in the browser, we run it via WebAssembly.
00:50:59
Speaker
So having the interpreter gives us a lot of flexibility for that, because otherwise we'd have to figure out how we're going to target x86 and ARM and WebAssembly. Okay, tell me a bit about running it as Wasm. Was that straightforward?
00:51:20
Speaker
It was relatively straightforward. There are some problems. So when you compile your code to WebAssembly, it's then very easy to interoperate with JavaScript. You can take your Rust code and output a WebAssembly library and a little JavaScript module that loads it. There's a tool called wasm-pack that does all this for you. It not only compiles your Rust to WebAssembly, it also makes the little JavaScript module, so you can really easily import it into your existing JS or TypeScript code base, or whatever you're using.
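A minimal sketch of the kind of export wasm-pack wraps up. The function name and its behavior are invented; only the #[wasm_bindgen] attribute and the wasm-pack command come from the real tools.

```rust
// A minimal wasm-bindgen export. `parse_kcl_sketch` is hypothetical; in a
// real interpreter it would tokenize/parse/execute the source.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn parse_kcl_sketch(source: &str) -> String {
    // Stubbed behavior so the JS side has something to call.
    format!("received {} bytes of source", source.len())
}

// Hypothetical usage from JS after `wasm-pack build --target web`:
//
//   import init, { parse_kcl_sketch } from "./pkg/my_crate.js";
//   await init();
//   console.log(parse_kcl_sketch("let x = 4"));
```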
00:51:56
Speaker
The problem is that WebAssembly is great because it's a very tightly controlled and sandboxed environment. It doesn't have access to your whole machine. It can't read your file system or do whatever system calls it wants, and so you can control it very nicely. The downside is that means you can't actually do everything you want. So you can't, for example, make network requests from WebAssembly very easily. This means we can't really use the standard networking stuff we can use when we compile our KCL executor for the CLI. So instead,
00:52:38
Speaker
basically, you take these functions in Rust that are doing network IO, and when they get compiled down to WebAssembly, the WebAssembly is basically calling out to whatever program is managing it, the runtime for your WebAssembly. In this case, it's the browser. And so the WebAssembly says, hi, browser, I'd like to send this information over a TCP socket.
00:53:02
Speaker
And the browser then takes that and does the request with its own built-in browser networking stack, doing whatever the equivalent of an HTTP browser fetch is or something, and then passes that data back into the WebAssembly virtual machine and says, yep, you sure did that networking all by yourself, little guy. Congratulations. Here are your bytes.
00:53:27
Speaker
So this means that some Rust libraries you'd expect to work just fine for x86 or ARM do not work when they're compiled to WebAssembly, or they need special support for it. So you have to enable a special JS or Wasm feature flag in your library, and then it knows, okay:
00:53:43
Speaker
if I'm running locally, just make a normal system call to whatever the operating system's TCP stack is; if I'm running in WebAssembly, then instead call this kind of standard WebAssembly interface for handing off IO to the runtime.
00:54:00
Speaker
Right, so you end up, and I've done this with some embedded stuff, doing that dance of enabling and disabling library feature flags in Cargo.toml? Yeah, exactly. I ran into this just the other day. It's generally pretty straightforward, but it does sometimes come down to just Googling the library name and Wasm and figuring out how to get it all to work together.
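One common shape for that native-versus-Wasm split is conditional compilation, sketched below with hypothetical function names and stubbed-out bodies.

```rust
// The same crate can hand IO to the OS natively and to the JS host under
// wasm32. `fetch_bytes` is hypothetical and stubbed out to stay self-contained.

#[cfg(not(target_arch = "wasm32"))]
fn fetch_bytes(url: &str) -> Vec<u8> {
    // Natively you'd call an ordinary HTTP client or open a socket here.
    println!("native path: would open a TCP connection to {url}");
    Vec::new()
}

#[cfg(target_arch = "wasm32")]
fn fetch_bytes(url: &str) -> Vec<u8> {
    // Under wasm32 there's no socket API to call directly; a real
    // implementation hands the request to the browser (for example via
    // wasm-bindgen/js-sys bindings to fetch) and gets the bytes back.
    let _ = url;
    Vec::new()
}

fn main() {
    let _body = fetch_bytes("https://example.com/model.kcl");
}
```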
00:54:23
Speaker
Yeah, I imagine support for that generally across the Rust library ecosystem is getting better and better though, right? It's very, very good. I'd say Rust is probably the best language for targeting WebAssembly, and the compiler support for targeting WebAssembly is really, really good. And because WebAssembly doesn't have garbage collection and stuff like that,
00:54:47
Speaker
previously, when people wanted to compile Go or Python into WebAssembly, they had to also compile the whole runtime to manage all the objects and garbage collection and threading and concurrency things. And that's totally fine; it just means the WebAssembly blob is going to include not only the code you want to run, but all the code for running that code. So from the start, people have been using Rust for this, because it compiles down really small; you don't have to bring a whole native green-thread, multi-threaded concurrent executor stack with you. Yes, I'd not thought of that, but that does explain why Rust is particularly popular in that space. Yeah, and the same thing for embedded.
00:55:31
Speaker
Yeah, basically with Rust, a lot of things that are built into other languages are just standard library or ecosystem things. Sorry, I shouldn't say standard library. I mean common libraries.
00:55:45
Speaker
And the downside of this is that, as you say, well, it's not in the standard library, maybe it should be. But the upside is that when you're targeting one of these more constrained platforms, like embedded or WebAssembly, you don't have to bring this massive language runtime with you. You can choose to bring a smaller library that's perfectly scoped to the task you're doing.
00:56:07
Speaker
Yeah, that makes a lot of sense, especially on the web, right? You don't want to over-optimize download sizes, but you don't want to force a user to download an entire package ecosystem either. Yeah, especially if one of your customers is, say, I don't know, in Antarctica, and they get several hours of internet per day, carefully portioned out between all the different people in the lab.
00:56:34
Speaker
OK, well, that gives me a link, because I wanted to get back to your users, actually. So if you've actually got users in Antarctica, how does that work with your very much thin-client, thick-backend cloud model? Because surely, I mean, the speed of light from New Orleans to San Francisco, fine. Speed of light from Antarctica and back, that's a bit laggy.
00:57:03
Speaker
Yeah, so as you pointed out, latency is a big concern for the company. Especially, again, if you're billing yourself as really fast CAD, it's really important you get that latency down. So the biggest thing we can do to affect latency is deploy the app to a region near the user.
00:57:26
Speaker
Right, so, okay, I've got to ask, is there an AWS Antarctica? An Antarctica Web Services? I don't think there is, you know. There's not. Oh, silly. Someone should do that. No server cooling required. Yeah. So obviously, for most users, the most important thing we can do is deploy the app near them geographically. And that's straightforward enough for us to do, because we can deploy the app basically anywhere there's a GPU, and we built this to be multi-region from the start, so we don't have to have everyone in the world connecting to San Francisco. The nice thing is that Kurt, who started the language and the app and everything, was the first hire
00:58:10
Speaker
for Zoo, and he's in Australia. All the rest of the team were in Los Angeles and he was in Australia, so the latency from there was real from the start; we had to build it so the app could work without crossing an entire ocean. The very, very first prototype we deployed on our CEO Jess's laptop and we were using it remotely, and then Kurt was trying to use it from Australia. Anyway, but as you point out, what do you do if you're in Antarctica, or, I don't know,
00:58:36
Speaker
anywhere else with really high-latency internet? So, we're fundamentally API-driven. There's never, I think, going to be local video rendering. But I think what's very likely is that in the future, when we have enterprise customers, you will be able to self-deploy the app on your own infrastructure. It will still communicate over a network, but that network will be your office intranet or localhost or something.
00:59:05
Speaker
Right. So I think in the case of Antarctica, if we had a customer there who wanted to work with us, we would probably let them download binaries which contain the engine, which they could deploy locally, connect their clients to, and have all the data staying within their Antarctica office. Okay, so you could foresee going for that dual model, where it's mostly in the cloud?
00:59:34
Speaker
Yeah, I think a lot of companies do that. They run in the cloud, and for enterprise customers you can download the binaries and self-host them in whatever way you want. That way you keep all the data flowing within your own network and you're not sending data out to all corners of the internet. There are legal compliance reasons and security reasons why you might want that. We haven't built it yet, but there's nothing stopping us; all we would have to do is send someone a binary. I think the legal side of that would be more difficult than the technical side. We'd have to have some kind of contract to give people, and documentation about how to deploy it. But theoretically, as long as you've got an Nvidia RTX GPU on your machine, we could send you the binary and you could self-host it.
01:00:28
Speaker
I think it's very likely we do that in the future. We just have not yet had a customer for whom that's the deal breaker. I can see that the legal and business and sourcing-graphics-cards reasons probably trump the technical ones, right? Yeah. I mean, the graphics cards thing is interesting, because if you've been following all the graphics-cards-and-parallelism stuff going on, which I know you have, it's all very data-driven and ML-driven right now, massive parallelism of scientific calculations. But we're actually using GPUs in a pretty old-school way: we're using GPUs to render graphics. How vintage of you! Yes, it's very retro.
01:01:14
Speaker
But this means that a lot of these shiny, newfangled data center GPUs don't support graphics APIs. A lot of these new graphics cards don't even have an HDMI port.
01:01:29
Speaker
Hang on, you're saying there are graphics cards that don't do graphics? The world is broken. Well, yeah. I'm pretty sure that Cloudflare has GPU offerings where you can run your code in their data centers really close to the user, and I'm pretty sure they only support compute shaders, which are shaders that do all the numeric processing; they don't do actual graphics rendering of pixels in those shaders. I mean, the dream would be that someone like Cloudflare says, yeah, you can now run your code on any of our GPUs really close to your users, and we operate in 150 data centers around the world. I used to work at Cloudflare, so they're always somewhat top of mind for me. But I don't think they actually offer graphics rendering
01:02:17
Speaker
in their GPU offering. I think it's only for doing low-latency ML or data processing. Oh, that's strange. You'd think there's some gaming angle for them there that they would have taken advantage of. Yeah. Well, I think when it comes to cloud GPUs, cloud gaming is nowhere near as big a market for Nvidia as the massively parallel matrix multiplication stuff.
01:02:45
Speaker
Yeah, I'm thinking of Google Stadia, which tried and failed to make that happen, right? Yeah, Stadia was actually a big influence on us in terms of how we're doing this cloud rendering. Our app is much more like a cloud gaming thing, if you think about it. We're trying to let the user move the mouse to pan the camera, and we want to keep that happening at a smooth 60 FPS. So it's actually not dissimilar at all to cloud gaming strategies.
01:03:12
Speaker
Yeah, I could see that. So, streaming back the video fast enough that it looks like you moved the mouse and did something locally. Yeah. That's reminding me of a game. Have you heard of the game Factorio? I have heard of that game, yes. Where you build pipelines of... it's very nerdy, very logistics-core nerdy. My wife and I were up late last night playing Satisfactory, which is kind of like a 3D Factorio.
01:03:39
Speaker
Right, yeah. Shapez is another nice one. But yeah, the lines being blurred. There's also PowerWash Simulator, things that blur the lines between doing work and playing games. Yeah. I've often tried to tell people Factorio is all the fun parts of programming without the boring parts, like sitting down and talking with your stakeholders about where this green science cube is going to go next. Oh yeah.
01:04:08
Speaker
My wife discovered she loves these factory games. She's always said, I think I would have made a good programmer, because I loved doing math in school, but no one ever encouraged me, because in the 90s programming was not cool and women were not usually encouraged to give it a try. Then when she sat down and got totally addicted to these factory games, like I was, I realized: yep, absolutely, she's got a programmer's brain.
01:04:36
Speaker
Yeah, the joy of optimizing pipelines. Yep, you're one of us. Yeah, exactly. The goal is that people will basically not be able to tell that the rendering is happening remotely. We're trying to be clever about it and keep the latency down; the goal is really to have the latency low enough that it feels local.
01:04:56
Speaker
Does that mean, and I've never dug into this WebRTC business, is it something like UDP, where you're dropping packets if they don't arrive in time, because the newer packets are more relevant?
01:05:09
Speaker
Absolutely, that's how WebRTC video transmission works. WebRTC is a really complicated protocol, and to be honest it's a little bit of overkill for our use case, because WebRTC basically says: UDP has solved the problem of transmitting video packets around the internet at low latency; it's everything else on top of that. Like, how do you manage someone joining or leaving a call? And how do you manage, in a peer-to-peer way, all these people on the call sending video to each other without it all going through a central server?
01:05:40
Speaker
And we are, in fact, just a central server for the video, sending it back down to peers on the internet. So WebRTC is definitely a bit of overkill for us, but it just happens to be the primary way to send live video over the internet. If you're sending a video file, there are easier ways to do it. But if you're live streaming, even if it's just from a central server down to an individual user with their own NAT problems and everything,
01:06:07
Speaker
WebRTC is the way to go for now. And the nice thing is that in the future, if we wanted to add multiplayer to our CAD editor, basically, and have people joining the session and collaboratively editing with each other, then we have a video setup that accommodates that quite nicely. Yeah, that makes sense. You throw in the video chat as well as them fighting over who gets to move the mouse at any moment.
01:06:32
Speaker
Yeah, I'd say it's more that we can easily accommodate live-streaming the same model to both people, rather than their webcams too, but sure, absolutely, the webcams too. The tricky thing has actually been that WebRTC and UDP, like you pointed out, give you low latency for transmitting video, but we also need to make sure that the mouse click events and the mouse drag events are really low latency. It doesn't matter if the video is really low latency if you drag the mouse and then a second later the model starts moving. So we've obviously been using UDP for the mouse movements and everything.
01:07:12
Speaker
A nice thing about WebRTC is that there are media channels you can add for streaming video, and there are also data channels. These data channels are very much like WebSockets, except you can configure them to trade off reliability and latency. So you can configure a WebRTC data channel, which can send text or binary, to be more like a WebSocket or more like UDP, and you can trade those off in different ways. So we send mouse events, which need to be really low latency, that way.
01:07:49
Speaker
And it's also OK if you drop a mouse event, because if you're smoothly scrolling across the screen, it doesn't matter if the mouse event at millisecond 28 out of 30 gets dropped, because the one at 29 will come through. So we send those over UDP, managed by WebRTC, and WebRTC handles getting these UDP datagrams across with a minimal amount of retransmission and reliability, and maximum throughput and minimum latency.
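For a concrete picture of that trade-off, here's a sketch using the browser's WebRTC API through the web-sys crate (with its Rtc* features enabled); it's illustrative only, not Zoo's client code. An unordered channel with zero retransmits behaves like UDP, while the defaults behave more like a WebSocket.

```rust
// Illustrative only: open a UDP-like WebRTC data channel for mouse events
// using web_sys (the relevant RtcPeerConnection/RtcDataChannel* features
// must be enabled in Cargo.toml). Not Zoo's actual client code.
use wasm_bindgen::JsValue;
use web_sys::{RtcDataChannel, RtcDataChannelInit, RtcPeerConnection};

pub fn open_mouse_event_channel(pc: &RtcPeerConnection) -> RtcDataChannel {
    // Unordered, no retransmits: a dropped mouse event is fine, because the
    // next one supersedes it. Leaving the defaults (ordered, reliable) gives
    // you WebSocket-like behaviour instead.
    let mut init = RtcDataChannelInit::new();
    init.ordered(false).max_retransmits(0);
    pc.create_data_channel_with_data_channel_dict("mouse-events", &init)
}

pub fn connect() -> Result<RtcDataChannel, JsValue> {
    let pc = RtcPeerConnection::new()?;
    Ok(open_mouse_event_channel(&pc))
}
```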
01:08:17
Speaker
Yeah, I didn't realize that was built into WebRTC. But what about the code changes? Are they going over HTTP or WebSocket? All the code stuff happens locally. So when you have the app open, the KCL, the language, is entirely happening within your local session. It gets interpreted locally, and then we're basically making WebSocket calls. So every time you run a KCL standard library function, like a line from x1, y1 to x2, y2, that is actually making a call over the KittyCAD API saying, you know,
01:09:03
Speaker
add a line to the view from this point to that point. And that's being sent over a WebSocket, so it's reliable, it's TCP, it's doing retransmission and checksums and all those nice things. Right, OK. That gives me the networking protocol stack. Perhaps there's one large topic to move on to; it's the last big thing on my mind, and I don't know if it's the present or the future of what you're doing.
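To make the shape of that concrete, here's a hypothetical sketch, not the real KittyCAD API schema, of a standard-library call being turned into a serialized engine command that would then go out over the WebSocket.

```rust
// Hypothetical sketch of a KCL stdlib call becoming an engine command; the
// command shape is invented here and is not the real KittyCAD API schema.
use serde::Serialize;

#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum EngineCmd {
    AddLine { from: [f64; 2], to: [f64; 2] },
}

/// What the interpreter might do while evaluating a `line(...)` call:
/// build a command and serialize it for the WebSocket connection
/// (the actual send, e.g. via tokio-tungstenite, is omitted).
fn line_call(from: [f64; 2], to: [f64; 2]) -> String {
    serde_json::to_string(&EngineCmd::AddLine { from, to }).expect("serializes")
}

fn main() {
    // Prints: {"type":"add_line","from":[0.0,0.0],"to":[10.0,5.0]}
    println!("{}", line_call([0.0, 0.0], [10.0, 5.0]));
}
```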
01:09:33
Speaker
But I'm thinking about your language and thinking of all kinds of different checks you could do on this data. Like, you can check if it's physically viable. You can do static type checking to see if the code makes sense. You can do, like we said earlier, type checking. Model checking for how much is this going to cost me versus how long is it going to take to manufacture? What kinds of checks do you have on the model data now, and what are you planning? Right now, we don't have checks like that built into the editor or anything. We have a couple of API queries you can make: you can send your model and query for, you know, what's the weight or the mass or the center of gravity of it. But absolutely, we are so excited to build this kind of linting system. I saw a thread on Twitter a few months ago from a mechanical engineer who was saying, I just downloaded this design from Thingiverse for some part, I don't remember, it was like
01:10:47
Speaker
some kind of case for holding something. And he went through it as a mechanical engineer and talked about all the mechanical problems with it. He was saying, this groove here is too thin to be manufactured by 90% of the different blades out there, so you'd have to get a really specialized blade. If they just made this purely decorative element larger, you'd be able to manufacture it so much more easily on a range of machines.
01:11:08
Speaker
This edge here is just a completely hard, sharp edge that's going to be really unpleasant for anyone to hold; it's going to dig into them a little bit. It should really be filleted, smoothed away. And so there are all these things mechanical engineers know to look for manually when they look at a model, and we are absolutely going to make as many of them machine-checkable as we can. So, just like static analysis for code. Static analysis for code says things like,
01:11:38
Speaker
hey, you're using a slower method here than you should: you're parsing this UTF-8 into a string and you could actually operate on the raw bytes directly. We're going to have the same thing saying: hey, this tolerance you've selected is very unlikely to be supported by most machine shops; if you can, you should really make this tolerance larger, something like that.
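As a rough illustration of what such a design lint could look like, here's a sketch over invented types; these are not KCL's actual model representation, and the thresholds are made up.

```rust
// Hypothetical design-lint sketch; the types and thresholds are invented for
// illustration and are not KCL's actual model representation.
struct Edge {
    /// Fillet radius in millimetres; 0.0 means a hard, sharp edge.
    fillet_radius_mm: f64,
}

struct Feature {
    name: String,
    tolerance_mm: f64,
    edges: Vec<Edge>,
}

/// Flag tolerances most shops can't hit, and sharp, unfilleted edges.
fn lint(features: &[Feature], min_tolerance_mm: f64) -> Vec<String> {
    let mut warnings = Vec::new();
    for f in features {
        if f.tolerance_mm < min_tolerance_mm {
            warnings.push(format!(
                "{}: tolerance {}mm is tighter than most machine shops support; consider loosening it",
                f.name, f.tolerance_mm
            ));
        }
        if f.edges.iter().any(|e| e.fillet_radius_mm == 0.0) {
            warnings.push(format!("{}: has an unfilleted sharp edge", f.name));
        }
    }
    warnings
}

fn main() {
    let handle = Feature {
        name: "handle".to_string(),
        tolerance_mm: 0.01,
        edges: vec![Edge { fillet_radius_mm: 0.0 }],
    };
    for w in lint(&[handle], 0.05) {
        println!("warning: {w}");
    }
}
```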
01:12:04
Speaker
Yeah, I can see lots of applications for that. Like with 3D printers: your overhang is too high, it's going to topple over on the print plate if you don't adjust it. Exactly, yeah. What would that take? Because do you actually have to get into modeling the effects of gravity on the object? We are going to. I mean, one nice thing is that on the backend,
01:12:33
Speaker
there are two ways to represent 3D. There's what's called visual representation, or v-rep, which is basically triangles: you store just points and the edges between them, and you can generally make whatever triangles you need. That's really good for rendering graphics really quickly,
01:12:54
Speaker
but it's not really able to understand the physics of the model, because you've approximated your model as just triangles. If you look at a bottle like this one, it can't be represented easily or accurately as triangles; it actually has circles and curves. This is the thing where, if I fire up Blender and zoom in too far, it all starts to look very janky. Exactly. So people often ask, why are you building this? Can't Blender already do what you want? No. Blender might look good, but you cannot use it to do manufacturing analysis. So we had to build a kernel using boundary representation, or B-rep, which actually stores, as you said, the Bézier curves and things like that, to understand the geometry.
01:13:34
Speaker
So the goal is basically that, in the engine, we keep an accurate enough model that we can do analyses on it. We can then send those analyses back to the frontend, and it'll put them at the right places in the code, or there'll be a little pop-out panel for model analysis, whatever. We haven't built that yet. We're still really heads-down trying to get the core CAD workflow ready, so that all the buttons users expect to see when they open up their existing CAD software will be in our app too.
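For a sense of the difference between the two representations, here's a sketch with invented types, not Zoo's engine: a triangle mesh only approximates the geometry, while a boundary representation keeps the analytic surfaces that analyses need.

```rust
// Invented types (not Zoo's engine) contrasting the two representations.

/// Mesh ("visual") representation: just points and triangles between them.
/// Great for fast rendering; the geometry is only approximated.
struct TriangleMesh {
    vertices: Vec<[f64; 3]>,
    /// Each triangle is three indices into `vertices`.
    triangles: Vec<[usize; 3]>,
}

/// Boundary representation: each face keeps its exact analytic surface.
enum Surface {
    Plane { origin: [f64; 3], normal: [f64; 3] },
    Cylinder { origin: [f64; 3], axis: [f64; 3], radius: f64 },
    // Real kernels also store Bézier/NURBS patches, cones, tori, and so on.
}

struct BrepFace {
    surface: Surface,
    // Plus trimming curves, edges, adjacency... omitted here.
}

struct BrepSolid {
    faces: Vec<BrepFace>,
}

impl BrepSolid {
    /// With exact surfaces you can answer questions a mesh only approximates,
    /// e.g. "what is the true radius of every cylindrical hole?"
    fn cylindrical_radii(&self) -> Vec<f64> {
        self.faces
            .iter()
            .filter_map(|f| match f.surface {
                Surface::Cylinder { radius, .. } => Some(radius),
                _ => None,
            })
            .collect()
    }
}
```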
01:14:02
Speaker
But absolutely, it's something we're very excited about, because there are so many checks which require a human being to look over a model and use their expertise, and that's great, but imagine if programmers had to do that every single time. The whole insight of the software industry for the last 15 years has been that we can use software to make the software better: keep improving the tooling with more expert analysis of how to write good software, and don't require human discipline every single time. That's why we use newer languages and have bounds checking for arrays, so we don't constantly make the same mistakes over and over again. I'm a big believer, and the reason I think this language is going to work out, is that
01:14:51
Speaker
humans are really good at so many things, and machines are really good at exhaustive detail and analysis, and being able to marry those two together is going to make something much stronger than either of them separately. A machine can look over every single edge in a model and say, hey, this one tucked away at the very back here is not filleted, and that's going to be really hard to manufacture, where a human could miss it. If you have your model stored as code that computers can reason about,
01:15:17
Speaker
then I think everything's going to become much easier to analyze with a machine. This will probably mean we have to build something that understands the specifics of: is this being 3D printed, or is it being cut away with a CNC machine, or some other kind of manufacturing process? And like I said, engineers have these big books of different coefficients for materials and everything. So we're definitely going to build material analysis, so it knows this isn't just a cube of this size; it's a cube of this size made of this particular alloy, cut with this particular set of blades in a machine shop.
01:15:58
Speaker
And so I'm really excited to add that kind of information into the AST. We haven't gone there yet, but I'm very, very excited for when we can start doing that. Right now, the app is just for designing your 3D models, but the goal is that you'll be able to press Ctrl-P and print it to either a 3D printer or a CNC machine, or send it to a manufacturing shop in a nice machine format. And like I said, that sheet metal foundry made their own CAD app to encode all of their design constraints. The goal is to have all those design constraints available in the app, so you don't have to call up the machine shop and say,
01:16:40
Speaker
tell me about my model. What do you think of my model? Is it easy to manufacture? The app will be able to give that feedback in real time, at the design step, so you don't have to go and send your model off somewhere else.
01:16:54
Speaker
Yeah, I could even see a future in which it tells you which of the different manufacturing options would be the best one to get a good result, right? This is the kind of thing you really ought to 3D print, it just makes more sense. Absolutely, yeah. Says the computer, yeah. Yeah, something that knows that if you're doing a bunch of really small features here, 3D printing will be easier, but if you need high fidelity on these particular curves that aren't easy to 3D print, because the printer is going to voxelize them, then you should be using a CNC machine or something for that. Hopefully in the future it can then tell you what kind of machine will be best, and then even point you to,
01:17:32
Speaker
I mean, this is the dream, point you to machine shops around the world that have an API that will accept these 3D files and partner with us to get work sent to them straight from the app. You'll be able to say: this shop is the one with the right balance of being close to you, having capacity at their lab, having the right kind of machine, and having the right kind of blades to do everything right.
01:17:58
Speaker
Yes. Do you know, I've got a friend who works for a company that does robots that assemble circuit boards, who we must get on the show one day. I could see a future in which you just send off your file and it automatically goes here for the circuits and there for the casing, and it just shows up as a finished product that you designed. That would be great. That would be very cool. Okay, that's the future, and maybe that's a good point for you to leave us with. I want a couple of suggestions from you for links for the future.
01:18:28
Speaker
Where should I go if I want to learn how to write a programming language in Rust? And where should I go if I want to play around with KCL? So I think if you want to learn to write a programming language, the best thing I've found is a book called Crafting Interpreters. I've heard of that one. Yes. When I started taking this project seriously, I thought, OK, this is actually going to be a core part of the job now, but I did not have any experience making a programming language before this. I just happened to be the person here with the strongest opinions about programming languages, I guess. So I really had to upskill, and I really strongly recommend that book. It's written targeting Java, but it's pretty straightforward to translate into whatever language you want to use. And because it's such a well-loved book, there are a lot of examples; people share the code they've written.
01:19:25
Speaker
Crafting Interpreters starts by defining a pretty simple language called Lox, and then it helps you write an interpreter for Lox and a bytecode VM for Lox.
01:19:37
Speaker
In the book, the first is written in Java and the second is written in C, so two examples in different languages. And it's pretty straightforward to translate the C into Rust if you're familiar with Rust. But the other thing is, because this book is so well loved, you can go online and find all kinds of people's implementations of these Lox
01:20:01
Speaker
interpreters or VMs in different languages. Basically any language under the sun, you can find someone who's read through this book and see their implementation. There are several different Rust implementations of Lox. Oh, that's a nice resource. Yeah, okay, I'm linking to that in the show notes. Yeah, please do. If someone wants to play with your work specifically? They can go to zoo.dev. They can download the modeling app, it's free, and they can get started. They can start using KCL to make their models, or use the mouse-driven UI to make their KCL to make the models. Nice. Mouse-driven computing from a different angle. Yeah. What if the mouse was actually good for programmers?
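If you want a taste of what the book's tree-walking interpreter looks like translated into Rust, here's a tiny, heavily simplified sketch; real Lox has statements, variables, classes and more, and this only evaluates arithmetic.

```rust
// A tiny, simplified taste of a Lox-style tree-walking interpreter in Rust;
// this only covers arithmetic expressions, nothing like the full language.
enum Expr {
    Number(f64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
    Neg(Box<Expr>),
}

fn eval(expr: &Expr) -> f64 {
    match expr {
        Expr::Number(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
        Expr::Neg(a) => -eval(a),
    }
}

fn main() {
    // (1 + 2) * -3
    let ast = Expr::Mul(
        Box::new(Expr::Add(
            Box::new(Expr::Number(1.0)),
            Box::new(Expr::Number(2.0)),
        )),
        Box::new(Expr::Neg(Box::new(Expr::Number(3.0)))),
    );
    println!("{}", eval(&ast)); // prints -9
}
```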
01:20:45
Speaker
Yeah, and actually generated real source code rather than some drag-and-drop blocks. Yeah. Interesting. And I also blog about programming, so if people want to check out my stuff there, they can go to adamchalmers.com. Cool, I will link to all of those in the show notes. Yeah, it's got little bits and pieces from me and links to other places to find me. Cool. Adam, time for you to go and put a physics engine inside your not-really-a-game, honest.
01:21:16
Speaker
Yeah, time for me to actually get started with work for the day, not just talking about it. Thanks so much for having me on, Kris. This was so much fun. Absolutely a pleasure. Cheers. See you again. Cheers. Thank you, Adam. He's a lucky man, isn't he? Getting paid to write a programming language? That doesn't really happen. I think it's the first time I've heard of that happening outside of academia, where the pay isn't great, or banking, where the pay is great but money isn't going to be your biggest problem.
01:21:43
Speaker
Yeah, I think I'm a little bit envious of you, Adam. Good luck. And I'm looking forward to seeing the world's first type checker that has a notion of weight and density. That'll be interesting. While we wait for that to come along, there are links to all the libraries and a few of the games we mentioned down in the show notes. I hope you enjoy them.
01:22:04
Speaker
If you enjoyed this episode, please take a moment to click like if you're watching it on YouTube, or rate it, review it, heart it, star it if you're catching this on one of the podcast apps. We'll be back next week with more, of course, so stay tuned, make sure you're subscribed, and I will see you then. I've been your host, Kris Jenkins. This has been Developer Voices with Adam Chalmers. Thanks for listening.