
Graphite: Image Editing as a Syntax Tree (with Keavon Chambers & Dennis Kobert)

Developer Voices

Graphite is a new image editor with an interesting architecture - it’s a classic UI-driven app, an image-manipulation language, and a library of programmable graphics primitives that any Rust coder could use, extend or add to. The result is something that you can use like Photoshop or Inkscape, or make use of in batch pipelines, a bit like ImageMagick.

Joining me to discuss it are Keavon Chambers & Dennis Kobert, who are hammering away on building a project that's potentially as demanding as Photoshop, but with a more ambitious architecture. How can they hope to compete? Perhaps, in the short term, by doing what regular image editors don't. And is the future of image editing modular?

Graphite Homepage: https://graphite.rs/

Graphite Web Version: https://editor.graphite.rs/

Graphite on Github: https://github.com/GraphiteEditor/Graphite

Signed Distance Fields: https://jasmcole.com/2019/10/03/signed-distance-fields/


Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices

Support Developer Voices on YouTube: https://www.youtube.com/@developervoices/join


Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social

Kris on Mastodon: http://mastodon.social/@krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Transcript

Introduction to Image Editing Tools

00:00:00
Speaker
There are two programs I use for image editing. The first one's called Pixelmator. It's essentially Photoshop, but you're allowed to own it instead of renting it by the month. That's the desktop app.
00:00:11
Speaker
That's the one you choose when you want to create something new by hand, when you're still figuring out what the image should be. It's got rich UI, it's fully featured, it's exploratory. You gradually pin down the image you want.
00:00:24
Speaker
The other tool I use is ImageMagick, and that's the command line tool. That's the one you use when you know what you want, but you need to script it up, automate it, scale it out to run over a million images.
00:00:36
Speaker
Each of those tools is great at what it does and absolutely terrible at what the other one's good at. And it seems to me that's just an accident of history. There's no fundamental reason why the one-off user-friendly image editing tool is completely different to the scriptable image editing tool.
00:00:56
Speaker
We could, in theory, have one unified toolset.

Introducing Grapheme and Graphite

00:01:00
Speaker
Well, my guests this week, Keavon and Dennis, would agree with that, but they've taken the idea much further than just a unified toolset.
00:01:08
Speaker
They've designed a language called Graphene, which is all about describing and chaining together image editing operations into a big graph, one that's developer-extensible in Rust.
00:01:20
Speaker
And then on top of the language, they've built a user-facing desktop app called Graphite, which looks like Photoshop or Inkscape, but is really just manipulating a tree of Graphene primitives,
00:01:32
Speaker
which you can use to make your image, or which you could then get your hands back on as a programmer to turn into a script. I'll give you an example. You take a user's image, you send it to your graphic designer, and they design it up and send you back a nicely designed avatar. The file they send you back isn't just one avatar; it's also a program that you could run across all the images for all the users in your system. And the graphic designer never knew they were writing a program. It's a very clever approach to unifying these two needlessly separate worlds, through the magic of creating a specialized language and then building tools up upon that language.
00:02:14
Speaker
So I think we should find out how it's all built. I'm your host, Kris Jenkins. This is Developer Voices. And today's voices are Dennis Kobert and Keavon Chambers.

The Need for a New Image Editor

00:02:36
Speaker
I'm joined today by Keavon and Dennis. Gentlemen, how are you? Hi, I'm doing great. Yeah, me too. This is the joy of having two guests at once. There's instantly that race condition on who gets to speak first.
00:02:49
Speaker
Yeah, and it's more difficult with the latency involved. Oh yeah, yeah, yeah. Well, we'll work around that. I'll try to do some stage management for this conversation and we'll see how we go. So you two are working on something fun. You're working on an image editor in Rust.
00:03:06
Speaker
And there's a lot we have to get into on that, especially on the technical details, which are juicy. But I guess the first question has to be, and I'll direct this at you, Keavon: why does the world need another image editor?
00:03:19
Speaker
Yeah. So basically, everyone is always asking for one, but it seems that no one has a solution to offer. There's definitely an actual community interest amongst many people, both hobbyists and actual professionals, who feel like either their existing editing software is being abusive to them, or they simply don't have a good free option that exists.
00:03:40
Speaker
If they're using Linux, for example, it's not available. Or whatever operating system they're using, maybe they're just a kid learning. That was kind of my background when I was growing up: having to use an old purchased copy of Photoshop from an employee discount. But I can't get updates to that, so at this point it's pretty expensive to subscribe. And at the same time, people who were using, for example, GIMP are sort of limited by its old interface design approach and
00:04:13
Speaker
a number of limitations in what it's able, or not able, to provide, depending upon your perspective, with the kinds of things that you do with it. So basically, it's really time to inject some fresh ideas into this landscape and really have a nice interface, but also something that's way more flexible, by making it also kind of a generative art tool, using programmatic concepts actually borrowed from the

Graphite's Innovative Editing Approach

00:04:41
Speaker
3D industry a lot. There's a lot of um node-based, generative, procedural, non-destructive editing approaches.
00:04:47
Speaker
And that is a really, really useful set of ideas that needs to be brought into this realm of 2D editing. That is something we've got to get into, because generative has become a loaded term in the past couple of years. It's developed an entirely new definition with AI, but also generative in the sense of, you know, traditional algorithms that can be scripted together to generate content live, or based upon dynamic data,
00:05:13
Speaker
or at different resolutions, or being powered by certain data feeds, or anything like that. That's all within the realm of what we're ultimately making Graphite do.
00:05:26
Speaker
And that's because we have Rust powering everything, and we have our own language built on top of Rust. So that's just a little teaser there, but we'll dive into all of that. But I have to ask you before we go into that:
00:05:39
Speaker
I mean, the scope of an image editor... you look at something like Photoshop or Word or Excel and you think, I could do the simple version, but they are now so huge that you don't stand a chance of competing with the full thing, right?
00:05:55
Speaker
Does it not terrify you, trying to build an image editor? It is truly massive in scope. Just unfathomably massive. In our roadmap, we have to actually spell out all the things we know about, and there are so many things we don't know about at the moment that have to be built as well.
00:06:10
Speaker
But all of those that have been built out, you know that is a decade-long roadmap that we will continue crunching towards as we as we make progress. But we have been making progress, and it's so far been succeeding.
00:06:21
Speaker
um It's still relatively early on,

Graphite's Development Roadmap

00:06:23
Speaker
but it's also... you know It's a useful tool at this point. so um It will take a long time. It's very, very ambitious. I think, Dennis, you can go and give the quote about ambition.
00:06:34
Speaker
Yeah. So if there's one thing that Graphite does not have, it is a lack of ambition. Yeah, that's a quote from me. Yeah. For me, it's basically... Oh, no, sorry. Yes.
00:06:50
Speaker
Yes, you're right. That's a quote from you. We end up... like, we look at something and think, oh, that could fit. That's a good feature. That's useful.
00:07:00
Speaker
We'll just add it to the roadmap. And because Graphite has such a universal approach to building an application, just a lot of things fit. And we could extend it to so many realms and do so many useful things.
00:07:19
Speaker
It's, yeah, the scope is... Pretty huge. For example, someday the same platform that powers the rest of the graphic editor for graphics could also be moved into being like a digital audio workstation, you know, like more than a decade down the road. But it is such a such a generalized approach to making a graphics editor or to making really any kind of editor that it's like ah it's more of a more of a generative.
00:07:44
Speaker
scripting tool with a creative editing environment built on top of it, and we can just keep extending that. You're going to have to take me into that, because I've played with Graphite and it reminded me a bit of Inkscape, that kind of SVG editor thing.
00:08:00
Speaker
There's definitely something deeper going on under the hood, but I didn't immediately see the general editor supreme universe that could be a digital audio workstation one day.
00:08:12
Speaker
So what's your architecture? Give me the high level, Dennis. All right. So... Well, Graphite foremost is an image editor and a vector editor.

Graphite's Architecture and Non-Destructive Editing

00:08:25
Speaker
So one of the principal goals was that we unify both the vector editing and raster editing experience. There shouldn't really be a reason why you need to have two different programs with two different UI concepts for doing images.
00:08:43
Speaker
And this was one of the founding principles. That's what we want to unify. We also don't want to lock the user in. We want the user to be as flexible as possible.
00:09:00
Speaker
And one of the general ideas of Graphite is that when the user does something, we record that as the user's intent, but we don't destructively modify something.
00:09:19
Speaker
So we have this idea of raster, vector, and non-destructivism. The non-destructivism allows us to record whatever the user does and then non-destructively... Well, I should give an example.
00:09:36
Speaker
So in Photoshop, if you draw something, you have an image, like this Developer Voices one: an array of pixels.
00:09:48
Speaker
Then you draw something. The stroke modifies that array of pixels, and that's done. If the user draws another stroke, you modify the same array again.
00:09:59
Speaker
That's destructive operations. What we do in Graphite is that we record the brushstroke, what the user drew, and then we can apply this at runtime to compute the final texture.
00:10:15
Speaker
So the idea of Graphite is that instead of applying everything the user does, we record what they do, and we'll get into how we record that.
00:10:26
Speaker
That's the node graph idea. We record the user intent, and then we can replay it. We can compute what the result would have looked like if we had applied the operation directly.
00:10:44
Speaker
But we can do that after the fact, and we can modify things. This is making me think: is that explicitly inspired by what's happening in the functional programming and event-driven systems world?
00:10:56
Speaker
Yes. Exactly. Exactly. What we do is that, essentially, if we just draw a box, like in Inkscape, every operation the user does is reflected in a node graph.
00:11:17
Speaker
The node graph is a visual representation of your document. Within graphite. Within graphite. So a box would just be a single node that produces a box shape.
00:11:30
Speaker
And we can think of this node as a function. It's a function that doesn't take any input and returns... Or width and a height, for example. yeah Yeah, for example.
00:11:40
Speaker
It takes width and height as input and returns a box as output. So in user land, you draw a box. If you then change the color of that same box, instead of making it a node that returns a red box, we modify the node graph to first have a node that creates a box.
00:12:03
Speaker
And the output of this node is then fed into a second node that takes a box as input and returns a red box as output. It's a color node. It modifies the data flow. Oh, yes. I've got a function from, like, input to red output.
00:12:18
Speaker
Then presumably I've got nodes like opacity and stroke. Yeah. Yeah. This whole library of functions. Right. And yeah all of this, like every node, is a function.
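The rectangle-into-fill chain described here can be sketched in plain Rust. This is a toy illustration of "every node is a function", not Graphite's actual node API; the `Shape`, `rectangle`, and `fill` names are made up for the example:

```rust
// Toy sketch of "every node is a function" (not Graphite's real API).
#[derive(Clone, Debug, PartialEq)]
struct Shape {
    width: f64,
    height: f64,
    fill: Option<&'static str>,
}

// A "rectangle" node: takes width and height, returns a shape.
fn rectangle(width: f64, height: f64) -> Shape {
    Shape { width, height, fill: None }
}

// A "fill" node: takes a shape as input, returns a colored shape as output.
fn fill(shape: Shape, color: &'static str) -> Shape {
    Shape { fill: Some(color), ..shape }
}

fn main() {
    // The document stores this composition, not the resulting pixels.
    let red_box = fill(rectangle(100.0, 50.0), "red");
    assert_eq!(red_box.fill, Some("red"));

    // Swapping the upstream node re-derives everything downstream.
    let wider = fill(rectangle(200.0, 50.0), "red");
    assert_eq!(wider.width, 200.0);
    assert_eq!(wider.fill, Some("red"));
}
```

Because the document is the composition rather than the result, changing the rectangle's width later simply re-runs the downstream fill, as the conversation goes on to describe.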
00:12:29
Speaker
If you modify the document, we provide tools for you. We provide the same editor experience as you would have in other editors. So for example, if you draw strokes in Photoshop, they would be rasterized as pixels.
00:12:46
Speaker
What we can do instead is that we record the strokes and then have the strokes as input data, feed that through a function which rasterizes them, and we then display the result.
00:12:58
Speaker
What that allows you to do is modify the strokes after the fact. If you didn't like that one squiggle you did, you could modify it, and the result is updated.
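The record-and-replay idea can be sketched like this. It is a minimal illustration with hypothetical types (a one-bit canvas, single-pixel "stamps"), not Graphite's real brush data structures: strokes stay around as data, and rasterization is a pure function re-run at render time.

```rust
// Strokes are recorded as data rather than burned into pixels.
struct Stroke {
    points: Vec<(usize, usize)>,
}

// Rasterization is a pure function from strokes to pixels (here a
// 1-bit canvas for brevity), re-run whenever the strokes change.
fn rasterize(strokes: &[Stroke], width: usize, height: usize) -> Vec<bool> {
    let mut pixels = vec![false; width * height];
    for stroke in strokes {
        for &(x, y) in &stroke.points {
            if x < width && y < height {
                pixels[y * width + x] = true;
            }
        }
    }
    pixels
}

fn main() {
    let mut strokes = vec![Stroke { points: vec![(1, 1), (2, 2)] }];
    let first = rasterize(&strokes, 4, 4);
    assert!(first[1 * 4 + 1] && first[2 * 4 + 2]);

    // Fix the squiggle after the fact: edit the recorded stroke,
    // then simply rasterize again.
    strokes[0].points.pop();
    let second = rasterize(&strokes, 4, 4);
    assert!(second[1 * 4 + 1] && !second[2 * 4 + 2]);
}
```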
00:13:13
Speaker
Okay, to what degree can i play around and compose that? If i i build up a chain and then I say, actually, I can delete the box at the start of the chain and change it for a function that returns two overlapping circles, do I still get the same pipeline after that?
00:13:28
Speaker
Yeah, yeah, definitely. Okay, because it really is just a toolkit of functions that transform the image. And our goal really is to make a bunch of useful graphics editing operations, like a big, big catalog, as many as we possibly can that are useful, so people can really do whatever they want with it and use it as a toolbox.
00:13:50
Speaker
So if you're making a toolbox, the first question has to be, to what degree is that exposed to the user? I mean, is the user expected to just use these functions or to write them as well?
00:14:02
Speaker
So we have different levels of abstraction for the different functions, the different nodes, and some of them are intended to be used inside of other nodes. So they're sort of lower-level implementation parts of more abstract nodes, nodes that do more but have more complexity.
00:14:19
Speaker
So the idea is that they're composed and we have just like you have one function in programming, you know, calling other sub functions, which generally have smaller, more atomic units of of complexity and abstraction.
00:14:31
Speaker
Same thing where some of the nodes in our catalog may actually be very technical. For example, we have an unwrap node that actually unwraps an Option. But other ones are higher level, for example, mirroring a shape, or doing something else that involves combining shapes together,
00:14:48
Speaker
or involves doing some raster operations that are complex, like de-hazing an image. um And those may be written purely in Rust, or they may be composed out of subnodes that build up how they're implemented.
00:15:02
Speaker
ah same and the goal is to have lots of different levels of abstraction people can access, but we'll probably categorize them differently. And then additionally, the tooling. So that's like the actual interactive tools that are very similar to Photoshop or Illustrator or Inkscape or GIMP.
00:15:16
Speaker
um Those tools, they have a predefined set of nodes that they operate on, and those don't even require working with the node graph, but they provide a completely traditional editing experience that everyone is used to from all those editors.
00:15:28
Speaker
And those operate on the nodes they're used to operating on, but that will be, of course, a subset of all the nodes. Right. Yeah. I mean, I've played around with this, and you draw a shape and you fill it in, and it feels just like any other image editor. But then there's a button and you can flip over to seeing the node graph, right?
00:15:48
Speaker
Yeah. So I'd say there are basically three levels of abstraction the user can choose. The very basic one is they just use the editor as they would any other editor, and the tools do the modifications in the background. Like, we build the document graph for you.
00:16:05
Speaker
You don't have to do anything. You just use the tools. That's like the first level of of abstraction. Then you can click on the button to overlay the node graph. Then you can sort of explore what the inner working is, how this all works.
00:16:21
Speaker
That's the second level of abstraction. You can then also start modifying this, building new nodes, actually working in the editor to modify things.
00:16:34
Speaker
You can then start abstracting the node graph by building your own custom nodes out of existing ones. So, for example, if you want to make a clever color reassignment function that reassigns colors based on the rainbow, you could do that and make a node for it.
00:16:55
Speaker
And that is a node built out of other nodes, which can be abstracted and then shared. Saved and reused in other images. Yeah. We'll even have a package manager, like an asset store.
00:17:07
Speaker
And it's essentially the equivalent of crates.io or npm or any other package manager. For image transformations. Yeah, exactly. That's the third level of abstraction. And then we have a fourth level of abstraction, which is...
00:17:19
Speaker
Well, it's the lowest one. We will literally allow you to write Rust functions. Currently, that's all defined at compile time. But in the future, we will have support for writing functions at runtime in the editor to modify the code.
00:17:36
Speaker
Oh, OK. You should speak to the Fyrox guy who's doing live reloading of um Rust code. maybe Which is exactly what we're going towards as well. Yeah, live reloading, exactly.

Game Engine-like Functionality in Graphite

00:17:50
Speaker
Okay, I have to pick up on something you said, this unification of vectors and raster images. Because I'm thinking of a node graph. I get this in the image editors I use, that you can turn vectors into rasters, you can turn splines into pixels, but that's usually a one-way operation. You can't turn it back.
00:18:12
Speaker
So that makes me think, how many different kinds of node... do you have and what's the path between them? Because there are some routes you can't transform between.
00:18:25
Speaker
Yeah. So we sort of have this idea that things are usually a one-way street. And a large part of the actual node design process is: how do we keep things as pure and as useful as possible for as long as possible?
00:18:40
Speaker
So if you're working in vector land before you rasterize, we want to build the tools and the nodes to be capable of keeping your data in vector land. But beyond that, there are actually even higher additional levels along that spectrum of purity.
00:18:58
Speaker
And for example, a node might start out as a simple vector shape that's described as... actually, let's say text. So text is vector-based, but text can also represent paragraphs and spacing and different sorts of layout information.
00:19:16
Speaker
So the text data type can become vector, but it actually has a higher level of purity before it becomes just plain old vector paths. And then at some point after that, it becomes pure old raster. Sorry, not pure, the opposite of pure: dirty old raster. And the raster, of course, is the base level, because that's what gets sent to your screen.
00:19:36
Speaker
But there's actually even one level of abstraction, or purity, or whatever we want to call it, within the idea of raster, which is what we'll pretty soon move towards: talking about what resolution-agnostic or adaptive resolution (we have two names for it)
00:19:51
Speaker
refers to, but the general idea is you can zoom in and it will re-render your content at that new resolution, or you export an image at a really high resolution and it will re-render the entire document at that higher resolution.
00:20:01
Speaker
And some raster data is more pure in the sense that it is able to be represented as something that gets re-rendered at the contextual resolution that it's being viewed at, that it's being rendered at.
00:20:13
Speaker
Whereas other types of raster data are actually what we call a bitmap. It's literally just a width and a height of pixels. It's just a simple image.
00:20:24
Speaker
And in that case, you can't re-render it at different resolutions because it's just a finite amount of actual data. and And in fact, it has no position in space even. um You'd have to place it into the document and give it positional and transformation information.
00:20:37
Speaker
But otherwise, it just exists as a piece of data, you know, just an image file. So are we saying that there are a lot of places in your node graph where it is just a function chain, presumably with caching, which we should talk about?
00:20:50
Speaker
But there must be some operations that say, I'm sorry, that thing you're about to do is going to turn that branch of the tree into a single node and you can't reverse it.
00:21:01
Speaker
I'm not sure if I understand the question exactly, because the graph is always representing your program. It sounds like you're working very hard to ensure that it's always a function, a chain of functions to a thing. But are there not some operations that say, well, I've got to destructively turn this into a raster?
00:21:26
Speaker
Or you manage to avoid that throughout. Yeah. So you can get around that... like, that is very useful to do. It's basically a cheat code.
00:21:36
Speaker
Because turning things into raster is very easy. And then you don't worry about anything. The other way of going around it is to pay a hefty performance price. And just don't do the rasterization.
00:21:50
Speaker
And we do the rasterization live. So in a sense, Graphite's architecture is built similar to that of a game engine.
00:22:04
Speaker
We have a game world. Think of the document as a scene in a 3D game. For example, if you build something out of shapes, you can walk up to them and you get a higher-resolution render.
00:22:19
Speaker
And that's the same: if you think about your document as a 3D game world, and your camera as what you're currently looking at, this is what we do in Graphite. We basically re-render the document every frame based on what you're currently looking at.
00:22:36
Speaker
And that is what we do with the adaptive resolution system. For example, blurs. You usually apply a blur once, and then you can't modify what's beneath it.
00:22:49
Speaker
So in Photoshop, if you blur something, you can't change the source data. What we can do in Graphite, because we compute the blur at runtime, is allow you to change the underlying data, and then we recompute the blur.
00:23:07
Speaker
So we don't do destructive operations. Instead, we do them at runtime and try our best to provide caching and make that as fast as possible. Okay.
00:23:18
Speaker
Yeah,

Grapheme Language and Graphite's Technical Architecture

00:23:19
Speaker
I can see that appealing, because you always have the thing where, once you've committed to a certain blur level, two hours later you can't go back and change the blurred thing. Yes, exactly. Exactly.
00:23:29
Speaker
OK, so tell me about caching, because in order to make that performant, you must be making some very difficult decisions about what to cache and what not to. I will preface this by saying it gets a lot more complicated as a result of the adaptive resolution system.
00:23:42
Speaker
So we'll get into that. But it's actually very different from other traditional node-based editors, because we actually have to treat it more like a program, like an actual Rust program, and less like it is simply flowing one operation into the next. So we actually have a bidirectional flow of data. So go ahead, Dennis. Yeah.
00:23:59
Speaker
So basically, the concept we're using is also used in programming. It's called memoization, which is used to cache the output of functions. But first I can explain how our caching model works.
00:24:18
Speaker
So the way we define a node graph is that it like every node is a pure function, so to speak, or at least it's idempotent. So given the same inputs, it will always return the same outputs.
00:24:31
Speaker
That is a very useful property, and we do actually enforce it on users: if users write functions, they have to be idempotent. What that allows us to do is that we can...
00:24:42
Speaker
If we build a document graph, we can take the leaf nodes and compute the hash of each node. We hash all the inputs, we hash the node, and now we have a new hash, and this hash is like a function identification.
00:24:59
Speaker
If you are familiar with theoretical computer science, it's sort of like a Gödel number. We give each function a number, and this is a unique representation. Given the same inputs, it will always return the same outputs. That's what we do for our entire document.
00:25:15
Speaker
In Graphite, graphs have to be directed and acyclic, so we don't have loops. That makes things a lot easier. But what this allows us to do is a bottom-up topological sort, a tree traversal, and we can then compute the hash of every node: given the inputs and the node itself, what is the output hash.
00:25:38
Speaker
This is reminding me of Git. Yeah, exactly. Same idea. More generally, I think it's called a Merkle tree, right? Yeah, it's a Merkle tree. So given the inputs and the thing itself, we compute a hash.
00:25:51
Speaker
And this is what we use for caching, like the what at least the first layer of caching. So if you have the same input and the same output, and well, same if you have the same node, you can assume it will always behave the same.
00:26:07
Speaker
So what we can do is that we just insert cache nodes into our node graph. And the function of a cache node is if you call it once and the cache is empty, it calls its inputs, computes the result, stores the result in its own struct.
00:26:26
Speaker
And if it's called again, it just returns that result. So if i have two in a simple case, if I have two boxes of the same width and height, you'll only be computing that once, right?
00:26:36
Speaker
Yes, exactly. We automatically deduplicate nodes based on this hash system. Okay, yeah that
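The hashing-plus-cache-node scheme Dennis describes can be sketched roughly as follows. This is illustrative only (Graphite's real implementation differs, and `node_hash`/`Cache` are invented names): a node's identity is a hash of its own definition combined with the hashes of its inputs, Merkle-style, and a cache keyed on that hash memoizes results.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Merkle-style node identity: hash of the node's definition combined
// with the hashes of its inputs.
fn node_hash(definition: &str, input_hashes: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    definition.hash(&mut h);
    input_hashes.hash(&mut h);
    h.finish()
}

// A cache node: compute on the first call, return the stored value after.
struct Cache {
    store: HashMap<u64, String>,
}

impl Cache {
    fn get_or_compute(&mut self, key: u64, compute: impl FnOnce() -> String) -> String {
        self.store.entry(key).or_insert_with(compute).clone()
    }
}

fn main() {
    // Two identical leaf nodes hash identically, so they deduplicate.
    let rect_a = node_hash("rectangle(100, 50)", &[]);
    let rect_b = node_hash("rectangle(100, 50)", &[]);
    assert_eq!(rect_a, rect_b);

    // Changing an input's hash changes every downstream hash.
    let fill_a = node_hash("fill(red)", &[rect_a]);
    let rect_c = node_hash("rectangle(200, 50)", &[]);
    assert_ne!(node_hash("fill(red)", &[rect_c]), fill_a);

    // Idempotent nodes mean the second evaluation is a cache hit.
    let mut cache = Cache { store: HashMap::new() };
    let mut evaluations = 0;
    for _ in 0..2 {
        cache.get_or_compute(fill_a, || {
            evaluations += 1;
            "red box".to_string()
        });
    }
    assert_eq!(evaluations, 1);
}
```

The idempotence requirement on node functions is what makes this sound: if a node could return different outputs for the same inputs, the hash would no longer identify its result.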
00:26:47
Speaker
Yep. Do you prune that as you go along? I mean, what do you do about evicting things from the cache? Because it could get huge. Yeah. We currently prune: if a node has not been used in two subsequent executions of the node graph, then we prune it from the list of active nodes we keep around.
00:27:08
Speaker
OK. And also, I think at the moment, just because we haven't built something bigger, a system that is a little smarter, we also just throw away anything before the last memoized output.
00:27:21
Speaker
Is that correct? Yes. We only store the last evaluation of the... Yeah. So this was still easy if we don't think about any inputs. But you could argue that if you do the game engine thing and render the same object at a different resolution, the nodes will be the same.
00:27:45
Speaker
The thing is, at that point, we have the function. Like, again, the entire Graphite document graph is basically a function: given these viewport parameters, this zoom level, this viewport translation and panning, what are the resulting pixels that I throw onto the screen?
00:28:11
Speaker
Right. And for this function, if you zoom in and out, the node IDs won't change, because nothing in the document graph has changed. But we need to be smart about caching, and we have to take the input arguments into account.
00:28:26
Speaker
And currently we just recompute it if the input arguments have changed. So that be if you change the width or height of the rectangle, for example, then those input arguments change because the hash changes.
00:28:37
Speaker
But if you simply move the camera around, then it does not update that. That's equivalent to generating a game world and then walking around inside of that world. If the character is walking around and the camera is moving, we're not regenerating the world.
00:28:51
Speaker
We don't have to recompile the program that generates the content, because we're statically describing a scene that is not changing. But as you walk around, or move the camera around, or in this case move the editor's navigation of the viewport around, that is not changing the scene.
00:29:07
Speaker
The scene is guaranteed to look the same. We just render it at a different resolution or we render it at a different location. But the scene is static after we have compiled the graph.
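The split between the two kinds of inputs can be sketched like this (hypothetical `Scene` and `Viewport` types, invented purely to illustrate the distinction): scene parameters feed the cache key; viewport parameters are deliberately excluded from it.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Scene inputs describe the world; they feed the cache key.
#[derive(Hash)]
struct Scene {
    rect_width: u32,
    blur_radius: u32,
}

// Viewport inputs describe how the world is viewed; they are
// deliberately excluded from the cache key.
#[allow(dead_code)]
struct Viewport {
    zoom: f32,
    pan: (f32, f32),
}

fn scene_key(scene: &Scene) -> u64 {
    let mut h = DefaultHasher::new();
    scene.hash(&mut h);
    h.finish()
}

fn main() {
    let scene = Scene { rect_width: 100, blur_radius: 4 };
    let key = scene_key(&scene);

    // Zooming in re-renders at a higher resolution, but the scene key
    // is unchanged, so cached nodes are reused.
    let _camera = Viewport { zoom: 4.0, pan: (10.0, 20.0) };
    assert_eq!(scene_key(&scene), key);

    // Editing the rectangle changes the world: downstream caches
    // (e.g. the blur) must be recomputed.
    let edited = Scene { rect_width: 200, blur_radius: 4 };
    assert_ne!(scene_key(&edited), key);
}
```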
00:29:18
Speaker
That makes sense. Is this why you're saying um changing resolution makes things more complicated?
00:29:27
Speaker
That's why we have to use this Merkle tree approach, where the resolution itself is an input to the node graph. It's a different kind of input; it does not invalidate the caching. So there are sort of two types of inputs.
00:29:44
Speaker
There are the inputs that pertain to the current way of viewing the static scene, and those do not change the way the caching works. They don't involve the Merkle tree; they don't involve anything like that. But then there are the inputs where, if you're actually changing the width of a rectangle or something, you're modifying the scene.
00:30:01
Speaker
So now your actual scene, your actual game world, has changed. And in that case, things have to be recomputed. If you blur the rectangle afterwards, you have to go recompute the blur. So in that case, the scene is no longer static. It changed, whether or not you're viewing it from a different location.
00:30:18
Speaker
So those are the two different types of inputs. One describes the scene: you're actually updating the world. And the other is just viewing it. Okay. Yeah, I think I'm with you. I think I'm with you.
00:30:30
Speaker
So I want to dive in, because there are two parts to this. Beneath all of this, there is a graphics description language of some kind. Exactly. Yeah, that is Graphene.
00:30:43
Speaker
Yes. Right. Explain Graphene. Why? What does a separate language do? What does it look like? So Graphene is essentially just a functional programming language.
00:30:56
Speaker
And every node that you draw on the node graph corresponds one-to-one to a Rust function. OK. Well, at least every atomic node. We do have the concept of grouping nodes together into higher-level nodes, like functions: you can compose functions and write new functions that contain other functions. But conceptually, if you double-click into a group node, you see the subnetwork the node is made of.
00:31:29
Speaker
And for some nodes, you can't go any further, because it's just a Rust function. Right. And what we do is take this description of the user's node graph.
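The "every atomic node is a Rust function" idea can be sketched like this. These node names and types are hypothetical stand-ins, not Graphite's real node signatures:

```rust
// Each atomic node is just a plain Rust function, so wiring nodes
// together in the graph is ordinary function composition.

#[derive(Clone, Copy, PartialEq, Debug)]
struct Color { r: f32, g: f32, b: f32 }

#[derive(Clone, PartialEq, Debug)]
struct Shape { width: f32, height: f32, fill: Color }

// A hypothetical "Rectangle" generator node with two arguments.
fn rectangle(width: f32, height: f32) -> Shape {
    Shape { width, height, fill: Color { r: 0.0, g: 0.0, b: 0.0 } }
}

// A hypothetical "Fill" node that transforms a shape.
fn fill(mut shape: Shape, color: Color) -> Shape {
    shape.fill = color;
    shape
}
```

A group node would then just be another function built by composing these, e.g. `fill(rectangle(w, h), red)`.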
00:31:42
Speaker
That's the program the user wants to run. We then compile and assemble it into Rust functions that are linked at runtime.
00:31:54
Speaker
So we parse the input and do all our computations. We could also do optimizations; we don't have them yet, because I want to build a more general graph-rewriting system first, but we could modify the graph. Then we assemble the output, very much like a general compiler.
00:32:18
Speaker
And we don't assemble to bytecode, because we're mostly targeting WebAssembly right now, that is, execution in browsers.
00:32:32
Speaker
What we do instead is link Rust functions at runtime. We
00:32:42
Speaker
take the compiler output, where every atomic node is a Rust function, and link them together into one single big function, which we can then call at runtime.
00:32:55
Speaker
So this is our current execution model.
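As a rough sketch of that execution model, assuming for illustration that every node maps `f64` to `f64` (real nodes are of course generic over richer types), linking is just folding the per-node functions into one boxed closure:

```rust
// One "big function" built by chaining atomic node functions together.
type Node = Box<dyn Fn(f64) -> f64>;

fn link(nodes: Vec<Node>) -> Node {
    // Feed the output of each node into the next, producing a single
    // callable program.
    Box::new(move |input| nodes.iter().fold(input, |acc, node| node(acc)))
}
```

After linking, the runtime only ever calls the one composed function.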
00:33:00
Speaker
Okay, yeah, I can see that if you think of a picture as being made up of nodes and transformations, it becomes quite a lot like a compiler.
00:33:12
Speaker
I can definitely see that. And in that sense, that's the programming language. Well, Graphite, the editor, is more like your IDE. We sort of have a visual IDE instead of a code IDE.
00:33:24
Speaker
And then also we have the node graph, because the node graph is the visual description of your code. That code can also be modified in the editor. But ultimately, it is a Graphene program. And Graphene is very much its own language, but it's also kind of a language built out of Rust.
00:33:39
Speaker
In what sense is it its own language? Because what you've described to me so far sounds like a compiler of Rust functions to a specific target. Well, it's a compiler of our input graph.
00:33:52
Speaker
It's a graph, not Rust functions yet. The Rust functions, for example, don't have any nesting. In our compiler, we do a couple of things.
00:34:03
Speaker
One of them is that in Rust, you can't have functions with variadic arguments. You have to specify exactly how many arguments a function has.
00:34:15
Speaker
And in Graphite, we want to have nodes that take multiple arguments, for example a width and a height. So you've got to have both descriptions, not just a single one.
00:34:29
Speaker
So what we do is take this high-level, user-facing description of what the node graph looks like, and then we transform that into an actual execution model.
00:34:41
Speaker
We do the topological sorting, we flatten the input graph, we resolve inputs, and we do type checking. That's another thing we do in our compiler; we could get into that.
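One of those compiler passes, topological sorting, can be sketched in a few lines. The node ids and dependency map here are illustrative, not Graphite's real graph representation:

```rust
use std::collections::HashMap;

// Depth-first topological sort: every node appears after the nodes it
// depends on, so evaluation order is always valid.
fn topo_sort(deps: &HashMap<u32, Vec<u32>>) -> Vec<u32> {
    fn visit(n: u32, deps: &HashMap<u32, Vec<u32>>, done: &mut Vec<u32>) {
        if done.contains(&n) {
            return;
        }
        for &d in deps.get(&n).into_iter().flatten() {
            visit(d, deps, done);
        }
        done.push(n);
    }
    let mut order = Vec::new();
    let mut keys: Vec<u32> = deps.keys().copied().collect();
    keys.sort(); // deterministic traversal order for this sketch
    for k in keys {
        visit(k, deps, &mut order);
    }
    order
}
```

A real compiler would also detect cycles and report them as errors; this sketch assumes the graph is acyclic.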
00:34:51
Speaker
okay And then we can dynamically link the functions together. And with when you say grapheme is a language, is there like a textual description or is it entirely constructed in memory by the editor?
00:35:10
Speaker
Well, we have the .graphite file format, and that .graphite file format contains a description in the Graphene language,
00:35:22
Speaker
because it's the embedded document. That's the description. Is it something I could read, or is it a machine-readable description? It's mostly machine-readable: a textual description of the visual node graph.
00:35:36
Speaker
So you would usually use the visual node graph. Okay, that makes sense. And the use case here, something we'll be building soon, is copying a group of nodes. You can copy them and paste them into Stack Overflow or something, and someone else could copy that text and paste it back into their editor.
00:35:51
Speaker
And that would allow you to give a group of nodes to somebody else. Okay. But in the end, that's just the serialization of the visual representation of the format.
00:36:05
Speaker
Okay, I think we should take one step to the side and quickly ask why you chose Rust for this, particularly.
00:36:14
Speaker
Sounds like maybe a question for you, Dennis. Yeah, well, I didn't make the call. I just joined because the project was written in Rust, or was aiming to be written in Rust.
00:36:27
Speaker
That's why I joined the project. So that's one half of your reason. Those are some ancillary benefits: we get smart people like Dennis. But the original rationale was, I think, largely about the tech stack and the ecosystem.
00:36:40
Speaker
We have WGPU, and that is a very core part of meeting our design requirements: supporting both the web and all the native platforms. WGPU is an abstraction layer that takes your graphics API calls and sends them, if you're on a Mac, to Metal; if you're on Windows, to DirectX; if you're on Android, Linux, or a Chromebook, to Vulkan; and if you're on the web, to the WebGPU API that is provided by JavaScript,
00:37:17
Speaker
although we're still waiting a little bit on all the browsers to actually roll out support for that so we can properly deploy it. We're sort of in limbo land at the moment with WebGPU not quite being fully deployed, so we're not actually using it by default, and we're using some workarounds until it is. But hopefully within the next half year or so it should roll out in all the other browsers.
00:37:35
Speaker
But back to Rust. So WGPU is the crate that provides the implementation of the WebGPU API, but for native platforms.
00:37:46
Speaker
And it abstracts over both WebGPU for browsers and also all the other platforms. So that's a huge part of it. The other aspect is that we've always wanted to have a desktop client and a web client.
00:37:58
Speaker
And... You can either use with InScripten or you can use Rust with WasmBindgen and the just the risk the the Wasm backend to provide a Wasm binary that can then be loaded into a browser and run by people such as 10 years ago myself, if I was in school, having to not have to download a remote desktop program to remote into my own personal computer at home just to be able to use Blender or Photoshop or whatever professional graphics tools that I could not use on the library computers or the the computing lab computers at school.
00:38:34
Speaker
So that has very much been a goal: having a tool I can use in a browser, and also a tool I can use with native performance on desktop. And really, there's C++ with Emscripten, or there is Rust.
00:38:46
Speaker
Those are kind of the only tools in town for heavy-duty WebAssembly development. And I can probably also give you a bit more on why it was actually a good idea, but that's the initial reason why we chose it.
00:39:03
Speaker
Some of the benefits of Rust: first of all, we don't really have to deal with undefined behavior and hunting down memory bugs.
00:39:15
Speaker
That is a huge relief. We did at one point have undefined behavior, and I spent about a week frantically trying to remove every last bit of unsafe code in Graphite.
00:39:29
Speaker
And it was really annoying and really stressful. In the end, I found out that the SIMD implementation in the Chrome browser was just faulty, and that was producing the undefined behavior. It's never the compiler's fault and never the browser's fault, until it is. And then, well.
00:39:56
Speaker
Yeah, and I believe you can apply to Google for a little medal that says you found a genuine browser bug. And I had so many Rust compiler issues and compiler panics.
00:40:09
Speaker
And you know you've reached a good place when you segfault, like causing the Rust panic handler to panic, and you get a double compiler fault. Fun times. Yeah.
00:40:25
Speaker
But using Rust is great because it gives you fine-grained control, and we need that to squeeze out the last little bit of performance and control what assembly is generated.
00:40:39
Speaker
It also provides powerful abstractions that allow the application to scale, and that's a very unique combination. I'm also very much a high-level programmer, coming from a JavaScript background. I like thinking at a very high level and not thinking about what undefined behavior I just invoked by accident.
00:41:00
Speaker
But Rust actually provides the best of both worlds: it allows me to be really happy doing high-level programming, while we still get C++-level performance and C++-level granularity. It really is just kind of a C++ with 25 years of hindsight; that is what Rust is.
00:41:16
Speaker
Okay, yeah, I can totally buy that description of it. If I were naively trying to do this, there are two problems I would instantly expect in trying to get a Rust program like this working on the web.
00:41:32
Speaker
The first is: you said you're dynamically linking Rust functions at runtime. That sounds like it's going to break in a browser. Yeah, well, the term dynamic linking is a bit overloaded. We're not using dynamic libraries.
00:41:51
Speaker
That's not supported yet; we're waiting for it. Because at that point, we could actually inject new code at runtime that has not been compiled into the application.
00:42:05
Speaker
You could download some blur functions from Blur.com. Yeah. If you think about how the assembly is generated, there's an actual assembly call instruction which calls a function pointer. You provide it with a function pointer, and it goes to that function and executes that code.
00:42:28
Speaker
What we do is essentially exchange that function pointer. Think of it as a big array of functions, like a global function table, and each node knows it's going to call some function from it.
00:42:52
Speaker
But when we compile the node, it doesn't know which function it's going to call. And then what we do in the compiler is fill in those holes. We say: node "color it
00:43:04
Speaker
red", you call the box function. And like this, we can chain together nodes. So the box is colored red. Yeah.
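Here's a minimal sketch of that "fill in the holes" linking step: a table of function pointers that the compiler writes into, with execution just chasing the pointers. The slot layout and node functions are hypothetical, for illustration only:

```rust
// A node slot is just a function pointer; "linking" means writing the
// right pointer into each slot.
type NodeFn = fn(f64) -> f64;

fn double(x: f64) -> f64 { x * 2.0 }
fn add_ten(x: f64) -> f64 { x + 10.0 }

struct Program {
    // One slot per node; the compiler fills these in at link time.
    table: Vec<NodeFn>,
}

impl Program {
    fn run(&self, input: f64) -> f64 {
        // Execution chases the pointers in order.
        self.table.iter().fold(input, |acc, f| f(acc))
    }
}
```

Relinking (swapping a pointer in the table) changes what the program computes without recompiling any of the node functions themselves.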
00:43:16
Speaker
Okay, that makes sense. You've got... Yeah, dynamically swapping that. I should also take a moment to explain, because you said the red color calls the box. I don't think we've properly explained this so far, but conventional node graphs would take the box, then color it red, then output it.
00:43:32
Speaker
But in our case, because we are starting out from the fully compiled program that's been linked together, we have just a single blob that can be called with an argument. And that argument is going to be the resolution you're viewing it at and the location you're viewing it at.
00:43:45
Speaker
And then that calls, let's say, the red function, the "color the box red" function. And you're saying, okay, I need data from this "color it red" function.
00:44:00
Speaker
And that "color it red" function says, okay, we need a shape from somewhere, but we don't know what the shape is yet. We're going to call the box generator function, and it's going to take the viewport information as well, the bounds that you're rendering at. Then that box generator function says: you're on screen, or you're off screen.
00:44:21
Speaker
If you're on screen, it produces the coordinates, the actual vector path, gives it back to the "color it red" function, and then the "color it red" function has the data it needs to color it red and return that back.
00:44:33
Speaker
So we're going both directions. We're starting out from the final program output and calling it with the viewport information. And then if something's on screen, we render it. And if something is not on screen, we actually return no data because that becomes culled.
00:44:50
Speaker
And then it returns no data. And the color red function has nothing to return as well. So it also returns empty data. And the result is you have nothing rendered because there was nothing on screen.
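That pull-based evaluation with culling can be sketched like this. The viewport is simplified to one axis, and the node names are hypothetical:

```rust
// Pull-based evaluation: the output end of the graph is called with the
// viewport, and each node asks upstream for the data it needs.

#[derive(Clone, Copy)]
struct Viewport { min_x: f64, max_x: f64 }

struct Box2D { min_x: f64, max_x: f64 }

// Generator node: produces geometry only if it intersects the viewport.
fn box_generator(view: Viewport) -> Option<Box2D> {
    let shape = Box2D { min_x: 10.0, max_x: 20.0 };
    if shape.max_x < view.min_x || shape.min_x > view.max_x {
        None // off screen: culled, no data flows downstream
    } else {
        Some(shape)
    }
}

// Downstream "color it red" node: nothing upstream means nothing to color.
fn color_red(view: Viewport) -> Option<(Box2D, &'static str)> {
    box_generator(view).map(|shape| (shape, "red"))
}
```

Calling `color_red` with an on-screen viewport yields colored geometry; with an off-screen viewport, the `None` propagates and nothing is rendered.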
00:45:01
Speaker
Right, yeah. This makes me wonder: is there a future where it's possible for me to say, I've designed a graphics pipeline in Graphene, and (what's a good example?) I would like a program that takes a photo, a JPEG I give it at random, and I call it on the command line and it puts a happy little frame around the edge and makes it grayscale.
00:45:27
Speaker
Could I use that as my graphics processing language rather than just the UI tool? You hit the nail on the head there. You can compile it out as a standalone CLI program and invoke it as you would otherwise use ImageMagick for a similar effect. Yeah, yeah. Because I've used ImageMagick for exactly this.
00:45:44
Speaker
Yeah. You can design it in the editor, a nice WYSIWYG (what you see is what you get) editor, do everything you want, and expose your inputs. You could also have text as an input and have a birthday card generator that takes the name of the person, their age, their photo, and their hometown or something, and creates a nice personalized thing that you can then compile to a standalone program, invoke as a CLI program, or run as a
00:46:10
Speaker
process that runs as a web server and responds to webhooks or API requests, like HTTP GET requests. There are a number of ways you could compile this out. This is obviously still the future we're talking about, but that is very much exactly what we have planned.
00:46:25
Speaker
I actually did that for my bachelor's thesis. I did some performance optimization with GPU programming, and one of the things I did was write a small Graphene CLI, which allows you to input a .graphite file, so a textual description of the node graph, provide it with input, and it generates the output for you.
00:46:49
Speaker
So we do have experimental functionality for that implemented. It is definitely something that you will be able to do, and a use case we want to support.
00:47:02
Speaker
Yeah. How far will that go? Because I'm thinking about the games world and how much of their work is asset pipelines.
00:47:13
Speaker
Could I define a scene that said, okay, if I change my Blender model, please re-render a hero shot for the main character. I'll give you a Blender model. These are the rules to make it look like a pretty thumbnail.
00:47:28
Speaker
I think that example... well, not exactly that example, but that was one of the things that inspired the quote: one thing that Graphite does not have is a lack of ambition.
00:47:40
Speaker
Because we did think about: if I have a 3D model, couldn't I just plug the 3D model into the node graph, have a node that renders it to a picture, and then apply some other things to it?
00:47:56
Speaker
And our answer was: yeah, we could do that. That's perfectly feasible. You could just use the Blender model as an input to the node graph, then render it, set the camera settings.
00:48:13
Speaker
We would, of course, use a pre-existing renderer. We would need to use something like the Blender renderer and wrap that in Graphene as a node, but you could use that as one process.
00:48:29
Speaker
And if you change the 3D model, you could just regenerate the hero shot, apply post-processing, et cetera. And you could even do a couple of weird things with this.
00:48:41
Speaker
One is you could still use the adaptive resolution system, where as you zoom into your 2D viewport inside of Graphite, inside the editor, that changes the camera render parameters for the 3D scene.
00:48:52
Speaker
And you end up rendering the 3D scene with as much resolution as you need. You never have to pick an arbitrary resolution to render at. You can just keep zooming in, and it'll keep rendering the 3D scene at that viewing resolution.
00:49:03
Speaker
The other weird thing you can do, because 3D scenes are composed of triangles: you could actually render not to raster content, but render every single triangle as its own vector triangle layer, and then start modifying the 2D projection of that scene with your vector effects. Okay, yeah, that would be a horrifyingly large SVG file, but in theory... But there could be use cases where that's actually useful. Like if you wanted to create, let's say, a torus, and you want to take that torus and get sort of a 2.5D effect, you could then start modifying that torus
00:49:41
Speaker
in vector land. It's perhaps an easier way to render something out of 3D into 2D but keep it as a vector. Maybe you don't have every single triangle be separate; instead, the view of all the triangles in the entire model becomes a single vector path,
00:49:57
Speaker
where it's not a bunch of triangles, it's just one single shape composed of all the triangles, but you can still take that into vector land. And that's perhaps a useful way of creating an outline or a silhouette of something that would be harder to draw by hand.
00:50:10
Speaker
Or you could then rotate it. Because we'll have animation support, going with that ambition part, we can have animation where it rotates that object every frame but keeps converting it to vector, and then you can do subsequent vector effects.
00:50:25
Speaker
Yeah. Presumably there is a future where I could handwrite my own Rust functions and say, you know what, I'm going to have my source image data be a REST query or a SQL query.
00:50:39
Speaker
Yeah. OK. This is fun, because I have done stuff like this in shell scripts with ImageMagick, and it's possible, but it's not fun. Yeah. And one of the unique things about Graphite is, well,
00:50:55
Speaker
first of all, one of the main considerations I had while designing the Graphene language is that I wanted to give the user as much power as possible.
00:51:06
Speaker
We never want to limit the users in what they can do. That's also one of the reasons why we don't have a traditional runtime. Usually you would have a runtime: you execute one node, then you manually pass the data to the second node, and then you run the second node.
00:51:24
Speaker
We don't do that. We don't have a runtime. It's all linked together as a function, and the users could theoretically write their own runtime. They could modify how nodes are executed.
00:51:38
Speaker
And basically all of the features we've added to Graphene, to our language, have been features that are implemented as nodes. So that's one of the sort of fundamental principles. We want to have a simple language and build the language features out of the language itself.
00:52:00
Speaker
And this paradigm is very powerful. For example, think about batch processing. In Graphite, we will never have to implement batch processing.
00:52:12
Speaker
Batch processing is basically just: we have a folder of input images, and we want to apply this operation to those input images. For example, if you're a photographer and have a speck of dust on your sensor, you would apply the magic eraser tool at that location.
00:52:28
Speaker
And in Photoshop, you have this batch processing: input a file, apply the operation, get the output, and save the output.
00:52:39
Speaker
What you could do in Graphite is take a sample image, apply the operation, and that's then a program: a program from input image to output image, with the magic eraser applied. Then you take a step back and think about everything you did, all the editing operations, as a program, and wrap that in a second program, which has a node that reads all the files in a folder, applies your edits as a function to those images, and then saves the output again.
00:53:23
Speaker
So something like batch processing is just an emergent behavior of our node specification. Yes, I can start to see now. The big ambition here is to make image editing a language first, with all the power that a programming language brings to solving problems.
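That "batch processing falls out for free" idea can be sketched like this. The image type and the edit function are placeholders, assumptions made just for illustration:

```rust
// The user's whole edit graph is already a function from image to image,
// so "batch" is just mapping that function over many inputs.
type Image = Vec<u8>;

// Stand-in for the user's edits (e.g. a dust-removal retouch).
fn user_edit(mut img: Image) -> Image {
    for px in img.iter_mut() {
        *px = px.saturating_add(1); // placeholder for the real edits
    }
    img
}

// The "batch" node knows nothing about images: it's plain function
// application over a collection.
fn batch(images: Vec<Image>, edit: impl Fn(Image) -> Image) -> Vec<Image> {
    images.into_iter().map(edit).collect()
}
```

A real pipeline would read the images from a folder and write the results back out, but the batching itself is nothing more than this map.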
00:53:42
Speaker
Exactly. And not just file export. You could also export to a database, because I know you just mentioned SQL queries. Same for a spreadsheet. Let's say you're designing a trading card game: you have different images, different text for the actual title of the card, different stats and abilities, different levels, flavor text, all the different information. You could build your entire catalog of cards in a spreadsheet, like an Excel spreadsheet or a CSV file, take that, run it through this batch processor, and output PDF files of every single
00:54:18
Speaker
card in your entire game, or put them into a grid that can be sent to the printer to be printed, one card per grid cell within a page, as the specification of the printer requires. Finally, for next year's RustConf, we have a practical way of making individual Pokémon-style conference passes. That would also be a really good use case. Yeah.
00:54:47
Speaker
Okay. So I'm convinced about the language angle. Let me step back up into the other hard Rust thing that occurred to me, which we have to get to to complete the picture: I can see you write this language and compile it to Wasm, all hunky-dory, but then you've got to worry about building a UI on top of this.

Graphite's User Interface and Design

00:55:09
Speaker
And that seems like something that isn't fully fleshed out in the Rust world, and is taking you into JavaScript land in the Wasm world. How have you tackled that? That's me. I'm the web developer.
00:55:22
Speaker
I'm also the UI designer, so it's pretty iterative. It doesn't have to go back and forth between people in different departments, since I'm the one designing the UI and also implementing it. But it helps that I have a web development background.
00:55:35
Speaker
But is the code base a mixture of Rust and JavaScript, or is it all Rust? Yeah, 90% of the code is Rust, but then there's that other 10%, the web code, which I pretty much entirely wrote myself.
00:55:49
Speaker
And the way that works is we have some TypeScript, which of course compiles into JavaScript, and we have Svelte. Svelte sort of takes the Rust model of doing as much of the heavy lifting as possible at compile time: wherever possible, do the work at compile time instead of runtime, to make it faster, ultimately, to run on the user's computer.
00:56:08
Speaker
But Svelte transforms a combination of HTML and JavaScript. So you have a single Svelte file that defines a component, like a number input or a text field or a slider or a dropdown menu.
00:56:21
Speaker
And I've built 30 or 40 of these components, implementing our UI design system for all these different widgets.
00:56:35
Speaker
Each of these is just a component, and Svelte's job is to transform the HTML combined with the CSS, with some templating built in, so you can actually update different pieces of data live. So the numbers you're seeing in the number input, for example, change live.
00:56:48
Speaker
But we try to keep this as lightweight and limited as possible. That way, the actual Svelte files are very small; the average size of one of these files might be 50 lines or something. Some of them are a little bigger.
00:57:05
Speaker
And they take a message that's passed from the Rust world and subscribe to it. So for example, we have a layout handler, and the layout handler will take a message received from the Rust world that says: apply this diff of component changes.
00:57:25
Speaker
So if we have a number input, it will say, okay, replace this number widget with a different number widget that has a different number, and then it displays that. Svelte receives the request to change it, and it will go and actually do the update to the web DOM. The DOM is the tree of elements, the actual HTML living in your browser and rendered by your browser.
00:57:49
Speaker
So it keeps things pretty lightweight, because it's just passing a message saying, replace this widget, and the widget gets replaced. The Svelte code has compiled, at compile time,
00:58:00
Speaker
the exact API calls to swap out the text in that field, for example. Okay, so it's driven by events from Rust land. Yeah, and that's actually very useful, because we have a well-defined interface for communicating from Rust to JavaScript.
00:58:22
Speaker
For those familiar with Rust, we actually use nested enums. So we have our frontend messages and, in general, message delivery. That's also an interesting topic, but not quite relevant.
00:58:34
Speaker
So it's a bit of an aside. But we have a well-defined interface to communicate from Rust to the frontend, and we try to keep that as frontend-agnostic as possible.
00:58:48
Speaker
So the idea is that in the future, when we have a native backend, we can use the same interface to communicate with the native backend as we do for the current web frontend.
00:58:59
Speaker
No, native frontend, sorry. Same interface for communicating with the native frontend or the web frontend. And that's a nice layer of abstraction and a very clearly defined interface.
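A minimal sketch of that nested-enum message interface might look like the following. The variant names and fields here are invented for illustration, not Graphite's actual message types:

```rust
// The backend emits well-defined messages; any frontend (web today,
// native later) interprets the same types.
#[derive(Debug, PartialEq)]
enum FrontendMessage {
    UpdateLayout(LayoutMessage),
    UpdateViewport { zoom: f64 },
}

#[derive(Debug, PartialEq)]
enum LayoutMessage {
    // Diff-style update: replace one widget's displayed value.
    ReplaceNumberInput { widget_id: u32, value: f64 },
}

// A frontend's job is just to pattern-match and apply each message.
fn describe(msg: &FrontendMessage) -> String {
    match msg {
        FrontendMessage::UpdateLayout(LayoutMessage::ReplaceNumberInput {
            widget_id,
            value,
        }) => format!("widget {widget_id} -> {value}"),
        FrontendMessage::UpdateViewport { zoom } => format!("zoom {zoom}"),
    }
}
```

Because the enum is the whole contract, swapping the web frontend for a native one means writing a new `match` over the same message types, not changing the backend.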
00:59:46
Speaker
And so if you click on an actual button, the button is going to receive that click, and it's going to say: oh, I'm supposed to zoom in. It's the zoom button. You click the zoom button, and as soon as it's clicked, its callback, its handler, sends a message into the Rust world.
01:00:00
Speaker
And that's how we invoke the Rust side of the architecture: the call stack, your actual call stack if you're looking in the debugger, for example, begins with the click event in JavaScript, and then you run a couple of functions, and those functions pretty much immediately call into the Rust world.
01:00:21
Speaker
And then you've got the WebAssembly running, so the actual VM that runs the WebAssembly code does all its stuff: it circles around through our messages in the backend, does all the rendering, makes any changes that are needed, and ultimately produces changes that have to be sent back to update the data in the frontend.
01:00:40
Speaker
But again, we keep the frontend very, very lightweight. We've seen a number of people who have tested out Graphite commenting on public forums that we would normally not even be reading, because we're not part of that discussion.
01:00:53
Speaker
But I've read people organically, in the wild, talking about how they feel this doesn't have the unresponsiveness that a usual web app does, because of our careful attention to the architecture to make sure we have as much responsiveness as possible. It feels more like a native app, as people have been describing it, compared to feeling like a web app.
01:01:14
Speaker
Yeah, but even though it's all running in the browser, ultimately it's kind of a thin-client, thick-server architecture. Exactly. Okay, yeah.
01:01:25
Speaker
I can see how that would leave you mostly in Rust land, where you seem to be very happy. Yep. So I'm thinking about getting my hands on this.
01:01:39
Speaker
And my first question with that is actually a more technical one. Is this open source? Can I start hacking around with the code and maybe writing my own functions yet? Absolutely.
01:01:50
Speaker
Tell me how I would do that. Yeah, so I'm the project manager; I can talk about the contribution parts. It's all open on GitHub. Additionally, our website has a reasonably extensive developer documentation section: if you click on the volunteer button at the top, then go to the contributor guide.
01:02:11
Speaker
It tells you how to actually install the program and set up your development environment. We're using pretty much the common ecosystem tooling, so obviously you need to have rustc installed, the Rust compiler, and the toolchain that comes with it.
01:02:26
Speaker
There are a couple of other programs that are used to combine the web architecture with the backend architecture. It watches for changes to both the Rust code and the JavaScript code.
01:02:39
Speaker
If you change either of them, it will automatically reload the app that you have running on localhost. You just open it in your browser, and the nice thing is that it's very quick: it takes half a second or a quarter second or something to open. Every time you make changes, though, we still have to wait for the Rust code to compile, which is unfortunate; it takes a little while. I wish we could find solutions, and if anyone listening to this knows good tips and tricks for breaking up the code base into smaller parts that can be compiled independently, that would be very helpful to speed up our development workflow.
01:03:14
Speaker
But anyway, we've also got a number of issues that are easy beginner tasks, so people can go and fix bugs, or update how a tool behaves in a certain situation, a certain edge case, or given certain user input.
01:03:30
Speaker
And then we've also got bigger features, especially the language parts. That's more Dennis's land, and I guess he can talk about that in a second. But all the language parts, so Graphene. We also didn't touch too much on GPU compilation, but that's a big part of Graphene as well: getting more of that graphics programming into the Graphene language.
01:03:50
Speaker
He talked a little earlier about how we try to make nothing special in the way the compiler does things, avoiding special-case handling in the compiler. We want to make as much of this built into the language as possible. So actually using the GPU, and compiling your programs into something the GPU can run as a compute shader, all of that is built on the userland side of the Graphene language, rather than being fundamental to the runtime, because we have as little of a runtime as possible.
01:04:28
Speaker
So all of that is basically part of Graphene, but it's the userland side of Graphene, as opposed to the compiler backend. Okay, yeah. So if someone were a Rust expert and knew their way around shaders, they would be a useful contributor.
01:04:43
Speaker
They would be amazingly useful, because we've only got Dennis working on Graphene, and that seems to be where we most commonly run into blockages in meeting our roadmap goals. For example, we want to move on from vector editing to raster editing this year. That's our main goal: to actually start working on raster editing and become more of a Photoshop or GIMP alternative, instead of just being an Inkscape or Illustrator alternative.
01:05:09
Speaker
And that means unblocking the GPU-related compilation, sorry, the GPU-related rendering of tasks, which is all Graphene-land.
01:05:22
Speaker
So, Dennis, get a move on. We are also somewhat blocked on some ecosystem changes. But first of all, you asked whether you can write your own node. That's actually fairly simple.
01:05:37
Speaker
Last year, we redid our entire system for how we define nodes. Now it's basically: you just write a function.
01:05:48
Speaker
You go into a Rust file and write a function, and we then use some special proc macro magic. It feels like just writing a function.
01:05:58
Speaker
But then, magically, it appears in the Graphite UI. And we even auto-generate a settings menu for you, a properties menu, based on the input types your function takes.
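To make that concrete, here is a hedged sketch of the "a node is just a function" idea. The attribute name and syntax shown in the comment are assumptions for illustration, not Graphite's confirmed API; stripped of the macro, the node body is an ordinary, testable Rust function:

```rust
// Hypothetical illustration: in Graphite, a node is written as a plain
// function, and a proc macro attribute (name and syntax assumed here)
// registers it and generates a properties panel from its typed
// parameters, roughly like:
//
//     #[node_macro::node(category("Raster: Adjustment"))]
//     fn opacity(image: Image, factor: f64) -> Image { /* ... */ }
//
// Without the macro, the logic itself is just ordinary Rust:
fn apply_opacity(alpha: f64, opacity_percent: f64) -> f64 {
    // Scale an alpha value by a 0-100 percentage, clamped to a valid range.
    (alpha * opacity_percent / 100.0).clamp(0.0, 1.0)
}

fn main() {
    println!("{}", apply_opacity(1.0, 50.0)); // prints 0.5
    println!("{}", apply_opacity(0.5, 200.0)); // clamps to 1.0
}
```

The auto-generated properties menu described above would come from the parameter types: an `f64` parameter, for instance, can become a numeric slider without any extra UI code.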
01:06:11
Speaker
So that was actually a nice DevEx and quality-of-life improvement that I worked on last year. Oh, nice. Yeah, that also makes it easier for researchers, for example, people with more of a research background in image processing or any other kind of computational geometry. Actually, there are a number of algorithms. For example, one that we really want to figure out is a convex hull of any shape, so any vector shape.
01:06:36
Speaker
So not just a polyline, but an actual vector shape with Bézier curves. Creating a convex hull means the equivalent of taking that shape and wrapping a rubber band around it, levelling off everything that is concave. I've looked in the literature, and I have not found an algorithm that actually does that on Bézier paths.
01:06:52
Speaker
So that is a great research opportunity, amongst many others of a very similar style, where people can just write a single function. It doesn't require learning the rest of the codebase. It's just a single function that implements some kind of image processing or computational geometry algorithm, one of the many things we want from these nodes.
01:07:12
Speaker
So for anyone who wants to implement the computational geometry of a convex hull, it's just a simple function. I can see that being very appealing to the kinds of people who research those functions and then want to make them look good and usable when they publish. Yeah, exactly. It's basically a graphics processing toolbox.
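For context, the polyline case (plain points, no Bézier curves) is a solved problem; the open question described above is generalizing it to true Bézier paths. A hedged sketch of the standard point-based approach, Andrew's monotone chain, which is roughly the starting point such research would build on (this is not Graphite's actual implementation):

```rust
// Monotone-chain convex hull over plain 2D points: the solved polyline
// case. The research problem discussed here is extending this idea to
// Bezier paths, which this sketch does not attempt.

fn cross(o: (f64, f64), a: (f64, f64), b: (f64, f64)) -> f64 {
    // Z component of (a - o) x (b - o); positive means a left turn.
    (a.0 - o.0) * (b.1 - o.1) - (a.1 - o.1) * (b.0 - o.0)
}

fn convex_hull(mut pts: Vec<(f64, f64)>) -> Vec<(f64, f64)> {
    pts.sort_by(|a, b| a.partial_cmp(b).unwrap());
    pts.dedup();
    if pts.len() < 3 {
        return pts;
    }
    let mut hull: Vec<(f64, f64)> = Vec::new();
    // Lower hull: sweep left to right, popping any right (or straight) turn.
    for &p in &pts {
        while hull.len() >= 2 && cross(hull[hull.len() - 2], hull[hull.len() - 1], p) <= 0.0 {
            hull.pop();
        }
        hull.push(p);
    }
    // Upper hull: sweep back right to left with the same rule.
    let lower_len = hull.len() + 1;
    for &p in pts.iter().rev().skip(1) {
        while hull.len() >= lower_len && cross(hull[hull.len() - 2], hull[hull.len() - 1], p) <= 0.0 {
            hull.pop();
        }
        hull.push(p);
    }
    hull.pop(); // the first point is re-added at the end; drop the duplicate
    hull
}

fn main() {
    let pts = vec![(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (2.0, 2.0), (0.0, 2.0), (1.0, 0.5)];
    // The interior points (1,1) and (1,0.5) are discarded; the square remains.
    println!("{:?}", convex_hull(pts));
}
```

The "rubber band" analogy from the conversation maps directly onto the pop condition: any vertex that makes the boundary turn inward gets removed.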
01:07:34
Speaker
Yeah. It's a graphics processing toolbox, which means that researchers, the research community in general, and academia could use it, especially in the future once it's a bit more robust. I think it's actually going to be a pretty common choice, where instead of posting some random code on GitHub as part of their thesis or something, which basically falls into obscurity,
01:07:53
Speaker
it could instead have all the other parts of the ecosystem that would make their lives as researchers easier. And then they could publish it to our future asset store, and people could actually use it instead of it just falling into obscurity.
01:08:06
Speaker
Oh yeah, that would be very cool. Very cool. And that's also one of the... well, we talked about how this is all very ambitious.
01:08:18
Speaker
But that's one of the key insights, and one of the advantages of it being open source. If we provide the tools for people to build something, and they
01:08:31
Speaker
use the editor and think, huh, this node would be really great, they can just build it. That takes the load off our shoulders, and then this all becomes feasible all of a sudden, because building everything ourselves would just be a lot of work. We're basically trying to build a platform and gain momentum, so that volunteers can help contribute and build amazing software together.
01:09:00
Speaker
Yes, that's the other advantage of turning it into a language: you can turn features into libraries. Exactly. That makes a lot of sense. Okay, so final question then. If someone's not feeling quite that ambitious and just wants to use this as an image editor, and I'm going to have to ask you to be very honest about this one:
01:09:17
Speaker
Where is it today? Why would I choose it instead of a different image editor today, and where would I not choose it? Yeah, so six months ago, the answer was basically, to be totally honest, that it was more of a toy, more of a prototype.
01:09:30
Speaker
It was not something that I would necessarily have recommended people use as part of their ordinary day-to-day workflow, because it just had so many cases where, with the type system, you would keep running into weird edge cases where two nodes are just not compatible with each other for no particular reason. That was six months ago.
01:09:48
Speaker
And additionally, the performance. Dennis did a huge number of optimizations over the summer, and before then you couldn't really do much with it, because you'd pretty quickly hit the performance ceiling
01:10:03
Speaker
of what becomes impossible to work with. But six months down the road, now that Dennis has worked on improving all of that over the summer, it's actually reasonably useful if you're looking for a vector editor without some of the very advanced features. There are definitely things you can do in Inkscape or Illustrator that we just haven't gotten to yet. For example, if you draw a shape,
01:10:26
Speaker
any random shape, and you want to round its corners, you'll have to go and add a node for that, as opposed to just using a tool. That kind of thing might feel a little less intuitive, but it's actually not that hard: you just click a single button, and it adds a node that rounds the corners. But we have a limitation there as well; for example, you can't change how much to round each individual corner. It has to be the same value for all your corners.
01:10:52
Speaker
So it's little things like that where you might run into a problem we haven't supported yet. But for all the general things you need for vector editing, I'd honestly say you're going to have an easier time jumping into it for the first time, because the interface is just much more intuitive. You've simply got the tools on the left; you click on the different tools,
01:11:11
Speaker
and those are just the traditional tools that you find in Illustrator, and I guess even Inkscape probably has the same tool library as well. You have a big canvas in the middle. There are very few buttons to distract you or make you feel confused or overwhelmed. You just draw some things; you've got a layer panel, and some properties to change the color of something after the fact. It's a very streamlined, simple editor that I think is pretty useful for a lot of things. If you're doing a laser cutting or vinyl cutting project, you could use it for crafting and CNC work; you can also use it for graphic design,
01:11:43
Speaker
pretty much anything that isn't overly advanced. Or, if you want to get advanced and start using the node catalog to do things that you actually cannot do in Inkscape or Illustrator or any other vector editing program, you can start doing procedural effects. For example, what I created was a morph between different potion bottles.
01:12:05
Speaker
So, different potion flasks. It could morph between a tall, stout, rectangular one, a very swoopy bulbous one, and a triangular one. And I could also morph the level of the green potion liquid inside it up and down, just by dragging sliders around. You can check out our social media profiles, where we've posted that, if you want to see the video. But it's something you just could not do in any other vector editor at all.
01:12:40
Speaker
So that's the kind of thing, if you really want to take a more programmatic approach, taking your developer background and putting it towards creative graphics editing. That world of programming to make visual things is usually called creative coding, but also generative art.
01:12:59
Speaker
And yeah, we would really find it very valuable to see people creating those kinds of art, because that helps us figure out what the use cases are.
01:13:10
Speaker
There are so many possibilities that we just never thought of. If we can add a few little nodes or a few little settings to make those kinds of things easier, we would love to see more use cases from people, and that informs the design.
01:13:24
Speaker
Yeah, that would actually help me as well, because I make thumbnails every week for YouTube, and I have exactly the same process every time, and I do it from scratch every time. Yeah.
01:13:37
Speaker
There's one caveat, well, two disclaimers to this. First of all, we don't have a stable document format yet. We're still in flux; we're working on that, trying to figure out the best possible format that doesn't lock us in in the future. Currently, we can't guarantee that something you worked on six months ago will work in the most current version of Graphite.
01:14:03
Speaker
That is the first thing. The second is that while we do support image operations, currently they might be a bit slow. That's with pixel-based raster data.
01:14:16
Speaker
Yeah. So you would need to test that out to see if it works for you. But those are the two disclaimers I have. What kind of slow are we talking? Is zooming not going to be buttery smooth anymore, or is the user interface going to chug? It depends, partly on the resolution of the image you're talking about, because with big 4K images we also at some point run into memory limits in the browser. Currently, WebAssembly only supports up to 4 gigabytes of memory.
01:14:51
Speaker
And if you have raw images and very big images, that's going to max out your RAM pretty soon. Yeah, that makes sense. And those are the sorts of things we're working on.
01:15:05
Speaker
The solution we have going forward is two things. One is that we will load all of that data onto the GPU, so it doesn't have to exist on the CPU, or at least if it does, it can live on the JavaScript side as opposed to in the WebAssembly memory.
01:15:18
Speaker
So that will allow us to get over that limitation. But also, I believe they're rolling out support for 64-bit memory addressing in WebAssembly to different browsers over time. I don't know the exact status on that, but I do know it's coming down the ecosystem pipeline. Yeah, it's the WebAssembly Memory64 proposal.
01:15:33
Speaker
Okay, yeah. So it sounds like you have a lot of work ahead of you, but it's all tractable, right? It's all possible.

Open Source and Contribution Opportunities

01:15:41
Speaker
On which note, I feel like maybe the biggest blocker to your productivity is Dennis, and I should unblock you.
01:15:50
Speaker
I would say Dennis's time working on Graphene is usually the biggest blocker for our continued work on the editor. Okay, excellent. In that case, I will leave you to carry on with it. Dennis and Keavon, thank you very much for taking me through it.
01:16:04
Speaker
Yeah, thanks a lot. It was a great time. Now, can I clone Dennis and get ten more of him? That would be wonderful. We'll just pull him up ten times on the screen right now as we close this out. He could just talk with a slight delay for each of him. I'm adding extra work for you to edit now.
01:16:21
Speaker
If you feel like you might be just like Dennis, contact us on this number.
01:16:28
Speaker
Cheers, folks. Thank you, gentlemen. If you want to try Graphite, head to graphite.rs. There's a desktop app and you can also run it in the browser. Link in the show notes as usual.
01:16:39
Speaker
I have to say, when I first heard about this project, I wasn't expecting something that looked quite as polished as it does. They have done an excellent job. So hat tip to them. And I hope Graphite has a bright future.
01:16:52
Speaker
Give it a try. But before you do, if you've enjoyed this episode, please take a moment to like it, rate it, share it, discuss it over dinner in excruciating detail, even though your partner isn't actually interested in programming.
01:17:08
Speaker
Or is that just me? Either way, we'll be back soon with another episode, so make sure you're subscribed for that. But for now, I've been your host, Kris Jenkins. This has been Developer Voices with Keavon Chambers and Dennis Kobert.
01:17:20
Speaker
Thanks for listening.