
Raspberry Pi Hardware & A Lisp Brain (with Dimitris Kyriakoudis)

Developer Voices

Dimitris Kyriakoudis is a researcher, programmer and musician who's combining all three talents to build dedicated music hardware: specifically, a device called the uSeq, which reads Lisp programs and uses them to drive synthesizers to make music. In this episode we go through the full platform he's building, from soldering resistors to a Raspberry Pi chip, up through writing a Lisp interpreter, to the design ideas that make Lisp a good choice for composing both software and music.

uSeq Homepage: https://www.emutelabinstruments.co.uk/useq/

Emute Lab’s Homepage: https://www.emutelab.org/

Buy a uSeq: https://www.signalsounds.com/emute-lab-instruments-useq-live-coding-voltage-generator-eurorack-module/

Build a uSeq (DIY Kit): https://www.thonk.co.uk/shop/emute-lab-useq/

SICP (book): https://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Computer_Programs

Machina Bristronica (expo): https://machinabristronica.uk/

Sonic Pi: https://sonic-pi.net/

Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices

Support Developer Voices on YouTube: https://www.youtube.com/@developervoices/join

Kris on Mastodon: http://mastodon.social/@krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Kris on Twitter: https://twitter.com/krisajenkins

0:00 Intro

2:20 What is µseq?

5:40 Live Coding As Another Instrument

17:42 Why Choose Lisp?

25:03 Different Dialects For Different Musical Tasks?

32:34 Live Coding As Academic Research

44:11 How Do You Fabricate Production Hardware?

49:00 The Triple-E Triangle

1:09:53 How Well Has This Theory Worked Out?

1:20:01 What's This Like To Play Live?

1:25:17 Comparisons With Sonic Pi

1:33:06 Outro

Transcript

Nostalgia for the 70s: Programming and Prog Rock

00:00:00
Speaker
Some days I wish I could go back to the late 70s in programming where you still had tiny teams building the programs and the programming language and the operating system and the hardware it all ran on as a single project.
00:00:16
Speaker
Also, some days I wish I could go back to the late 70s and just join a prog rock band. And this week we sort of get to do both. I'm joined by Dimitris Kyriakoudis, and he is co-building Raspberry Pi hardware that connects to synthesizers but runs a Lisp environment. So you drive the instruments and play music by writing Lisp code.
00:00:40
Speaker
It's a project that combines software and hardware, and some long-term research theories about human-computer interaction, with the terrifying immediacy of live coding in front of an audience that wants something to dance to. It's a wonderfully huge but self-contained project, and a truly full-stack one.
00:01:04
Speaker
And on top of that, it's an exploration of the things we still have to learn about the way we write software.

Building Music Hardware with Dimitris Kyriakoudis

00:01:11
Speaker
Before we begin on this one, full disclosure: after we set a date to record the episode, Dimitris offered to send me the parts to build the hardware we're going to talk about, and I happily accepted. I spent about an hour soldering it all together, and I've been having great fun with it, coding up some electronic music and making noises to slightly annoy my wife.
00:01:34
Speaker
And I like to think I am journalistically incorruptible, but if you did want to corrupt me, sending me musical instrument hardware would be a great start. So I'll just say upfront: having this definitely helped shape the parts of the project I wanted to ask about.
00:01:51
Speaker
But beyond that, Dimitris had no influence on the content, no warning of the questions that were coming. As always, I'm just here picking the brains of another brilliant software creator. So let's get started. I'm your host, Kris Jenkins. This is Developer Voices, and today's voice is Dimitris Kyriakoudis.
00:02:22
Speaker
How are you doing? Very well. How are you? I'm very well. We're onto one of my favorite topics, which is the interface between computers and music. Oh, absolutely. That's my favorite topic too. So I think this is going to be fun. Yeah, we're going to have a good discussion. So, let me say it in summary and you can unpack it with me, right? You are building specialized computer hardware for live coding of music.
00:02:51
Speaker
Yes, and I would also add specialized software on top of that. Specialized software too, yes. So unpack the music side of it first, because that's the least familiar to the audience.

What is Live Coding in Music?

00:03:02
Speaker
What kind of music? Where? How?
00:03:06
Speaker
Ah, so the kind of music is up to the user, up to the musician who uses these technologies. The technologies in question are live coding, which is a predominantly computational practice, and analog modular synthesis, which in a lot of ways is also fundamentally a computational practice, though people don't really think of it as such. But a modular synth... well, let's start from what a synthesizer is. A synthesizer is either a digital program or an analog circuit that produces sound. The word itself comes from the Greek synthesis, which means to put things together, to compose; it's the Greek analog to the Latin compose.
00:03:52
Speaker
So a synthesizer is a device that creates sound. Traditionally, they've been used since the 60s or 70s, depending on how far back you consider scientific equipment to have been synthesizers.
00:04:09
Speaker
And they've been used in all sorts of genres of music, and interfaced with in all sorts of different ways. There's the traditional piano-style keyboard that a lot of people use to play synthesizers. Modular synthesizers mostly have controls in the form of encoders and buttons and dials and faders. But all sorts of different controllers can be used to control the parameters of a synthesizer.
00:04:41
Speaker
So the reason we're bringing live coding into this... and perhaps it would be good to unpack the term live coding itself, because that's another very loaded term and a very loaded practice. Live coding simply refers to interacting with a computer program through its source code, its actual implementation, as the program is running. So it's the dynamic interaction with
00:05:12
Speaker
the source code that generates a process, as that process is unfolding in real time. It's been used by creatives to make music and real-time visuals, but it doesn't need to be a creative practice; people have live-coded physics simulations and other kinds of algorithms just to get as short a feedback loop as possible. Right, so you are...
00:05:43
Speaker
I'm thinking of a musician who sits down at a piano and thinks, OK, I'm going to play a 12-bar blues on the keyboard with some live improvisation. And what you're doing is the analog of that: I'm going to write a program that does the 12-bar blues, and it will drive the sound engine without a keyboard.
00:06:02
Speaker
Yeah, that is more or less correct. The idea is that you're interacting with a so-called generative process. So instead of the direct manipulation of notes on a keyboard, or strings on a guitar, or even push buttons and dials on a modular synth, you are directly manipulating the code and, therefore, indirectly manipulating the code's output.
00:06:28
Speaker
We are interacting with a live computational generative process. And this is not all that foreign to many electronic musicians. A lot of what electronic music production is nowadays is in many ways generative and algorithmic, whether you use a digital audio workstation or a synthesizer with an arpeggiator or a sequencer. These are just fancy ways of describing algorithms that make certain parameters move over time. Those parameters can be the notes, they can be the type of sound that the notes are played with, or anything anywhere in the whole stack of the synthesis process. Live coding is fundamentally about exposing the computational nature of these algorithms, not hiding them away behind interfaces that are fixed. Right, yeah. So when someone twiddles a knob on the keyboard on stage, you'd be writing a program that does the digital equivalent of twiddling a knob.
00:07:38
Speaker
Exactly, yeah. Twiddling a knob is nothing more than changing the value of a particular parameter. We can think of each module in a modular synthesizer as a function of sorts: it has an API, and we can directly manipulate its arguments, or we can use other modules that generate abstract control signals, which are essentially just numbers that go up and down,
00:08:01
Speaker
and use those to control the parameter. But then we create this almost recursive stack of algorithms that

Functional Programming in Music Composition

00:08:10
Speaker
control algorithms, that control algorithms, that control algorithms. So the whole thing is inherently very computational. Live coding just exposes that and says: don't be afraid of the code. The code is just the representation of what the process is doing.
00:08:27
Speaker
But there's also a sense in which, as well as turning code into music, you're turning music back into code, for that to be the interface we play with. Music into code...
00:08:40
Speaker
Whenever we're talking about a computational, algorithmic music process... and it's important to note that you don't need to be working with a computer to be doing algorithmic composition; Johann Sebastian Bach and others wrote quite heavily algorithmic compositions on clavichords and pianos in the past.
00:09:02
Speaker
Yeah, that's true. So music, in a lot of ways, has always had a relationship with algorithmic thinking, in a lot of cultures. And what live coding tries to do is make that explicit, and maybe assign a clear, formalized vocabulary to all of it.
00:09:24
Speaker
OK, we have to get into the hardware of how this happens in reality. But since we're on the topic of code: how do you describe music in code? What have been your design choices? Yeah, so there are a lot of different paradigms and ways of thinking about the computational nature of music, and in a lot of ways they're informed by the various approaches, schools of thought and paradigms of programming itself.
00:09:57
Speaker
So there's almost a one-to-one mapping between object-oriented programming and some ways of algorithmically formalizing music, and likewise for functional programming. The design choices that we've made are very closely aligned with functional programming, and in particular with purely functional programming, which maps to music in a very interesting way when you start considering music to be a function of its domain, which is time. Part of how we designed the language we're using had to do with pushing that to as literal a level as possible.
00:10:44
Speaker
Can we write algorithmic music by literally writing it as a function of time? It's a function of one argument, one parameter, which is time itself in seconds. And as you keep evaluating that function with that argument increasing over time, starting from zero and going on to infinity, until we stop it,
00:11:04
Speaker
then the behavior of the music and the instrument is the concatenation of the output values of that function for every point in time.
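To make that idea concrete, here is a minimal sketch in portable Scheme; the actual ModuLisp vocabulary differs, but the principle is the same: a piece is a pure function of time.

```scheme
;; A "piece" is a pure function of time (in seconds) to a control
;; value in [0, 1], the kind of signal you'd send to a synth input.
(define pi 3.141592653589793)

;; A 0.5 Hz sine LFO, rescaled from [-1, 1] to [0, 1].
(define (piece t)
  (* 0.5 (+ 1.0 (sin (* 2 pi 0.5 t)))))

;; The runtime just re-evaluates the function as time advances:
(piece 0.0)  ;; => 0.5
(piece 0.5)  ;; => 1.0
(piece 1.0)  ;; => 0.5
```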
00:11:15
Speaker
So I would assume that everyone can imagine music as a program that takes time as an argument and is a switch statement: if it's less than 10, we're doing the intro; if it's 10 to 20, verse, chorus, verse, chorus, outro. How on earth do you break that down into the individual adjustments of notes on a musical instrument, the individual parameters you're twiddling?
00:11:46
Speaker
Yeah, that's a great question. The short answer is that it depends on how you want to structure time. At the end of the day, time is just a linear flow, and it's up to us how we group it, how we twist it, how we make it either a repetitive process or a completely chaotic one, or anything in between. So, in terms of notes, let's take the piano as an example of an instrument. We have 88 keys, and each key has a fixed frequency: each key has a hammer that strikes a specific string, or usually a set of strings, tuned to that frequency. And so a piece of music played on the piano
00:12:34
Speaker
can also be thought of as 88 parallel streams of time-varying signals. Each one has its own fixed frequency, and what varies is just the amplitude at which that frequency is audible. What happens when we press a key on the piano is that we introduce a little spike in the amplitude of that particular frequency at that particular moment in time, and then the string's natural decay brings it back down to zero.
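As a toy model of just one of those 88 streams, in the same Scheme-sketch style (the names are illustrative, not from the uSeq firmware):

```scheme
;; One key's amplitude stream: silent until the hammer strikes at
;; `onset`, then a spike that decays exponentially back to zero.
(define (struck-string t onset decay-rate)
  (if (< t onset)
      0.0
      (exp (- (* decay-rate (- t onset))))))

(struck-string 0.5 1.0 4.0)  ;; => 0.0    (not struck yet)
(struck-string 1.0 1.0 4.0)  ;; => 1.0    (the spike)
(struck-string 1.5 1.0 4.0)  ;; => ~0.135 (decaying away)
```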
00:13:03
Speaker
And so in a lot of ways that's just a graph that we can describe with maths, and have those spikes be generated as the time argument flows forwards. I'm sure that works in theory, but in practice it absolutely doesn't work if you've got 88 people with one finger each.
00:13:22
Speaker
Never mind getting the room for them; they couldn't possibly coordinate it. Oh, absolutely not. And what a pianist actually tends to do is coordinate groups of five things, right? Left hand, right hand. That's exactly correct. And that's where live coding comes in: you become the pianist who coordinates these things. Of course, you don't write 88 functions and go between them frantically changing parameters to play Für Elise or something like that.
00:13:51
Speaker
The idea is that you start building this tower of abstractions, and each layer pushes you towards a slightly higher level of how you want to be thinking about what's going on. You don't want to be thinking about 88 keys; you want to be thinking about a chord, or a melody, or a key modulation.
00:14:11
Speaker
And so, programmers being programmers, we can build these layers of functions on top of functions that take very high-level descriptions, such as 'play this chord for a bit, followed by that chord', and compile them down, if you will, to very specific instructions for different parts of the sound-making process. Right, yeah. So that starts to give me an idea of how you build up sound. I'm going to write a function that takes the time and a root note and plays root, third, fifth, third, over and over again. Yeah, exactly. And as I change either the time parameter or the root note parameter, I will get some kind of bassline, which I then package up and leave running while I focus on something else on top.
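That root-third-fifth idea fits in a few lines of the same sketch style (a hypothetical helper, not ModuLisp's actual API):

```scheme
;; A bassline as a function of time and a root MIDI note:
;; cycles root -> third -> fifth -> third, one step per second.
(define (arpeggio t root)
  (let* ((step   (modulo (exact (floor t)) 4))
         (offset (list-ref '(0 4 7 4) step)))  ; root, 3rd, 5th, 3rd
    (+ root offset)))

(arpeggio 0.0 48)  ;; => 48 (the root, C2)
(arpeggio 1.5 48)  ;; => 52 (the major third)
(arpeggio 2.0 48)  ;; => 55 (the fifth)
```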
00:15:02
Speaker
Exactly. So live coding tries to condense the feedback iteration cycle as much as possible, but there's still an element of: I need to take a few seconds to think about what I want the music to be doing initially. And then once you've built that up in code, you evaluate it; there's an interactive, REPL-like evaluation loop.
00:15:24
Speaker
When you evaluate that and send it over to your system and it starts producing music, you're free to start thinking about how to proceed from there. Depending on the complexity and how elaborate the design of the code is, you may end up with code that does something that repeats every few seconds, so you start feeling the pressure of: OK, this has been repeating for a while and now I need to change it. Or you can just as well write code that does something over minutes or hours. So you can have a live coding piece that's a page of code where you press play and it does everything, or you can have a line of code that you keep for a while, and when you want a certain change to occur, you make that change yourself and re-evaluate it.
00:16:14
Speaker
OK. So whilst you could pre-bake the program, write the whole thing, get it how you like it and play it, you're sort of expecting a live coding session where things are written, run for a bit, and then thrown away to make room for the next thing.
00:16:29
Speaker
Yeah, exactly. So there's almost a spectrum of how much of the performance, or the piece, if we're talking about a studio production piece, has its over-time behavior encoded in the code, versus how much of it manifests as and when I make changes to that code. A particular piece of music or a particular performance can exist anywhere on that spectrum. We can have: I've written the whole thing down and I just need to press play, and all of the over-time
00:17:02
Speaker
behavior is encoded in the code that's already there; or the complete opposite end of the spectrum, where every single change needs to be made manually. Of course, that's very impractical, so people don't usually reach that far into that end of the spectrum. Yeah. You can go all the way from classical to free jazz.
00:17:20
Speaker
Yeah, absolutely, and anywhere in between. And we can play with rigid timing systems, freeform timing systems, or just pure chaos. I mean, computers are great at that: we can just add randomness to anything and everything. Yeah, no matter how it sounds. Well, we can. The question is, do we want to? So let's dig into some technical choices here. You've chosen Lisp as the interface language.

Why is Lisp Perfect for Live Coding?

00:17:50
Speaker
Yeah, so that comes out of a love affair, which is the only way I can describe it, with the Lisp family of languages, one that has been unfolding over the last few years of my life. And there are some highly technical reasons for it, too.
00:18:12
Speaker
Part of the superficial reasons for that decision have to do with Lisp's syntax, and how operators like plus or minus or multiply can take multiple arguments. You mentioned in the beginning that it's almost as if you can have these switch statements that say: well, if t is between 0 and 10, we do the intro, and if it's between 10 and 50, we do something else. How I like to think of this is:
00:18:48
Speaker
I have all of these parts in my program, and each part can in itself describe a process that is potentially infinite. If we have a sine wave, that sine wave just repeats forever. But if I don't want it forever, what I can do is simply nullify it before the beginning and after the end of the section where I want it to be active. Essentially, what that means in code is: multiply by zero.
00:19:13
Speaker
And so I think of a composition as a huge summation of every single part of the composition, where each part gets multiplied by zero whenever I don't want it to be active. That's basically the same thing as muting it on a synthesizer or a mixing board. Yeah. So you are taking advantage of this being a language where everything's an expression, right?
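In the Scheme-sketch notation, that mute-by-multiplication idea might look like this (the part functions here are stand-ins):

```scheme
;; A gate is 1.0 inside [from, to) and 0.0 outside it.
(define (gate t from to)
  (if (and (>= t from) (< t to)) 1.0 0.0))

;; Stand-in parts; in practice each is a real function of time.
(define (intro t) 0.1)
(define (verse t) 0.2)
(define (outro t) 0.3)

;; The composition is one big sum; multiplying a part by its gate
;; mutes it outside the section where it should be audible.
(define (composition t)
  (+ (* (gate t  0 10) (intro t))
     (* (gate t 10 50) (verse t))
     (* (gate t 50 60) (outro t))))

(composition 5.0)   ;; => 0.1 (only the intro is unmuted)
(composition 20.0)  ;; => 0.2 (only the verse)
```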
00:19:35
Speaker
Exactly, everything is an expression. And expressions... because we have this prefix arithmetic syntax, instead of the infix arithmetic operators of C or Python... and this is another, again, completely superficial element to it. But when you're live on stage, and you're projecting your code for however many people you've managed to rope into the live coding event,

DIY Musical Hardware with Raspberry Pi Pico

00:20:04
Speaker
you are feeling the pressure: the pressure of the music being too repetitive, or of the audience being eager to dance more, or whatever it might be. You don't want to have to worry about things like semicolons at the end of statements. You don't want to have to worry about adding commas between arguments. And you certainly don't want to have to worry about taking an expression, wanting to wrap it in another function call, and then having to navigate to the far end to open a parenthesis and to the other end to close it. Lisp makes that easy with slurping and barfing and... Structural editing. Exactly, yeah.
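Two small examples of what that buys you on stage (generic Lisp, nothing uSeq-specific):

```scheme
;; Variadic prefix operators: no commas, no precedence rules.
(+ 1 2 3 4)  ;; => 10

;; Wrapping an existing expression in another call is a single
;; structural edit rather than a trip to both ends of the line:
(define t 0.25)
(sin (* 2 3.141592653589793 t))          ;; a raw oscillator
(* 0.5 (sin (* 2 3.141592653589793 t)))  ;; the same, halved
```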
00:20:49
Speaker
But does it also play into... because one of the other things Lisp is famous for is having a REPL, this interactive development experience. Is that another reason why you've jumped on this for live coding? Yeah, exactly. This is where the hardware that we're running all of this on becomes relevant. The module itself is powered by an RP2040 microcontroller at the moment. A Raspberry Pi chip, right? Raspberry Pi, correct.
00:21:20
Speaker
It's powered by a Raspberry Pi Pico, or an equivalent processor. Which, by the standards of the previous century, is a supercomputer. But it would be tricky running an LLVM compiler on it, or any other large toolchain of that kind. You want a small, light language. Exactly. We wanted a small language: not only one that's light for the microprocessor to run, but one that is very lightweight in its implementation, for
00:22:04
Speaker
people to be able to fit it all in their head. I like to joke that this is the only programming language I've ever worked with where I can understand all of it. I can understand the whole stack of how it works, because it's a few thousand lines of C++ in this case.
00:22:21
Speaker
I was going to ask: so it's written in C++? Yeah. It was forked from a so-called hobby implementation of Lisp, Adam McDaniel's Wisp, I believe he calls it. It's a simple tree-walking interpreter at the moment, although there are plans for a specialized VM that bakes in a little of the time-traveling capacities of this language, which I'm sure we'll get to very soon. And the idea is that the language implementation itself
00:23:03
Speaker
needs to be hackable. It needed to be hackable by us, because we're not industrial compiler engineers, and at the end of the day we want to make music with this. We don't want to spend all of our time hacking on compilers.
00:23:20
Speaker
And so there's this beautiful simplicity that Lisp has in its core design, which I felt was particularly suitable for this kind of project. And then there's one last, particularly relevant reason why we chose it, which is so-called metalinguistic abstraction: the ability to build layers of language on top of language. Essentially, macros. So does that mean you've got macros in the surface language?
00:23:57
Speaker
Yeah. The idea is that there is a stack, which currently is mostly empty, but a stack where users can start inserting and removing layers of macro transformations. You can think of each one as a nanopass in a compiler, where each pass does one thing: it will do constant propagation, say, or it will optimize expressions that are multiplied by zero, so you can just replace the whole thing with zero. But the idea is that different musicians have drastically different needs: not just musical needs, but also
00:24:44
Speaker
user experience and interaction needs. Someone might be very keen on typing away for the whole performance. Someone else might want some high-level structural editing features, so they can jump between the live coding and a different instrument, perhaps, while on stage. And so the idea was: let's create this unifying core. Lisp S-expressions, in a lot of ways, we can think of as just a lingua franca; they just group objects together. So we have that at the core, and then different people with different use cases can build slightly different dialects on top of those layers, and potentially even dynamically add and remove them during a performance. OK, that's interesting.
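A macro layer like that can be pictured as a plain function over code-as-data. Here is a one-rule "nanopass" in the same Scheme-sketch style, folding away anything multiplied by a literal zero:

```scheme
;; Rewrite (* ... 0 ...) to 0, anywhere in the tree.
(define (fold-zero-products expr)
  (cond ((not (pair? expr)) expr)
        ((and (eq? (car expr) '*)
              (memv 0 (cdr expr)))
         0)
        (else (map fold-zero-products expr))))

(fold-zero-products '(+ (* 0 (sin t)) (* 0.5 (saw t))))
;; => (+ 0 (* 0.5 (saw t)))
```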
00:25:33
Speaker
When would that be used? When different musicians take over on the same hardware, perhaps? That could be the case. It could also be... I mean, you can make slightly different mini-languages that you use at the same time, on the same hardware, in the same code editor. The idea is that we think about things like sequencing drums versus composing a bass line versus coming up with chord progressions in slightly different ways, often very different ways.
00:26:08
Speaker
But why should we have to express them in the same kind of language, with the exact same semantics and the exact same syntax? A simple early example was... and this is something that not even most Lisps can do; I'm not even sure if Common Lisp could do it with reader macros. My thinking was: why can I not have a macro that takes in an expression, where the expression can consist of characters and spaces and so on, but the whitespace is significant, so a whitespace indicates a rest, for example?
00:26:44
Speaker
Oh, yeah. Because in most Lisps the reader throws whitespace away, and then all we can work with is what remains, which is symbols and lists and so on. But the idea was: can we leave that possibility open, so that if someone wants to use whitespace or indentation to mean something musically relevant to them, why should we not allow them to do that?
00:27:08
Speaker
Yeah, you can instantly see how 'x x  x' would be a rhythm, right? Exactly; it's almost a rhythm the second you say it out loud. Yeah, absolutely. And it's a visual rhythm as well: you see it spelled out visually, geometrically, in front of you. All of these things are extremely important. They might be completely insignificant for someone writing production code in their office, but if you have the real-time constraints and this almost endless thirst for feedback when you're performing live, you want your computer and your editor to be able to tell you or show you what's going on, as it's going on. And this is, again, where the time-traveling tricks come in: not just as it's going on but, if possible, before it happens. Before something occurs, you might want a bit of a heads-up from your instrument.
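A whitespace-significant rhythm reader could be as small as this (a sketch of the idea, not ModuLisp's actual reader):

```scheme
;; "x" is a hit, " " is a rest; the spacing *is* the rhythm.
(define (parse-rhythm str)
  (map (lambda (c) (if (char=? c #\x) 'hit 'rest))
       (string->list str)))

(parse-rhythm "x x  x")
;; => (hit rest hit rest rest hit)
```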
00:27:58
Speaker
Explain. This is the second time you've mentioned time travelling. Tell me what you're talking about. Yeah, absolutely. So this is a little tongue-in-cheek; time travelling, as far as we know, is not possible yet. What we mean here is that because we're so focused on this purely functional paradigm, because we're literally treating time-varying behaviors as functions of time, the only thing we need to do to preview what's coming is to evaluate that function for a value of t that is the current value of t plus some offset.
00:28:44
Speaker
So if you've got a program, you could tell me what the note is going to be three seconds from now, because it's still just the same code. Exactly, it's the same code. The code itself is... I like the term inert, or static. I've heard it used in the context of languages like Clojure: code as inert data. The data itself doesn't change, the code can be statically analyzed, and so much about its over-time behavior can be inferred just by looking at it. I'll give you a simple example. If we have a function like square: square takes in a phasor, a value that rises from 0 to 1, and produces a square wave. A square wave just sits low and then jumps to high, with a periodic behavior that repeats with a period of 1. And most things in our language, which by the way is called ModuLisp... ModuLisp.
00:29:42
Speaker
Yeah, it's a little bit of a play on words, combining modular synthesis and Lisp. Also the word modulation, which is a lot of what the module itself is used for. That's the technical term for twiddling the knob, right? Yeah, absolutely. We prefer to think of it as a modulation generator rather than a knob twiddler, but essentially that's precisely what it is.
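The square-from-a-phasor idea mentioned above, again as a generic Scheme sketch:

```scheme
;; A phasor ramps from 0 to 1 once per period, then wraps.
(define (phasor t period)
  (let ((cycles (/ t period)))
    (- cycles (floor cycles))))

;; Square: low for the first half of each cycle, high for the rest.
(define (square-wave phase)
  (if (< phase 0.5) 0.0 1.0))

(square-wave (phasor 0.25 1.0))  ;; => 0.0
(square-wave (phasor 0.75 1.0))  ;; => 1.0
(square-wave (phasor 1.75 1.0))  ;; => 1.0 (wraps every period)
```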
00:30:06
Speaker
Right, yeah. And so, because a lot of this over-time behavior is statically encoded in the program itself, we can do all sorts of neat tricks, like going into the future and seeing what's going to be happening in the next five seconds. And depending on what's coming in those five seconds, we might want to change some things in the code.
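Because the piece is a pure function, "looking ahead" really is just calling it with a shifted argument; with the `piece` sketch from earlier:

```scheme
;; "Time travel" is evaluation at a future time argument.
(define (preview part t lookahead)
  (part (+ t lookahead)))

;; What will the piece be outputting five seconds from now?
(preview piece 12.0 5.0)  ;; same as (piece 17.0)
```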
00:30:33
Speaker
Or, even more interestingly... you asked earlier: can you tell me what note is going to be playing in three seconds? There's another question we can ask the computer, which is: can you tell me when the next C note is going to play, or the next D note, or when this sample is going to trigger next? So we can ask it to solve for t, so to speak, for various behaviors.
00:31:04
Speaker
Would you just do that by running through the values of t? No. That's the obvious way to do it, and it can certainly work, but it's not very computationally efficient, especially when you're running on a microcontroller, and especially when that microcontroller's runtime integrity is crucial for your performance. You really don't want something to crash in the middle of a gig. Yeah, you do not. And I've had that happen to me before; it's very nicely immortalized forever on YouTube.
00:31:41
Speaker
It's a lot of fun reflecting back on it now, because it was at a functional programming conference, and programmers are very familiar with crashes. It was almost a little mini-game of: can I fix the crash and bring the sound back before I lose the audience? Live debugging under pressure. Yeah. So,
00:32:14
Speaker
instead of running through all of those moments one by one in time to get to whatever you're looking for, we can also just statically solve for it. This is not something that's currently implemented, but it's something I'm working on as part of my PhD on live coding instruments at the University of Sussex.
00:32:34
Speaker
OK, so you're actually doing a PhD on this. This isn't just your new hardware business; this is also your research.

From Academic Research to Commercialization

00:32:41
Speaker
Yeah. So maybe it'd be nice to set some context for the actual venture from which the module we're talking about has spawned. Emute Lab Instruments is the name of the company. It's a company that we're launching with one of my PhD supervisors at the University of Sussex, Chris Kiefer, and Steve Symons, another PhD student at Sussex.
00:33:10
Speaker
And the whole point is to bring out research work from the Emute Lab, a research lab at the University of Sussex; Emute stands for experimental music technologies. The Emute Lab has been working on all sorts of research projects around musical instruments, musical technologies, interfaces and so on.
00:33:36
Speaker
And the idea was to start bringing those from the research domain to the commercial world, and to start making them publicly available and accessible, because there's this problem in academia where people come up with a new instrument idea, make a prototype, write a paper about it, present it once or twice, and then it kind of sits on a shelf,
00:33:59
Speaker
and no one ever really gets to play with it. That's one of the worst things you can do to a musical instrument: not put it in the hands of people who want to make music with it. Yeah, because unlike software, once you've got hardware, you have to commercialize the production of it or no one gets one.
00:34:16
Speaker
Exactly, yeah. And there's modular synthesis and Eurorack, which is a particular... I guess you can call it a platform. It's a set of electrical and mechanical standards for how to produce modules that work together: they use the same power supply, they fit in the same racks, and they more often than not, though not necessarily always, speak in the same voltage ranges, so they can talk to each other. But the problem is that it can get really expensive really quickly. If you're buying all of these individual components, and they're all made in small batches, potentially by boutique manufacturers, you very quickly
00:34:59
Speaker
get into the thousands, if not more, if you want to build a functional system. What we wanted to do was flip that on its head and ask: how can we cut costs almost to the bare minimum?
00:35:18
Speaker
And so the first incarnation of this module was a DIY prototype with an actual Raspberry Pi Pico, not just the chip on a PCB but the actual Raspberry Pi Pico, attached to a 3D-printed board with hand-soldered wires between everything. And the idea was to make this a super accessible DIY project for anyone
00:35:46
Speaker
to build in an afternoon or a weekend, depending on your level of familiarity with DIY projects like that. If you're handy with a soldering iron, it will be quick. Exactly, and it doesn't really take much more than that. And if you have access to a 3D printer, or some other means of creating a panel... all of the schematics and the hardware design files and all of the source code, by the way, are completely open source under a permissive license.
00:36:16
Speaker
So we wanted this to become something that a community can be built around, and something that would not be cost-prohibitive. People could buy the bare materials for around 20 or 30 pounds, depending on where they live in the world, and then DIY it, flash it with the firmware, make alternate firmwares, make alternate editors, whatever it is. We wanted to make this an open-source product, and an open-source company in general. So most, if not all, of the things we're working on at the moment are open source. Including the hardware? Including the hardware, yeah: the PCB design files, the CAD files for the faceplates and everything else.
00:37:06
Speaker
OK, so teach me the big missing piece that I don't know. I can imagine having access to a synthesizer, someone like me. Actually, in reality, I have access to many synthesizers, and it's become a problem. I'm familiar with that problem. I can imagine having access to a synthesizer, and I want to control it with some external live coding controller.
00:37:31
Speaker
I know how to write a programming language, or port one; I can probably figure out how to get it to compile for a Raspberry Pi; and then my knowledge caps out. How do you go from 'this is running on a Raspberry Pi' to 'this is actually sending the right voltage signals out to something else'? What's the hardware part of this puzzle?
00:37:53
Speaker
Right, yeah. I might not be the best person to ask, because I mostly work on the software side of the module. So the answer is: get someone else. The short answer is: ask Chris Kiefer on our Discord, or if you happen to run into him somewhere.
00:38:12
Speaker
But the answer is that the Raspberry Pi has analog outputs. And by analog outputs, we mean you can call a function with a value, I think it's 12-bit, but I might be mistaken, and one of the pins on the board will send out a voltage.
00:38:38
Speaker
Right. So it isn't just a one or a zero; it's variable. You can output floating-point numbers as voltage. Yeah, and the trick to getting floating-point numbers out as voltage is to use so-called PWM, which stands for pulse-width modulation. Tell me about that. Well, let's talk about what a pulse is. An electrical pulse is just a signal that goes from low to high. Think of a binary gate that can output either a high voltage or a low voltage. The pulse width is the duration in time for which that pulse is high versus low.
00:39:18
Speaker
By modulating how long that binary pulse stays high versus low, and then by putting the whole thing through a filter, a filter in the mathematical and electronic sense,
00:39:36
Speaker
you essentially smooth out those jagged binary on-and-off transitions. You smooth them into a wave that continuously interpolates between those values. So am I right in thinking that if I want to output a value of 0.75, I just flick my switch on for three quarters of the time and off for the other quarter of the time,
00:40:07
Speaker
and that jagged output gets smoothed out to the average of 75%? Yeah... although I'm not exactly sure it would be exactly that three-quarters-to-one-quarter ratio. OK, so the maths may change, but the idea is that you flip it back and forth in the right proportion. This becomes a little easier to picture intuitively if we think of filters as something that makes a signal lag a little. Say we have a signal that's very jittery, very noisy; we can imagine it as a very chaotic signal. If we put that through a filter, what the filter does, mathematically, is attenuate the frequencies above a certain point, which we usually call the cutoff frequency.
00:41:02
Speaker
What that means in practice is that we take a signal that's very jagged, and... you can almost think of it as attaching a weight to the object that's creating the signal. Because we make it heavier, we make it have more inertia, and because it has more inertia,
00:41:19
Speaker
it resists very quick, snappy changes to its position. It becomes heavier, and it starts moving in smoother patterns. Not unlike physical objects: if you have two balls, and one of them is extremely heavy, like a bowling ball, and then a similarly sized ball that's empty inside, just filled with air,
00:41:45
Speaker
the lighter ball we could slap around very quickly and easily, but the heavier ball would resist those almost instantaneous changes to its position a lot more.
00:41:56
Speaker
OK, so now I'm thinking: if I want to get my 0.75, what I'm kind of doing is watching this voltage slowly fall towards 0.74, flicking the switch on so it starts slowly rising above it, flicking it off again, and eventually it smooths out to the actual value I want. Yeah. I'm no electronics engineer by any stretch, but that's how I think of it. Yeah, that's how I understand it.
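That intuition simulates nicely. Here, in the same Scheme-sketch style, is a 75% duty-cycle pulse train pushed through a one-pole low-pass filter (the constants are arbitrary, chosen only to illustrate the smoothing):

```scheme
;; One PWM sample: high for the first `duty` fraction of each period.
(define (pwm-sample step period duty)
  (if (< (modulo step period) (* duty period)) 1.0 0.0))

;; One-pole low-pass: y += alpha * (x - y). Smaller alpha = smoother.
(define (smooth steps period duty alpha)
  (let loop ((i 0) (y 0.0))
    (if (= i steps)
        y
        (loop (+ i 1)
              (+ y (* alpha (- (pwm-sample i period duty) y)))))))

(smooth 10000 100 3/4 0.001)
;; => ~0.74, i.e. 0.75 with a little residual ripple
```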
00:42:23
Speaker
And then my job, presumably, is to arrange the right sizes of resistors and capacitors to make this; I have to go away and research filter circuits and resistor and capacitor values.
00:42:35
Speaker
Yeah, more or less. This is an already established practice, and a lot of devices work that way. I might be mistaken on this, so please nobody quote me, but I think a lot of synthesizers used to work that way, with PWM outputs that would generate sound. It makes things a lot cheaper and a lot easier, and the code for all of this is already abstracted away in libraries for Arduino or Raspberry Pi or similar.
00:43:10
Speaker
And the filter circuits themselves you can just pull off the shelf, so to speak. So we didn't reinvent any wheels; we didn't need to design any kind of groundbreaking system to do this. What I believe to be the novelty of the whole design is the idea of putting the interpreter in the module,
00:43:34
Speaker
um instead of having to re-flash it every time you want to make a change to the code. So there's already quite a few modules out there that are programmable, but by programmable we mean we have to write the code, compile it, flash it, use it for a bit, then think of what changes you want to make and repeat that cycle.
00:43:52
Speaker
Yes, because the big thing here is that the programming language interpreter isn't running on my laptop; it's running on your Raspberry Pi module. Exactly, yeah. So I can constantly send new code. OK, I start to see how I would... let me ask you one more question about building the hardware before we move on. Once you've figured out that circuit board, and you've done some prototypes locally, like you were saying, with wires and soldering it yourself, do you just sort of...
00:44:22
Speaker
I don't know, go to a factory in China and say: can you make a thousand of these? Here's the circuit diagram. Yeah, so that's definitely an option. Our initial approach was... so, we got the PCBs printed. PCB stands for printed circuit board, and it's just a board where, instead of wires going from one point to another, there's a trace of conductive material on an otherwise non-conductive board: essentially a compiled and optimized version of what the wires were doing.
00:44:57
Speaker
So we got the PCBs printed. PCBs are quite cheap, especially in high volumes; they can cost as little as a couple of pounds, or a lot less if you go into the thousands in batch sizes. We got the PCBs printed, and then we booked out three days between the three of us, sat down around a kitchen table, and made those ourselves.
00:45:26
Speaker
So we ordered all of the components, and we ordered the PCBs. A lot of PCBs nowadays you can order with the surface-mount components already installed; the factories have what I believe are called pick-and-place machines, little robotic arms that pick up these tiny electronic components and place them exactly where they need to be on the PCB, and then solder the whole thing in one go. It's extremely efficient.
00:45:52
Speaker
So we get that, and we get components such as the cut-aluminium faceplate, where again you just send the design file to a manufacturer and they will cut it and print all the labels on it, everything. Then we get the jacks for the cables, the encoders,
00:46:13
Speaker
every other component that's required, and we put it all together ourselves, neatly packed it in boxes, and sent it off to the distributor.
00:46:25
Speaker
But now, if we want to make the production of this module more sustainable for us, and it's not really sustainable to take three or four days, or however many are required, out of our busy professional and academic lives to build these every time, we've gotten in touch with various manufacturers. Some of them are actually quite large; they do a lot of medical equipment, or they work on any kind of what they call electromechanical assembly, which is basically taking the boards and all the components, soldering what needs to be soldered, snapping in what needs to be snapped, tightening the screws, and putting it all together.
00:47:19
Speaker
More interestingly, there are also a lot of smaller manufacturers that specialize in lower-volume batches, specifically for electronic music equipment, and in our case specifically for Eurorack modules. And I believe one of those is based in the UK. OK, that sounds interesting. I would have guessed nearly all of this was happening in Eastern Europe or China. A lot of it, if not most, is, yeah. There are a lot of companies that also hire out their own assembly lines for things like that. There are some companies based, I believe, in Poland, if I'm not mistaken, who have their own production facilities for their own products, but you can also hire them to build yours. OK. So the path to productionizing, if that's a word, your software ideas as hardware...
00:48:14
Speaker
...is fairly smooth, you're making it sound like. You don't need to build your own factory. That has been our experience, yeah. There's a huge market of companies where you just send them your files. And again, all of these files are available on the GitHub, so theoretically anyone could download them, send them off to a manufacturer, and order a batch of, I don't know, 100 or 1,000.
00:48:36
Speaker
But yeah, it's actually quite smooth. And for prototyping purposes, you can order smaller batches of five or ten. And the turnaround times are not terrible; we're talking weeks, not months. That's actually impressive for turning an idea into actual hardware.
00:48:55
Speaker
Yeah. OK, so let's pull it back to... once you've got your module, what's your real aim with this? Are you trying to get a PhD? Are you trying to become a synthesizer manufacturer? Are you trying to gig with your ideal synthesizer setup? What's in it for you? The shortest answer is: yes.
00:49:16
Speaker
I had a suspicion it'd be all of the above. Yeah, it is all of the above. The module itself is part of my PhD, quite a core part, I believe. The PhD itself focuses on what I like to call the triple-E triangle: the ergonomics, ergodynamics and ergologics of live coding instruments and practices. Right, unpack that for me, because I think I only know what ergonomics is. Yeah. So, as programmers, we love defining things, so I'll define some terms first. Ergonomics comes from the concatenation of two Greek words: ergo, which means work, and either nemo or nomos, depending on how you think of it. Nomos means law.
00:50:13
Speaker
If you go that route, you can think of ergonomics as the laws of how things work, the rules, so to speak. If you go the route of nemo, which is what I prefer, nemo means to distribute, to hand out, to give.
00:50:33
Speaker
And so then ergonomics becomes, in some ways, the distribution of labour. That's how I like to think of it. Something is ergonomic when it distributes the labour that's required to work with it
00:50:49
Speaker
appropriately. So when I'm using my mouse, for example, and I'm constantly making these left and right movements with my wrist, that's not particularly ergonomic, because I'm overworking a very small group of muscles that isn't designed to withstand that kind of abuse over years and years, which is where things like repetitive strain injuries come along. My personal interest in all this is that I developed lateral epicondylitis, also known as tennis elbow, at the ripe age of 16 or 17, I don't remember exactly, and it's more or less been with me ever since. It's now something that I've
00:51:34
Speaker
learned to work with and work around, changing the way I interact with my computer and my keyboard and my mouse and all of these things, so as to avoid making it worse. But that led me down this massive rabbit hole of: why does almost every single mouse look that way? And why do almost all keyboards look like typewriters from the late 1800s?
00:51:59
Speaker
And this rabbit hole eventually leads to: oh, I guess it's for financial reasons, because typewriters became fundamental to running a business at some point in history. Then new technologies came along, digital computers and so on, but businesses could not afford to retrain all of their employees to type on different keyboards. They had to be able to buy the computers, jump onto the new technology from day one, and be productive.
00:52:29
Speaker
And so we've been stuck with these designs that were never designed with ergonomics in mind. In fact, the famous legend is that the QWERTY keyboard layout came about because typewriters, with the little mechanical heads that punch each letter, had a problem: if you pressed adjacent keys very quickly,
00:52:53
Speaker
there was a chance they could block one another. And jam together. Yeah, the whole thing would jam and not work very well. And so, in some ways, the design of the QWERTY keyboard had the goal of spreading key presses around the keyboard as much as possible, almost randomly.
00:53:12
Speaker
Whereas what we really want is to minimize the amount of work our fingers have to do, which not only makes us type faster but also reduces the chance that we'll develop or worsen things like repetitive strain injuries. So that's the ergonomic part. Ergodynamics, now, is again ergo, work, plus dynamis, which in Greek means power or potential.
00:53:41
Speaker
And so ergodynamics you can think of as the feeling you get when you approach an instrument, or any other system really; the vibes, so to speak, that you get from it in terms of what its strengths are and what its potential is
00:54:04
Speaker
in the sense of producing work. In the case of an instrument, take the piano as an example. You sit in front of a piano for the first time, and you don't have to be a pianist to realize: OK, I have all of these keys, and what this instrument can do is play any number of them at the same time. I could, in theory, just lie down on it and play all of the notes at once.
00:54:25
Speaker
But you don't see any knobs or dials or buttons or anything that would substantially change the sound. And so you get a certain feel from the instrument about what it's guiding you towards doing with it. Of course, you can always go the experimental route, take the covers off, start sticking things between the strings, and play it in ways that were not intended, but that is still part of its ergodynamics. So: how does its design delineate its potential for sound production and musical performance?
00:55:02
Speaker
Yeah, I'm thinking of a cello: that's something where it's kind of hard to make a specific note, but very easy to vary the tone of it from day one. Exactly, yeah. So very different instruments give us very different feelings and feels in terms of how they work, and they all guide us, whether we realize it or not, in various different ways, to make different music.
00:55:28
Speaker
For example, I also play electric guitars, and I love thinking about all the tiny differences: you have a single-coil pickup, or a double-coil humbucking pickup, and that's a very technical distinction about how the circuit is wired and how long the coil is and all of these things. But at the end of the day, they feel very different. There's music that would not have been created if the person who made it had been holding a guitar that sounded significantly different, because then they would have interacted with it differently, and the feedback iteration loop would have led them to a different musical outcome.
00:56:10
Speaker
Yeah, I'm thinking of a Telecaster, which is a classic country guitar. Exactly. It just has that certain sound that feels country. Exactly, yeah. And similarly with the Les Paul and the Stratocaster. Ergodynamics, in a lot of ways, has to do with the bidirectional relationship you have with the instrument in terms of how you use it. Not so much in a practical sense, like ergonomics; ergonomics is more about how it sits against your body, how far you need to twist and bend your limbs into awkward positions to work with it, and what that does to your body after many years. A lot of
00:56:56
Speaker
professional musicians and performers have famously gotten very bad repetitive strain injuries after decades of practicing the violin or the piano for days on end. And then, lastly, ergologics. This is a term that I haven't really seen used very widely; I tried to Google for it, and it only came up in the context of, I think, a company name or something like that. You have this advantage with your Greek heritage: you can just make these words up. Yeah, absolutely, it's a cheat code. And I always say that if I hadn't been raised
00:57:43
Speaker
with Greek as my mother tongue, I would probably never have bothered to learn it, because it's not a very useful language outside of Greece. But it's precisely in these ways of lexical synthesis (both of which are Greek words, by the way)
00:58:01
Speaker
that you can very easily stick two words together and get a new language... a new word, sorry. So the word ergologic I think of as ergo again, work, all three of these words sharing the same first half, plus logos, which in Greek means speech, as in when you give a speech,
00:58:29
Speaker
but it also means speech as in the actual act of speaking, and it alludes to the notion of logic and reason. So the logos of you doing something is the reasoning behind it: why are you doing this? And so ergologics, I like to think of it as: how do we, as potentially technical people who are intimately familiar with the actual engineering details of how these systems work, reason about what's going on? How do we reason about why the system has been put together this way? Or, if we put it together that way, why did we do that? And what's actually happening? Do we have a JIT compiler somewhere? Is it an interpreter? Is it a tree-walking interpreter? Or does it work in some other way?
00:59:23
Speaker
Is this like when I'm debugging, what I'm kind of doing is creating a mental model of how the programming language worked and evaluated this software? Yeah, so you're modeling the ergologics of the system in your head. That's how I like to think of it. And the reason why this is relevant, just to go back to the topic of the PhD: I am focusing on the interaction between these three variables, hence the triple-E triangle, because it's almost impossible to change any one of them in a system without affecting the others.
01:00:01
Speaker
Bring us back to the concrete for me: describe how these factors influenced the design of your module. Yeah, absolutely. So ergonomics was the initial motivation. It was this idea that, well, I love computer music, but I also love real-time, direct interactions with musical instruments. I grew up studying the piano; I play guitar and bass; and I love this feeling of tinkering with an instrument in real time with my hands, and the immediacy that it gives you. You get into these flow states where you're not really thinking about what's happening, but you're reacting, both at a physical, embodied level and at a cognitive level, whether that's creativity or rational thought or any mixture of the two.
01:00:58
Speaker
And so I had this problem where I was playing with software programs like Max/MSP or Pure Data, or other so-called virtual patching environments, modular patching environments.
01:01:14
Speaker
The idea is that you have a blank canvas on your screen, and you use your mouse to pick and place modules or objects. You can think of an object as essentially just a function: it has some inputs and it has one or more outputs, and you click and drag to connect these with virtual wires. You're drawing a graph of what your processing graph looks like there. Yeah, pretty much. And it has a lot of pros. As you just mentioned, it's almost a one-to-one, direct visualization of a computational dataflow graph, so it's very intuitive to work with. The problem that I had with it was, one, it's a particularly mouse-heavy workflow.
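To make the object-as-function idea concrete: a patch in one of these environments is a dataflow graph, and the same graph can be written as composed function calls. Here's a minimal sketch in a Scheme-like Lisp, where osc, lowpass, and out are hypothetical stand-ins for patching objects, not any real Max/MSP or Pure Data API:

```lisp
;; A three-object patch, [osc] -> [lowpass] -> [out], written as
;; function composition: each "object" is a function, and each
;; virtual wire is an argument position. (All names are hypothetical.)
(out (lowpass (osc 220)    ; the oscillator feeds the filter's signal input
              800))        ; the filter's cutoff, here a typed-in constant
```

The mouse-drawn wires and the nested parentheses encode the same graph; the difference is purely in the ergonomics and ergodynamics of editing it.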
01:02:03
Speaker
And the mouse, or at least non-ergonomic modern mice, have the issue of extended periods of pronation of the arm. That means the palm of the hand and the wrist are parallel to the desk and the floor, so the hands are in a generally uncomfortable position for long stretches of time. The bones in the forearm are fixed at the elbow, but the wrist can rotate, and so they create a pinching of the tendons that run through the forearm. And using those tendons while they're pinched, while your arm is in this pronated position, severely wears them out over time.
01:02:54
Speaker
Or at least it did in my case. And so that was the ergonomic issue: I love this, but the more I work with it, the more it hurts, and that is not sustainable. I can't keep working like this to make music. And then the other problem was the problem of ergonomics.
01:03:14
Speaker
Sorry, ergodynamics. And that was that I didn't get the feeling, working with these systems, working with Max/MSP or Pure Data, that I could develop muscle memory I could rely on over time.
01:03:31
Speaker
Yeah, mice are terrible for muscle memory compared to keyboards. Yeah, absolutely. And part of the reason is that what you're doing with the mouse physically at any given moment depends on where it is on your desk. If you've pushed it very far, you're going to need to lift it and move it, or work in a position that might be uncomfortable. But it's also entirely dependent on what's on your screen, where things are, and where your mouse pointer is in relation to them.
01:04:01
Speaker
And so it becomes really hard, and I personally believe impossible, to build any kind of muscle memory that doesn't depend on you looking at the screen the whole time. And if things dynamically move around your screen, if you're jumping in and out of windows, if you're moving windows to the side so you can see something behind them, all of these things throw you off in terms of muscle memory. It might not seem like a big problem, but this all ties back to being on stage in front of an audience, feeling the pressure.
01:04:36
Speaker
You don't want to have to spend those extra three seconds, or two, or even half a second, finding where on the screen that button is, then finding where your pointer is, then working out how to move your arm so the pointer goes to the button. In high-stress situations, it's all about what you can do without thinking.
01:04:55
Speaker
Yeah, exactly. So it becomes very important to be able to develop this muscle memory over time, in the comfort and privacy of your own house or studio, while you're practicing your craft, and then to know you can rely on it. And this is not just important for high-stress situations; it's also important for what we call flow experiences.
01:05:20
Speaker
It is almost impossible to get into a state of trance while tweaking parameters on a screen with a mouse. And I missed those moments of: I'm not really thinking right now, I'm just doing, I'm reacting. Obviously there's thought that goes into that, but it's not thought that I consciously have to initiate and engage with at every stage, and feel frustrated by when it doesn't work. And so the motivation for the PhD was: can we come up with something more than just different interfaces? Because keyboards already exist; MIDI controllers with knobs and faders that can be mapped to any arbitrary parameter exist. The problem is that we don't really have, or at least I believe we don't have, a good paradigm for thinking about
01:06:15
Speaker
how we structure these systems. It's a bit of a free-for-all. Every environment just gives you a blank page, and then it's up to you to figure out how to map physical engagements, button presses and dial movements and fader pushes, or what to map them to. And then there's the whole other aspect of how that happens in the digital: what are the ergonomics and the ergodynamics of the computational thing we're actually interacting with? Because physical ergonomics is one thing,
01:06:59
Speaker
but programming languages have their own cognitive ergonomics: having to remember the semicolons at the end, as I mentioned earlier, or having to worry about indentation in some languages, these sorts of things. So is it fair to say, and I'm paraphrasing here, that if I'm playing guitar or piano, I can forget myself, forget the object I'm holding, and lose myself in the music?
01:07:25
Speaker
And it sounds like, on one level, you're asking the question: could we create that for making music with a computer? Yes, but more specifically, making music with a computational process. I always love going back to what Hal Abelson says in the first episode of the 1987 MIT course on Structure and Interpretation of Computer Programs. I'm sure some of your listeners will be familiar with that. I've got the book up on the shelf somewhere, and I'm not going to look for it; it's just there, out of reach. Exactly. It should be on every programmer's bookshelf, I believe. And cheers to Sam Aaron, who tweeted about it once, randomly, and I thought: well, if Sam Aaron, who's made live coding instruments that I've enjoyed and been inspired by, endorses this book, then I'll check it out. And it became my bedtime watching, almost, for a couple of weeks.
01:08:31
Speaker
What he says in the first lecture is: computer science is a terrible name for this business, because it's not really a science in the same sense that the natural sciences are, but, most importantly, it's not about computers.
01:08:49
Speaker
It's almost, in a way, substrate-independent. It's not about whether we're using digital computers or analog or quantum or whatever. It's about computation as a notion in general; it's about information and what we do with it. And so the question was: can we bring these embodied, flow-state, trance-like experiences to the practice of working with computational processes in real time? Or, the way I like to summarize the research question, at a very high level of abstraction:
01:09:30
Speaker
what would it take for a programming language, and the interface to that programming language, to be designed in such a way that I can study it and practice it at home, and then go on stage, close my eyes, and just improvise?
01:09:48
Speaker
Ha, that's a beautiful image. So I have to ask you, I have to put you on the spot. You've built this module
01:10:00
Speaker
almost as a hypothesis of what that would look like. What do you think the results are? Yeah, so it's the first concrete test of some of the hypotheses. One of them: we clearly went down the route of picking a very low-powered computer to put inside a module, inside a modular synthesizer case. And then there's the obvious question: well, why not use a laptop instead? We have these extremely powerful computers already; there's tons of software; we're not limited to the few megabytes of RAM that microcontrollers have. Part of the answer to that question is:
01:10:45
Speaker
We don't want to feel as if we're playing music and improvising on the same machines that we work on during the day.

Integrating Computers and Modular Synthesizers

01:10:54
Speaker
We all spend an amount of time in front of a computer that, some years ago, we wouldn't have imagined we'd be spending. And so the idea was: can we put the computer in the modular instead? Instead of having this distinction between the thing that makes sound and the thing I interact with to control the thing that makes sound, why can't it all be one instrument that I can put in a suitcase and bring to a jam or a gig and play? So, rethinking a little what the personal computer looks like.
01:11:34
Speaker
And then the other thing was: the world of Eurorack modular synthesis is already filled with modules that are meant to be interactive controllers. There are joysticks, there are XY touch pads, there are all sorts of other modules that you're meant to touch and play with physically. And the question was: well, can we use those in the code a little bit? Maybe I don't want to have to type the exact frequency of my low-pass filter; maybe I just want an XY pad that I can use to explore the space of parameter values. So you're saying that the Eurorack standard gives you a pre-built library of other pieces of hardware you can mix into what you're trying to achieve. Exactly, yeah. There are companies with large engineering teams that have designed three-dimensional joystick controllers, and other companies that have developed things like the Leap Motion, or other controllers that track movement in one way or another. There are all of these almost abstract,
01:12:47
Speaker
blank-slate components that you can use in a system. Why not build something that can read those in? So our module has a couple of inputs, and there are expander modules coming along the way
01:13:06
Speaker
to scale that up as high as we want it to go. And the idea was: can we just get some of those modules, or, if we already have some, use them to directly change parameters in the code? I can set up half a page or a page of code the way I like it, and then leave well-placed holes in the code where I say: this is where the joystick goes, this is where the XY pad goes, this is where the pressure sensor in this module controls the code. And then this is where the custom software comes in as well:
01:13:43
Speaker
once I do that, can I have whatever monitor I'm using, whether it's the laptop or tablet I'm using to send code to the module, or maybe a dedicated screen built into the Eurorack case itself, visualize in real time what my physical hardware controls are doing to the computational structure of the code? Can I move the joystick and see the number go up and down?
01:14:10
Speaker
And as it's going up and down, I can hear what the smooth motion of the joystick is doing, and I can see what the changes in the numbers are. And over time, hopefully, that helps me build an intuition about which kinds of numbers get me which kinds of sounds, and how I'd traverse that relationship in both directions in my imagination as I'm performing live.
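As a rough sketch of what those holes look like: in a uSeq-style Lisp, a hardware input is just another expression, so it can stand anywhere a typed-in number could. The function names below (d1, a1, in1, sqr, fast, bar) are assumptions for illustration rather than the module's exact API:

```lisp
;; Output 1 is pure code: a square-wave gate running four times per bar.
;; (Names like d1, sqr, fast, and bar are assumed, not guaranteed API.)
(d1 (sqr (fast 4 bar)))

;; Output 2 leaves a "hole" for hardware: instead of a typed-in
;; constant, the voltage on input 1 (a joystick, an XY pad...) sets
;; the level, and the editor can show the live number as it changes.
(a1 (* 0.5 (+ 1 (in1))))   ; remap a -1..1 input into the 0..1 range
```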
01:14:34
Speaker
Yeah, teaching your brain how it's all related by making sure different parts of your brain are actively being stimulated. Yeah. OK, so that opens up something wider than music, doesn't it? Because for years we've done all our coding with keyboard and mouse, and if you can weave in this whole range of other ways of interacting with software, and with the ideas you're trying to code, what are the possibilities?
01:15:03
Speaker
Exactly. And this all touches upon things like interactive and incremental development. There's a reason Lisps are praised for having an incredible development experience: you're trying things out, and you can experiment with minute changes very quickly, in ways that write, compile, execute, wait-for-the-test-condition-to-be-reached can't match. Yeah, Lisp has always felt more like a conversation with the computer. Exactly. And that's exactly what you want to have with a musical instrument as well. You want a conversation. Or at least I want a conversation. I don't want a musical instrument where I sit down, write a letter, and post it, and it takes a few days to arrive, and then we have this correspondence back and forth. Yeah, a keyboard is not a pen pal. Yeah, exactly. And there are people already working on things like that in the wider computational context. There are structural editors,
01:16:04
Speaker
which are great for pairing with hardware controls. I have here a split ergonomic keyboard that has a built-in encoder in it. And so, while I'm... For the people who aren't on YouTube: you've got one of those keyboards that splits into a left and a right hand, and it's also got value dials on it.
01:16:24
Speaker
Yeah, exactly. It's a so-called ergonomic ortholinear keyboard, ortholinear meaning that the rows are not staggered: it's just a straight line up and down for the fingers to extend along, instead of having to go diagonally in all sorts of ways. The layout fits your fingers rather than fitting the available space on the computer. Exactly, yeah. And it's designed so that the pinky column is brought a bit closer to the pinky, so you don't have to stretch; it's ergonomic in a lot of different ways. But while I obviously cared about the ergonomics of it, I was fascinated by the idea of having an endless encoder built right next to my keys. So as I'm coding, I can navigate to, say, a filter frequency parameter, and instead of...
01:17:17
Speaker
I like to say: instead of playing my instrument with my prefrontal cortex, by pressing backspace, deleting the number 4600 Hz, and writing 4700 Hz instead to see what that sounds like, I can just navigate to that number and have my editor be aware that I've navigated to a number that can be manipulated. It's not just a string of digits that a parser then needs to get involved with. Can I just move that endless encoder and have that number go up and down, and hear the effects in real time? So instead of playing with my prefrontal cortex, I'm playing with my ears and hands instead.
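A toy version of that encoder behaviour, sketched in Scheme: because Lisp code is just nested lists, "the number under the cursor" can be addressed as a path into the tree and nudged on every encoder tick, with no string editing or re-parsing. The nudge helper and the example form are hypothetical, not the actual editor:

```lisp
(import (scheme base) (srfi 1))   ; iota: the list of indices 0..n-1

;; Walk a code form to a numeric leaf, add delta to it, and return the
;; rewritten form, ready to be re-sent to the interpreter.
(define (nudge form path delta)
  (if (null? path)
      (+ form delta)                        ; reached the number: adjust it
      (map (lambda (sub i)
             (if (= i (car path))
                 (nudge sub (cdr path) delta)
                 sub))
           form
           (iota (length form)))))

(nudge '(lowpass (osc 220) 4600) '(2) 100)
;; => (lowpass (osc 220) 4700), no backspacing required
```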
01:18:04
Speaker
Yeah, it's kind of offloading the writing of code to your hindbrain, right? So your forebrain can focus on the task you actually have at hand. Exactly. My brain can take a moment to listen to what's happening, take it all in, get a feel for what the music is doing and what the audience is doing and how it's reacting, and basically just be more present, instead of being all up in my head thinking: have I got the order of arguments right? Is this the right syntax? Is this spelled with one L or two?
01:18:45
Speaker
Things that are, ultimately, important. We can't misspell code and expect computers to magically know what we want. But why are our editors so blind to all of these conditions? Why don't they help us pick functions from a list of similarly spelled ones? Why don't they give us the option to take a block of code, let's say a case or switch statement, and instantly transform it into the equivalent if-else
01:19:26
Speaker
series of statements? There are well-formed methods for going from one to the other. And if, in that moment, I feel it would be much more conducive to my flow for the code to have a slightly different form, I would like my editor to be teachable: to be taught what those forms look like and how going from one to the other should happen.
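For instance, in a Scheme-flavoured Lisp the transformation he's describing is entirely mechanical, so a teachable editor could flip between the two forms on request. The play-* functions are hypothetical placeholders:

```lisp
;; Dispatch written as a case expression...
(case wave
  ((sine)   (play-sine))
  ((square) (play-square))
  (else     (play-silence)))

;; ...and the equivalent nested ifs an editor could rewrite it into
;; (case compares with eqv?, so the expansion uses eqv? too).
(if (eqv? wave 'sine)
    (play-sine)
    (if (eqv? wave 'square)
        (play-square)
        (play-silence)))
```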
01:19:51
Speaker
Yeah, I do think that while we've made terrific progress in computer science since the 50s, we're still in the infancy of how we speak to computers. Yeah, absolutely. OK, so let me ask you perhaps a final question. Let's take it out of the lab and put it into hard reality. I think I'm right in saying you've used this approach, your hardware and your software, to play live.
01:20:18
Speaker
What was your actual experience? Did you come away delighted with what you built? Did you come away wanting to make fundamental changes? Both, in some ways. The feature list is never complete enough; the editor is never smart enough. A lot of the things I've talked about are not necessarily things that have been implemented yet. There's a very long roadmap that involves
01:20:52
Speaker
doing some background research, reading up on what people have been doing to tackle these problems since the 80s. I've got a couple of books behind me called Visual Programming Environments: Paradigms and Systems and Visual Programming Environments: Applications and Issues. They date, I think, from the 70s or 80s, if I'm not mistaken.
01:21:10
Speaker
And they look very much of their era, but it was mind-blowing to see that people have been trying to tackle these problems for so long, and that so many different experimental systems were put together for these sorts of things. And we still haven't quite solved it. I mean, if such a great new paradigm had been developed already, I would like to think that people would have started migrating towards it.
01:21:36
Speaker
So I've been frustrated, when using these systems, by the very things that we're now planning to move towards anyway. I've been frustrated by having to have my laptop next to the module. I can use the inputs to bring in other encoders and so on, and there are a couple of encoders and push buttons and switches on the module itself, so there's a little bit of tactility there already. But fundamentally, I'd still probably need a laptop and a keyboard, or at the very least a tablet and a keyboard. We have a little setup that we demoed at Machina Bristronica in Bristol a few months ago. This is an expo for music technology. Yeah, it's a music, synth, technology, and instrument expo. And we didn't want to just have laptops and modular cases on our table. So we thought: what if we put a Raspberry Pi inside the case, with a little screen and a little keyboard attached to a mechanical arm that can be set on the case, and try to bring the whole thing closer to
01:22:49
Speaker
a coherent system, rather than this almost disjointed combination of: here's a laptop over there, here's a module over here. So yes, I've definitely been frustrated by all of these things, but at the very same time I've been delighted that I can now bring together these two worlds. I've been playing with modular synths for a while, and I've been playing with live coding for a while, and they've always lived in different worlds. They were always islands that were either impractical or very expensive to connect. You have to buy so-called
01:23:26
Speaker
DC-coupled audio interfaces. That means a converter that can take a digital signal and produce an analog signal without filtering out frequencies lower than the audible range. If you're plugging a computer into speakers, you really don't want to send a voltage that sits constantly at a high level, because speakers need to move, they need to oscillate, and you can damage them by trying to keep them pushed to one end of their travel or the other. So these interfaces are expensive, they're clunky, and they don't scale very well. And now, suddenly, I could take the code that lives over here on this little island and send it over a very thin USB cable to the module, have the module run it, and suddenly it speaks the language of the whole other island.
01:24:15
Speaker
It generates voltages, it can read voltages, and those voltages can be sent back to the computer and visualized. So it has been a validation that there's something here that needs to be refined, something that can be evolved into a whole ecosystem where modules talk to each other, not in voltages anymore, but in code.
01:24:39
Speaker
I have this desire for modules to be able to tell each other more than just: here's the result, here's the output. They should also be able to communicate processes. They should be able to say: hey, I'm a filter, and these are my parameters, and I'm receiving an oscillator that has those parameters. And maybe I can tell the oscillator to change some of its parameters before it sends me a signal. To have this paradigm of things talking to each other at an actual semantic level.
01:25:10
Speaker
Subcomponents of a musical instrument chatting with each other to try to achieve the job. Yeah, you can think of it as a distributed computational system where, instead of just having remote procedure calls that say 'here are the arguments, run the function and give me the result', we can share code. Can we compare what we're doing? Can we be symbolic and meta-linguistic? This, again, is where Lisp is extremely useful. Can we look at code, unpack it, and potentially change the code before we run it, instead of taking the output and running it through more code, where no bit of code knows about any other bit of code before or after it?
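A minimal sketch of that idea, leaning on Lisp's code-as-data: the filter receives the oscillator's program as a quoted expression, rewrites a parameter inside it, and hands back new code, rather than merely consuming an output value. The names osc and freq are hypothetical:

```lisp
;; The oscillator's "program", sent between modules as plain data.
(define osc-code '(osc (freq 440)))

;; The receiving module walks the expression and rewrites the
;; parameter it cares about before the code ever runs.
(define (halve-freq expr)
  (if (pair? expr)
      (if (eqv? (car expr) 'freq)
          (list 'freq (/ (cadr expr) 2))   ; change the code, not the output
          (map halve-freq expr))
      expr))

(halve-freq osc-code)   ; => (osc (freq 220))
```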
01:25:56
Speaker
Nice. Let me ask you one last quick question then, because there is a definite overlap between what you're doing in hardware and what Sam Aaron, who you mentioned, is doing entirely in software.
01:26:09
Speaker
Yeah, so what Sam Aaron does is not just limited to software, in my opinion, in the sense that he was one of the first few people I saw performing. And this was not necessarily live; I think he posted a photo of his live performance setup before a gig on his Twitter account at the time. And I was fascinated that it looked like a live performance setup. It didn't just look like an off-the-shelf laptop on a desk with an off-the-shelf audio interface. He had an ergonomic split keyboard. He had a MIDI controller with LED lights on it to visualize things in the middle. He had a synthesizer whose sound he was feeding into the computer, while the computer was also sending messages back to the synthesizer. So there was this sense of: this is all a system that has been put together for me to perform tonight.
01:27:05
Speaker
Yeah, so he's drawn into the hardware world too. Yeah, exactly. And I don't know whether he has made any custom hardware, or whether he's using whatever components and parts are available. But Sonic Pi, his, and perhaps the world's, most popular live coding software, which is used in the UK to teach the computer science curriculum in schools and was funded by the Raspberry Pi Foundation and all that: that was the first live coding program that I installed without any issues. Because live coding systems are notorious: OK, you need to install SuperCollider, and SuperCollider itself is three different processes, each with its own memory space on your computer. There's the audio server, there's the language interpreter, there's the graphical editor window.
01:27:59
Speaker
And then you need the Haskell interpreter, and then you need the TidalCycles library, or whatever else. And then you need an Emacs, or a VS Code, or an Atom: some way of being able to talk to these multiple different processes, each living in its own completely isolated island in user space.
01:28:24
Speaker
And so Sonic Pi asked: can I make an instrument that's all in one? You download this, it's a binary, you run

Teaching Kids Music with Code and Visualization

01:28:32
Speaker
it. It has the audio synthesis server in it, even though it relies on the SuperCollider server. It had a code editor specifically written so that kids could write code and live code some music. It had in-buffer visualization of the waveforms. It really felt like a step closer: not a combination of random projects that you pull together and have talk over extremely thin wires, but a thing that was designed to be an instrument from start to finish.
01:29:06
Speaker
And so, yeah, I find his work very inspirational. And I love how he uses it. I don't know if he still does it, but he's live-coded gigs in clubs

Live Coding in Clubs and System Design Inspirations

01:29:21
Speaker
in the past. Oh, he totally still does that. Yeah, I can confirm. I need to go to one of those.
01:29:27
Speaker
So yeah, it's a whole new way of promoting this practice to people who might at first sight look at it and ask: but why? Why not just grab a guitar or a keyboard instead? And it brings together people from musical backgrounds and people from computational and programming backgrounds. And so a lot of his design choices and ethos have influenced the way that I look at and design such systems. And there's no reason why our system can't talk to his as well. But we follow a very different computational paradigm. Instead of continuous functions of time, Sonic Pi, for example, still very much follows this idea of
01:30:23
Speaker
triggering discrete events in time. Yeah, his is kind of scheduler-based, whereas yours is curves in time. Yeah, and you can think of it almost as the distinction between a keyboard, where you press a key and you get discrete events, I press this key right now and it triggers a series of events,
01:30:41
Speaker
versus modular synthesis, which is all about control voltages: abstract waves that go up and down and curve in different ways and different patterns, maybe they repeat, maybe they don't, but each one is a continuous entity over time. And there's this notion of
01:31:00
Speaker
persistence. My modules don't magically appear and disappear when I'm spawning them; they're always there. The oscillator is always oscillating, the filter is always filtering, and the music just comes about and emerges as a result of how we change the behavior of those persistent systems over time.
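Sketched side by side in a generic Lisp, with hypothetical names throughout (play-note, sleep, a1, bar), the two paradigms look like this:

```lisp
;; Event paradigm (Sonic Pi-style): a scheduler fires discrete notes,
;; and between events nothing is being evaluated.
(let loop ()
  (play-note 60)   ; trigger an event now...
  (sleep 0.5)      ; ...then wait for the next one
  (loop))

;; Continuous paradigm (uSeq-style): the output is a function of time,
;; re-evaluated on every tick; the oscillator is always oscillating.
(a1 (sin (* 2 3.14159 bar)))   ; one smooth cycle per bar, persistently
```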
01:31:21
Speaker
Yeah, yeah.

Can You Hear the System in Music?

01:31:22
Speaker
Do you know, I would like very much one day to see a gig with you and Sam on the same bill. And I'd love to hear: can I hear the difference, not just in the composer, but in the way it's composed?
01:31:36
Speaker
Yeah, so what you just expressed is: can I listen to the ergologics of these systems being different? One of them focuses on continuous parameters; the other focuses on events and triggering and scheduling. These are very technical distinctions, and in some cases you can still make music that sounds very similar, or exactly the same; they can be computationally equivalent, so to speak. But in practice, it ends up sounding very different. And so my hypothesis is that you will be able to hear some difference shining through the cracks of the abstraction layers that we've both built.
01:32:18
Speaker
I look forward to putting that hypothesis to the test when I'm in the audience. Absolutely, I do too. And then we're going to run some questionnaires on it and publish a paper. Awesome, sounds good. Dimitri, thank you very much for taking me through this. I look forward to your next gig.
01:32:35
Speaker
And I think I'm going to go and play with it myself. Thank you very much, Kris, for having me. And please record whatever little snippets of experiments you make with the module and send them over to the Discord server, or anywhere. We love to see how people use it, because it gives us ideas for what else it can do; we're too close to it, and we have our own biases about what it can do.
01:33:03
Speaker
Brilliant. Thank you very much. Cheers. Bye for now. Thank you, Dimitri. You know, that last part of the conversation makes me think of a parallel between music hardware and regular programming. Because Dimitri was saying you can have these two different ways of coding music, and you could do the same thing in either; you could get the same result with either approach. But in practice, you won't. The tool that you choose inevitably influences the shape of the final result.
01:33:32
Speaker
And I think there's definitely a parallel there to our programming languages. All these languages we have are Turing-complete. You could write the same system in PHP or Python, Rust or assembly, or Haskell, whatever. You could, but you won't. Inevitably, your choice of language influences how you write, and therefore what you choose to write.
01:33:55
Speaker
We think we choose our tools for what they make easy, but maybe we should also start explicitly choosing our tools for how they reshape our projects, and perhaps how they reshape our thinking.
01:34:09
Speaker
I leave you with that thought. And while I'm leaving you with it, if you've enjoyed this discussion, please take a moment to like it or rate it, maybe share it with a friend or on your social network, and make sure you're subscribed, because we'll be back next week with another interesting voice from the world of development. Until then, I've been your host, Kris Jenkins. This has been Developer Voices with Dimitris Kyriakoudis. Thanks for listening.