
86. Math, Music, and Artificial Intelligence - Levi McClain Interview (Final Part)

E86 · Breaking Math Podcast
4.6k Plays · 9 months ago


Help Support The Podcast by clicking on the links below:

Transcripts are available upon request. Email us at [email protected]

Follow us on X (Twitter)

Follow us on Social Media Pages (Linktree)


Visit our guest Levi McClain's Pages: 

youtube.com/@LeviMcClain

levimcclain.com/


Summary

Levi McClain discusses various topics related to music, sound, and artificial intelligence. He explores what makes a sound scary, the intersection of art and technology, sonifying data, microtonal tuning, and the impact of using 31 notes per octave. Levi also talks about creating instruments for microtonal music and using unconventional techniques to make music. The conversation concludes with a discussion of understanding consonance and dissonance, and the challenges of programming artificial intelligence to perceive sound the way humans do.



Takeaways:


  • The perception of scary sounds can be analyzed from different perspectives, including composition techniques, acoustic properties, neuroscience, and psychology.
  • Approaching art and music with a technical mind can lead to unique and innovative creations.
  • Sonifying data allows for the exploration of different ways to express information through sound.
  • Microtonal tuning expands the possibilities of harmony and offers new avenues for musical expression.
  • Creating instruments and using unconventional techniques can push the boundaries of traditional music-making.
  • Understanding consonance and dissonance is a complex topic that varies across cultures and musical traditions.
  • Programming artificial intelligence to understand consonance and dissonance requires a deeper understanding of human perception and cultural context.



Chapters

00:00 What Makes a Sound Scary

03:00 Approaching Art and Music with a Technical Mind

05:19 Sonifying Data and Turning it into Sound

08:39 Exploring Music with Microtonal Tuning

15:44 The Impact of Using 31 Notes per Octave

17:37 Why 31 Notes Instead of Any Other Arbitrary Number

19:53 Creating Instruments for Microtonal Music

21:25 Using Unconventional Techniques to Make Music

23:06 Closing Remarks and Questions

24:03 Understanding Consonance and Dissonance

25:25 Programming Artificial Intelligence to Understand Consonance and Dissonance

Transcript

What makes sounds scary?

00:00:06
Speaker
I built an entire horror instrument in order to figure out what makes something sound scary. Now if I asked a composer, they might say it has something to do with the timbre of the instrument in concert with certain compositional techniques. Appropriately placed dissonances, stinger chords, and things to do with tension and release.
00:00:32
Speaker
An acoustician, by contrast, might examine the anatomy of a scary sound and observe a high degree of roughness in the waveform. Roughness is an acoustic property referring to the rate at which the amplitude of a given sound changes, and it is elevated not just in scary sounds but also in human screams.
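[Editor's note: the "roughness" idea described here, amplitude fluctuating rapidly over time, can be sketched numerically. The snippet below is a crude illustrative proxy, not a validated psychoacoustic roughness model; the function names and the 70 Hz modulation rate are my own choices.]

```python
import numpy as np

def amplitude_envelope(signal, frame=256):
    """Coarse amplitude envelope: peak absolute sample value per frame."""
    n = len(signal) // frame
    return np.array([np.abs(signal[i * frame:(i + 1) * frame]).max() for i in range(n)])

def roughness_proxy(signal, sr, frame=256):
    """Crude stand-in for roughness: mean absolute change in the amplitude
    envelope, scaled to a per-second rate. Faster fluctuation -> higher value."""
    env = amplitude_envelope(signal, frame)
    return np.abs(np.diff(env)).mean() * (sr / frame)

sr = 44100
t = np.arange(sr) / sr
steady = np.sin(2 * np.pi * 440 * t)                      # plain 440 Hz tone
rough = steady * (1 + 0.8 * np.sin(2 * np.pi * 70 * t))   # 70 Hz amplitude flutter
assert roughness_proxy(rough, sr) > roughness_proxy(steady, sr)
```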
00:00:53
Speaker
The neuroscientist might note that some types of fearful sounds are purely mechanical, process-oriented, and actuated by a five-neuron acoustic startle circuit embedded in our brains.

Evolution of fear and its psychological aspects.

00:01:06
Speaker
And the psychologist?
00:01:10
Speaker
Well, they might discuss how our relationship with fear changes as we understand ourselves and the world around us better and better through the decades, implying that some fears are of our own making. I say it's a really complex question. Perhaps a full 30-minute deep dive into the complex realm of psychoacoustics is in order.
00:01:31
Speaker
Oh, hey, and look at that, that's exactly what I did. So if that's what you're interested in, go check out my video, What Makes This Sound Scary, over on my YouTube channel; the link is in my bio. Hoping to get past a thousand views on this one, that would be nice. So any support helps.
00:01:47
Speaker
So what I kind of want to do is get videos like yours in front of students who are thinking of going to college for music theory, things like that. Also, a lot of people who are really interested in the science but also want to create things like horror movies. There's definitely a market for that. So with myself and my team, one of the things we're thinking about is: how do you get this amazing content in front of the right audience? Because it deserves to be seen, frankly. I think it's awesome.
00:02:16
Speaker
Yeah, absolutely. And so my content, what I try and do is I try and focus on the arts and music and soundscapes and things of that nature, soundtracks.

Blending engineering and art in music.

00:02:30
Speaker
But I come from a bit of an engineering background. I have
00:02:37
Speaker
that kind of mind. I think of things very technically, usually. And a lot of the time, people think those two ideas are opposed, that they're two different things. I don't think so at all. I think you can absolutely approach art and music and all of these pursuits
00:02:56
Speaker
with a technical mind. So, you know, approaching math, or sorry, music, with the language of math, I think, can be very useful in some circumstances. And that's one of the things I want to try and convey with these videos: hey, it doesn't matter who you are, you can create beautiful art with whatever you've got, with whatever you're into.
00:03:20
Speaker
I'm actually really, really glad that you brought that up. We talk about the two different sides of the brain. One of the things that we tried to do with our very first episode of the Breaking Math podcast, a long time ago, is talk about when mathematics is inaccessible. And I can absolutely relate. Before I became an engineer and had to learn all the symbols, it was intimidating. And speaking of fear responses: there are a lot of folks for whom seeing mathematical symbols elicits a fear response, and things shut down.
00:03:46
Speaker
Do you know that as a grad student in engineering, I wanted to take a materials science class, and a professor said, oh yeah, come on in, come on in. And I didn't have the background to understand some things. I just didn't have the terminology, and I wasn't familiar with a lot of the concepts, despite my own background, which is heavy in physics and math. And I froze up and I stuttered and I felt like an idiot.
00:04:13
Speaker
And it was horrible. But I absolutely relate. And I've had to tell myself, it's not that we are, I'm sorry to use the phrase dumb or stupid. It's that you got to take some time and get used to things. And then once you've processed them and really understood them and worked with them, you can really do some amazing things.

Music, math, and practice as modes of expression.

00:04:33
Speaker
Yeah, absolutely. I think almost everything is a skill. So I have basically no natural
00:04:45
Speaker
aptitude for music. It's something that I've had to practice for 10 plus years of just going at it every day to get to where I'm at now, which hopefully is a competent musician. And I think the same thing goes with math too. So many people have the assumption that you're either good at it or you're not.
00:05:05
Speaker
No, it's a skill that you need to flex, you need to work at to be able to get good at it. And once you get good at it, then you have this beautiful language to articulate your ideas and you reach a fluency, whether it's math or music, where it becomes just this other fabulous mode of

Creative use of sound in films and AI modeling.

00:05:26
Speaker
expression.
00:05:26
Speaker
I actually texted you one of my millions of texts to you, and I said, what if we were to do a project where you had a holiday Hallmark movie horror film, and how would you play with the sound there?
00:05:44
Speaker
One of the things I would want to investigate is this idea of the acoustic startle circuit and fast sound. It's pretty well established, and it's interesting because that's something you could, I would assume, pretty easily model in terms of AI, or model inside of a computer. That's, again, only one dimension of sound and of why something sounds scary. There's also this idea of
00:06:13
Speaker
the slow fear, which has much more to do with psychology and cultural associations. I'm interested because when comparing the visual to the audio there,
00:06:26
Speaker
there, how do I put that? It doesn't seem like there's as much of an immediate answer as to why something is visually scary as there is for this one part of audio science and audio research. So I'm wondering, how do you square that, I guess, with artificial intelligence? If you're wanting to program
00:06:54
Speaker
an AI to be able to have the same fear response as a human. Well, it seems like we can kind of do that pretty well with at least one dimension of audio, but it seems like it's a bit more of a challenge when it comes to the visual. What would you say?
00:07:09
Speaker
I think you bring up a really good point here. And my answer to that would be, I'm curious in terms of, let's just talk about quantity: how much information do you get visually, and how do you make decisions based on that information? I know that our eyes have multiple layers in the neural net, and we have our own net of neurons in our neocortex and in our sensory cortex.
00:07:35
Speaker
And I'm aware that there are at least some initial explanations, like: one layer will identify edges. The next layer will put together some of those edges into a shape. And I don't know where movement is included, but I know that another layer identifies movement. Oh, they made a Scary Stories movie. And part of that Scary Stories movie is they messed with movement.
00:07:59
Speaker
They had a dark hallway with one of those freaky-deaky creatures and it's moving slowly. And then they stuttered the light like a strobe light and suddenly skipped it 10 steps forward. So it completely messed with your expectations of how fast or jerky things move. But it was terrifying.

Chaos to order: Bird flight patterns as music inspiration.

00:08:15
Speaker
It was very effective.
00:08:16
Speaker
Now, I want to mention, your channel has a multitude of videos that discuss mathematics and audio processing and music theory, and not just on fear. For our listeners, I wanted to do a little, I hope you don't mind, a sampler of some of the other topics in your video. Are you okay with that?
00:08:40
Speaker
Sure, that sounds great. Okay, awesome. Awesome. Very good. In 1956, composer Olivier Messiaen wrote Oiseaux Exotiques, a piece for piano and small orchestra, which is heavily inspired by birdsong. Today, fellow TikTok user sowylie continues this tradition by producing beats from bird samples.
00:09:11
Speaker
Birds are the world's natural singers, so it seems only appropriate that we take inspiration from their song. Today, I want to take inspiration from them too, but instead of birdsong samples, I'm interested in seeing if I can make music with the geometry of bird flight patterns. Check this out. It's called a murmuration: a flock of starlings weaving intricately in and out, creating mesmerizing, highly ordered geometric patterns.
00:09:40
Speaker
This is an example of what's called emergent behavior, a system which does not depend on its individual parts, but rather on their relationships to one another. In this case, when one bird moves in any direction, its closest neighbor will adjust course to compensate, so no birds in the flock run into each other. This simple system gives way to incredibly precise geometric flock patterns.
00:10:02
Speaker
We can actually replicate this behavior, in part, with a simulation governed by what is called the boids algorithm. By assigning three simple rules to these simulated birds, we can shape a behavior pattern that replicates starling murmurations. Now, if we take these three rules and map them to musical parameters instead, we get some pretty interesting results: a murmuration-determined delay.
00:10:28
Speaker
A reverb impulse response controlled by bird cohesion and separation. We can even generate a melody that is controlled by the direction and turn radius of an individual bird. Let's layer a few of these concepts and see what trippy music comes as a result.
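[Editor's note: for readers curious about the simulation behind the video, the three boids rules (separation, alignment, cohesion) and one possible mapping from a bird's heading to a melody note can be sketched as below. The rule weights, neighborhood radius, and the heading-to-MIDI mapping are illustrative assumptions of mine, not the settings used in the video.]

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30
pos = rng.uniform(0, 100, (N, 2))   # bird positions
vel = rng.uniform(-1, 1, (N, 2))    # bird velocities

def boids_step(pos, vel, sep_w=0.05, align_w=0.05, coh_w=0.005, radius=15.0):
    """One step of the three boids rules: separation, alignment, cohesion."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist > 0) & (dist < radius)
        if near.any():
            center = pos[near].mean(axis=0)
            new_vel[i] += sep_w * (pos[i] - center)                    # separation: steer away
            new_vel[i] += align_w * (vel[near].mean(axis=0) - vel[i])  # alignment: match velocity
            new_vel[i] += coh_w * (center - pos[i])                    # cohesion: steer toward
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 2.0, new_vel * 2.0 / np.maximum(speed, 1e-9), new_vel)
    return pos + new_vel, new_vel

def heading_to_midi(v, lo=48, hi=84):
    """Map one bird's heading angle (-pi..pi) to a MIDI note number."""
    angle = np.arctan2(v[1], v[0])
    return int(round(lo + (angle + np.pi) / (2 * np.pi) * (hi - lo)))

melody = []
for _ in range(16):
    pos, vel = boids_step(pos, vel)
    melody.append(heading_to_midi(vel[0]))   # sonify a single bird's turns
```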
00:11:38
Speaker
That was incredible. All of the math that I saw in that, I can go into so many directions, but before I do all that, I'm going to ask you. Wow. So the music was based on, it was almost like chaotic. Like it was just almost chaotic sounds. You turned it into a beautiful song. I mean, it was just chaos, right? Yeah. Yeah. I mean, it's chaos. And so this is
00:12:06
Speaker
a part of music that I've found a lot of inspiration in, which is sonifying data and turning it into sounds. So sonification is the process of doing that, taking data, taking input, whatever it is, and turning it into sounds.

Innovations in music: Data sonification and microtonal experimentation.

00:12:23
Speaker
You see this with heart monitors in hospitals that beep. That sound is a sonification, it's a representation of what's going on.
00:12:34
Speaker
with your heartbeat and your heart rhythm. And so the interesting thing about sonification is you have so much control over the end product and the end data. So I can take what you saw there, which is the
00:12:52
Speaker
a murmuration controlled delay or even the melody which is controlled by the individual turn radius of a single bird. And then I have so many options when it comes to turning that into a sound because it's just two data points, two, three, four, five data points.
00:13:12
Speaker
that I can make the sound come out through a piano, I can make it come out through a violin, I can change maybe the key, the subset of notes that we're using in a particular piece and that we assign to different things. You have so much control over it that you can really turn this chaos into a lot of
00:13:34
Speaker
order through music. So I find things like sonifying the boids algorithm to be an endless source of musical inspiration because it's
00:13:49
Speaker
It's just a different way to approach music. I find it very useful. Incredible. And real quick, what dawned on me is you just mentioned something. When you made that program that kind of bounced around and made the noise, you do have a choice in what possible noises the chaos has to choose from. So that's one way that you could control the chaos a little bit.
00:14:10
Speaker
And then also the number of notes. So you could just choose like a single chord, or rather just a bunch of notes that are at least in the same key. So there's some semblance of what we with our Western trained ears would recognize as beautiful. And then have chaos do its thing. And that's one way of having a combination of control and chaos.
00:14:31
Speaker
which is a huge theme in machine learning. It's like, you know, what elements are you controlling? And where do you allow for chaos? And when do you turn that knob? When do you allow for more chaos? And when do you allow for more control? But let's do another video here. What would happen to Harmony if we use 31 notes per octave instead of 12?
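[Editor's note: the idea just described, letting chaos pick the notes while constraining it to a key, can be sketched as a simple quantizer. The choice of a C major pentatonic scale and the helper names here are mine, purely for illustration.]

```python
import random

PENTATONIC = [0, 2, 4, 7, 9]   # C major pentatonic pitch classes: C D E G A

def quantize_to_scale(midi_note, scale=PENTATONIC):
    """Snap an arbitrary MIDI note to the nearest pitch class in the scale."""
    octave, pc = divmod(midi_note, 12)
    # nearest pitch class, measuring distance around the octave circle
    nearest = min(scale, key=lambda s: min(abs(s - pc), 12 - abs(s - pc)))
    if abs(nearest - pc) > 6:                # wrapped around: adjust the octave
        octave += 1 if nearest < pc else -1
    return octave * 12 + nearest

random.seed(1)
chaotic = [random.randint(48, 84) for _ in range(16)]   # unconstrained "chaos"
melodic = [quantize_to_scale(n) for n in chaotic]       # chaos confined to a key
```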
00:15:43
Speaker
Okay, that was incredible. I found that on YouTube; I've only recently been made aware of microtonal music. For those who have never heard of microtonal music and are mostly used to Western music, can you explain it and what you attempted to do with this video?
00:15:57
Speaker
Yeah, sure thing. So in the West, we are usually relegated to only 12 notes. So we have 12 individual notes that repeat in what we call octaves. And that gives us the harmonic, diatonic and chromatic language that we use to build out the framework for all of Western music. So we basically only have 12 notes.
00:16:18
Speaker
And this series is all about asking the question, what if we lived in an alternate universe where we had 31 notes per octave instead of 12? How would that change music and how would that bend the fabric of harmony itself? So it's kind of this experiment.
00:16:33
Speaker
where I say, okay, we have 31 notes now. In typical Western music, you have minor chords, you have major chords. Well, now in 31, you have minor and major chords, but now you can do sub-minor chords. So it's a little bit of a different feeling, different vibe, different flavor for the chords that we can usually have. In addition to sub-minor, now we have
00:16:56
Speaker
super major chords, which are great, and now we have neutral chords. So basically, by finding the gradient of pitch spectrum that we allow ourselves to have in Western music, we have more options tonally to explore different spaces in music and find different nuance in the beauty that we allow ourselves to create.
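[Editor's note: a quick numerical sketch of why 31 notes per octave gives chords that are "more in tune." In an equal division of the octave, each step multiplies frequency by 2^(1/N), and 31-EDO's major third sits far closer to the pure 5:4 ratio than 12-EDO's does. The function names below are mine.]

```python
import math

def edo_ratio(steps, divisions):
    """Frequency ratio of `steps` steps in an equal division of the octave."""
    return 2 ** (steps / divisions)

def cents_error(ratio, target):
    """Deviation from a target just ratio, in cents (1200 cents = 1 octave)."""
    return 1200 * math.log2(ratio / target)

just_third = 5 / 4                 # the pure 5:4 major third
third_12 = edo_ratio(4, 12)        # major third in 12 notes per octave
third_31 = edo_ratio(10, 31)       # major third in 31 notes per octave

# 31-EDO lands within about a cent of the just third; 12-EDO is ~14 cents sharp
assert abs(cents_error(third_31, just_third)) < abs(cents_error(third_12, just_third))
```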
00:17:18
Speaker
Okay, I have a few quick questions. I realize that we are almost a bit limited on time, so we'll kind of rapid fire these pretty quick. Number one, why 31 notes instead of any other arbitrary number?
00:17:30
Speaker
Yeah, so I get that question a lot. It seems pretty random, 31. So 31 is ideal, I would say. 12 notes is essentially a compromise: it's a system that we've created that tempers out some irregularities in the math of tuning theory. So 12 notes is fairly easy to play with:
00:18:00
Speaker
you don't have too many options, but it is a compromise. So certain things are slightly out of tune. Your major chords, your minor chords, they're not as in tune as they could be. So in 31, again, we refine that gradient a little bit, and now we can get chords that are more in tune. Now you might ask, why 31 specifically? If it's all about refining that gradient, why not 48 notes, or some multiple of 12? That would seem to make more logical sense.
00:18:28
Speaker
I know you were looking for a quick, rapid-fire answer on this, but it's a little bit complex. There are basically two or maybe three reasons why we might want to choose 31. One, it allows us to use all the same notes that we're familiar with and have them be more in tune with each other. That's great. Two, it allows us to go beyond the 12 notes
00:18:50
Speaker
and create these weird, alien-sounding harmonies, which is great. And three, it's not so many notes that it becomes impossible to play. You know, conservatory students have qualms with practicing in all 12 of their keys; imagine your teacher saying, hey, you have to practice your scales in all 144 keys or something like that. That would be crazy. So 31 is kind of that compromise, that
00:19:18
Speaker
sweet spot where we get all the benefits of 12, plus we get to do alien stuff, without being too overwhelmed. Cool. I have two more questions for you on this note before we move on. No pun intended. What instruments were you playing there? What microtonal instruments did you play for that video?
00:19:36
Speaker
Yeah, so I mean, this harkens to one of the biggest problems in the microtonal music community: this stuff is great, and it's really cool to explore, and it gives you an avenue to take a more mathematical approach to music. But we don't have instruments to be able to play this stuff a lot of the time, because, you know, we've had
00:19:57
Speaker
hundreds and hundreds of years of developing instruments to play in 12 and to play to a certain pitch standard and all these things. So what I end up having to do is create a lot of my own instruments. So what I use is a fretless guitar. It doesn't have frets, so I can hit the notes in between the notes that we have in the west here. But I also develop and build my own instruments
00:20:22
Speaker
I think in that video you saw a little keyboard that I made, which is basically a modular keyboard that I can input certain notes into and connect certain notes to each of the keys. And the keys are completely modular, so I can move them around in an order.
00:20:44
Speaker
that is great for whatever tuning system I'm using. So in this case, it's going to be 31. Last video. Last video. This is an amazing one. Playing a bass with a vacuum. Pull it up.

Unique instrument creation and sound manipulation.

00:21:15
Speaker
Mmm
00:21:24
Speaker
Oh my gosh. Okay. All right. I should clarify on that one. If you're wanting to try that out at home, get a vacuum that has a blow-out function, because blowing in won't work. You have to blow out, and you have to hit the harmonic just right so that it resonates with itself to get that sound. So you didn't have a reed there? You just angled it so it made a vibration?
00:21:47
Speaker
Yep. Yeah, you just angle it, and you have to be very careful, because the second you get off that spot, the string starts to lose resonance, and then it destabilizes and you lose the sound. Oh, absolutely incredible. Okay. We have had Levi McClain on our show for today. There's so, so much more rich content. Levi
00:22:05
Speaker
is very knowledgeable about music theory. Levi is knowledgeable about culture and about how our brains process auditory information. Please, please check out his channel. Go to Levi McClain. Is there an underscore there?
00:22:21
Speaker
Uh, no, that's at Levi McClain Music, at Levi McClain Music. You could also go to Google and just type in Levi McClain music. His videos are entertaining. They're light. They're approachable, and they will expand your knowledge of our experience of audio processing. Please check him out.
00:22:36
Speaker
And I'm going to do my best to push this out; I'm thinking about some communities to identify. And of course, I'm thinking music theory, but also just science, and anybody who would appreciate this. So I'll talk to my people and see what we can do. Before we close, I want to give the floor completely to you, to say anything you'd like to say, to ask any questions about AI or anything else, including "Hi, mom." It doesn't matter.
00:23:00
Speaker
Well, I have to, since you brought it up, hi mom. I can't not do that. So yeah, well, first off, I'd just like to thank you for having me on. It's always great to have these larger discussions. And I think I said at the top of the program,
00:23:17
Speaker
One of the things I love doing is going slightly outside of my discipline. I love being put a little bit out of my comfort zone. So to go on here and then discuss applications of audio and music with artificial intelligence, I think that's fantastic. I did have one question, and I'll try not to be too long-winded with it, because I think we may have discussed this a little bit before.
00:23:44
Speaker
It's an example that illustrates a larger question I have in terms of artificial intelligence.

AI, music theory, and evolutionary limitations.

00:23:49
Speaker
So, how humans process the idea of consonance and dissonance, which is essentially: do we like a sound? Do we not like a sound? I'm simplifying it a lot here.
00:24:03
Speaker
This idea has been explained in a couple different ways, one of them being this idea of the natural law theory, which essentially looks at different sounds for the harmonic relationships and harmonic ratios between each other. So if you have two notes that create a dyad, which is a chord,
00:24:23
Speaker
the relationship between those notes. If you can express that mathematically in a simple ratio, something like three to two, then our ear tends to classify that sound as consonant, as a good sound. Now, the more complex you get with
00:24:41
Speaker
your harmonic ratios, the more dissonant we tend to classify the sound. So it's this mathematical model which explains how humans perceive this idea of consonance and dissonance. Now, it falls short
00:24:57
Speaker
in explaining anything outside of Western culture. If you look at the music of Indonesian Gamelan, you'll find that some of the harmonic ratios between two or three or four notes that they use tend to be a lot more dissonant. But they
00:25:14
Speaker
typically do not classify their own sounds as dissonant. So it's a model that works well in the West. It's a model that kind of falls short elsewhere. So my question in terms of like artificial intelligence
00:25:30
Speaker
is: if your goal with an artificial intelligence is to, say, replicate how a human perceives sounds and how they understand consonance and dissonance, how can we program an artificial intelligence to do this well if we have an incomplete understanding of how we ourselves understand this really basic and fundamental concept in audio science?
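[Editor's note: the natural-law model described above, simpler frequency ratio means more consonant, can be sketched as a toy scoring function. This is a deliberately crude proxy of mine, and as the conversation itself notes, it reflects Western practice rather than a universal perceptual law.]

```python
from fractions import Fraction

def ratio_complexity(freq_ratio, max_den=16):
    """Natural-law-style proxy: approximate the frequency ratio between two
    notes by a simple fraction and score it as numerator + denominator.
    Lower score = simpler ratio = (per the model) more consonant."""
    frac = Fraction(freq_ratio).limit_denominator(max_den)
    return frac.numerator + frac.denominator

fifth = ratio_complexity(3 / 2)        # perfect fifth, 3:2  -> score 5
major_third = ratio_complexity(5 / 4)  # major third, 5:4    -> score 9
tritone = ratio_complexity(45 / 32)    # tritone, 45:32      -> snaps to 7:5, score 12

assert fifth < major_third < tritone   # matches the usual consonance ordering
```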
00:25:57
Speaker
All right, I will step up to bat and offer my own answer, which obviously is a little bit incomplete. First of all, machine learning has revealed, more than anything, I think, that it is exclusively bound by its training data. So you just asked a question that had some specific terms in it, like what a human would learn. Well, first of all: what is a human? Which humans?
00:26:20
Speaker
So that'll be based on the training data it has and how it defines "human." That's its first limitation; we're not yet at a point where it can have any larger category than that. Now, my second point here is a quote from Richard Dawkins. He said one of the remaining questions in evolutionary theory is what elements in evolution and biology had to exist
00:26:41
Speaker
and what elements simply happened to exist. And that's what you're asking right now in terms of what we recognize as consonant or dissonant: what had to be, what simply happened to be, and how value is assigned. So those are two ways of looking at this as we explore the question you brought in much further.
00:26:59
Speaker
I just wanted to make those two clarifications about artificial intelligence and evolutionary theory before the broader question of what is consonant or dissonant across music styles, and why. And I don't have that answer. I should have said I do have that answer, but I'm not going to tell you. I'm just kidding.
00:27:18
Speaker
No, I'll leave that question for our listeners to pursue, and we'd love to hear your thoughts on it. Either send them to Levi McClain on his socials, again, that's at Levi McClain, or send them to us at breakingmathpodcast.com, or on any of our socials as well.
00:27:37
Speaker
This has been an absolute blast. We've got so many more episodes and so much more content that we didn't even get to. Maybe we'll return and talk about how machine learning is now being used to attempt to classify whale language. So more on that some other time. In the meantime, we'll leave you to search that on your own. And thank you very much, Levi. It has been an absolute pleasure. Thanks for having me on.