
Verse, Haskell & Core Language Design (with Simon Peyton Jones)

Developer Voices
2.2k Plays · 10 months ago

This week we talk to Simon Peyton Jones, a veteran language designer and researcher, and a key figure in the development of Haskell. Simon has made countless contributions to the advancement of functional programming, and of computer programming in general, and is currently at Epic Games, working on the foundations of their new programming language, Verse.

We discuss how programming languages are made, focussing on a big design idea from both Haskell and Verse: building a large language from a small, tightly designed core. Then we move into Simon's current work exploring Functional Logic Programming, the big new idea that underpins Verse. It's an idea that blends the fundamentals of FP with the core ideas of logic languages like Prolog in an entirely new way. Not even Simon knows exactly where the idea will lead, but it's a fascinating idea that could potentially bring constraint-solving and deduction right into the heart of modern software.

Additionally, Simon discusses his involvement in reshaping the way we teach computing in England. He's been working hard to give computing education the same importance as the teaching of mathematics and sciences - something we should all have a fundamental understanding of.

Simon's one of the smartest, nicest people in programming. Come and hear his brilliant brain at work. :-D

Verse: https://github.com/UnrealVerseGuru/VerseProgrammingLanguage

The Verse Language Reference: https://dev.epicgames.com/documentation/en-us/uefn/verse-language-reference

The Verse Calculus [pdf]: https://simon.peytonjones.org/assets/pdfs/verse-March23.pdf

https://en.wikipedia.org/wiki/Simon_Peyton_Jones

The LogicT monad: https://hackage.haskell.org/package/logict

Can programming be liberated from the von Neumann style?: https://dl.acm.org/doi/10.1145/359576.359579

CAS - Computing At School: https://www.computingatschool.org.uk/

Computer Science Teachers Association: https://csteachers.org/

Kris on Twitter: https://twitter.com/krisajenkins

Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

Transcript

Introduction and Admiration for Simon Peyton Jones

00:00:00
Speaker
My guest this week is Simon Peyton Jones, and I have to be kind of careful introducing this one because I could turn into a total fanboy. Simon is an absolute legend. I've got such respect for him and he's been a huge inspiration for me, as well as for hundreds of thousands of programmers. He's been a researcher and groundbreaker in functional programming for decades.
00:00:23
Speaker
He's been one of the key developers and shepherds, possibly even midwives, of Haskell, which is one of my favourite languages. And beyond what you might consider the niche of Haskell, I think you can feel his influence on any language that's been touched by functional programming trends in the past few years.

Impact on Programming Languages

00:00:42
Speaker
Anything from Java to Python to F-sharp and beyond.
00:00:46
Speaker
He's even, and this is a kind of separate story, but he's even had a big influence on Excel programming. So despite Haskell seeming like a niche, he might have affected more computer users than just about any language researcher ever.
00:01:02
Speaker
In addition to all that, all his computing work and wisdom, he's also one of my all-time favourite conference speakers. I saw him give a talk a few years back about a new technique for optimising the compilation of let statements.
00:01:17
Speaker
Now, in the hands of literally anyone else, that would be the driest, smallest topic you can imagine. But he has such clarity for explaining things and such raw enthusiasm for programming. I left that talk wanting to write a compiler just so I could optimize let bindings. I kid you not, he's that infectious.

Discussion Topics: Haskell, Verse, and Education

00:01:39
Speaker
So I would have gladly had Simon on the show to talk about anything that's on his mind, but we narrowed it down to three big topics. His past, how do you build a language as large as Haskell out of a very small set of core ideas? And what do you put in that core to make it a larger language?
00:01:59
Speaker
Once you've mastered that trick, we get to his present, what he's been doing for other languages.

Verse: Scripting Language for Unreal Engine

00:02:04
Speaker
Simon's been working on verse over at Epic Games. I thought verse was a high-level scripting language for Unreal Engine, for Fortnite extensions. But it might just be a Trojan horse language for the idea of functional logic programming. What's functional logic programming, you ask? Simon's about to explain it to us.

Reforming Computing Education in England

00:02:25
Speaker
And as if his past and present work weren't enough, he's also trying to influence all our futures. He has been heavily involved in shifting the way we teach computing in this country, in England. And he wants to spread that influence far and wide so that children get taught the fundamentals of computing, just like they're taught the fundamentals of maths and sciences.
00:02:47
Speaker
And as if all that weren't enough, Simon's also a thoroughly nice chap. So let's get started. I'm your host, Kris Jenkins. This is Developer Voices, and today's voice is Simon Peyton Jones.
00:03:13
Speaker
Joining me today is Simon Peyton Jones. Simon, how are you doing? Hi, good to see you again. It's good to have you here. So, I was trying to think of how to introduce you and how to pull together all the threads of your career, which is a big task.

Career and Influences in Functional Programming

00:03:30
Speaker
I wanted to say you are to functional programming what James Brown was to soul.
00:03:38
Speaker
You are, in some sense, the godfather of functional programming. Hardly. There are plenty of other godfathers. Well, yeah, it's a large family, but you're still... From John Backus onwards. And indeed, from Alonzo Church onwards. Oh, gosh, you're going to go all that way back. Yeah, yeah. Let's say you're standing on the shoulders of giants, but you're one of the giants upon whom we now stand. I was trying to pull together a thread that connects what you've done that we could start with.
00:04:08
Speaker
And the one that jumped out at me was: you've worked on Haskell, obviously a huge contributor to Haskell. You're currently working on verse. You are, or were, a contributor to C minus minus. And the thing that connects all three of those, which I think is fairly rare in programming language implementation, is that they are a large outer shell, which is a lot of syntactic sugar for a very tightly defined core language.
00:04:40
Speaker
Is that fair to say? That's true of Verse and Haskell. I'm not sure it's so true of C minus minus. It's a kind of portable assembly language. LLVM has become much larger than C minus minus was intended to be.
00:05:02
Speaker
It's a fairly common pattern to try to design a large user-friendly language, which desugars into some kind of small, elegant core.

Core Languages and Syntactic Sugar

00:05:15
Speaker
You seem to have pushed that much further than most of the language implementations I know about. I guess the first question is, why do it that way?
00:05:26
Speaker
Well, for me, it's a testament to the expressiveness of the lambda calculus. So one of the things that always attracted me about functional programming is that it's a place where theory and practice come rather close together.
00:05:41
Speaker
So you can build a practical programming language that rests very directly on some intellectually robust and small and elegant foundations. So in the case of the lambda calculus: the lambda calculus in its essence is just, a lambda term is just a variable, an application or a lambda. And what's amazing is that you can translate an enormous variety of programs into that. Indeed, you could almost imagine
00:06:11
Speaker
two different foundations for computation. This wasn't the way that it happened, but when Alonzo Church and Alan Turing, they were actually in the same university at Princeton at the same time, and Alonzo Church was defining the lambda calculus and Turing was defining the Turing machine.
00:06:27
Speaker
And both of them are computationally complete. That is to say, if we want to say, what can computers do? Fundamentally, we say, well, they're just Turing machines. If a Turing machine can do it, a computer can do it, and vice versa. But you can also say, they're just lambda calculus machines.
00:06:44
Speaker
Right: if a lambda term can do it, a computer can do it. And in fact it was entirely non-obvious, but it turns out that Turing machines and the lambda calculus are inter-definable. That is, you can write a machine which runs any lambda term, and you can write a lambda term that will run a machine. They're equally expressive. Good, yeah.
00:07:03
Speaker
So now then, it didn't really happen like this, but you can imagine that all of imperative programming is built on Turing machines. What is a Turing machine? It has this tape and it mutates the tape. It reads and writes things on the tape. The tape is like the store. It's fundamentally a mutation machine. What is the lambda calculus? Well,
00:07:22
Speaker
At any moment, you have a term, a lambda term, and you rewrite it one step at a time. You just keep rewriting and you get some answer. This is a completely different model of computation. Functional programming is built on that purely functional model of computation. Imperative programming is built on a more mutation-based Turing machine-style thing. It's almost as if two entire approaches to programming have grown up
00:07:48
Speaker
built on different foundations, but that ultimately they're equally expressive. So for me, I always loved the lambda calculus as a basis for computation. And it turns out that you can translate, well, all sorts of languages, including Haskell, into this very small core. And moreover, the thing that appealed to me particularly is that that's not just a theoretical idea. It's a practical idea. You can actually build a compiler
00:08:15
Speaker
that translates into the small core. And moreover, the lambda calculus had a long history of development, including typed lambda calculus, and in particular a typed lambda calculus called System F.
00:08:31
Speaker
And it turns out, well, most compilers in the world typically take a language that might be statically typed and translate it to some kind of intermediate language, and then optimize that. But the intermediate language is typically not statically typed.
00:08:48
Speaker
But in GHC, our compiler for Haskell, we take Haskell in all its glory and translate it into this core language, System F, which is statically typed. So, very unusually, GHC is one of the very few production compilers I know of that maintains a statically typed program from the front of the compiler right through to the code generator.
00:09:14
Speaker
I've not thought of it that way. Yeah, that's interesting. You might ask why bother to do that, right? Well, after all, all that matters is that the source program is statically typed, right? We reject it if it isn't. But since the intermediate program is statically typed, if there's a bug in the compiler, which is rather common, as I'm a compiler writer, then what often happens is you get a type error in the intermediate code.
00:09:45
Speaker
So the compiler has a bug: it optimizes program A into program B. Alas, program B is type incorrect, and when you run it, it will crash. So if you don't have a typed intermediate language, what happens is: you have a bug in your compiler, you compile a program, you try to run the program and it crashes. You think, why does it crash? So you get GDB out and work out why it crashes. Then you say, oh,
00:10:07
Speaker
the code generator is generating the wrong code. Why is that? Oh, because in the intermediate program, you know, some many levels of transformation back, the optimizer introduced the bug. The optimizer turned a type correct program into a type incorrect program, which crashes. Now, it's way better
00:10:26
Speaker
for the compiler to say: oh, I find that I have turned a type correct program into a type incorrect program, and report the error immediately when it happens, rather than compiling it all the way to machine code, running the machine code and then having to backtrack up to find the bug in the compiler.
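GHC really does something along these lines: it has a "Core Lint" pass that can re-typecheck the intermediate program between optimization passes. The toy Haskell sketch below makes the shape of the idea concrete; the types, names and passes here are invented stand-ins, not GHC's actual API:

```haskell
import Control.Monad (foldM)

type ErrMsg = String

-- A stand-in for a real typed intermediate language.
newtype Core = Core { coreText :: String } deriving Show

-- A stand-in type checker; a real one would check every binder
-- and application in the intermediate program.
typeCheck :: Core -> Either ErrMsg ()
typeCheck (Core t)
  | null t    = Left "empty program"
  | otherwise = Right ()

-- Run optimization passes in order, re-typechecking after each one,
-- so a type-breaking bug is reported at the pass that introduced it.
runPasses :: [(String, Core -> Core)] -> Core -> Either ErrMsg Core
runPasses passes prog0 = foldM step prog0 passes
  where
    step prog (name, pass) =
      let prog' = pass prog
      in case typeCheck prog' of
           Right () -> Right prog'
           Left msg -> Left ("pass '" ++ name ++ "' broke the program: " ++ msg)
```

The payoff is exactly what's described above: the faulty pass is named the moment it breaks the invariant, instead of surfacing as a crash in the compiled code.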
00:10:42
Speaker
Yeah, yeah, okay. That makes sense. But there's this thing with having this tightly defined core, which actually, if I'm right, is fairly small. I mean, it's implementable reasonably if you've got the knowledge to do it. Yeah, very small. I'm just thinking,
00:11:00
Speaker
A lot of people come to this idea thinking it's like LLVM, in that there's this low level... well, LLVM is not as tightly defined. But I'm just trying to get a sense of where something like GHC's core might appeal to language implementers and people interested in how languages are implemented, when they seem to already have this popular thing called LLVM. Well, by the time you get to LLVM, you've already
00:11:30
Speaker
descended several layers of abstraction. Compilers typically work at various levels of abstraction. If I've got a Haskell program, I might say to a programmer, if I'm trying to explain how it runs, I say, well, you've got this function call, you can replace the call by the body of the function, replacing the arguments of the function with the actual arguments that you passed in the call. You just keep doing that, and that's how the program executes. Of course, that's not really how the program executes,
00:11:57
Speaker
really the program does machine transfers and pushes things on stacks and has things in registers and stores things in the heap and runs a garbage collector. Nevertheless, if you want to explain to a programmer how it runs, you might think in this term-rewriting way, and indeed the optimizer works in this rewriting way as well.
00:12:19
Speaker
To get to LLVM, which is a low-level imperative language, you've now had to say: oh, I'm going to allocate closures for as-yet-unevaluated thunks, and they're going to consist of a pointer to some code together with some free variables, and those are going to be stored in the heap one at a time. So there are a lot of very low-level operations that mean that the optimizations you could do
00:12:44
Speaker
in a rewriting system at a higher level become essentially completely inaccessible by the time you get to the low level. So this is not new. Every compiler has this idea of: we do some transformations and optimizations using the high-level code, and then we sort of move down to a lower level, closer to the machine.
00:13:01
Speaker
And at that point, the high-level optimizations become essentially inaccessible, but new lower-level optimizations, like which register shall I put it in, become accessible. They were not expressible in the high-level stuff. So the LLVM stroke C minus minus part is really in GHC's backend. The intermediate
00:13:24
Speaker
language, this System F language that I was describing, which GHC uses mostly, that is the middle of GHC, the long middle that does lots of functional-programming kinds of optimizations. Does that make sense? They serve very different purposes is what I mean. It's not that one is better than the other, it's just that they're different. We're almost talking about a three-layer lasagna
00:13:47
Speaker
of programming. You take Haskell, you turn it into Core, you optimize, then you turn it into C minus minus, or LLVM, and you optimize some more. What makes that middle layer, the core, some kind of design sweet spot? Is it that it's got a solid theoretical backing, or is it something else?
00:14:10
Speaker
It's clear that you want to do some transformations at a high level when you still have the notion of first-class functions and doing beta reductions and so forth. So you could imagine doing that on source Haskell. But that wouldn't be much fun because source Haskell has dozens and dozens and dozens of language constructs, all of which are really syntactic sugar for some smaller
00:14:34
Speaker
So it's much more economical to do a one-off transformation that gets rid of 20 constructs in favor of one and then optimize the one rather

Revolutionizing Programming with Verse

00:14:45
Speaker
than try to optimize the 20. So what you want is to have an intermediate language that retains all the high-levelness that you need for the optimizations without any of the superficial syntactic sugar. That's what makes it the sweet spot.
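To make that concrete, here is the smallest possible instance of trading many constructs for one. Haskell's if-then-else really is defined as sugar for a two-alternative case; GHC's actual Core is richer than this sketch, but the flavour is the same:

```haskell
-- Surface form: 'if' is one of many sugared constructs.
describe :: Int -> String
describe x = if x > 0 then "positive" else "non-positive"

-- After desugaring: only 'case' remains, so every later
-- optimization pass need only understand one construct.
describe' :: Int -> String
describe' x = case x > 0 of
  True  -> "positive"
  False -> "non-positive"
```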
00:15:03
Speaker
I wonder if I can ask a similar question in the other direction then. So, in a sense you boil away all the sugar from Haskell and you get down to this core; but then you're saying the core of GHC is a very expressive language.
00:15:21
Speaker
Could it have been boiled up to a completely different-looking language? Is there something about Haskell that pops out of this core language, or is the core a substrate on which many languages could be built? Oh, no, the core is a substrate on which many languages could be built, you know, that look quite unlike Haskell, really, because it's really just the statically-typed lambda calculus. I mean, could you take ML, for example?
00:15:49
Speaker
and translate into core. Well, let's see. So mostly you could. The thing is that GHC's core is a lazy language. It has call by name semantics and ML is call by value. And there's quite a lot of research has been done about could you make a single intermediate language in the style of this core thing that was equally good for call by value and call by need languages?
00:16:17
Speaker
Source languages, that is, so you could compile many languages into it. I think the answer is probably yes. But the details are quite difficult to do a really good job of both at once. And in practice,
00:16:32
Speaker
You know, Haskell's core is skewed towards lazy evaluation, though it spends a lot of effort on call-by-value as well. And similarly, a compiler like the OCaml compiler that the folks at Jane Street are building, which has an intermediate language called, I think, Flambda: that will be skewed towards call-by-value.
00:17:03
Speaker
Right. So, this starts to lead into your current research topic, because I know you're working on another core for another language that has a third core semantics. You've been working on verse, which is neither lazy nor strict. It's lenient, is that true? Yeah, lenient, but also, even more significantly, it's a functional logic language.
00:17:28
Speaker
So we talked about imperative languages, which is, I mean, do this and then do that. Things like C or Java or Fortran or C++. That's a whole class of language. Then we got functional languages, of which Haskell is a particularly, what's the word?
00:17:48
Speaker
a pure example: Haskell's default mode of computation is purely functional. If you want to do side effects like input-output or mutating variables, you have to use a monad, so-called, so that there's a type system that keeps side-effecting computations separate from pure computations. Whereas ML is mostly functional, but you can have side-effecting computations
00:18:15
Speaker
as well without delineating them in the types. So we've got imperative programming, functional programming, both of which, I mean, imperative programming is the mainstream. Functional programming is the thing to which I have essentially devoted my professional life because I thought it was so cool when I first came across it when I was, you know, 21 or so. And I became kind of addicted to it. And I
00:18:37
Speaker
I didn't think about it like that at the time, but essentially I've spent my professional life trying to say, how can we take the idea of purely functional programming and really make it go?
00:18:47
Speaker
right, knock down all the practical obstacles that make it hard to do in practice, or running too slow or too awkward, just sort of crush those obstacles one at a time. And that's what I spent my time doing. Haskell is the particular vehicle for that research endeavor, if you like.
00:19:09
Speaker
Sure enough, as it turns out, research endeavors don't often succeed in that kind of way, but sure enough, it turns out Haskell on its own has become relatively successful, as in it hasn't died out after 30 years, which most research languages don't last nearly that long.
00:19:27
Speaker
But also, functional programming as an enterprise has proved to be quite influential in the mainstream. So, languages like, you know, C# and Python and so forth: you can see many other languages absorbing functional language ideas. Now, alongside these two, there's been functional logic programming.
00:19:46
Speaker
Now that is a much smaller niche. If you think functional programming is niche, right? I don't think it's niche. It's a big niche, but nevertheless, it isn't the mainstream. Functional logic programming is much less popular. I haven't even heard of it. I consider myself fairly well informed. So what's functional logic programming? Well, you've heard of logic programming, like Prolog, right?
00:20:07
Speaker
There you have logical variables and unification is a key concept when you're working with Prolog. You can make new logical variables, you can call functions, passing them arguments that are not yet completely filled in. It's a very different approach to programming. Just as functional programming makes you rewire your brain, logic programming makes you rewire your brain too.
00:20:34
Speaker
Functional logic programming grew out of an attempt to say: well, logic programming is quite cool and functional programming is quite cool, could we somehow merge them? So one way to think of it is this. In Haskell, you can say let x equal, I don't know, f of 17 in blah, blah, blah. So that says x is a variable that denotes a value which is the result of f of 17.
00:21:02
Speaker
So it does two things. It brings X into scope and it tells you what its value is. It doesn't tell you the answer of F17, it just tells you that to compute its value, you compute F17. But nevertheless, X stands for just one value in its scope. X doesn't stand for three and then four.
00:21:25
Speaker
Yeah, right. But in C, if you bring X into scope, it's a mutable cell that you then assign to, and maybe assign a different value later, and a different value later. So X is the name for a box whose value changes over time. In Haskell, X is a name for a value, and only one value. OK, now in a functional logic language, you split the let into two parts. One brings X into scope, just says there is an X,
00:21:53
Speaker
right, but says nothing about its value. And then separately to that, you can have equations that explain something about the value of x. So to bring x into scope, in the verse calculus we say exists x, right, there is an x. And then somewhere in the scope of that exists, you could say x is a pair.
00:22:17
Speaker
So now you've given some extra information. You said it's not a number, it's not a string, it's definitely a pair. But you might say, and it's the first and second components of the pairs, I'm not going to tell you. They're just y and z. So you might say, exist y and z, x equals the pair y,z. So you've, as it were, refined the value of x a little bit. OK, yeah. And then somewhere else, you might say, first of x equals 3.
00:22:46
Speaker
Okay. Right. And that's saying: oh, if you compute first of x, which now we can, since we know that x is a pair, then, since the first component, remember, was y, y must be 3. That's a very unusual thing in itself, because it almost feels like we're defining the same thing twice.
00:23:06
Speaker
Well, if we said x is the pair y,z, and elsewhere you said x equals three, in that case the program would fail. If you give contradictory information about one of these logical variables, you say exists x, x equals three, x equals five: fail.
00:23:31
Speaker
Right, so deeply built into the idea of functional logic programming is the idea that a computation may fail, that is, return zero results. And failure is not necessarily bad; failure is no more bad than
00:23:45
Speaker
false is bad. Or, in Haskell, a function that returns a Maybe value, where a Maybe value is either Just x or it's Nothing. So a lookup, for example, might return a Maybe value. So in Haskell, if you do a lookup and it returns Nothing, the value Nothing, that's not wrong. It's just that the lookup didn't find the key in the dictionary. That's fine. Programming's like that. Indeed, that might be partly what you want to do.
00:24:16
Speaker
So in verse and in functional logic programming, failure is not like, you know, segfault, crash, bad things happen. Yeah, it's not a moral judgment. It's just that there correctly is no answer to that. Correctly there is no answer, that's right. In fact verse, and indeed a very old language called Icon did the same thing: in verse, there are no Booleans. Instead, if you say "if E then then-branch else else-branch",
00:24:44
Speaker
then the semantics is if we evaluate E and E fails, we take the else branch.
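Here's a rough Haskell analogy for that Boolean-free conditional, using Maybe for a condition that can fail. It's only an analogy, since verse's actual semantics is richer, but it mirrors the lookup comparison made here:

```haskell
import qualified Data.Map as Map

-- A Maybe-valued condition stands in for an expression that can fail.
ifV :: Maybe a -> b -> b -> b
ifV (Just _) t _ = t
ifV Nothing  _ e = e

greeting :: Map.Map String String -> String -> String
greeting phoneBook name =
  ifV (Map.lookup name phoneBook)
      ("found " ++ name)          -- the condition produced a result
      (name ++ " is not listed")  -- the lookup "failed": not an error
```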
00:24:53
Speaker
Right, if there's no possible answer to E. Yes, exactly. Just like, as I say, in Haskell, if you call the lookup, you can then pattern match on the result: if it's Nothing, you do this; if it's Just x, you do that. In verse, you can just say: if the thing fails, take this branch; otherwise, take that branch. Failure is not bad. Anyway, in this functional logic programming paradigm, you've got
00:25:15
Speaker
Deeply wired into the very fabric of computation is the idea that a computation may fail, return zero results. And in fact, inverse, it can also return many results. So that's another crucial component of functional logic programming is choice. So the expression one vertical bar two,
00:25:40
Speaker
returns one and then two, right? So an expression can yield zero results, we call that failure, one result, or maybe two results, or maybe seven results. So this sort of multiplicity of results is part of the core computation model of verse. So as you can see, it's very different to ordinary functional programming. Yeah, I know someone who's listening to this is going to be thinking this, so I have to ask it.
00:26:08
Speaker
In what way is this not just processing lists?
00:26:14
Speaker
Well, it is a bit like making Haskell's list monad into the fundamental monad of the language. But, and you could simulate this, it wouldn't just be a list monad, it would be a LogicT monad, for anybody who wants to go and look up the logict library, or something a bit like that. Because remember, we've got to do all this unification as well.
00:26:41
Speaker
Yeah, that's the big difference. So one possibility would be to say, well, why don't we just do all this as a library? Well, the answer is it would be possible, but inconvenient. Programs would look cluttered.
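To see why the list monad is the natural comparison, here is a minimal Haskell sketch in which the empty list plays failure and a multi-element list plays choice. What it cannot capture is the unification just mentioned:

```haskell
import Control.Monad (guard)

-- "1 | 2": an expression with two results.
choice :: [Int]
choice = [1, 2]

-- Failure: an expression with zero results.
failing :: [Int]
failing = []

-- Generate-and-test: all pairs (x, y) drawn from 'choice'
-- whose sum is 3; 'guard' fails (prunes) the other branches.
pairsSummingTo3 :: [(Int, Int)]
pairsSummingTo3 = do
  x <- choice
  y <- choice
  guard (x + y == 3)
  return (x, y)
-- pairsSummingTo3 == [(1,2),(2,1)]
```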
00:26:57
Speaker
Another way to ask the question is, what is the core computational paradigm of this functional logic thing? Could we, as we were saying, we can distill Haskell down into Lambda calculus, what would we distill verse into? And this has been your job recently, right? That is my job recently, that's right. So if we're going to do this functional logic programming thing, we'd better know what our
00:27:25
Speaker
what the absolutely essential irreducible essence of functional logic programming is, if we're going to undertake this endeavor at all, and we are. So indeed, we spent the last couple of years identifying what we called the verse calculus. There was a paper at ICFP last year about it.
00:27:49
Speaker
which indeed describes a little calculus. It's bigger than the lambda calculus. Quite a bit bigger, actually. Lambda calculus, remember, it only has variables, applications, and lambdas. That's incredibly sparse. Verse calculus has about eight constructs or something. It's a bit bigger.
00:28:07
Speaker
But we've also found a very nice way to then give it semantics in a similar style to how we give the semantics of the lambda calculus. How do we say what a lambda calculus program means? We give it rewrite rules, in particular, beta reduction. So in the verse calculus, we give it rewrite rules too. I think you need to define beta reduction for those that don't know. Yes. So it's a bit difficult to do this without a whiteboard. But,
00:28:37
Speaker
when Alonzo Church first defined the lambda calculus, he defined this incredibly tiny calculus, which is just variables, applications and lambdas. And the only rewrite rule is: if you have a lambda applied to an argument, think of a function applied to an argument, then what do you do? You take the body of the function, and you replace the call with the body, having replaced the bound variable with the actual argument.
00:29:04
Speaker
So you're sort of rewriting the function to be... You're rewriting the function call to have a copy of the function body. Yeah. So if I say, I don't know, wombat of x is x squared plus one, then if I see wombat of three, I can replace that with three squared plus one. If I see wombat of 77, I replace that with 77 squared plus one. Yeah. So each call is replaced by a copy of the body,
00:29:33
Speaker
with the actual argument substituted for the formal parameter. That is beta reduction. That step of replacing the call by the copy of the body after substituting, that's beta. And astonishingly, that single thing is enough to define a computationally complete language. That is enough to define any computation whatsoever in the Turing complete sense. It's amazing that that one rule is enough.
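For readers without a whiteboard, here is the whole calculus and its one rule as a tiny Haskell sketch. The substitution is deliberately naive (it ignores variable capture), which is fine for illustration but not for a real implementation:

```haskell
-- The untyped lambda calculus: just variables, applications, lambdas.
data Term
  = Var String        -- a variable
  | App Term Term     -- an application
  | Lam String Term   -- a lambda
  deriving Show

-- Replace free occurrences of x with s (naive: ignores capture).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (App f a) = App (subst x s f) (subst x s a)
subst x s (Lam y b)
  | x == y          = Lam y b              -- x is shadowed; stop
  | otherwise       = Lam y (subst x s b)

-- One beta step: (\x. body) arg  ==>  body[x := arg]
betaStep :: Term -> Maybe Term
betaStep (App (Lam x body) arg) = Just (subst x arg body)
betaStep _                      = Nothing

-- Example: betaStep (App (Lam "x" (Var "x")) (Var "y")) == Just (Var "y")
```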
00:30:04
Speaker
That's punching massively above its weight. It is amazing. It's just gobsmackingly brilliant that Alonzo Church, right back in the late 1920s, early 1930s, defining this calculus with a single rule, defined something that was as expressive as a Turing machine. They're inter-expressive.
00:30:25
Speaker
It's not too surprising that we figured out how useful that was from Turing's side, building machines upwards. But it's interesting that now time has rolled on and we've realised that you can go from the maths down to get into exactly the same place. Yeah, that's right. They realised that they were each as expressive as the other. They realised that very early on.
00:30:49
Speaker
but the thing about Turing machines is that real machines, you know, actual, whatever it is, x86 processors, look much more like Turing machines, in which the memory is the tape that is mutated by the processor, like the head of the Turing machine. So there's quite a close connection between
00:31:06
Speaker
your Turing machine model of computation and what really happens in our microprocessors. Yeah, yeah. And the functional programming stuff has taken a bit longer to say: well, it's equally expressive, but if we build a good compiler, we can map it onto the same x86 processor. Yeah. And that actually is the step we were discussing earlier, when we go from lambda calculus down to LLVM or C minus minus. Yep. Okay, so
00:31:33
Speaker
Going back to this core, which I really want to explore more, boiling down, we've got to get high level in a second, but boiling down this functional logic programming language into verse core, you've called it. You've ended up with a batch of, you say, eight rules.
00:31:54
Speaker
Oh, no, I forget the exact number. We have rather more than eight rules; there's like 20 or so. I'd have to look at the paper. But I was meaning the number of syntactic constructs. In the lambda calculus it's only three or four; we have more like sort of eight or 10, I think, in the verse calculus.
00:32:14
Speaker
One thing I want to know is, can you look at the two different cores and get a sense of what the final language is going to feel like to work with? Is there functional logic zen, the tower of it, buried in the core language? A core calculus is the essence of
00:32:39
Speaker
what computation means in that paradigm. Incidentally, the lambda calculus is a subset of the verse calculus. Oh, is it? Yeah, so every lambda term is a verse calculus term.
00:32:52
Speaker
So it's just that the verse calculus has a bit more. That doesn't make it computationally more powerful, because we know that the lambda calculus is computationally complete. But it has these additional functional logic features built into its infrastructure, and rewrite rules to support them. But indeed, I think when you stare at the rewrite rules, if you look at the paper, you won't get a sense of what it's like to write programs in verse in the large, but I think you'll get a sense of
00:33:21
Speaker
what computation means and what you can say in this little language. And what I'm getting a sense of is a version of programming where, so in imperative languages you define what needs to be done, in functional languages you define what is, it feels like in verse you're going to be defining
00:33:48
Speaker
the shape of things as best you know them now, and gradually building up a picture of the constraints around the answer. Yeah, it's a bit like that. And in some ways, that's an even higher level, right? So in functional programming, we say it's declarative. What do we mean by declarative programming? Like what you were saying: what is, rather than, you know, how to compute it.
00:34:10
Speaker
So, crudely, you might say in imperative programming, you say how, and in functional programming, you say what. And we've rather sloppily said, well, that must be better, right? But in a functional logic language, you're at an even higher level. Rather than saying how, can I give an example? If I say, find me the x such that x squared is 22,
00:34:38
Speaker
then I'm asking you to find the square root of 22, right? So that's a very high-level thing. It's not a square root algorithm. It's completely non-constructive, right? It simply tells you a property of the number you want. Yeah. Right? But it's very expressive. That's good.
00:34:57
Speaker
Right? And you can see immediately what it does. If I showed you a square root algorithm, it would be hard to figure out what it does. If I just say my spec is, you know, the x, such that x squared is 22, then you say, ah, now I know what you're doing. So this kind of tell me the properties of the result way of describing what you want a computer to do, because after all, that's what programming is.
00:35:25
Speaker
describe the properties of the result is a rather high-level way to tell your computer what to do. Alright, as we've seen from this x squared thing. Now, in verse, you can say exists x, x squared equals 22, and try to run that, but it will get stuck.

Verse's Challenges with Complex Problems

00:35:46
Speaker
It's not an illegal program, but it's one that we're not going to succeed in executing, because the property we've asked it to figure out is too hard. We've asked it to guess a square root algorithm. But if I say: exists x and y and z such that x is the pair y, z, and the first component of x is 3 and the second component of x is 4,
00:36:15
Speaker
then I've given you enough information, very straightforwardly, to solve those equations and say: ah, y must be 3, z must be 4, and x must be the pair 3, 4. Yeah. Right.
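A crude way to replay both spoken examples is generate-and-test in the Haskell list monad. A real functional logic runtime works nothing like this (it solves the equations by unification rather than enumeration), but the sketch shows why one system is easy and the other gets stuck:

```haskell
import Control.Monad (guard)

-- exists x y z. x = (y, z); first(x) = 3; second(x) = 4
-- The equations pin everything down, so even blind search over a
-- small domain finds the unique solution.
solved :: [(Int, Int)]
solved = do
  y <- [0 .. 10]
  z <- [0 .. 10]
  let x = (y, z)
  guard (fst x == 3)
  guard (snd x == 4)
  return x
-- solved == [(3,4)]

-- exists x. x * x = 22: no integer candidate ever satisfies it, and
-- over an unbounded domain the search would never return; the
-- program is legal, but not executable this way.
noIntegerRoot :: [Int]
noIntegerRoot = [ x | x <- [0 .. 100], x * x == 22 ]  -- == []
```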
00:36:32
Speaker
So verse is, if you like, a bit higher up in this sort of continuum of expressiveness than purely functional programming, but not as high level as: oh, I can just ask you to solve arbitrary polynomials, or prove Fermat's last theorem. There is a danger that we're going to ask it to solve an NP-complete problem for us. Can you
00:37:00
Speaker
give it the spec as the program and expect the answers to just magically pop out? And that's not going to happen. Yeah, the latter part. Exactly. Yeah. So, from a "what questions can this answer conveniently" point of view, what is functional logic programming good for? Ah, yes. So that's a good question. Right. So then:
00:37:24
Speaker
we've got imperative programming, we've got functional programming, we've got functional logic programming. I've always thought, since I was 20, that functional programming is just the right way to write programs, right? And it was very inspiring when John Backus, when he got the Turing Award, gave this lecture called "Can Programming Be Liberated from the von Neumann Style?". He, a famous person, was essentially saying: look,
00:37:49
Speaker
have no truck with these imperative languages, just go gangbusters on functional programming. This is a better way to do the job, right? In the task of telling a computer what you want it to do, functional programming is just better. That is to say, you're less likely to make mistakes, you'll be more productive, and maybe, who knows, it depends how hard we work on the compiler, maybe our programs will run a little slower, but maybe not much. And maybe they'll be highly parallel, who knows? But it's just a better way to do the job. Now,
00:38:19
Speaker
functional logic programming, maybe that's a better way to write programs. Maybe it's just, so I've argued that it's more expressive. To say it's more expressive and high level, it doesn't necessarily mean that it's a better programming medium because it might be extremely obscure, for example, or hard to understand programs for some reason. So I think that Tim Sweeney, my boss, and the founder and chief executive of Epic Games,
00:38:48
Speaker
firmly believes that functional logic programming is just the right way to write programs, in the same way that I firmly believed, you know, at age 20, that functional programming was just the right way to do things, and I've, you know, placed my bets on that. And Tim is putting his money on functional logic programming. The good thing about that is it means that verse takes a view, right? It is expressing
00:39:15
Speaker
a strongly held and well worked out view of what programming should be like. So that's quite exciting for me as a programming language researcher. Do we know that it's going to be a better way to write programs, even supposing programs run fast enough and so forth? Is it a better way to write programs? I don't know, it could be, but I think we'll learn a lot. The only way we'll really find out is by trying it, and that's what we're doing. So Tim is convinced;
00:39:44
Speaker
I think there's an extremely interesting experiment to be done here, and I'm prepared to be convinced, but I'm not yet in the "this is just definitely so much better" camp. That's fair, because on the spectrum of language researcher to marketing person, you're way over at the first end, right? Indeed, yes. That's right. My principal goal in all of this: it's an intellectual adventure,
00:40:12
Speaker
But of course, the intellectual adventure is dramatically informed by practical usage. I'm strongly motivated by the fact that hundreds of thousands of people use Haskell. And I'm strongly motivated by the fact that verse, because it's going to be the programming language in which you enter Epic's Fortnite world and metaverse, has a kind of captive audience there of actually hundreds of millions of people
00:40:41
Speaker
who will start using verse. And that's very motivating too, because I want it to be a language that just works for them and is smooth and it's just sort of obvious what it does. Whether we succeed in doing all of those things, we'll have to see. But that's the aspiration. And we do have a big user base
00:41:07
Speaker
Yeah, it's reminding me of Objective C when Apple decided that you had to write Objective C to use iOS. It's like an entire world of programmers just popped up having to find out how useful and interesting this was. And we're going to see the same with verse and Unreal Engine, I expect.
00:41:28
Speaker
I think so. At the moment, you can program with Unreal using C++. That has its own barrier to entry. And you can also use a visual language called Blueprints.
00:41:39
Speaker
which is a visual language in the sense that you draw boxes and arrows between them, and the arrows can be data flow arcs or control flow dependencies. But of course, there's somewhat of a limit on what sort of programs you can write in Blueprints. So then you have to descend to C++, and verse is sort of filling the gap in the middle. Because there are enormous numbers of users; I mean, we've got 300 million people floating about in the sort of Fortnite universe,
00:42:09
Speaker
Only a very, very tiny minority of those will bite the C++ bullet. But we hope that a lot more of them will find verse a happy place to play. Yeah, I can imagine a lot of people getting started with a visual language hitting the limit, but not quite wanting to leap up the mountain to C++. Yeah. OK, so let me ask this. I'm not asking you to bet on the marketing side, but here you are.
00:42:38
Speaker
an experienced language researcher being told, OK, we've got this language verse, it's a functional logic language, we would like you to boil this down to something semantically rigorous. What's it like to retrofit semantics to an existing language? Oh, well, it's quite entertaining, actually, and it's entertaining because, Tim,
00:43:02
Speaker
who is the progenitor of verse, the unique single progenitor of verse, is fundamentally a geek. He cares primarily about beauty and elegance and a deep connection to mathematical logic and just doing it right.
00:43:23
Speaker
He has, incidentally, astonishingly, built a very successful company. He's also a very good businessman. I would say that his genetic core is, let's do this right. He has been designing verse in his head for the last two decades.
00:43:43
Speaker
Oh, really? That's a long time. Yes, a long time, though he's also been busy making Epic successful as well. So it's been a kind of, you know, evenings-and-weekends activity for ages and ages. And so my job is, as it were, to figure out
00:44:05
Speaker
what verse is, what this thing in his head is, and it's full of interesting and original ideas, and then try to make sense of it all. Now, supposing we discovered it just doesn't make sense. Well, we haven't discovered that yet, but I firmly believe that if we could persuade Tim that something really didn't make sense, he would say, oh, perhaps we can't possibly do that.
00:44:36
Speaker
You must have found that as you dive right into the very guts of it, you find that actually this piece doesn't fit with this piece unless we amend the language. There must be inconsistencies that you only discover when you're trying to formalise something. Well, we are formalising from the bottom up. So we're sort of starting with the core calculus. That's what this paper is about. But
00:45:03
Speaker
Tim has been thinking about this a long time. The whole thing, the whole enchilada, including its type system and verification and transactional memory and side effects and classes and inheritance and so forth, and backward compatibility, the whole thing is jolly complicated.
00:45:26
Speaker
I guess what I mean is, once you formalise this calculus and you start building up that tower, are you going to find you end up with something that doesn't quite look exactly like verse? Are you going to say, okay, there's a new release of verse and we've got to change the semantics because... I don't know. So far we haven't.
00:45:45
Speaker
Okay. So far we haven't come across any insuperable obstacles, as it were, which I think is also testament to Tim's sort of single-mindedness. I'm reminded, incidentally, of Rust here.
00:46:00
Speaker
Rust was a language that was designed with a strong attention to sharing and borrowing and linearity and so forth. It didn't come out of a university or a sort of pointy-headed academic; it came out of a desire to do good practice. I was pretty sure that when somebody came to do the theory behind Rust, they would find all sorts of holes in the type system.
00:46:29
Speaker
Maybe not ones that occurred in practice very much, but just that there would be holes which would allow you to write programs in Rust that simply weren't correct, that disobeyed Rust's guarantees. Astonishingly, that turned out not to be the case.
00:46:47
Speaker
Derek Dreyer and his colleagues have done a lot of stuff on the foundations of Rust and actually didn't find any major theoretical inconsistencies. That's amazing. That means that the original designers had incredibly good intellectual, not just taste, but they actually got it right. It's an amazing thing, but it's jolly complicated, this business about linearity and sharing. Very difficult to get right.
00:47:15
Speaker
It's a remarkable achievement, I think. Do you have any theories on why that might be? Do you think it's just lightning struck? They're just very hard-working and clever, or do you think there's something about computers keeping us honest?
00:47:31
Speaker
Computers do keep you honest, but there's an awful lot of potential programs, possible programs that you could write. And many of them might have, you know, linearity flaws in them that were not really apparent, or that don't show up even when you run them. So no, I think they're just extremely clever and had very, very well educated intuitions that they sort of developed over time.
00:47:53
Speaker
And so you must be claiming something similar for verse, if you're formalizing these things and finding there aren't that many problems. That's right. Well, what I'm saying is: so far, as I say, Tim's intuitions have proved remarkably robust. You know, one might disagree about matters of taste, right? So is it worth having this piece of complexity in order to allow that kind of expressiveness? You can make choices about that. But what we haven't found is that the thing is just a mess and doesn't hold together at all, that it's inconsistent in some way. Right. Yeah. And I think that is testament, as I say, to his single-mindedness
00:48:23
Speaker
and sort of technical intuition. Okay, well, in that case, let's talk about climbing this ladder a bit more, because, I absolutely don't want to sit a test on this, but I have read the verse core paper and enjoyed it. And you hint that you're going to be climbing up into formalizing side effects and types. Yeah. How's that work going? Oh, well, let's talk about types, because that's really the next big thing.
00:48:53
Speaker
So verse's type system is very unusual. Usually, in a language like Haskell, you have the world of terms, and then separately you have the world of types. And types are a somewhat special language. In particular, you can statically reason about types.

Verse's Type System Flexibility

00:49:12
Speaker
So the type checker can say, this program is well-typed, this program is not well-typed.
00:49:19
Speaker
Now, in verse, a type is more like a contract in Scheme. So a type is actually just a function, a function in verse, actually. And it's a function which, given an argument, either fails, in the sense we've been discussing, or, let's say for now, it's the identity function.
00:49:44
Speaker
So the int type is just the function that takes a value, tests whether it's an int, and if it is, it returns it, and if it isn't, it fails. Make sense? Right. Yeah, yeah. So it's almost like a filter, if you like. Yeah. But it fails if it isn't the type. So let's take the type of pairs of ints.
00:50:07
Speaker
The type of pairs of ints is a function that takes an argument, checks that it is a pair (if not, it fails), then applies int to the first component and applies int to the second component. Remember, applying a type means checking that the value has that type. Int is a function, so the pair (int, int) is also a function, also a type, which does this thing.
00:50:33
Speaker
So you can define the type, say, of pairs in which the second component is bigger than the first. That's an unusual type. It's just a function that takes an argument. I mean, you could write this function in verse, right? Just a function that takes an argument, checks if it's a pair; if it is a pair, checks that the first component is an int, checks that the second component is an int, and then checks that the second component is bigger than the first. And fails if not.
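A small Haskell sketch of "a type is a partial function": each checker returns the value if it has the type, and fails (here, Nothing) otherwise. These are illustrative stand-ins, not verse's actual machinery:

```haskell
-- A "type" is a partial identity function.
type Check a = a -> Maybe a

-- The int type: every Int passes. (In verse the test is dynamic;
-- here Haskell's own types already guarantee it.)
intC :: Check Int
intC = Just

-- The type of pairs of ints: check both components.
pairIntInt :: Check (Int, Int)
pairIntInt (a, b) = do
  a' <- intC a
  b' <- intC b
  return (a', b')

-- The unusual type from the conversation: pairs whose second
-- component is bigger than the first. Just an ordinary function.
orderedPair :: Check (Int, Int)
orderedPair p@(a, b) = do
  _ <- pairIntInt p
  if b > a then Just p else Nothing
-- orderedPair (1, 2) == Just (1,2); orderedPair (2, 1) == Nothing
```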
00:51:05
Speaker
So it becomes very much that there's a unification there between type statements, saying we constrain these values to these rules, and functional logic programming in general, saying we constrain these values to these rules. There's less separation between those worlds. Yeah, maybe. But the tight connection, I think, is this:
00:51:33
Speaker
a type is just a partial function, partial in the sense that it can fail. So do you remember we said failure is deeply built into the fabric, and we could have done that with some kind of list monad, but then it would be much more awkward to say "and types do this"? That's what a type does. But now, since I can write new types, new types are just new functions that I write as a user.
00:51:59
Speaker
So I don't have a different language for defining new types. New types are simply ordinary old verse function definitions. But that means that the type checker has to understand, well, these ordinary old verse function definitions you've just written. So the type checker has to be able to run verse programs. Exactly. The type checker has to be able to run verse programs, yes.
00:52:27
Speaker
And of course, you might write something that is extremely hard to check. So it would not be difficult to write a program with a contract or type that was sufficiently complicated that it was hard to be sure that the first argument was going to be bigger than the second. Like, I don't know. Say you had a type that was true only of positive numbers.
00:52:55
Speaker
And you said, is x squared in that type? Well, yes, it is because we know that if x is an int, we know that squares are always positive. But that's a property of numbers, right? So maybe the verifier knows about that property of numbers. But you can cook up some more difficult property of numbers. So you can see you'd rapidly get into arbitrary theorem proving.
00:53:24
Speaker
Yeah, yeah. So instead of calling it type checking, we call it verification. OK, and the idea is still going to be that, just as the purpose of a type checker is to eliminate certain classes of bugs. So the slogan is "well-typed programs don't go wrong", where "going wrong" means some particular class of errors, like adding an integer to a Boolean, right? Haskell means that could never happen at runtime.
00:53:54
Speaker
So if you imagine a runtime which manipulated integers and Booleans as sort of tagged values, so you could tell them apart at runtime, and then your addition operation did a runtime test, you would know that that runtime test would never fail. And therefore, you can omit all those runtime tests, and indeed the tags that distinguish integers and Booleans. Now, verse is going to be like that in the sense that
00:54:24
Speaker
verified programs don't go wrong. So if the verifier says thumbs up, your program is verified, then a certain class of errors cannot occur, and the implementation can be correspondingly more efficient. But it's possible that the verifier might say: I'm sorry, I can't prove that your program verifies, because it requires me knowing, say, that x squared is always positive.
00:54:48
Speaker
And I haven't been taught that yet. So over time, I expect the verse verifier to get more powerful, smarter. Right, as it's taught rules for more and more complex properties. Exactly. Yeah. Unlike Haskell, there won't be a clean, bright line between programs that can be verified and programs that can't. Yes. Right. Because it'll be a function of just how smart the verifier has become.
00:55:13
Speaker
On any one day, we might hope to characterize as precisely as possible where that boundary lies, but it is a boundary that will move over time. Do you think that verifier will ever be user-manipulable, in that you could write more rules for the verifier? Possibly. Yes, I mean, you want to be a bit careful, because if the user adds a rule that is simply false, then the program might crash. And it might crash in a truly horrible way,
00:55:43
Speaker
like it might just simply be a segfault. So you'd want to be a bit careful about allowing it. But if you
00:55:52
Speaker
actually add some new things to the verifier and say, make this part of the trusted code base, and if this is wrong, all bets are off, then that's an imaginable possibility. Yeah. Okay. And do you think, I mean, I'm thinking: if the verifier is just more verse functions, this leads into side effects. I could, in theory, define a type which was the
00:56:17
Speaker
the type of all strings which are a valid username in my database. Yes, and that might vary over time or space and who knows. So I think for us, types are going to be pure functions. So now we need an effect system, right, to ensure that, so that we can check that
00:56:39
Speaker
types are pure functions. And indeed, verse has an effect system too. So you can see why it gets complicated. Where are you on formalizing the effect system? And are you taking a similar approach to Haskell?
00:56:56
Speaker
Not a similar approach to Haskell, because Haskell uses these monad things, right? We don't want to get saddled with monads and types. Because, yes, from a programmer-convenience point of view, just being able to rely on sort of left-to-right sequencing for side effects is extremely convenient. So
00:57:22
Speaker
languages are broadly classified: there are languages like Haskell that force you to use monads for anything side-effecty, and that's a bit of a hair shirt, but it's an excellent discipline, in my humble opinion. But it's a design choice, and there are other possible design choices, like verse's, which is
00:57:41
Speaker
to not make such a strong syntactic and type-level distinction between side-effecting computations and pure ones, but instead have an effect system: the verifier, for any term, is going to be able to check whether it's pure. And you can see, fundamentally, crudely put, that's not very hard, right? Is this expression pure? Well, does it use any side effects directly?
00:58:07
Speaker
And does it call any functions that can use side effects? Yeah. So at its crudest level, you can see there must be some kind of purity checker that isn't very hard. Yes. Right. But to do a systematic job of an effect system is indeed quite a challenge. And that's part of what we're up to at the moment. Yes.
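At its crudest, that purity check is just a syntactic walk over the program. A toy Haskell sketch, with an invented expression tree and an invented set of known-effectful functions:

```haskell
import qualified Data.Set as Set

-- A toy expression tree; the constructors are invented for illustration.
data Expr
  = Lit Int              -- a constant
  | Call String [Expr]   -- call a named function on some arguments
  | Effect String        -- perform a side effect directly
  deriving Show

-- Functions we already know may perform effects.
effectful :: Set.Set String
effectful = Set.fromList ["print", "readFile"]

-- Pure iff it performs no effect itself and calls nothing effectful.
isPure :: Expr -> Bool
isPure (Lit _)       = True
isPure (Effect _)    = False
isPure (Call f args) = f `Set.notMember` effectful && all isPure args

-- isPure (Call "add" [Lit 1, Lit 2]) == True
-- isPure (Call "print" [Lit 1])      == False
```

A systematic effect system has to be much cleverer than this, tracking effects through higher-order functions and data structures, which is why it's described as a real challenge.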
00:58:32
Speaker
I should say, by the way, that Tim has an implementation of all of this embedded in his C++ implementation, a compiler written in C++ that he's been developing over the last 20 years. It's only five or ten thousand lines of C++. It's incredibly small. He is a virtuoso C++ programmer. But I think he would be the first to admit that it's
00:59:03
Speaker
an amazingly good stab at all of this, but it is not tight and elegant and complete and definitely right in every particular. So it's an excellent stab, and it's a way that he has been using to educate his intuitions. To go back to your question about computers keeping you honest: I think building that has indeed educated his feedback cycle throughout. But, as it were, we're not building directly on that; we're trying to use it as a source of inspiration
00:59:31
Speaker
to do this more foundational thing that we're talking about. Yeah, and that presumably is going to occupy you for a good few years to come. I think so, though in some ways the scary bit is that we are designing the aeroplane at the same moment as we're launching it into the sky. There is a verse that has been released as a product. If you just Google for "verse language epic", you'll rapidly get to it.
01:00:01
Speaker
It doesn't have any of this functional logic stuff in it, but it does have other things like classes, inheritance and modules. So it's a language that is already usable for building games and creative stuff in the sort of Fortnite universe, programming against Unreal. So it's already being used by thousands of people.
01:00:26
Speaker
So we're doing this foundational thing on the side. To mix metaphors: we've launched one rocket into orbit, and now we're building some more elaborate and sophisticated thing, and we're going to launch that into orbit too, and then they're going to join up in the sky. So the scary bit is that we need to do that quickly enough
01:00:48
Speaker
that we don't go on for years and years and years doing this foundational stuff, because the aeroplane is already flying in the sky. We want to meet up sooner, within a year or two rather than in ten years. Yes, I think working out all these technical details and writing papers about them will occupy us for several years, but there is a pretty big urgency at the same time, which I personally find a bit scary. I can understand that. One last question then.
01:01:18
Speaker
Do you think that these ideas will be pinned particularly to Verse, or are there juicy things that other language designers could pull out of this and steal? Oh, well, I mean, one of the whole purposes of writing the paper
01:01:34
Speaker
is to distill the essence, right? Of course, any particular language implementation will be encrusted with particulars. Haskell is no exception, by the way: it's an enormous language with a lot of detail to it, and you need to get to know its ecosystem, and Cabal and Stack and HLS and on and on, right? And Verse will similarly be encrusted with lots of stuff. But, you know, the purpose of this
01:02:03
Speaker
sort of research enterprise is to isolate the key ideas and distill them into a form in which they can be digested by other language designers and academics, right? And so indeed, it is my goal that our first paper, and hopefully subsequent papers about Verse and the Verse calculus, will be influential, will be much more than just saying, well, this is how Verse works, right?
01:02:33
Speaker
But they will rather embody and make precise a collection of ideas that may, I hope, be influential in their own right. As a language researcher, your goal is to make the ideas bigger than any one language.
01:02:53
Speaker
Oh, indeed. When talking to research students about writing papers, I often say that the least durable thing that we do is to build artifacts, like compilers, in my case. The most durable thing we do is write papers.
01:03:11
Speaker
Because those papers, in the best case, will be read and will inform things decades later, when the particular language implementation is dust. I mean, just look at the fact that we are still talking about Alonzo Church's ideas on the podcast today, you know, a hundred years since he was first thinking about them. That's amazing. When we go to a concert hall, we listen to people reading Mozart's papers.
01:03:37
Speaker
He called them symphonies, but they are his ideas, written down, and symphony orchestras read them. Of course, there's a great deal of expressiveness and nuance in what they do. But the fact is that Mozart infected our brains with an idea that we still find rewarding.
01:03:57
Speaker
And I think the purpose of research papers in some ways is to infect the brain of the reader with an idea that is so persuasive and seductive and exciting and creative and interesting that they can then use it to inspire and build on new things. That's my hope. Yeah. We had a guest a couple of weeks ago who was hoping to build a version of C that would last 100 years.
01:04:27
Speaker
I certainly hope he succeeds, but what I can definitely see is some of the ideas you've been percolating still being discussed 100 years from now and inspiring new languages. Yeah, I think, yes. Programming languages actually are an example of something that can be surprisingly long-lived; COBOL is still alive and well. Yeah. But not in a good way, I think. Right. So, you know, COBOL is no longer
01:04:57
Speaker
a source of inspiring ideas; it's just that we have so much code written in it that we still have to run those compilers. So I suspect that if Haskell itself is still around in 100 years, it would be not in a good way. Whereas I hope the ideas might be visibly embodied in some more wonderful thing that we have worked out by then.
01:05:16
Speaker
Yeah, you'd hope that the ideas there are fundamental enough that they'll outlive the programs, and us, right? Which actually leads me to the other big arm of your work, which is passing the big ideas on to the next generation.

Teaching Computing as a Foundational Subject

01:05:31
Speaker
Tell me about that. Yeah. Computing education is super important, and I think we, as a, what's the word, as a professional community,
01:05:42
Speaker
owe it, not just to our discipline but in particular to our children, to be thoughtful about what our children should be learning about our discipline, in a way that would make them empowered citizens, ones who think it's just the most exciting thing since sliced bread, which it is.
01:06:06
Speaker
And let's not put up with an education that's substandard in that respect. So I think this is an endeavor that we should all be involved in in some way. Where would you go with that? Because I'm thinking of my own children's computer science education: it's a bit of Python, a bit of Squeak, and too much Excel and PowerPoint. Yes, well, it's probably a bit better than it was.
01:06:35
Speaker
When we started Computing At School, CAS, the organization that is sort of behind a lot of the changes in the curriculum, the national curriculum said you should do ICT, information and communication technology. It was very technology-focused. It was all about artifacts and not about ideas, to go back to our earlier conversation. And it was often no more than
01:07:00
Speaker
learn PowerPoint and Word, Excel if you were lucky. So then, over the next five to eight years, from about 2007, we managed to reposition the UK's national curriculum to state explicitly that all children should learn the fundamentals of computer science in the same way that they learn the fundamentals of natural science.
01:07:30
Speaker
So that reimagines computer science not as a narrow, rather vocational, operational skill that's useful for operating computers, but rather as a foundational discipline, like maths or like physics, an elementary understanding of which is kind of essential for understanding the world that surrounds you and for being a citizen in it who has some agency,
01:07:59
Speaker
some ability to influence events, some understanding of what's going on, the ability to make informed choices and well-judged decisions. Do you think there's a parallel? I mean, one of the big reasons we teach children science is not so they can use a Bunsen burner, but so they can think critically. Yeah. But it's a bit more than just thinking critically. They also need a knowledge base.
01:08:26
Speaker
If you knew nothing about heat, combustion, mass, velocity, nothing about any of that, but you were very good at critical thinking, so your logical processes were good but you had no knowledge, that wouldn't be any good. You need to have an elementary understanding of how to think logically, yes, the scientific method, all that, but you also need to know some science facts.
01:08:53
Speaker
Otherwise, how can you possibly make informed choices about global warming or about whether it's safe to unscrew the front of your electric light switch? In the same way, I think the critical thinking and logical thinking are absolutely part of computer science. Indeed, computers are rather good at training you.
01:09:14
Speaker
to think logically, because they are so non-judgmental but so absolutely brutal about exposing flaws in your logic. The program just doesn't work, and it doesn't say, oh, I feel sorry for you today, I'll make it work.
01:09:31
Speaker
Which is good when it's controlling the plane that's about to land you. Yes. So just as I think all children should learn some elementary aspects of natural science, I think they should learn elementary aspects of computer science, so that they're in a position, with enough of a knowledge base as well as logical thinking skills, to make well-informed choices about the digital world that surrounds them so pressingly and intimately.
01:09:59
Speaker
And practically for you, is that influencing the curriculum at the national level? Yeah, absolutely it is. I mean, as I said, it's completely transformed. Before, it said you should learn about digital technology. Now it says all children should learn the fundamental principles of computer science, which is a huge challenge for schools. If you're a primary teacher, you might think, computer science? I mean, at the time it was put in the national curriculum, we were not long past the time when people thought, well, computer science, that's just a university course, isn't it?
01:10:30
Speaker
And then suddenly we say, oh no, primary school children should have some elementary understanding. In the same way, their understanding of physics or biology is not very deep, but a primary school teacher sees it as part of their task to give an elementary understanding of natural science. And so there's a big challenge there, because
01:10:52
Speaker
it's easy enough to say children should get an elementary understanding of the principles of computer science and should have practical experience of writing programs, because that's like the lab work of the subject, if you like: programming is to computer science as lab work is to physics, you know, it's really important. It's easy enough to say that, but to turn it into a practical reality you've got to say, well, okay, so specifically, what should children learn at primary school?
01:11:20
Speaker
And how should they learn it? In what order should they learn the concepts? And then, even once you've got that all laid out, you've got to say, how can we train teachers to be good at doing that? Right? Same at secondary schools. That seems like one of the hardest tasks. Yeah. So ever since this became part of the national curriculum, which was in 2014,
01:11:44
Speaker
Computing At School, this sort of guerrilla movement, in partnership with BCS, the professional society for computing in this country, the British Computer Society, the Chartered Institute for IT, we've been working together to answer all of those questions. And the government, having changed the curriculum, then waited five years, but eventually they did, hooray,
01:12:11
Speaker
find some money, actual money, to build something called the National Centre for Computing Education, which is a national teacher professional development organisation aimed at upskilling teachers and generating the teaching resources and materials and the sort of curriculum sequencing that we're discussing. So that's amazing. That started in 2018, and I became its first chair. Yeah, wow.
01:12:35
Speaker
It's on its second iteration now, and I hope that there will be a third. Impressive. Quite a lot has taken place in this country, which is not at all to say job done, because it's a big job too. We are far from the place in which every child gets a great computing education. Yeah.
01:12:58
Speaker
But circling back to why the audience for this podcast might be interested, and I would like to say should be interested: I think this task, the one that I've just described, of saying what exactly should we teach, when and how, and with what materials, all of that is too important and too difficult just to leave to school teachers to make up for themselves,
01:13:26
Speaker
or even to some government-funded quango like the National Centre for Computing Education. I think we as a technical community should be involved as individuals, and our companies should be institutionally involved, in trying to make computing education just great across the whole country, in every classroom. Okay, so I understand that
01:13:52
Speaker
if I want to move Verse forward in some way, I would go and download it, write some code in it, or download the paper and try implementing it. But I have no idea how I would get involved in moving the education of children in computing forwards in the way you're describing. How would someone get started on that project? Yeah. So, the first step I can give you is: join CAS, right?
01:14:20
Speaker
Computing At School. Many of the members of CAS are teachers, but a third to a half are IT professionals of one kind or another. Then there are various thematic interest groups, there's one on AI, there's one on primary, that you could join.
01:14:42
Speaker
Joining CAS, then going to the CAS national conference, just becoming better informed about it. There's probably a CAS local community, a sort of physical getting-together of teachers in your area, so going to one of those would mean you met face to face with some actual live teachers who are grappling with this stuff. Because for me, it's not so much about doing some grand national-scale thing. I think what we as individuals and even companies can do is just get stuck in locally.
01:15:10
Speaker
That means meeting teachers, not saying, now I'm going to tell you what to do, but getting alongside them and saying, I can see you've got an exciting but challenging job here. How could I help you? Is there any way in which I could be helpful? And is there a way in which I can help without necessarily being in a room of 30 children? Yeah, for example, you might be a mentor to a teacher,
01:15:39
Speaker
or possibly sometimes a mentor to a child, particularly at the later ages, like sixth form. So there are a number of ways for professionals to get involved in this kind of thing. One is that you can become a STEM ambassador. What's a STEM ambassador? So,
01:15:58
Speaker
you can just Google for STEM ambassadors, but basically it means someone who's on a national register of people who are willing to help in some way, and schools, when they're looking for speakers, will often look for STEM ambassadors. When you're a STEM ambassador, they will also do all the CRB checks, so that, you know, all the legal obligations are done and you can go to a school, like, in person.
01:16:23
Speaker
Yeah, they have restrictions on that, quite sensibly. Exactly. Yes. And all that is kind of done by becoming a STEM ambassador. So it's really hard to give you a crisp answer for how to get involved, because education is complicated and schools are different. But say you've got children and they're at a particular school: you could do worse than find out who is the head of computing in the school and say, could I come and talk to you about what you're doing? And, you know,
01:16:52
Speaker
can we talk about whether there is any way in which I, perhaps by recruiting others, could be helpful? And it might be anything from, well, just come and give a talk about what you do in your life. Because many children think that computing is all about
01:17:13
Speaker
you know, spotty youths in windowless basements staring at glowing screens, right? But it's a creative discipline in which people do lots of different things, and your role as a podcaster is very different to
01:17:29
Speaker
others', you know, than the spotty person in a windowless basement writing code; you're a professional communicator, right? So computing is a very diverse profession. That's one thing that a teacher could find helpful, because if they just say to their children, look, it's an interesting and diverse profession, the children go, yeah, yeah, yeah, right. But when a person comes
01:17:56
Speaker
and can speak in a persuasive and articulate way about the richness of the subject and all the things you could do, that's much more compelling. Yeah, yeah. Okay. Final question, because this is an international podcast: do you know if there are equivalent organisations in, say, America? Oh, yeah. Yeah, definitely, all around the world. So just to say, CAS is UK-centric, but you can definitely join CAS
01:18:25
Speaker
from anywhere. We have quite a few international members. That just means you have access; you can talk to other members of the community. We call it a community of practice. Here's a good model: it's like an open source project. If you, I don't know, subscribe to The Economist, you pay them a subscription in exchange for a service. If you join the GHC open source project, you don't pay anything;
01:18:55
Speaker
you bring the contributions that you have, which you give for free, and in exchange you get the richness of the community, you know, all of the free stuff that that group of people has produced. So the one is a transactional exchange; the other is more a shared, grassroots, bottom-up gift economy.
01:19:18
Speaker
Yes. CAS is the gift economy model, right? Very much. It's not a subscription you pay to get a service; you just become part of a community. And of course, that means you make of it what you can. So, going back to your international question, people abroad
01:19:36
Speaker
can still join CAS. And anywhere, there will probably be a local version in a particular country; if you need help finding out who, then I can probably help you, because I've talked to people in lots of different countries at various times. In America, there's something called the Computer Science Teachers Association, CSTA, which is a sort of equivalent of CAS, a little more teacher-centric than CAS is, but the equivalent of CAS in the USA.
01:20:01
Speaker
And I think in every country there would be activists who are busy trying to do exactly what CAS is trying to do here. Cool. Okay. I'll put links to all of that in the show notes and hopefully some listeners will get involved. Yeah. That would be great. I do think it's exciting and rewarding because everybody cares about education.
01:20:23
Speaker
You can't find a person on the planet who says, well, education, who cares, just doesn't matter. Maybe there are a few, but not many. Also, for everybody listening to this podcast, you probably think, computer science, programming, this whole world, this is just the most exciting thing. That's why I've devoted my professional life to it. Happily, it's well paid as well.
01:20:49
Speaker
So I just want to share that. And you probably also have a strong sense of the richness and depth and excitement of the discipline. I want to share that with our young people, so they don't get the wrong idea, and so they have plenty of input and opportunities to find good on-ramps into computing. So all of them end up well informed,
01:21:11
Speaker
able to make good judgments, and some of them, particularly underrepresented groups like women, find a pathway into computing as a discipline. Because I think we need lots of people there, and the better informed, the better. Yeah, absolutely. And we need to help. We can't just leave it to the education system to do. They've got the message, they're trying hard, but they're educators; they need subject expertise. That's us. Yeah.
01:21:41
Speaker
Good. Okay, ideas and education, a perfect note to end on. Yeah, great. Simon Peyton Jones, thank you very much for talking to us. It's been a lot of fun, thank you.
01:21:51
Speaker
Simon, thank you very much. If you want to get involved in CAS in the UK or CSTA over in the States, there are links to those organisations in the show notes. And if you're not in those countries but you know of a similar organisation in your country, please drop me a line. I'll gladly add the links for wherever we can.
01:22:11
Speaker
How do you drop me a line? Look in the show notes for my contact details. You can find me on Twitter, LinkedIn, Mastodon. You can leave a comment or a like if you're on YouTube, a rating if you're on one of the audio platforms. You can subscribe and rejoin us next week, or you can share with a friend and spread the love and knowledge. What else is there to say? I have started scratching out an evaluator for functional logic programming in PureScript, of all things.
01:22:40
Speaker
I make no claims about how far I'll get, but it's good brain food and I've been enjoying it. So if you want to do something similar, you'll find a link in the show notes to Simon's paper, which explains it pretty well for an academic paper, very well for an academic paper, in fact. That should give you plenty of brain food, and if you want it, plenty of homework until next week. I've been your host, Kris Jenkins. This has been Developer Voices with Simon Peyton Jones. Thanks for listening.