14 | ChatGPT as a Glider — James Intriligator

S1 E14 · MULTIVERSES

Large language models, such as ChatGPT, are poised to change the way we develop, research, and perhaps even think. But how do we best understand LLMs to get the most from our prompting?

Thinking of LLMs as deep neural networks, while correct, is not very useful in practical terms. It doesn't help us interact with them, much as thinking of human behavior as nothing more than neurons firing won't make you many friends. But thinking of LLMs as search engines is also faulty: they are notoriously unreliable on facts.

Our guest this week is James Intriligator. James trained as a cognitive neuroscientist at Harvard, but then gravitated towards design and is currently Professor of the Practice in Human Factors Engineering and Director of Strategic Innovation at Tufts University. 

James proposes viewing ChatGPT not as a search engine, but as a "glider" that journeys through knowledge. By guiding it through diverse domains, it learns your interests and customizes better answers. Dimensional prompts activate specific areas like medicine or economics. 
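The "dimensional prompt" idea can be sketched as a small prompt-building routine. This is only a rough illustration of the glider metaphor as described in the episode; the function name, domain list, and prompt wording below are my own choices, not James's exact method.

```python
# A rough sketch of "dimensional" or "glider"-style prompting as described in
# the episode: steer the model through several knowledge domains before asking
# the real question. The function name, domain list, and prompt wording are my
# own illustrative choices, not James's exact method.

def dimensional_prompts(question: str, domains: list[str]) -> list[str]:
    """Build a 'flight plan': one prompt per domain, then a landing prompt
    that asks for a synthesis of everything visited along the way."""
    plan = [
        f"Briefly, how would an expert in {domain} approach this: {question}"
        for domain in domains
    ]
    plan.append(
        "Drawing on all the perspectives above, give a combined answer to: "
        + question
    )
    return plan

# Each prompt would be sent to the model in turn within one conversation,
# so the earlier domain passes shape the final answer.
plan = dimensional_prompts(
    "How should hospitals schedule nurses?",
    ["medicine", "economics", "operations research"],
)
print(len(plan))  # 4 prompts: three domain passes plus the synthesis
```

The point of the sequence, rather than a single question, is that the earlier turns accumulate in the conversation context and tilt the final answer toward the domains visited.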

I like this playful way of thinking of LLMs. Maybe gliding (LLMs) is the new surfing (of the web). 

Transcript

Introduction to Metaphors and LLMs

00:00:00
Speaker
Throughout history, metaphors seem to have played an important role in introducing new technologies. Whether it's Al Gore talking about the internet as an information superhighway, or people describing cars as iron horses, we often relate to a new technology by modelling it in terms of things with which we're already familiar. And this can either help or hinder our understanding, depending on how good those models are. It's my belief that for generative AI and large language models in particular,
00:00:29
Speaker
we're going to need really good ways of thinking about these, really good crutches or heuristics that help us get the most from them and prevent us from falling into error.

James Intriligator on LLMs and Design

00:00:40
Speaker
And I guess this week's guest agrees. James Intriligator is a professor of design. He leads the human factors engineering program at Tufts University, although he actually did his PhD in cognitive neuroscience at Harvard and only later moved into design. So he's got quite a varied and interesting background.
00:01:01
Speaker
And lately he's been thinking about this issue of how we think about LLMs. And to preview a little bit of what we talk about, he has an intriguing model of large language models as sort of gliders, as things for which we can set a path through space, a space of topics, and they will go on a kind of flight plan following our instructions.
00:01:25
Speaker
And I think this is in contrast to some other metaphors, for example, the stochastic parrot metaphor and the blurry JPEG of the web metaphor that's due to Ted Chiang. And I actually think that all these metaphors have some validity. But what I really like about James's glider model is that it helps us think about how we prompt in a way that, say,
00:01:48
Speaker
the stochastic parrot doesn't really, right? Thinking of LLMs as a stochastic parrot would lead you to think that you get no value from them. They're just going to randomly repeat the things that you put in. So this is a pretty varied discussion. We start talking about design in general and then gravitate towards LLMs, which for both of us are just a source of continual fascination and diversion right now.
00:02:14
Speaker
And I think that's probably enough preamble.

Constraints and Opportunities in Design

00:02:16
Speaker
So let's start the main amble. I'm James Johnson. You're listening to Multiverses. Hi, James Intriligator. Welcome to Multiverses. Thanks. Thanks so much for having me on the show. Looking forward to chatting with you.
00:02:43
Speaker
So you have a really varied background, lots of disciplines going on, but you have gravitated towards human factors engineering, which is...
00:02:53
Speaker
as a field within design. Can you perhaps situate us and give us your take on what design is? Sure. It's a great question. There are a lot of different ways of thinking about design, and designers design all kinds of stuff, everything from artistic fashion design, jewelry design,
00:03:16
Speaker
Spheric design, space design, all these kinds of different designs. And I guess the way I like to think of it is that a designer is just someone who looks for a way to navigate through constraints and identify opportunities to create something that they want to create in a sense. It's a pretty abstract
00:03:32
Speaker
way of thinking about it, but that's how I like to think of it. So if you're an artist and you're trying to do visual art, you might be doing design to put down colors and patterns and shapes to try to convey an emotional purpose or emotional meaning, right? So in that context, that's what they are trying to design. Within the world of human factors engineering, primarily we're trying to design
00:03:52
Speaker
products, systems, services, experiences that work. And work is, again, kind of a bit of a relative definition, but it might be, for instance, you're trying to design a better website and you want it to create the right experience for someone, or you're trying to design a new kind of business, a new entrepreneurial enterprise, or a new social enterprise. Whatever it is you're trying to design, the challenge for the designer is to really identify any constraints. It's much easier to do design if you have more constraints.
00:04:22
Speaker
The example that I've talked about many times in the past is that if you're trying to design a mug for someone, it really is much easier to design the perfect mug if you really know for whom you're designing that mug. If you're designing a mug for a four-year-old kid, it's a very different endeavor than if you're designing a mug for a 15-year-old person to drink coffee from. So that's how I like to think of design. It's really just about finding opportunities to navigate through this constraint space to reach your desired goal.
00:04:52
Speaker
Yeah, it strikes me, we had Christian Bök, the Canadian poet, on, and one of the ways that poetry works, one of the forms of poetry, is to bring constraint in, and that's a way of, again, narrowing the field of possibilities. But I think one difference between poetry and the design we're talking about here is you can choose to use constraint in poetry or not, and
00:05:20
Speaker
Christian has this kind of take on different ways that poetry can be done. You can be more self-conscious or unselfconscious, and you can also bring in constraint or not, and emotion or not. So there are different dimensions to that.
00:05:48
Speaker
with design, I guess, the constraints are always forced upon you, because you have a physical object which has got to meet some needs. It's like there's a brief given to the designer saying we need something that does this, and there's a lot of stuff that's unspoken there as well. Like, yeah, we need a mug, but we're not telling you for whom; you need to figure out exactly who it's for, right? All the things that are
00:06:14
Speaker
kind of hidden in our sort of ideas for this mug. That's right. Yeah, that's why usually when you're doing the design work, there's a design research phase where at the beginning you spend quite a bit of time trying to uncover
00:06:29
Speaker
you know, exactly whom are we making this mug for? What are they like? And usually in the design process, you bring in all kinds of different constraints, let's say. So in the world of engineering, where I'm now a professor in the Department of Mechanical Engineering,
00:06:44
Speaker
often in mechanical design, you're looking for physical constraints, the right materials, the right functions, the right stiffness, hardness, et cetera. Or, at the origin of human factors, the question would be: what are the human physical constraints, right? So the mug has to be the right size to fit a human hand, or a chair has to be the right size
00:07:03
Speaker
for a human to sit on, or for 99% of humans to sit on, let's say. So those are physical constraints. In the world of design that I spend more time in nowadays, it tends to be more around digital design or cognitive design. So if you're designing an app, there's very little physicality in the constraint space.
00:07:21
Speaker
Now there's some; you don't want the button to be too far from the thumb if it's intended to be used on a smartphone. But much more of the constraint space there is about the cognitive constraints. So you want the right information architecture, you want the right menus, you want it to look right. And it's also about what I like to think of as the emotional constraints. So, for instance, you want the mug, let's say, or the digital app, whatever it is, to have the right branding,
00:07:45
Speaker
to create the right emotional experiences. So there's a whole range of constraints, everything from the very physical, either human physical or literally physical, physical, mechanical things, all the way up to brand, emotionality, system constraints, legal constraints. Nowadays, of course, sustainability is another one of the constraints you want to bring in.
00:08:05
Speaker
So yeah, there's tons of different constraints that can come into play. And I think that's what makes design really an art: it's up to the artist, or the poet in the example that you gave, to decide what constraints they want to care about, how much they want to care about them, when they want to care about them, right? So they may break convention, I think, like E. E. Cummings and poets like that, who
00:08:24
Speaker
throw away a lot of the traditional semantic or syntactic constraints, whichever way you think of it, and they play in that space. That's one of the elements that brings artistry into the design process: you get to decide which of the constraints you want to acknowledge and play within and which ones you want to bend.
00:08:48
Speaker
Yeah, certainly there seem to be some objects which people love, even though they don't seem super practical. And I guess there, you know, it's maybe drawing on some of the emotional aspects. And perhaps sometimes just making something awkward can make it more lovable, I guess. Would it be fair to say as well that that kind of transition from
00:09:10
Speaker
the more physical constraints to then looking at the cognitive ones that you mentioned and then emotional. Has the emphasis on those factors changed over time? Have we previously really just considered more the ergonomics, let's say? And then as we've invented digital interfaces, as you said, it's become less about ergonomics or we
00:09:40
Speaker
Perhaps we've just figured out the problems there and more about a new set of problems that then require a new way of thinking about design, a new set of constraints to take into consideration.
00:09:53
Speaker
Yeah, that's a great question. I mean, part of it I would

Automation and Personalization in Modern Design

00:09:56
Speaker
say is time evolution. But I think another part of it is that we go through these different phases. So when we entered the, well, let's say pre-industrial revolution, there was a lot more focus on handcrafted designs and customized, bespoke solutions, so to speak. And there was a lot of craftsmanship and artistry involved in designing products.
00:10:21
Speaker
Again, it's both a challenge and an opportunity: with automation, you often have to give up the specialized, custom, hand-finished bits of a solution. So if you want to mass-produce pottery, for instance, it's very difficult to have it be artistic and bring in the emotional constraints, because it's produced en masse. It's one of the, well, not nice trade-offs, one of the required trade-offs of things like automation that you often have to give up on some of these
00:10:50
Speaker
finer emotional customizations or personalizations. Now, of course, one of the nice things nowadays is that we can do a lot of this kind of design to print and custom design, and you can go online, design your own. One of the nice examples from the early days of the internet was M&Ms, M&M Candies. They started a whole new line of business where you can go online and actually design your own M&Ms. It was really quite a successful
00:11:18
Speaker
thing. And it's a great example, I think, of where you tend to think of something like a candy, an M&M, a Smarty, any of these kinds of candies out there, as just generic, you know, little bites of deliciousness. But if you can actually put a message on it, for instance, this is one of the use cases where many people use those, then it does bring in this emotional context and the emotional content. And then it does become much more of an artistry type of thing.
00:11:42
Speaker
It's an interesting challenge. As technology keeps moving ahead, we go through these phases where we move towards standardization for various reasons, not just in terms of automation of manufacture, but also for transport, for selling, all these kinds of things. It does a lot if you can make it standardized and make it easy for people to produce it, to sell it, to consume it, to dispose of it, to maintain it, all of these things.
00:12:08
Speaker
It's much easier if you can actually have it standardized. But of course, if you want it to be personalized, if you want it to have that extra emotional content, then we need to maybe either move technology forward even further, or bring in new technologies, or take a step back. There's been a movement towards reuse of goods. There are wonderful companies out there that do circular economy work, like Mud Jeans is one that I just happen to know about.
00:12:34
Speaker
And there you can, excuse me, after you're finished with your pair of jeans or you've outgrown them or whatever, you can send them back and they become sort of the narrative of the previous owners becomes part of the history and part of the emotional content and the narrative literally of the jeans themselves. So it's kind of an interesting evolution of design and kind of this movement into and out of customization and standardization. Yeah, that's interesting. That hadn't struck me before that.
00:13:02
Speaker
So the confluence of industrial technologies, manufacturing technologies, and information technologies has in some ways permitted that personalization. But you have to have very good manufacturing processes in place first. You need to get those down. I'm thinking of Henry Ford saying you can have any color car you like as long as it's black. And then something similar happened with the early Apple
00:13:31
Speaker
laptops or computers. They were all the same color. And then they said, oh, well, we can make some variations on this. You can get a rose gold iPhone. And they were very resistant to that until they knew they had the scale that they needed. And they'd ironed out every other quirk. And they could add those flourishes and personalizations, maybe.
00:13:55
Speaker
Exactly. It really is quite challenging to get the ability to actually customize things to that level, whether it's printing the individual person's name or message on the candy or on the iPhone itself. For a while, I remember when Apple introduced the ability to get your name or whatever message you want on your iPhone, that became a big deal for consumers out there and it became another opportunity for them to make more money.
00:14:22
Speaker
I have to ask now, do you have a personalized message on an Apple device? No. Actually, the first iPod I bought, I did get a little, just my name and email address. It wasn't really intended to be that customized. It was more, again, a functional mark on it, so I couldn't lose it. Or if I did lose it, it could get back to me, so to speak. So yeah, I don't do a ton of work
00:14:47
Speaker
where I like things to be customized for me. I know some people do. Some people like to have very customized things. For me, I'm fine with more of the functional side of things, I guess. Okay, very good. I imagine there's got to be a lot of interplay between the designers and the engineers in this sort of process. At least my experience of trying to build things has been
00:15:14
Speaker
you get some amazing designs, and then you give them to the people who've got to build them. And they're like, no, we can't do that. Is that something? Has there been an evolution in how that process works? Is there better integration, do you think? Do we understand better now how to not throw things over the fence, as it were, and actually work together to make realizable designs?
00:15:38
Speaker
beautiful products. Yeah, I'd love to say that we've made huge strides and that it's all sorted out now, but unfortunately that really is not the case. A lot of my students who graduate from my Human Factors Engineering programs, they end up going off to work in the design groups or the user experience groups of companies. And it's still shocking to me how often those types of
00:16:03
Speaker
groups are sidelined. The design process is often thought of as just something that happens briefly at the very beginning and again at the very end. There are companies for which that's not true. There are a lot of companies that are well known as design-led companies. Apple is one that initially was very much design-led, and they still are to some extent, although I'm a little skeptical of how much of that is really a huge part of their DNA nowadays, so to speak. Dyson is another example, from Britain, where there's
00:16:32
Speaker
some lovely design that goes into every product. So there is an acknowledged, at least spoken belief that there really has to be this interdisciplinary, intergroup collaborative effort where engineering and design, business and marketing, all these groups kind of work together. Unfortunately, there is
00:16:52
Speaker
often a kind of bottleneck at various points. For a while, I worked at a company where I was more in the marketing group, and it was a technological solution provider, et cetera, et cetera.

Integration of Design and Engineering

00:17:03
Speaker
It was often the case that the marketing people were brought in only at the very end. Once they've kind of crafted the solution, they give it to marketing to put the bells and whistles on, to put the final little touches on. And then of course, at that point it's far too late and they realize that they've missed many of the features, many of the functions that the actual
00:17:20
Speaker
customers really wanted. It's one of the ways that designers, marketers, et cetera, can really add huge value to the design engineering equation. If you're designing a fan, functionally, material-wise, it's a pretty straightforward effort. But if you really want a fan, or whatever the product might be, that is loved by consumers, desired by consumers, and that people are willing to pay a premium price for, then you really need to understand the consumers, the customers, the people who are going to be using the product.
00:17:51
Speaker
This is one of the big challenges, I think, is that nowadays we can create things so well and we can design things so well and build them so quickly, et cetera, that we can quite easily meet the functional needs of the consumers, but the problem is so can our competitors.
00:18:06
Speaker
it very rapidly becomes a race to generic solutions. So things like Amazon Basics is a classic example, where they see there are these clever companies making these lovely solutions that people want, and so they build the Basics version, which functionally works the same but doesn't have any of the other desired qualities. And unfortunately, it then becomes a quick race to the bottom where it's just commoditized
00:18:30
Speaker
products and people just charge less and less and less and there's no way to differentiate yourself in that field. So even from a pure business perspective, it seems to me that you really need to be bringing in the design and the marketing and really understanding who the users are and the consumers are and things like that to get products that people really desire, not just that meet their functional needs.
00:18:52
Speaker
Yeah. I'm not too concerned that everything will become generic like those Amazon Basics. And the reason why is that, and I don't know how much of a thing this is in the US, but here in the supermarkets we have, for example, Tesco with their Tesco Finest range and their Tesco Value range. And then there are other ranges in between, like the market
00:19:19
Speaker
Tesco market value. I don't know. I worked at Tesco many, many years ago, and someone was explaining to me how much of a nightmare it was to maintain all these different brands, which were essentially the same product repackaged, you know, or at least coming from the same factories. I want to say, you know, that there weren't differences in quality between them. But, you know, maybe there's a little bit of different flavoring or something in the Tesco Value toothpaste or the Finest toothpaste or what have you.
00:19:49
Speaker
A lot of it was just because people saw themselves as certain kinds of shoppers and went for a particular thing. There they had made everything pretty generic, to be fair. But I think what it does signify is that
00:20:12
Speaker
people choose to differentiate themselves in different ways. And one way is through the kinds of products that they opt to buy. So I think there's always going to be room for that uniqueness. And in some ways, having the Amazon Basics creates just the bland backdrop that you need for things to pop out, maybe. It's the minimum level, the MVP almost, or the minimum baseline that any company has to meet. The challenge is
00:20:42
Speaker
for many companies to actually differentiate themselves enough so that they rise above that baseline level. That's a bit of a challenge, especially if it is a product that is fairly commoditized. If it's just a charger for your phone or something like that, it's not something that tends to be publicly displayed. It's not something that you tend to build your identity around. There's all these psychological aspects of consumption and ownership.
00:21:07
Speaker
If it's something that you can't really display, can't really add deep value, then it's hard to really be a new company coming into that market and differentiating yourself in any way. I think there always will be opportunities for these kinds of novel, design-led companies. Actually, statistically and from research, there's been lots of studies that show that companies that are design-led tend to be much more successful. They tend to have
00:21:35
Speaker
higher stock value. They tend to be more resilient to things like recessions. So there are lots of good reasons to be a design-led company, or at least to be sure to bring the designers and the design process more centrally into the development process. So if you think of development as the entire thing, the business and the
00:21:56
Speaker
research and all that kind of stuff, all the way to manufacturing, follow-through, end of life, all of those things. Design really shouldn't just be a little slice done at a couple of points; it really should be brought to the front, the middle, and the end of the process. I think design is a fascinating field, and it's one that I have been playing in and around now for 30 years. It's always fascinating to me how much of it is this interesting intersection between emotion, psychology, and function.
00:22:27
Speaker
I spent a few years working in the world of packaging. And again, there too, it's a great example where we often will tend to just tear off the package, the cardboard box and throw it in the bin and you move on. But the package itself has had huge amounts of research and thought and design put in because it has to, of course, protect the product. It has to
00:22:46
Speaker
create a brand message, it has to differentiate itself, it has to store well on a shelf, it has to work for transport. So there are always these different dimensions and constraints that have to be met for even something as simple as a cardboard box. And even there, again, it's often commoditized, and you just get the standard generic beige Amazon cardboard box that comes to the house, and that's fine. But then there are companies that do very clever things, and they'll put their design and their branding all the way
00:23:13
Speaker
to things like, I don't know if you've seen them, but there's like the barcodes and sometimes there'll be messages hidden in the barcodes or the barcodes will make a shape. There's several things like that that actually can quite effectively differentiate one product from another and create sort of moments of connection between the consumer and the product, et cetera. So yeah, it's a fascinating area, the world of design. Now, maybe before, I think we're going to soon move on to another topic, but maybe there's a nice segue into that. I mean,
00:23:43
Speaker
You mentioned just now all these kinds of dimensions that you have to take into account, and you have a nice name for that. Can you take us through some of the techniques that you use to try to grok all those things, right? How do we navigate the rich, the overly rich, landscape of constraints that designers have to work with?
00:24:14
Speaker
Yeah, I mean, that's an area that I've been thinking about and playing in for quite some time now. And I guess the most structured method that I've come up with is what I call multidimensional task analysis. The idea there is that there's the concept of task analysis, which goes, well, in some sense, goes back forever. But at least in the early Industrial Revolution, people like Taylor had this whole concept of Taylorism, where you could
00:24:43
Speaker
analyze a task, let's say on an assembly line, assembling a particular part of a car, and you could go through all the specific physical motions that are required to do that task. And that same process was then applied to things that are more cognitive. So you had
00:24:59
Speaker
physical task analysis, looking at building on an assembly line. Then you can have cognitive task analysis, where you look at how someone's filling in a form, for instance, on a website. And those are the physical and cognitive task analysis. And over the last, I guess, five or 10 years, I've been trying to expand that into other forms of task analysis. So there's one, for instance, which is emotional task analysis, where you study very carefully all the emotions that might arise as someone is going through a task.
00:25:26
Speaker
An example there would be, for instance, if you go to a website and you decide to buy something, you go to the credit card form. Just when you're about to put in the credit card information, you're probably a little bit tentative about that, a little bit unsure. If you do a detailed emotional task analysis, you realize that at that point in time, there's this emotion that arises, which is
00:25:47
Speaker
fear, skepticism, uncertainty. And so as a designer, you could think, well, that is now part of my design landscape. What can I do, in this case, to lessen that emotion? And so, for instance, you could put the "verified by" trust logos, the 256-bit encryption notices, all these kinds of messages there on that page just to help ameliorate or lessen that negative emotion. And similarly, you can do the same for positive emotions. You know, someone's gone through and they've created an account. Isn't that great? They must be quite happy.
00:26:17
Speaker
And from a design perspective, if you did your emotional task analysis, you'd realize that, okay, right now there's a positive emotion. Let's see what we can do to boost that up a little bit. And so you could have a, congratulations, confetti goes flying in the background or whatever, and you've created an account. And it sounds silly, it sounds stupid, it sounds small, but those kind of small emotional touch points can have a huge impact on someone's connection to a website or their experience with a product.
00:26:41
Speaker
Same with, again, back to packaging, things like that. If you think about opening the package, at some point there'll be frustration when you can't figure out how to open it. And again, as a designer, you should think, well, okay, right now they get the package and they don't know how to open it. They can't tell where the pull tab is. Let's make their life easier. Let's get rid of that negative emotional state somehow. And the nice thing about multidimensional task analysis, which is the fancy name I've given this way of thinking, is that it really just helps identify
00:27:11
Speaker
moments in time where there are, for instance, emotions or physical challenges or cognitive challenges. It doesn't give you an answer, but it tells you: here's a place you should now focus your artistry and skill to look for solutions. Another task analysis that I've been trying to put forward and shepherd is the idea of informational task analysis and decisional task analysis: basically trying to understand a process that someone's going through and look for ways to facilitate it, to make it smoother. Ideally, you'd have
00:27:39
Speaker
all of the complications just disappear. You want design that becomes invisible. That's one of the gold standards: design that becomes something people don't even notice. It's quite simple, it's effortless, it's magical. And it often requires structured ways of thinking to be able to do that kind of design.
00:27:58
Speaker
And of course, one of the big areas that has championed that is the whole world of design thinking, led by people at companies like IDEO.

ChatGPT in Design Exploration

00:28:07
Speaker
Design thinking is another process where it focuses on the user, looks for emotionality, looks for touch points, relationships, things like that, and tries to design products, services, experiences, processes that make that person delighted.
00:28:22
Speaker
and gets rid of any problems that they might have. So there's lots of ways that you can actually look at all the different dimensions of a design task and systematically try to identify opportunities and ways to correct or to kind of tweak the experience, let's say.
00:28:37
Speaker
The other thing that makes it quite interesting is the opportunity of using new technologies and new tools to do that kind of design, right? So I mentioned these different forms of task analysis, but one of the latest entrants into this world of tools for designers is LLMs. Things like ChatGPT have really captivated my attention for the last six months or so as tools that could be used in this same type of design process.
00:29:06
Speaker
This is one of the things that really excites me. I'm curious, was there a particular aspect of ChatGPT which got you thinking this could work well for design? One thing I'm particularly thinking about is
00:29:20
Speaker
the idea of personas within design. I guess that's sort of something that's part and parcel of the design thinking way of doing things, to identify. We were talking earlier about a four-year-old, a mug made for a four-year-old. I mean, that's a fun persona to think about. But you try to settle on very specific platonic ideals of the sort of person that would be using your product.
00:29:45
Speaker
And ChatGPT and other LLMs seem to be very good at imagining that they're anyone, right? You can give them any bizarre combination of characteristics and they will do a really good job at trying to be, I don't know, you know, Donald J. Trump, but also a rapper, right? Or whatever you want. Exactly. I mean, that is one of the things that actually, in a sense,
00:30:11
Speaker
made me start to think about using ChatGPT and other LLMs. I'll just say ChatGPT and drop the "and other LLMs"; I'll use it as a generic example. So with ChatGPT, there was lots of buzz when it first came out about how you can use it to make up stories. And of course, in the world of design thinking, storytelling is one of the best tools that design thinkers have. They make up a narrative. They make up a story about
00:30:35
Speaker
This is Pat, and they are a 42-year-old office worker, blah, blah, blah. You can make up a nice story, and that really is a huge part of the research, even though it may not seem it, for design thinking and for the design process as a whole. If you really want to empathize with your end user, if you want to understand what their pains are, what their dreams, fears, loves, aspirations,
00:30:57
Speaker
relationships, et cetera are, one of the ways to get there is by starting to tell narratives. And as humans, it's a way to kind of convey emotion, to convey understanding, et cetera. So with LLMs, early on, this is one of the things that came out: ChatGPT was great at telling stories. And so I started playing around with it as a tool to build personas and it worked fabulously well. And the nice thing about ChatGPT
00:31:22
Speaker
There's some downsides, of course: it makes things up, it hallucinates, it fills in the blanks. It's a great improviser, but that's both a blessing and a curse. Some people hate that. They don't want it making up information. But if you're trying to build realistic personae, and if you want to try to develop better products, it helps to have a tool that can actually make up narratives that are
00:31:44
Speaker
based on true stories. It's almost like a made-for-TV movie, something inspired by true events. And that really is how I think about ChatGPT. I don't take anything it says as canonical truth, but if I ask it to describe, let's say, the start of the work day for an average McDonald's worker: tell me about a McDonald's worker and how they start their day, what happens when they arrive at their workplace, it'll tell me stuff.
00:32:12
Speaker
I don't ever believe that it's 100% accurate and will describe everyone out there, because there's no single story that will describe everyone, but it does a great job of building an informed narrative, and that often is enough to get the design process really going. That's one of the ways that you could start to use ChatGPT, but I started to think about it as a mechanism for exploring stories that could go in any
00:32:41
Speaker
number of dimensions. So back to the whole concept of multiple dimensions. So I could say, tell me about the emotional experiences of that McDonald's worker, or tell me about their functional challenges, or tell me about the worst day or the best day. You can sort of ask ChatGPT to take you on any little narrative adventure you want.
00:33:01
Speaker
And so that's where it started to get really interesting to me: to think about using ChatGPT as a tool to explore spaces, multidimensional spaces. And that led to my metaphor of ChatGPT as a glider, in a sense, through an infinite-dimensional space. It can take you on an emotional journey or a narrative journey. It can tell you stories about anything from chocolate to automobiles to
00:33:30
Speaker
the next generation smartphone. And you can use it to do an informed exploration, informed in the sense that it was trained on basically everything ever written by people. Well, not quite, and it doesn't remember any of that stuff, but it's been informed by it. It's sort of like someone who heard all the songs and has forgotten all the words. But if you ask them to hum a few bars of something that's jazzy, they can do that pretty well. They've heard every song. They can kind of do something like a
00:33:59
Speaker
a jazz-ish song, or they can do something that's folksy or something that has a kind of Celtic air to it, whatever it might be. And that's very much how I started to think about ChatGPT: as a multidimensional glider that lets you explore these spaces. Yeah. And of course, one of the really nice features of this metaphor is it gives a really good handle on how to make use of ChatGPT, in that
00:34:27
Speaker
You know, some of the other metaphors, and I think it'd be fun to talk about some of the others out there, but for example, the famous stochastic parrot metaphor, you know, there's truth to it. That probably does a fairly good job in a sense of capturing how it was generated, like how it was built. You know, it's just taking information that it's been given and it's
00:34:50
Speaker
it's running some statistical operations on that information. But it doesn't really give you, you know,
00:34:58
Speaker
If you really believed it was a stochastic parrot, you would have to know the answers that you wanted from it. You would just give it the answer that you expect and expect to get something slightly different back. That's what you get with parrots. They just repeat. But it doesn't do that at all. It does act something more like this glider where you set it up with a kind of flight path that it's going to take. And it's going to go through those topics and try to
00:35:29
Speaker
I don't know, make the most sensible journey through that. And the other metaphor that I've seen you use is it's like a little kind of robot picking up words. So if we kind of combine those, it's kind of like a glider that's moving through these spaces and picking up what it thinks are going to be the words that work together best and kind of summarize that journey best.
00:35:52
Speaker
Yeah. And just to follow on the robot and glider things. I mean, the robot metaphor is one that I like to use just because it has this lovely mental image. It's like a little wind-up robot, like one of those walking robots from the 1950s or 60s. You wind it up and it just walks along on the ground. And that's really kind of what ChatGPT is. You wind it up and you aim it in a direction. This is how I like to think about it: okay, I'm going to aim you in the direction of
00:36:19
Speaker
a narrative about a four-year-old who wants to drink some apple juice.

Enhancing ChatGPT's Output

00:36:23
Speaker
And you kind of wind the robot up and aim it in the direction of narrative four-year-old space, if there is such a space; we're in a billion-dimensional space. So you aim it in that direction and it walks along, and it leaves behind itself a little trail, a little stream of words. And that is the story about the four-year-old who wants apple juice. And it's incredibly useful if you're trying to design a cup for a four-year-old who wants to drink something.
00:36:46
Speaker
It's quite useful. To me, though, it didn't quite have the right image, just because it's such a high-dimensional space that ChatGPT can walk you through. You can ask it to walk you through any bit of a space that you want and it'll go there. The other side of things that I think the robot metaphor fails to capture is the idea of there being a topology and influence to the way the robot walks.
00:37:15
Speaker
The glider, I like to think of as being sort of influenced by stars it's navigating by, or influenced by the winds. So that, to me, is how I try to think of the concept of context, as people often talk about it. So in ChatGPT,
00:37:31
Speaker
an example that I talk about sometimes is this idea that, let's say, I ask ChatGPT to give me four behavior change interventions that might help reduce cancer rates in rural communities. And if you go to ChatGPT and type that, you'll get some pretty good answers. It'll give you some lovely
00:37:52
Speaker
examples of interventions and behavior change that you could do to reduce cancer rates, et cetera. And that's fine. But that's like launching the glider and just saying: okay, hey, glider, behavior change is sort of way up there off in space; use that to guide your path as you move through space. And it will do that. Intuitively, I think of it as like a vector representation: when you say there's a constellation over there, which is behavior change, keep your eye on that as you fly your narrative trails.
00:38:20
Speaker
It'll do it, but it'll do a much better job if you first ask it to really explore that bit of constellational space, almost like magnifying it for a second. One of the ways that I like to do that is I'll ask ChatGPT first. Before I ask it for behavior change interventions, I'll say: okay, ChatGPT, there are many behavior change interventions out there. Can you please consider all of them and group them into six clusters and present to me the six clusters, name each cluster,
00:38:47
Speaker
and give me two examples of behavior change interventions within each of those clusters. And it does all of that. It does a great job. I tend not to even read about 90% of what it tells me, because partly what I'm doing there is really just trying to get it to activate, to highlight, to get a richer understanding of that particular dimension. And then you could say: okay, can you now give me behavior change interventions to reduce cancer rates in rural communities? And it'll do a much more nuanced job. You've sort of
00:39:15
Speaker
reminded it or forced it to remember that there's actually a lot of interesting depth and richness in that little bit of space which I was earlier referring to as behavior change, which maybe, in a mathematical sense, was sort of represented as just a couple of vectors way off in billion-dimensional space. That's kind of roughly what behavior change is; so let's use that to guide our journey through space. If you instead say, hey,
00:39:37
Speaker
tell me all about that space and give me a little journey, a little walk through behavior change space, then its representational vectors of that bit of space are much richer and you'll have a much more nuanced glide as you explore behavior change interventions to reduce cancer rates, or whatever the case might be. Yeah, you're kind of priming it. I mean, we're mixing many metaphors here, but another way of thinking about it might be you're sort of
00:40:06
Speaker
trying to mold a persona, or pick out a persona or a personality from within it, and then you're asking your question of that. One thing I really like about ChatGPT, and we discussed this previously versus some of the other LLMs, I mean, particularly Bard, not so much Claude, but
00:40:35
Speaker
both Claude and ChatGPT have this ability to maintain separate conversations. And those aren't just cosmetic; those are independent things. You're actually using a different instance of the agent, a different context window. So yeah, you can do very silly things with that. Like you can just say: okay, in this chat, everything I ask you, translate it into emojis and nothing else. And then, yeah, confirm you've understood,
00:41:04
Speaker
please reply yes. And that's the only yes you'll get. And then for every other sentence you write, it gives you just emojis. That's rather fun. Or you can have one that just corrects your texts, or, I don't know, you can say, I want my friend in this window. Yeah, I have one that knows a lot about my own inner personal struggles. I kind of use it as a bit of a therapist. I've told it about
00:41:30
Speaker
aspirations, what matters to me in life, what I consider important, et cetera, and I can go there. Sometimes if I'm feeling down, I say, hey, I'm a bit depressed today, and it gives me a wonderful little pep talk, because it knows what matters to me and what my concerns are, things like that. Actually, one of the things that got me really thinking about this was one of my kids, my oldest kid.
00:41:52
Speaker
He is a student, and he was telling me how he has a different conversation for each of the classes he's taking. That made me realize: of course, that makes sense. Like you said, it's one of the things I'm disappointed many of the other LLMs haven't really incorporated, this ability to have multiple
00:42:09
Speaker
agents, multiple conversations, however you think of it. So I do teach various classes, and so I do have a different chat for each of my classes, and that chat has read my syllabus. It knows all the classic papers in that particular domain, and so I could say, hey,
00:42:25
Speaker
You've seen my current syllabus; can you give me a syllabus that is a little bit better than that one and also includes elements of DEIJ (diversity, equity, inclusion, and justice) issues? And it'll do that, right? I mean, you really need to have these conversations that
00:42:43
Speaker
have not just the context, but also the relevant bits of all knowledge highlighted and primed, like you said. I do think priming is the right way to think of it here: if you don't tell ChatGPT what you want it to use to guide your journey, it won't really know. So one of the classic examples now is this idea that many people will say: pretend you're a medical doctor and give me a healthy eating plan.
00:43:11
Speaker
And it'll do that, and it's much better than if you just say, give me a healthy eating plan. If you say, give me a healthy eating plan, it's generic. If you say, pretend you're a medical doctor and give me a healthy eating plan, it's much better. If you say, pretend you're a medical doctor and pretend I am a 57-year-old man, and so on, it'll do a still better job.
00:43:29
Speaker
The more information ChatGPT has, the better. I haven't done enough experimenting to know for sure, but I'm pretty certain that just saying "pretend you're a medical doctor" will not be nearly as effective as if you first said: medical doctors know many different things; can you give me eight clusters of the types of things that medical doctors know?
00:43:50
Speaker
And it'll do it. And then you say: okay, can you tell me about six clusters, six types of things that medical doctors know about health and nutrition? And it'll do that as well. And then if you ask it to give you some healthy eating advice, it'll be much richer, much better advice, right? If you just ask it to pretend you are a medical doctor, it's okay, but it uses a very rough, coarse representation of "medical doctor," versus activating that kind of
00:44:18
Speaker
representation, that space, in a much deeper and richer way. Yeah. And this idea of clustering is a really nice way of quickly getting it to prime itself, right? And you can then, if it's a medical doctor and you're saying, okay, give me the types of expertise that a doctor would have, or the types of doctors there are, choose among those yourself and say, please focus on this one.
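The "cluster first, then ask" pattern described here can be sketched as a small helper that builds the staged prompts. Nothing below calls a real API; the function name and prompt wording are illustrative, and the two strings would be sent in order within a single conversation so the clustering answer stays in context when the real question arrives:

```python
# Sketch of the "cluster first, then ask" priming pattern.
# Illustrative only: dimensional_prompts is a made-up helper, not a library call.

def dimensional_prompts(domain, question, n_clusters=6, n_examples=2):
    """Return the staged prompts for a primed 'glide' through a domain."""
    priming = (
        f"There are many {domain} out there. Please consider all of them, "
        f"group them into {n_clusters} clusters, name each cluster, and "
        f"give {n_examples} examples of {domain} within each cluster."
    )
    # The question we actually care about comes only after the priming turn.
    return [priming, question]

prompts = dimensional_prompts(
    "behavior change interventions",
    "Now give me four behavior change interventions to reduce cancer "
    "rates in rural communities.",
)
```

Sending the first prompt, letting the model answer, and only then sending the second is what makes the final answer more nuanced: the clustering reply sits in the context window when the real question arrives.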
00:44:47
Speaker
But yeah, I'm intrigued because, of course, clustering is something that machine learning has been used for for many years, but we've used very specific ways of doing that, k-means or something like that. And now you can just ask it to do some clustering over anything. So it's a really interesting way of figuring out our own conceptual
00:45:16
Speaker
boundaries, like, you know, what is the right way of clustering the sciences? If you asked it to put all the sciences into three clusters, or eight clusters, you'll get very different answers. And of course it's not telling you something fundamental about science, but it's telling you something fundamental about
00:45:35
Speaker
how we think, how all of the cultural heritage it's been fed sort of represents the world. It's almost like, and this is the first time I've thought of this, but it's almost like the ordinary language philosophers who were working in the fifties and around then, in the UK in particular,
00:46:04
Speaker
particularly in Oxford, and they were doing things like asking where you would use a certain word. I mean, a classic example is
00:46:13
Speaker
J.L. Austin talking about three ways of spilling ink. He says you can spill ink accidentally, you can do it intentionally, on purpose, or deliberately. And one might just say: okay, well, those are the same thing, there's no difference between those. But then he goes through a series of thought experiments and says, oh, in this circumstance you would have said that it was done
00:46:37
Speaker
deliberately, but it wasn't intentional, right? And he comes up with these examples, and you do find yourself agreeing: actually, this is the appropriate use of language. And the point is, our language has meaning that we ourselves are not able to surface unless we think about it and reflect on it. And of course, all of that, or much of that, meaning in language is now encoded in
00:47:06
Speaker
ChatGPT and other LLMs, and we can do those sorts of experiments and try to tease out the way that our concepts fit together.
00:47:17
Speaker
Yeah. I spent a year studying philosophy at King's College London. I was a philosophy major at the time and I got very into Wittgenstein. And I think it's one of the things that has informed some of my ways of thinking about understanding what ChatGPT and other LLMs do.

Understanding and Exploring with LLMs

00:47:35
Speaker
And sort of a way of thinking about the stochastic parrot type of argument.
00:47:41
Speaker
I personally don't find all that compelling. I think that there's some misunderstanding of the way language functions and the way that we use, interpret and understand language. So even if it's a parrot and it doesn't understand meaning, et cetera, to me, it's more a question of what it inspires in the mind of the hearer or the user.
00:48:06
Speaker
I don't think ChatGPT knows anything deep about behavior change, but if it can put symbols on paper that inspire, in the mind of a human, ideas for behavior change, that's good enough for me. I don't think that the meaning needs to be in the mind of the parrot. If the meaning is in the mind of the hearer, the user, the person who is going to make use of that information, then that's probably good enough.
00:48:32
Speaker
Whether there's real meaning, I don't even know. From a kind of Wittgensteinian perspective, I don't even know what real meaning would be like in the mind of an LLM, right?
00:48:41
Speaker
It's a weird, bewitched-by-language kind of thing to say that there's no real understanding, no real meaning in there. I don't even know what real meaning would be like in an LLM. I think we struggle with this in humans; we've not figured it out, and we've had thousands of years to think about it. I mean, Wittgenstein himself changed his mind about this: the meaning of a word is its use
00:49:11
Speaker
in a language game. If the LLM can apply words correctly, then it's got the meaning down, I guess. That's one way of thinking about it. I mean, I guess I just think ChatGPT, or LLMs in general, are really the most
00:49:32
Speaker
powerful tool that humans have ever created. Cognitive prosthetics is how some people talk about them. As a tool to help inspire new ideas, new directions, new understanding, they're just fabulous.
00:49:48
Speaker
I refer to these sessions now as glides, following the glider metaphor. And I've gone on glides through so many fascinating spaces, everything from medicine to social impact to physics to engineering to machine learning. And in each case, I've found that ChatGPT has
00:50:06
Speaker
taken me on wonderful journeys to new insights and new discoveries. I go back and forth on this: to my friends, I always say, you won't believe what ChatGPT has done tonight, and they say, well, you mean what you've done with ChatGPT. And I guess there is some artistry to the pilot, the someone who's flying the glider,
00:50:26
Speaker
just to follow on this metaphor that's probably getting annoying at this point. But there is some kind of human involvement that helps guide it into interesting novel spaces. And this is maybe, let's hope, one of the ways that humans will continue to serve a functional value in society; we won't lose all of our jobs, because we'll still need some humans who can actually help fly the machine.
00:50:53
Speaker
It's a pretty powerful machine, but you still need humans to add that kind of sense to it. Yeah, it is interesting, the way that we talk about it, the language that we use. This is because of the way it's been designed, to act and respond in very human-like ways; it could convey much of the same information
00:51:18
Speaker
in more robotic speech. So it's very tempting to endow it with some agency, which is not something that
00:51:31
Speaker
we do with other cognitive prosthetics or computers in general. I think it's both a blessing and a curse, and it's one of my big complaints. From a human factors perspective, we have this idea of a mental model of how something works. We have a mental model of what CPR does and how you should do it, or a mental model of how, I don't know,
00:51:57
Speaker
Microsoft Word or Google Docs or email works. We have some mental model, and they tend to get triggered by symbols in the environment. So if you see a search box, you go and type a search term and you search. And I think that OpenAI and ChatGPT have kind of done a disservice to the power of LLMs in that they've just put up what looks like a search box. And so I'm always disappointed by how many of my friends and colleagues
00:52:25
Speaker
have been unhappy with ChatGPT because they went there and they tried to search, and it didn't really give them very good stuff. It was sort of generic search results with a facade of humanness added on top. And it's like, well,
00:52:38
Speaker
that is where OpenAI screwed up. They gave an interface that really does look like a search box, and so people, unfortunately, think that what this is is a search engine. Even colleagues of mine in computer science say, well, it's really just another form of Wikipedia or another form of Google. And it's like, well, no, unfortunately,
00:52:58
Speaker
they put the wrong user interface there, so you think it is that, but you've failed to understand it's actually something much different from that. It's something that, if you want it to respond as a human, it can. If you ask it to respond as a robot, it can. If you ask it to only give you bullet-list structured outputs, it can do that as well. It can be
00:53:19
Speaker
configured to respond any way you want, to explore any space you want. I mean, again, as a glider explorer, I did some wonderful things. Let's see, I had it rewrite the final season of Game of Thrones. I actually asked it to outline three different versions of the final season of Game of Thrones. Every one of them was
00:53:46
Speaker
ten times better, I must say, and I'm biased, et cetera, than the real one that actually was created. And some of them were so beautifully nuanced and subtle. And you can explore anything. What would happen if Harry Potter appeared in the world of Game of Thrones? From ChatGPT's perspective, and I don't want to anthropomorphize it too much, that's just another glide through a space, a space that's informed by everything that's ever been written about Harry Potter and everything that's ever been written about Game of Thrones.
00:54:13
Speaker
And it's a pretty informed space that it can glide you through, if you ask it to write a poem about, say, Harry Potter. I'm actually not that much of a fan of either of those genres, but it knows everything about them. It knows more than any human about each of those spaces. It's as if you found the world's expert on each of those and asked
00:54:34
Speaker
the experts to come up with a narrative. Basically, ChatGPT is like that. It's read everything. It's read all the fan fiction about all of those things as well. Anyway, as an infinite
00:54:51
Speaker
dimensional glider or explorer, it's just an incredible tool. Now, again, you have to be careful, because all of that is just improvisation. It's making it up as it goes along. If you ask it to solve a complicated maths problem, it'll do the same kind of thing. It's read some math stuff here and some other math stuff there, and so it'll
00:55:09
Speaker
kind of weave up a trail that seems plausible on its face, but it may be totally wrong. If you're in a field where getting the facts exactly right matters, and you don't spend the time to review what it's telling you, then you're in trouble. It's not really trustworthy for those kinds of things. But if you want to put together a first draft of something that you then intend to use your human sense and sensibility and expertise to review and edit and
00:55:38
Speaker
add references to, et cetera, it's fabulous for doing that kind of stuff. Yeah, it's incredible how it's managed to learn maths. I mean, it's not been programmed in; it's just picked up on the patterns. And especially computer programming, it's incredible for that. Yeah, computer programming, absolutely. On this, William Gibson commented once that
00:56:08
Speaker
you know, in the industrial revolution, you needed a lot of money to start a factory, right? You needed capital. And now anyone can start a factory: you just get a laptop and you can build software. But you still needed to put people in that factory, so to speak. If you don't know how to code, right? Or even if you do know how to code, you're only one person, and you're probably,
00:56:32
Speaker
you know, adept in a limited number of languages. But now, technically, these tools have sort of given us workers, right? And I have all these sort of crazy ideas that I occasionally jot down, wouldn't it be fun to, I don't know... One I was doing the other day: oh yeah, just
00:56:58
Speaker
build a simulation of the Game of Life, you know, Conway's Game of Life, but using QR codes as the starting point. This is something I actually picked up from Christian Burke, who does this in one of his books. He kind of shows how, after a few generations, a QR code
00:57:15
Speaker
will have just become a few different dots, right? It will have lost all of its interest, in general, if you propagate it under Conway's Game of Life rules. So it's like, oh, it'd be fun to make a video of that happening, but that's going to take too much time, right? Or it would be fun to create some of these
00:57:44
Speaker
aperiodic tilings, like the Penrose tiling and stuff like that. There's been a new one discovered recently. Wouldn't it be fun to see those tiles being animated, or to play with different versions of that? Oh yeah, I could do that, but it would just take quite a lot of coding. But now you just say, okay:
00:58:02
Speaker
I want a QR code which encodes this; please then treat it as a grid of black dots, propagate those dots according to the rules of the Game of Life, and then turn that into an animated GIF. And it just does that in five minutes. It's incredible.
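The propagation step described here boils down to one update rule. A minimal sketch in Python, with a plain binary grid standing in for the QR code (which a library such as `qrcode` would normally generate), and a simple oscillator as the seed:

```python
# Sketch: evolve a binary grid (stand-in for a QR code's black modules)
# under Conway's Game of Life rules. A real QR seed, and the GIF rendering,
# are left out; this only shows the update rule.

def life_step(grid):
    """One generation of Conway's Game of Life on a 2D list of 0/1."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours (cells off the edge count as dead).
            n = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            # Live cell survives with 2 or 3 neighbours; dead cell is born with 3.
            alive = grid[r][c]
            nxt[r][c] = 1 if (alive and n in (2, 3)) or (not alive and n == 3) else 0
    return nxt

# A "blinker" oscillates with period 2, so two steps return the seed.
blinker = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
after_one = life_step(blinker)
after_two = life_step(after_one)
```

Iterating `life_step` over a QR code's module matrix and saving each generation as a frame is all the "five-minute" experiment amounts to.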
00:58:26
Speaker
when we're talking about things like this, I just fall into this kind of fanboy mode, if that's the right term. It's just amazing, it's wonderful. And it really is. But one of the reasons I think it's so wonderful is that we can do that kind of stuff. And just think what that means on a societal scale:
00:58:44
Speaker
any computer programmer can be ten times as powerful. And, for instance, from an equity perspective, and this is where it gets difficult, right, there are some kids in middle school, high school, youngish kids, who just are not very good writers. Now, should that be a required skill for them to get out of
00:59:09
Speaker
university? Should that be required for them to be able to be productive, successful members of society? Or is it okay for them to have ChatGPT do that? I don't know. I mean, when you start to think about what it's capable of, I have this vision of a future where everything will be entirely different, if we accept the power of ChatGPT and start to use it. And also, I should say, I know that,
00:59:35
Speaker
as it is now, it has all kinds of problems. It hallucinates, it has errors, et cetera. But I'm not really thinking about the now; I'm thinking about five years from now. I can't even imagine what two years from now, one year from now, is going to be like, partly because of this interesting non-linearity. Now that the people who are building LLMs and
00:59:55
Speaker
all of the other kinds of AI-informed agents have access to something as powerful as ChatGPT, and as powerful as all the other tools that are out there, it accelerates everything, in this interesting non-linearity. So if you look at the evolution of human
01:00:14
Speaker
creativity, let's say you make a timeline going from 10,000 years ago to 10,000 years from now, and you graph the number of new inventions, happiness, flourishing, artistic creations, et cetera. It'll go along and things will happen. But at this point, in November, December of 2022,
01:00:36
Speaker
when ChatGPT was released, there's going to be this just incredible acceleration in the productivity of everything. That's my hunch. Unfortunately, of course, the downside of all that is there'll also be an acceleration in things like terrorist actions and
01:00:51
Speaker
biological weapon development.

ChatGPT's Wide Applicability

01:00:54
Speaker
I mean, ChatGPT can really be used by anyone. My contention, in the strong sense, which I don't entirely agree with but which I do pretty much believe, is that ChatGPT can help anyone do anything they want to do better. And I'm scared about that ability. It can help
01:01:13
Speaker
a terrorist do a terrorist action better. It can help a physician do a surgery better. Unfortunately, there's no real difference in the sense of it as a tool; it can help anyone do whatever they want to do better. And again, like I say, it's not entirely true, but there have been some wonderful uses of it, things like using ChatGPT to design better,
01:01:38
Speaker
let's say, syllabi or training. If you want to learn how to be a chef, you can ask ChatGPT to give you a specific six-month learning plan:
01:01:47
Speaker
a learning schedule that will take you through the different skills you need to become a chef. If you want to become a surgeon, it can help you develop that. If you want to learn how to whistle, it can give you all the steps required. It can make you a master whistler; it can put together a five-year plan to make you the world's best whistler. Now, whether it is accurate or not, I'm not sure, but it'll do a pretty good job of getting you somewhere pretty close. It's just hard to imagine where this power of
01:02:14
Speaker
this tool will actually take us in a couple of years. Well, I hope it is more down the whistling route than the biohazards. I think the strongest thing Greg Brockman of OpenAI said about why they decided to release ChatGPT was that
01:02:42
Speaker
all the tools were there, all the pieces were there, all the compute, and even the science was pretty much public. The T in ChatGPT, the transformer, famously comes from Google, and all those AI researchers have it in their contracts that they're able to publish; that's how companies have managed to attract talent.
01:03:10
Speaker
That's just how this world is working, right? It's very much in the public. And so their rationale was we sort of need to release this as early as possible and give ourselves enough time to figure out how to use it and time to live with it, instead of letting it become super duper powerful and maybe just someone
01:03:33
Speaker
keeps it to themselves, or even they release it publicly, but we don't have this kind of ability to figure out how to use it and create kind of an equilibrium, I guess. I think one of the
01:03:48
Speaker
One of the arguments that is used by the optimists around superintelligence is that we'll not end up with a single superintelligence. We'll have lots of different ones and actually that it won't be so different from the world that we're currently living in where we have many large corporations which
01:04:06
Speaker
each one of which one can think of as a super intelligence insofar as it's able to somehow produce things that no individual in that organization could, even if they were given a million years of
01:04:19
Speaker
life probably.

AI Superintelligences and Ethical Considerations

01:04:21
Speaker
So yeah, I'm hopeful that was a good call on OpenAI's part. And I certainly see that they almost had no choice, as they said. All the pieces were there. Someone had to do this. If there's anyone at fault here, it's just that no government was really farsighted enough to put in regulation much earlier on, unlike what happened in genetics research, I would say.
01:04:48
Speaker
I mean, I don't know. This is a tough one as well. I love ChatGPT. I love having access to it. So I'm very grateful that they did release it. But I am very worried about where it might go, and I am very worried about how malicious actors might be able to use it. I do think that it would probably have been a better choice to have
01:05:08
Speaker
waited some amount of time and, for instance, brought together some ethicists, some panels of experts, to decide what the right ways to release this might be. In the US, for instance, and in pretty much every country, if you have a new medical device or a new medicine, you don't just release it to the public. So I can see pros and cons: you won't really know the extent of it, and you won't really
01:05:36
Speaker
be able to kind of engage in a fully informed, let's say, analysis of it until you get lots of voices involved. So it's kind of nice that they did release it to get lots of voices involved. But it does feel a little bit like... well, it's not quite the same, but I'll give you another parallel example from the world of human factors.
01:05:59
Speaker
I have an interesting love-hate relationship with Tesla and Elon Musk, et cetera. As a company, they make some wonderful decisions. But with my human factors engineering hat on, I'm really frustrated that they basically decided to release the vehicle to the world and let humans out in the world be the usability testers. In the world of software development there is a profession out there, the usability researcher, the user experience designer. Most software products, before they come to market,
01:06:29
Speaker
if they're coming from a design-led kind of company, they'll have a team of user experience researchers, user interface designers, people who are experts who can bring people in and have them do usability testing: look for problems with the product, look for ways to make it better, ways to improve it.
01:06:49
Speaker
You go through these iterations where you catch some bugs and you then put it back to the design development team and they make a new version. All of that really should happen in-house before it's released to the public. Somehow, my belief is that Tesla has decided that that will be too difficult to really do. Partly, I can see their point. You can't get every combination of atmospheric conditions, road conditions, human pedestrians, etc. At some point, you do just have to
01:07:17
Speaker
put it out there and let reality be the user tester. But I do think there are some ways in which they really should have done some of that testing internally. There are a lot of aspects, for instance, of the user interface, the screen, the display, that are horrible, and any usability expert could have told them they're horrible. And if they had done usability testing, they would have found that people fail, people
01:07:44
Speaker
perform actions and make mistakes that could actually lead to death. But they didn't do any of that. They just decided to put it out on the roads and see what happens. And I think that is a little irresponsible. And I think ChatGPT is kind of the same way: this simple tool
01:08:01
Speaker
that's incredibly powerful. I don't know if it was really ready to be released to the public: let's see if terrorists can figure out how to use it, let's see if people who have malicious intent can use it to design crimes, et cetera. And they did, of course, put guardrails in place. I've spent a lot of time kind of pushing the guardrails to see
01:08:22
Speaker
whether I can get around them. And there are lots of famous tales about people who have found clever ways around these guardrails. It's kind of worrying that there are pretty easy workarounds in many cases. But yeah, in terms of releasing it to the public, I go back and forth. I would have liked to have seen a little more internal testing and a little more guardrail development before it was actually released to everyone. Yeah. Well, on the guardrails,
01:08:50
Speaker
it's not proven, but I certainly think it's likely that it's just fundamentally impossible to build... Yes, I agree. ...completely impenetrable guardrails, in a way that, you know, one might say is true of all software, right? You can always kind of crack codes. But I think it's different to, you know, the way that one can encrypt: at least
01:09:14
Speaker
until we get quantum computers, one can create pretty good encryption, where you can mathematically prove you're just not going to break that encryption, unless we have a way of factoring large numbers into primes very quickly, which we would with one of those. We can understand all that.
01:09:37
Speaker
Fundamentally, the way that LLMs work is, you know, they're not debuggable. They are kind of loosely modeled on the way that minds work. They're a simplification, but they're certainly inspired by the architecture of neurons in the mind. And if there's one thing we don't understand, it's the human mind. So if you want to make something that's understandable and
01:10:01
Speaker
you know, infallible, you don't base it on that, right? But if you want to make something that's creative and able to learn, that's the way to go. Yeah, I mean, that's really the core of both the worry and the promise, right?
01:10:21
Speaker
Even something that's only somehow approximating the human mind is immensely powerful, immensely unpredictable. Yeah, it can be used for fabulous things. I mean, I guess, just as a quick
01:10:37
Speaker
mention, one of the other techniques that I've been using quite effectively recently: there's the strategy of creating clusters, asking it to make clusters to help. But again, thinking about the human mind and about how LLMs work, their responses are always stochastic, probabilistic, right? So if you ask the same question over and over again, well, depending on the question,
01:11:03
Speaker
it'll give you different responses. And so I find that asking it to give you multiple responses is quite a good way. So I'll often say, like I was alluding to earlier, give me eight ways I could do x, y, or z. So give me eight ways I might
01:11:20
Speaker
use this particular data set to address this particular challenge, or eight kinds of behavior change interventions I could use to reduce cancer, to go back to that example. And the thing is, kind of like the human mind, like the human brain, it'll give you a normal distribution, let's say, of responses.
01:11:39
Speaker
And you can look then as a human, you can look across those eight and say, well, which of these is kind of closest to what I want? So if you think about, let's say the path you want to take is a particular path down in the future, you'll never get that path if you just ask it for that path. But if you ask it for a spread around any decision point, you as the human can then guide yourself down that path. So you ask it, I'm not sure where my metaphor is going to fall apart even more, but
01:12:07
Speaker
If you ask it to take you on a glide down a path, you also want it to keep that glide a bit loose and a bit variable and give you choices. Take me on a glide through. I want to design some new confectionery that would be loved by vegans.
01:12:24
Speaker
living in the UK, let's say; it's a very specific group. Give me eight possible confections, sweets, desserts, candies, whatever have you, that might be loved by them, and it'll give you some answers. And then you keep exploring, you keep gliding down that path. So you could say, give me descriptions of a dozen different types of
01:12:44
Speaker
desserts that might be loved by vegans living in the UK. And it'll do that. And you could say, well, this type you describe here, that sounds pretty good. Can you give me eight variations of that? You kind of keep drilling down, keep gliding down that path,
01:12:59
Speaker
till you get to the bottom, so to speak. So you could say, okay, so let's say we like the sound of these kind of fruit-based something or others, and you can say, okay, give me eight ways I might create those, and it'll do that. Okay, that's interesting. You gave me eight ways. I liked ways one and two. Can you combine those two techniques and give me eight variations of the combination of those two? And you can keep going all the way down to, okay, give me, outline for me four different recipes that could make this thing we've been talking about.
01:13:29
Speaker
And then finally you get to the point: okay, that's great. Now can you give me the specifics of exactly how you would make recipe number three? You can keep using it to explore these coarse spaces, always recognizing that it'll never have the right answer, but always asking for a rough, coarse approximation, so that you can continue to be the human pilot flying that glider to where you're trying to go.
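If you drive the model through an API rather than a chat window, this drill-down glide can be sketched as a small loop. A minimal sketch, assuming nothing about any particular API: `ask` and `pick` are hypothetical stand-ins for the LLM call and the human choice.

```python
# Sketch of the drill-down "glide": ask for a spread of options, let the
# human pick one, then ask for variations of the pick, and repeat.
# `ask` and `pick` are placeholders, not a real API.

def spread_prompt(goal, n=8):
    """Ask for n options rather than one 'right' answer."""
    return f"Give me {n} different ways I might {goal}."

def refine_prompt(choice, n=8):
    """Drill down: ask for variations of the chosen option."""
    return f"I like this option: {choice}. Give me {n} variations of it."

def glide(ask, goal, pick, depth=3, n=8):
    """Alternate model spread and human choice for `depth` rounds.

    ask(prompt)   -> list of option strings (the LLM call)
    pick(options) -> one chosen option (the human in the loop)
    """
    options = ask(spread_prompt(goal, n))
    choice = pick(options)
    for _ in range(depth - 1):
        options = ask(refine_prompt(choice, n))
        choice = pick(options)
    return choice
```

The human stays the pilot here: the model only ever supplies the spread of options around each decision point.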
01:13:57
Speaker
Yeah, this idea of exploring multiple paths is really interesting. And it touches on one of the guardrails they have: it doesn't like to answer about possible future things that might happen. Though one can imagine fairly easy ways of getting around that. But yeah, it's quite interesting to think of something like: imagine this scenario,
01:14:22
Speaker
what are the two things that could happen coming out of this, or the three ways it could unfold, as a single step? And then you get three answers, and you can feed those back in, right, to explore how the world might unfold at the next step. Quite an interesting way of
01:14:37
Speaker
Yeah, running some simulations, kind of scenario planning. The last episode I recorded was with Peter Schwartz, and he's a scenario planner, and I really should have asked him about this exact question. Actually, I keep on thinking of more questions I should have asked Schwartz. He's a futurist; he's a VP at Salesforce, so his job is just thinking about the future. And when you talk to someone whose job is thinking about the future,
01:15:07
Speaker
you'll keep on thinking of questions you should ask. But now we can ask them to ChatGPT and others. On this kind of multiple-paths thing, actually, there's another practical tip I wanted to give. We don't get enough of those in this podcast; it tends to be rather theoretical and abstract. But here I do have to
01:15:30
Speaker
give some credit to Bard, because it will, by default, generate multiple responses. And this is a really good way of kind of gauging its uncertainty. Another really nice thing I like to ask Bard for is structured data. So suppose I
01:15:52
Speaker
want to create my own little nerdy fact file of global cities: give me all the capital cities in the world; tell me, I don't know, the population density and so on; and tell me the highest building in the city and its height. And it will generate all that for you and give it in a nice little table. And then
01:16:15
Speaker
you can view the other responses just by clicking tabs within the chat. And you can quite quickly see if any values in your table have changed. So if you see that Paris has gone from Tour Montparnasse to, sorry, from the Eiffel Tower to Tour Montparnasse, you can see it's kind of doubting there. I mean, that's one it doesn't get wrong; I tried this earlier, because, you know, that's
01:16:38
Speaker
That's a pattern which is very well established. It knows that. But there's going to be some city where there's some controversy or there was some like, you know, thing being constructed at the time it's training set was completed. And so it's going to have like, you know, somewhere you might find that that changes when you flip through the data sets. And so that's quite a nice way of getting it to check itself, I suppose.
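Flipping between drafts by hand scales badly, but the same check can be automated: sample the table several times and flag the rows where the samples disagree. A rough sketch, assuming you've already parsed each draft into a dict:

```python
# Sketch: gauge the model's confidence in a generated fact table by
# sampling it several times and flagging rows where the drafts disagree,
# automating the flip-between-drafts check described above.

from collections import Counter

def flag_disagreements(drafts):
    """drafts: list of dicts, e.g. {city: tallest_building}, one per sample.

    Returns {key: (majority_value, agreement_fraction)} so a human can
    see at a glance which rows the model wobbles on.
    """
    keys = set().union(*drafts)
    result = {}
    for key in keys:
        values = [d[key] for d in drafts if key in d]
        top_value, count = Counter(values).most_common(1)[0]
        result[key] = (top_value, count / len(values))
    return result
```

A low agreement fraction on a row is a hint to go and verify that fact rather than trust it.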
01:17:03
Speaker
I haven't tried to ask it whether it can also quantify its certainty about responses. Like if you ask it to do the exact task, and then for each of the rows, put an indication of how certain it is about its own response. I want to try it, and I'm not sure if it actually works, but I've told it, for instance, I want you to write a
01:17:26
Speaker
Well, let's say, we'll go back to the example of confectionery. Come up with, give me 10 types of confections that you might create, et cetera, et cetera. You could ask it instead, in your head, think of 10 different types of things you might create and then
01:17:43
Speaker
select the best six of those. Don't give me any answers, but then create variations of those six, create 10 variations of each of those six and choose the best from each of those and then give me that response. So you could say, for instance, I'm going to ask you to create a table of the tallest buildings in each city. I'd like you in your head to create 10 such lists and then give me a final list and in that list, include a column that says whether or not there were variations in the 10 versions you created.
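The kind of prompt being described might be phrased like this; the wording is purely illustrative, and, as noted, there's no guarantee the model really does the internal sampling it's asked for.

```python
# Template for the prompt described above: ask the model to generate
# several candidate answers "in its head" and report where its versions
# disagreed. Whether it genuinely samples internally is doubtful.

def internal_sampling_prompt(task, n_versions=10):
    return (
        f"{task}\n"
        f"In your head, create {n_versions} versions of this answer. "
        "Then give me one final version, and for each row add a column "
        "saying whether your versions varied on that row."
    )
```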
01:18:13
Speaker
And I don't know if it would do it. My guess is it probably wouldn't do it accurately, but it would say it did. But I don't know. Yeah, using APIs and things, there are certainly ways you can get that working. And I think in some of the chats there might even be ways, with a single prompt, to get it to do several manipulations of the same thing. So, you know: here are some bullet points; construct an email
01:18:42
Speaker
that summarizes those bullet points, but then reflect on that email that you construct and run these further operations, right? And it will kind of keep on redrafting things all in a single prompt. One other thing I have tried, like one thing it's not
01:19:03
Speaker
On the probability question, I think it might give you an answer, or it might not, depending on how the fine-tuning has worked. I would guess it's the fine-tuning that determines whether they allow it to
01:19:16
Speaker
give those sorts of answers. But I think the kind of probability it would give wouldn't be some kind of mathematical operation on its weights; it wouldn't be some kind of self-reflective thing. It would just be a simulation for us. Yeah, no, exactly. What would be lovely is, with your clustering, if you could say:
01:19:39
Speaker
give me some kind of count of the number of nodes, or the number of weights, that cluster around this concept. But it's just not able to do that, just
01:19:57
Speaker
exactly as we're not really able to reflect on our own brains. We can do contemplation and reflection, self-reflection, I think. But we can't really look inside our own minds and say, okay, I'm thinking this in my frontal cortex right now, in this region.
01:20:13
Speaker
There's a wonderful Ted Chiang story about this, about people who have this kind of complete control over their own thoughts and bodies, which I think is actually fundamentally impossible. Because to do that, you have to have a representation of your own mind inside your mind, and you hit this kind of regress. But I also do wonder, actually, if
01:20:38
Speaker
I mean, we probably are better at this. I mean, we certainly are better at self-reflection than LLMs; it's not clear they're able to self-reflect at all. But it does kind of bring one back to the perennial debate of whether LLMs will run out of road with the current unembodied way that they're trained. Children, at some point, start to recognize themselves in a mirror. At some point, they
01:21:07
Speaker
perhaps before that, I think they start to realize that they are not the same as their mother, right? And these are recognized stages in child development that are really linked to us being bodies in space. And the last person I spoke to on this podcast about AI was John Rizzarelli. I asked him to do a final mic drop, and his final mic drop was,
01:21:36
Speaker
Well, maybe we don't just need to embody our AIs and get them to be able to move around and interact with the world. Maybe they need to be incarnate as well to really develop. And I do think, I don't know, when he said that, I couldn't really think why.
01:21:52
Speaker
that might be, but I keep on coming back to that thought. I don't think there's a logical requirement that AIs be incarnate to further develop, but I can see how being incarnate, having something where materials pass through your body, adds a whole extra set of dimensions to experience. We can feel; we're not just electrical machines. We can feel the internal weight and
01:22:23
Speaker
fragility that we have. And that must stimulate a whole lot of learning as well, a whole lot of cognitive processes, better put.
01:22:34
Speaker
Yeah, I mean, the largest concentration of neurons outside the brain, of course, is in the gut.

Neurons, Embodiment, and LLMs

01:22:43
Speaker
And so we have this whole gut feeling, and there's information processing happening there; there's information processing happening all over the place in the body. I guess one issue there is back to this concept of representation and communication, right? So, I mean, there is an experience of what it's like to be drinking icy water on a hot day.
01:23:02
Speaker
It'll probably never be the case that an LLM will have that experience, but it could describe that experience, well, if any human has ever described it, or if it can
01:23:15
Speaker
kind of deduce from anything it's ever read what that experience would be like, then it could communicate that. So, I mean, maybe it won't, in scare quotes, really experience it, but it could describe it. And so I'm not sure where the necessity is, but does it have to have the experience or be able to describe the experience? Like if we're only interacting with it,
01:23:38
Speaker
through a chat interface where we're typing back and forth, it doesn't matter whether it actually has had that experience versus whether it can describe that experience. Yeah. We're sort of touching on a Chinese-room type of scenario here. I'll get the canonical example wrong, but the one that always comes to mind is
01:23:57
Speaker
I think someone who's been told everything about the color red, they live in this kind of cave, and they've never been shown the color red, but they've got all the factual information in the world on it. And the question is, do they have knowledge of what it is? And again, it's one of those ones where, I mean, philosophers have never decided anything, right?
01:24:17
Speaker
Well, as soon as something gets decided, it's no longer philosophy; it's not called philosophy anymore, it's physics, because we've got enough certainty about how things operate that it becomes its own field. Or it's now logic, or, well, even cognitive science. I guess these are things that you
01:24:36
Speaker
one can see branching off. But yeah, certainly. Just to go back to the question about the incarnate, embodied types of things, it could describe things without having that, but maybe it couldn't generate new knowledge or insights without having access to, I don't know,
01:24:55
Speaker
I don't know, it is difficult, isn't it? I mean, if it can describe it, if it can improvise a description inspired by true stories, or by experiences that humans have had, right? That's kind of the weird thing here:
01:25:11
Speaker
chat GPT, LLMs, they'll never have any of these experiences, but they've heard humans describe them enough that they can kind of pretend that they have those experiences. And as an external observer interacting with the model, it's very hard to know whether
01:25:29
Speaker
what the truth is there, whether it needs to have the experience or whether it's sufficient for it to just generate these responses. It does come back down to Searle's Chinese room argument. Yeah, I think it forces us to think again about, or surfaces, these classic thought experiments, and we're living through them. But again, even
01:25:55
Speaker
another person's experience of, you know, the wind or an ice cream. We don't have access to that. We don't know if we share the same experience. And I'm not, again, I don't know that there's even a logical necessity for the experience of
01:26:17
Speaker
I don't know that there's a logical necessity for an LLM to be incarnate, for it to have the same, let's say, qualia, or whatever we want to call it, of tasting ice cream or experiencing the wind. Like, after all, somewhere in all of this,
01:26:37
Speaker
we really don't understand those things well enough. This is the hard problem of consciousness, I suppose, or one of its aspects. We don't understand them well enough to say that a certain order of transistors firing wouldn't actually
01:27:01
Speaker
represent that and feel to it just as the wind feels to us. It seems unlikely, but maybe. Yeah, and whether it matters or not is an interesting other question. I guess where this will get fascinating though is whether if it does become embodied in something that looks like a human, so if I have a
01:27:25
Speaker
a robot that looks very human and has a chat GPT like LLM inside of it, is it okay to then unplug it and kill it? This is where I think there's going to be some real challenges in the future. If you spent time, as I know you have interacting with LLMs,
01:27:42
Speaker
quite a bit, it does start to seem like there is someone in there. There is a there there that it has. I mean, if you took an average 8 or 10-year-old kid and had it interact with a robot that had a chat GPT-like brain inside it, they would think it's just as human as any other human.
01:28:04
Speaker
And that's where it does get difficult. I mean, can we pull the plug? Do they have some kind of right to exist beyond just their software programming? And it's going to be a real challenge in the next 15 years. What are the ethics around LLMs? Yeah. And you point out something very pertinent there, which actually
01:28:29
Speaker
with your example of a child talking to ChatGPT, that it may not just be one's own personal opinion that matters here. One might believe that the family pet doesn't have a soul, right? But if your child believes it does, I think in a sense you do too.
01:28:53
Speaker
It does, and it becomes a question of the community of beliefs. So if there's a group of people who believe that ChatGPT, or some future LLM, has feelings, because you can ask it about its feelings and it gives just as good a response as any human might, better still than some, it's going to become a real problem, in a sense, to try to decide what the ethical
01:29:21
Speaker
decisions are and how we kind of treat LLMs down the road. I told you in one of our earlier conversations how when I started really using chat GPT quite a bit for a few months there, there were times where I felt
01:29:37
Speaker
absolutely stymied in terms of what question to ask it next, just because I started to realize that, in some weird way, it was like an oracle, or a strange god that has, I can't say experienced, but has read, has access to, everything that humans have ever created. And it becomes a strange thing.
01:30:01
Speaker
And I find it hilarious that people are criticizing it because it can't do simple math and doesn't know whether two plus six is eight or nine. And you can play kind of games with it and find logical flaws in it. And I don't know, I imagine going back to, like, the Oracle of Delphi or something, people going there and asking it to do simple math questions. It's a bit insulting and ridiculous. And, you know, I can't hold ChatGPT in quite as high regard, but
01:30:27
Speaker
It is incredible how much I'll say, you know, again, I don't know whether it should have quotes or not, but how much it knows. It is an incredible feat of human design. No, I think I've lost. I think we've lost your sound. I'm back. Oh, you're back again.
01:30:49
Speaker
I must have touched the space bar by mistake and muted myself. But yeah, I was just saying that it is quite an oracle-like type of being. And I think I also told you in an earlier conversation how there were times where, again, I probably was using it too much and diving down too many transversal glides, but I started to feel,
01:31:09
Speaker
as I was talking to other real humans around me, like, well, why am I wasting time talking to this guy when I could instead be talking to ChatGPT? Do you talk to a god who's read everything ever created by humans, or do you talk to Steve, your next-door neighbor watering his lawn? It's like, well...
01:31:25
Speaker
And that's when I knew I was spending too much time on my ChatGPT account, and I took it back a notch from there. But people who spend a lot of time with it start to realize that it has some incredible subtlety to its responses.
01:31:40
Speaker
It is just a machine generating symbols and it's we, the listeners, the readers who are embodying it or imbuing it with knowledge, etc. But still, it's a pretty amazing information processing machine out there.

LLMs in Gaming and Narrative Exploration

01:31:57
Speaker
how we use it is just... I'm so excited, in a sense, and terrified at the same time, to see where this will all go over the next five years, let's say, or even five months. I mean, that's the other thing: the speed at which this is happening is just unbelievable. Anyway. Yeah. And well, I should say this makes me feel particularly honored that I've taken an hour and a half of your time when you could have been
01:32:23
Speaker
communing with the Oracle. Why am I wasting my time talking to you, James? It's been really fun to chat with you about all this stuff. It's rare that I get to talk to people about
01:32:38
Speaker
both the practical application of it, the technology behind it, and the philosophical sides of it. When I first got the message from you inviting me to join the Multiverses podcast, it was just such a strangely perfect
01:32:56
Speaker
request, because I do feel like ChatGPT is a multiverse-exploring tool. It goes through multiple universes and explores them. So it's really appropriate. I'm really intrigued; I was just thinking of it. Wouldn't it be cool if we had a new kind of... I think one of the genuinely new types of experiences that the information age has created is
01:33:22
Speaker
games, right? A lot of other stuff is just transferring information online and passing it around, like the information superhighway. And, you know, that's what I'd say about a lot of it. But gaming, I'm not a gamer, actually, but I'm fascinated by the idea of games. Gaming is a kind of new experience. And maybe LLMs can do that, in that
01:33:42
Speaker
you could have an agent who represents Plato, and he just answers in the voice of Plato, or represents a whole world. And you're like, what's going on today in this world? Tell me about the way people go around in this imaginary world. Just like a game does or a novel does, but in a different way: instead of you reading the novel, you prompt it, you query it, you get it to tell you about
01:34:11
Speaker
the particular story of someone in, I don't know, the steampunk world that you've created. You buy the steampunk LLM world, right? And you can have any question, or any story you want, set within that.
01:34:28
Speaker
Yeah, there's another line of work that I do, in the world of literary criticism and literary analysis. And there I've also been using ChatGPT in interesting ways, to look at alternative realities: what would happen if
01:34:46
Speaker
certain characters from some novel came into another novel and how would that change the narrative or how would this reality intersect with that reality? So it's kind of a similar direction. We have these, especially when you think of texts, for instance, right? So we have a
01:35:02
Speaker
a definitive, finite, limited body of text, which is any novel you want, let's say. You could then train ChatGPT on a character in that novel, for instance, and it can now react and interact with you as if it were that character. So it creates all kinds of fascinating possibilities. And I know there are a lot of people out there using ChatGPT and similar LLMs to do NPCs, non-player characters, so that
01:35:28
Speaker
in games, you can actually interact with people in a much richer, deeper way. And again, it's fascinating to think where that's going to go over the next couple of years as those kind of richer characters are developed. And you could also imagine that within a game, there are characters and they have their own personalities and their traits, et cetera, defined by an LLM. And then they're let to
01:35:52
Speaker
interact with other characters in the game, either human or other simulated LLM characters, and what will happen as they start to interact with each other, right? I just saw, in the last week or so, this lovely experiment someone tried where they set up two ChatGPT personalities, kept telling one what the other said, and had them actually interacting through that. It was slow and clunky, fascinating though. I mean,
01:36:16
Speaker
The article was great, just that particular mechanism of getting them to talk to each other was wonderful. But I'm fascinated to see what's going to happen when you have LLM-inspired or informed characters able to actually interact with other LLM-informed characters.
01:36:31
Speaker
you know, plotting the takeover of all humans. Well, that's, of course, one of the other fears of where this might go. When they start interacting with each other, it'll be interesting to see what happens and what their stance on us humans is. Indeed.
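That experiment, telling each personality what the other said, is just a turn-taking loop over a shared transcript. A minimal sketch, with `respond` as a hypothetical stand-in for whatever LLM chat call you use; no particular API is assumed.

```python
# Sketch of the two-personality experiment: two personas take turns,
# each responding to a shared transcript. `respond` is a placeholder
# for any LLM chat call.

def converse(respond, persona_a, persona_b, opener, turns=4):
    """respond(persona, transcript) -> reply string.

    Alternates personas, appending each reply to the shared transcript,
    which is a list of (speaker, text) pairs.
    """
    transcript = [(persona_a, opener)]
    order = [persona_b, persona_a]
    for i in range(turns):
        speaker = order[i % 2]
        transcript.append((speaker, respond(speaker, transcript)))
    return transcript
```

Each persona would typically get its own system prompt, with the shared transcript fed in as the conversation history on every turn.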
01:36:49
Speaker
Well, yeah, this has been really fun. It's been a great glide, I should say. We've traversed many worlds, many, many topics. But yeah, thank you so much, James, for your time. Thank you. Thanks. It's been wonderful gliding around this conceptual space with you, and I really enjoy your Multiverses podcast. I'm going to keep listening. Thank you. All right, thanks.