Ep 7: Can blockchain solve AI's trust problem?

S1 E7 · The Owl Explains Hootenanny

Shawn Helms, co-head of McDermott Will & Emery’s technology and outsourcing practice, leads a discussion on AI's trust issues and blockchain's role as a solution with Anna Gressel, Senior Associate at Debevoise & Plimpton, Erwin Voloder, Head of Policy at the European Blockchain Association, and Kai Zenner, Digital Policy Adviser for MEP Axel Voss. They explore how blockchain's transparency and auditability can address concerns about AI's opaque algorithms and potential misuse. Tune in for this insightful dialogue at the nexus of AI and blockchain technology.

Find out more in our explainers at owlexplains.com

Transcript

Introduction to the Podcast

00:00:06
Speaker
Hello and welcome to this Owl Explains Hootenanny, our podcast series where you can wise up on blockchain and Web3 as we talk to the people seeking to build a better internet. Owl Explains is powered by Ava Labs, a blockchain software company and participant in the Avalanche ecosystem.

Panel Introductions

00:00:24
Speaker
My name is Silvia Sanchez, project manager of Owl Explains, and with that I'll hand it over to today's amazing speakers.
00:00:34
Speaker
All right. Hello, everyone, and thanks for joining us today. My name is Shawn Helms. I'm the head of the Technology Transactions Group at the law firm of McDermott Will & Emery. I have a great panel here with me today. We've all gotten a chance to know each other over the past few weeks, and I'm excited to be doing this podcast with them today. So let's do a quick round of introductions.

AI Governance and Compliance

00:01:02
Speaker
Erwin, do you want to get us started?
00:01:05
Speaker
I can, I'll jump in to start. I'm Anna Gressel. I'm from Debevoise & Plimpton. I'm a senior associate there, and I focus on AI governance, regulation, and compliance, helping companies build AI tools at really every stage of the life cycle and then defend them before regulators or in civil suits.
00:01:26
Speaker
Great to see everybody again, Shawn and Kai. My name is Erwin Voloder. I'm the Head of Policy at the European Blockchain Association. So I essentially coordinate across the Web3 community and EU institutions, and I raise the issues at the EU level regarding blockchain, digital assets, and everything related to the development of smart contracts in the EU single market. And I'm Kai Zenner from the European Parliament, working there for MEP Axel Voss
00:01:53
Speaker
of the EPP group. And yeah, we are quite active when it comes to AI legislation, AI liability, and data protection. Great. Well, thank you again. Really excited to be talking to you guys again on this topic.

Democratization and Public Perception of AI

00:02:12
Speaker
We hear a lot in the news about artificial intelligence these days.
00:02:19
Speaker
I have been predicting an inflection point in artificial intelligence for about 10 years. And I've been wrong for nine of the 10 years. But I really think, given all the movement in generative AI, now is sort of the democratization of artificial intelligence in a way that I don't think we've
00:02:46
Speaker
ever seen before and is truly the inflection point for this technology. As ChatGPT sort of sprung upon the world, we saw generative AI creating
00:03:03
Speaker
selfies of people that were flying around TikTok, and we've had all kinds of generative AI technologies, not only images but text and audio. It's really, I think, raised not only the public consciousness but the public imagination as to what could happen with this technology.
00:03:28
Speaker
At the same time, we're hearing a lot of doomsday predictions and people are really worried about the technology. And we've had technology leaders like Elon Musk and Steve Wozniak that are calling for a pause in AI development. In a lot of ways, to me at least, it feels like a bit of the early days of the pandemic, where it's a real fear of the unknown.
00:03:58
Speaker
and people wanting to sort of clamp down and not embrace the technology but push back on the technology. And I think a lot of that is around a lack of, one, a lack of knowledge and two, a lack of trust. And so I'm interested to explore the topics today of sort of what blockchain might be able to do to help in this sort of critical moment of artificial intelligence and technology development.

Generative AI Models

00:04:28
Speaker
And so with that as a bit of a kickoff, Anna, can you give us an overview of what is artificial intelligence? Sure, Shawn. Thanks so much. We've been working in the AI space, I would say, consistently since around 2017 and 2018. And what we work with from a systems or policy, or even an application layer perspective,
00:04:54
Speaker
is very different from what we were seeing back then. So I want to talk a little bit about what I might call traditional AI, and then I'll kick it into generative AI, which is quite a bit different in terms of its capabilities. So traditional AI, when we started doing this work, really was a set of applications that were focused on using a large amount of data to make predictions or decisions based on patterns in that data. So they were very good pattern recognizers, and they were used to do things like determine
00:05:23
Speaker
who might be a good bet for a loan or who should be accelerated from an insurance underwriting perspective because they were very low risk and companies could figure that out and move them through the insurance pipeline a little bit more quickly. They were also really good at doing things like detecting fraud, noticing anomalous patterns in data, and then flagging anomalous transactions for a second look.
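A minimal sketch of the kind of task-specific pattern recognition described above, assuming scikit-learn and synthetic transaction data: an unsupervised anomaly detector flags unusual transactions for a human second look. Every feature name and threshold here is invented for illustration.

```python
# Illustrative only: a task-specific "traditional AI" pattern recognizer,
# flagging anomalous transactions for a second look, as described above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic transaction features: [amount, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[50.0, 14.0, 0.1], scale=[20.0, 4.0, 0.05], size=(1000, 3))
fraudulent = rng.normal(loc=[900.0, 3.0, 0.8], scale=[200.0, 1.0, 0.1], size=(10, 3))
transactions = np.vstack([normal, fraudulent])

# Fit an unsupervised anomaly detector on the historical transaction patterns.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(transactions)

# predict() returns -1 for anomalies: flag those for a human second look.
labels = detector.predict(transactions)
flagged = transactions[labels == -1]
print(f"{len(flagged)} of {len(transactions)} transactions flagged for review")
```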
00:05:47
Speaker
And so, you know, those kinds of applications, I would say were developed really at the corporate level. They were task specific and very good at executing on a particular task.
00:05:58
Speaker
What we're seeing with generative AI is a little bit different. For folks listening who are like, what's generative AI? Those are models like ChatGPT, or DALL-E as an image generator. There are other kinds of generative AI models. Those really go from just making predictions to, as the name suggests, actually generating text,
00:06:18
Speaker
generating analyses, generating images. So we're not just making decisions or predictions, we're actually creating something. And that is a little bit different. It raises a few different kinds of issues from an AI, I would say, trust, ethics, responsibility perspective. The first is, you know, we're not just talking about, again, specific tasks, we're talking about very open-ended models that can be used for a lot of different things. They're multi-purpose. Many of
00:06:43
Speaker
the foundation models at, you know, what we call the bottom of that technology stack can be used to do everything from, you know, making recommendations or driving tax analyses. They can be used to write short stories. They can be used to write poems. They can be used to analyze public statements. They're really very multipurpose. They don't have to be trained for a specific purpose, but they can be prompted by a user. The second is that they're multimodal.
00:07:09
Speaker
So some of these models can actually do things like text to image, image to text, text to video, image to video.

AI Regulation and Legislative Challenges

00:07:17
Speaker
You know, they can analyze sounds, depths, 3D modeling. They can do a lot and that kind of multi-purpose set of capabilities is really important when we think about
00:07:26
Speaker
what the models may accomplish in the future and why they're very different from those specific, kind of task-oriented models. And the final thing I would mention is that they're often in the hands of individual people. And so these are not necessarily always technologies that sit just behind a big corporate wall. Many of them have been made available to users through kind of generally open user interfaces or even through open-source
00:07:52
Speaker
kind of systems, and that is important, I think, to Shawn's earlier point about democratization. So we're really beginning to see the democratization of this AI technology, and many people are creating new applications using it.
00:08:05
Speaker
Yeah, that's great, Anna. Kai, I know you are instrumental in working a lot in the regulatory space for artificial intelligence. Can you give us a bit of an overview of what's happening in that space and what are the key principles that regulators are looking at? Yeah, gladly. And I think I will draw back on a lot of those points that were just mentioned by Anna because
00:08:33
Speaker
It's quite interesting. The whole, let's say, let's-regulate-AI movement, which is a global movement, really started already one decade ago. Already in 2017, for example, the OECD was discussing how to regulate AI. Back then, it was not about large language models, foundation models like GPT-4,
00:09:02
Speaker
or chatbot apps like ChatGPT and Bard that have only recently been developed. It was more about new machine learning and deep learning technologies. Because already those technologies kind of challenged our existing legislative frameworks. For example, in the area of liability legislation. And many member states within the European Union
00:09:31
Speaker
face a situation where certain harms that are, for example, caused by a defective drone that is falling on the head of my grandmother would not be covered. Therefore, my grandmother would not get any redress. It was mentioned that certain elements like opacity, complexity, autonomy, and so on,
00:09:58
Speaker
are really leading to those legal gaps. Based on these findings, people worldwide were working on certain principles like human oversight, like technical documentation, that the data sets need to be unbiased, transparency, and so on and so on.
00:10:28
Speaker
to address those legal gaps a little bit, let's say update our legislative frameworks and also build up, let's say, the legal certainty that is needed to push forward the deployment of those new AI systems. And the AI Act in the European Union is really based on all that prep work by the OECD, but also by UNESCO,
00:10:56
Speaker
and a lot of other actors, and especially also the technical harmonized standards organizations like ISO, IEEE, and so on and so on that also did already lengthy work on that. And then suddenly, like Anna was saying, there was a new kid on the block, so LLMs and so on and so on that were really challenging
00:11:24
Speaker
all those legal or new legal proposals, because, for example, the AI Act was really focusing on a machine learning system that has an intended purpose and also has concrete use cases. And as Anna said, those new large language models can be used for thousands of different use cases and do not have a real intended purpose.
00:11:52
Speaker
And now we need to start a little bit from scratch again. So we have, for example, now the AI Act, which again is kind of covering normal machine learning and deep learning systems. But we have this new type of AI, which is now becoming more and more popular and big. And yeah, again, we need to find out what we need to adjust and adapt because
00:12:19
Speaker
what was developed internationally and in the European Union is not really fitting. Yeah, no, that's great. And Kai, you're highlighting a point I had in my mind and that is legislation tends to move slow, right? And technology tends to move quick. And I think
00:12:44
Speaker
What I'd like to explore with you all is could there be a technological solution to some of the problems that society views with AI platforms? And so with that in mind, what are some areas that you all are seeing where blockchain and artificial intelligence are overlapping? And what does that look like?

Blockchain as a Safeguard for AI

00:13:12
Speaker
So if you want, I could take that one. I mean, at a basic level, you have decentralized infrastructure and blockchain technology, and they can act as, you know, sort of like encryption-backed guardrails for AI systems, right? So in that kind of model, an AI system can be deployed with those built-in guardrails to reduce its ability to be misused or utilized for any kind of negative actions or behaviors.
00:13:36
Speaker
Developers of those AI models could then encode specific parameters within which the AI can access, for example, various key systems like private keys. And then these conditions can be enforced with the help of tamper-proof technology like blockchain and other kinds of distributed ledgers and smart contracts. And also, increasingly, this has a great implication for oracles, right?
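As a rough illustration of those encoded guardrails, here is a small Python sketch: an agent's requested actions are checked against hard-coded parameters before any key access, and every decision is appended to a tamper-evident, hash-chained log standing in for a blockchain. All names, actions, and rules here are hypothetical.

```python
# Illustrative sketch (hypothetical names): encoding parameters within which
# an AI agent may act, enforced before any key access, with a hash-chained
# log standing in for a blockchain-style audit trail.
import hashlib
import json
import time

# Encoded guardrail parameters: the only actions this agent may ever take.
ALLOWED_ACTIONS = {"read_price_feed", "submit_report"}
MAX_TRANSFER = 0  # this agent may never move funds

# Hash-chained audit log: each record commits to the hash of the previous one.
audit_chain = [{"prev": "0" * 64, "event": "genesis", "ts": 0.0}]

def _append(event):
    prev_hash = hashlib.sha256(
        json.dumps(audit_chain[-1], sort_keys=True).encode()
    ).hexdigest()
    audit_chain.append({"prev": prev_hash, "event": event, "ts": time.time()})

def guarded_execute(action, amount=0):
    """Only requests inside the encoded parameters ever reach the signing key."""
    if action not in ALLOWED_ACTIONS or amount > MAX_TRANSFER:
        _append({"denied": action, "amount": amount})
        return False
    _append({"executed": action, "amount": amount})
    # ...here a smart contract or enclave would release or use the private key...
    return True

guarded_execute("submit_report")           # permitted by the guardrail
guarded_execute("transfer_funds", 10_000)  # denied and logged
```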
00:14:00
Speaker
where I see the flip side of that is that, and Anna and Kai have already touched on this, these large language models and generative AI, for example, like DALL-E, Midjourney, et cetera, they can create a whole bunch of really complex images and deepfakes. Like recently we saw that somebody created, it was like an image of the Pentagon that was on fire, it wasn't real, and the Dow dropped 30 points.
00:14:22
Speaker
So you can imagine sophisticated trading bots that are, on the one hand, primed to issue shorts on certain stocks in response to what a deepfake has created to prompt that kind of market condition, and then that compounded ad infinitum. So on the one hand, sure, we can make the argument that you can use distributed ledgers to keep the encryption-based guardrails on those AI systems, but on the other hand,
00:14:51
Speaker
We also have the problem of this de facto embedding those same risks and amplifying them in the event that somebody or a group of somebodies decides that there's a lot of money to be made in there. Oftentimes there is, unfortunately. Yeah, no, that's a great point. Anna, for you, you know, when I think about
00:15:18
Speaker
sort of the ethos of blockchain. It's very open. Most companies open source their platforms. Part of the draw to blockchain is this, you know, auditability and transparency and I think the
00:15:44
Speaker
sort of openness of everything in the blockchain world is a bit contrary to how people have historically viewed AI, which sort of, by the nature of the technology, works in a way where
00:16:01
Speaker
data goes in, there's this sort of black-box processing, and predictions come out. Even with large language models and generative AI, there have been lots of questions about how the technology works, what it has been trained on, and not a lot of clear answers around that.
00:16:25
Speaker
In some ways, I see these two technologies approaching what they do from opposite ends of the spectrum, from a transparency perspective, and I'm interested in your view on that and how these technologies might be able to complement each other.
00:16:43
Speaker
Yeah, I think that's such a good question. And if I had to predict, I think it's an area where we're going to see a lot of development just from a solutions-based perspective. And when you think about what the drivers are for that, I think it's not just this question of can we open the black box because we want to, because we think it's more trustworthy or ethical, but also because regulation will compel us to.
00:17:05
Speaker
Kai may want to chime in on this later, but one of the core kind of concepts within the AI Act is this idea of information transfer between different actors or clarity of information transparency between developers.
00:17:22
Speaker
and government authorities, for example. And so it asks for all different kinds of mechanisms to make that happen. Data governance on the one hand, risk management related to AI systems on the other, auditability, record keeping, logging. I mean, these are all transparency kind of by different words in different terms. And so I think the question is on a number of different levels, how does blockchain potentially help with that? You know, notwithstanding the fact that there are kind
00:17:50
Speaker
of complexities. I think we'll talk about this on scaling. But let me put them into a few buckets of at least some promising areas that I see. The first would be with respect to inputs to AI systems. That's usually what we would think of as the data used to train AI systems or run AI systems.
00:18:07
Speaker
Blockchain, the ledger capabilities, I think, offer us some really interesting options around record keeping and data provenance to make sure that the inputs to AI models are actually of very high quality and are sourced from clear, trusted, reputable sources. That may not work for all different kinds of AI, but at least for some kinds of AI, that actually may offer a really
00:18:33
Speaker
interesting benefit in terms of making sure that that data is trustworthy in and of itself, for applications where that matters. Or, you know, I think this has been in the news lately, potentially there's traceability with respect to things like consent to data use or intellectual property rights in that underlying data; whether that IP right has been granted can potentially be recorded in the blockchain.
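One way to picture that input-provenance idea, purely as a sketch: hash each training record into a Merkle root that could be anchored on a ledger, so inclusion of a record (and its recorded consent or IP metadata) can be verified later. The records and metadata fields below are invented for the example.

```python
# A minimal, illustrative sketch of input-data provenance: hash each training
# record into a Merkle root that could be anchored on a ledger.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Invented training records carrying source/consent metadata.
training_records = [
    b"record-1|source=licensed-corpus|consent=yes",
    b"record-2|source=public-domain|consent=n/a",
    b"record-3|source=partner-feed|consent=yes",
]

root = merkle_root(training_records)
print("root to anchor on-chain:", root.hex())
# Later, a Merkle inclusion proof against the anchored root can show that a
# given record, with its recorded consent/IP metadata, was in the training set.
```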
00:18:59
Speaker
But it'll take some effort to see how to make that work. And I think we'll see some folks working in that direction. The second is with respect to the functioning of the AI model itself. I mentioned auditability earlier. And I think it's a really interesting question to ask whether blockchain could be the right place to record decisions made about consequential impacts on particular people, or particular decisions made by the model. And so does that work,
00:19:29
Speaker
recording a decision in a blockchain so you can go back and audit it later? Possibly, but I don't think the technology is there yet. But it is one way to think about whether that automated decision could be recorded and looked at later, and whether there's an interesting and useful record being made. And the final thing is on the output of the model itself, just getting back to, I think, one of the points Erwin raised earlier:
00:19:53
Speaker
there are these really tricky issues kind of coming up around deepfakes and credibility of information. And so, if there can be some sort of record kept, then unlike data provenance, this is actually about the credibility of the output. Can we mark something as being created by a deepfake, or conversely, can we mark something as created by an AI model, or can we mark something as being created by a human? So we know the provenance of the
00:20:20
Speaker
output and whether it was modified, whether it was affected in some way by an AI system. That may end up being important down the road for things like verifying news content. But again, I don't think the technology is quite there. These are just ways of thinking about future applications that may show promise in light of the challenges of AI.
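A toy sketch of that output-provenance idea: hash a piece of content, tag it with its claimed origin, and produce a record that could be anchored on a ledger. HMAC stands in here for a real digital-signature scheme, and the key and field names are hypothetical.

```python
# Illustrative sketch of output provenance: each piece of content is hashed
# and tagged with its origin ("ai-model" or "human"); the signed record is
# what would be anchored on a ledger.
import hashlib
import hmac
import json

# Stand-in for the generator's signing key (a real system would use
# public-key signatures, e.g. Ed25519, not a shared-secret HMAC).
GENERATOR_KEY = b"demo-key-held-by-the-generator"

def provenance_record(content: bytes, origin: str) -> dict:
    """Hash the output and tag it with its claimed origin for the ledger."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(GENERATOR_KEY, (digest + origin).encode(), hashlib.sha256)
    return {"content_hash": digest, "origin": origin, "signature": tag.hexdigest()}

record = provenance_record(b"image bytes of a purported news photo", "ai-model")
print(json.dumps(record, indent=2))  # the record that would be anchored

# A verifier re-hashes the content they received; a matching ledger record
# reveals the claimed origin and whether the content was modified since.
```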
00:20:44
Speaker
Maybe I'll jump in, because Anna was mentioning me, and I found her list and also Erwin's examples extremely good, and I would mention the same. Anna was mentioning the value chain in the AI Act, and this was one thing that I also wanted to underline.

AI Technologies and Future Implications

00:21:03
Speaker
AI, and especially foundation models: I would really see or consider foundation models as the new
00:21:11
Speaker
general-purpose technology, like the internet, fire, iron, and so on and so on. And because we are there really at the beginning, of course, the market is still rather open and everything is possible. Of course, what can happen again is that we have, after a few months, complete market concentration. So foundation models are dominated by
00:21:37
Speaker
a few companies like OpenAI, DeepMind, and so on and so on. But, and especially if we take blockchain and AI together, they give us a chance to really rebuild also a little bit our economy, to engage more companies, to help them with information sharing. As Anna said, we in the European Union
00:22:05
Speaker
now put a lot of effort into the so-called Article 28 on responsibility along the AI value chain, where we are saying to companies, okay, it cannot be the case that only the downstream provider of a high-risk AI system needs to be fully compliant with the AI Act. He or she needs to get all the information needed from, for example, a data set supplier like Google or
00:22:35
Speaker
from a Foundation Model Developer like OpenAI and so on. So we really try to push everyone into a direction that in the future there is just more exchange, of course, still considering that there are trade secrets and so on that you cannot share, but the rest you should share more in the future.
00:23:02
Speaker
And again, blockchain and AI then could really kind of create decentralized marketplaces, which involve many more actors, smaller actors, and include SMEs and startups much more in the market, as was not always the case before in the digital market, especially when we are looking now at Platform 1.0, where only really a few
00:23:31
Speaker
actors are dominant and basically all the rest of the economy are just customers that can just take the product and use it, basically, to build another service on top. So this, I think, makes me really excited: that here we can maybe create a new kind of economy, at least in a small area. That's great. Erwin,
00:23:58
Speaker
I know you and I, when we were in Barcelona together, were talking a bit about autonomous agents and other AI interacting with the blockchain. I'm interested in your view on that and whether you think this is going to be a problem or an opportunity.
00:24:19
Speaker
So, funny enough, you mentioned that, because recently the MakerDAO ecosystem released their Endgame proposal, Endgame phases one through five. And what this essentially does is it creates SubDAOs, or para-DAOs, which you can think of as nested decentralization.
00:24:40
Speaker
And in phase three specifically, they talk about the introduction of governance AI tools that will be launched to help with improving governance and monitoring. And it will be aligned with the so-called alignment artifacts that are going to contain all the principles, rules, processes, and the knowledge of the MakerDAO ecosystem. And then these are going to be optimized with those governance AI tools,
00:25:02
Speaker
creating a so-called ecosystem intelligence, is what they're calling it, that will accumulate knowledge and then help improve those processes and decisions over time. And then there will be a fund, a purpose fund, that'll be spun off from this for the development of these free AI models and tools for so-called socially impactful projects. So you can already see among major protocols (and Maker is one of the largest by TVL in the DeFi space) that there is a concerted effort to already start using these autonomous agents,
00:25:32
Speaker
But at the same time, creating, I'd say, nested dependencies within the ecosystem. Because when you start talking about nested decentralization and DAOs within DAOs that have their own stablecoins, like in this case, separate from Maker and Dai, but still fungible with Maker and Dai, and then you have these AI tools that are being used in these separate layers, but then you still have the abstracted top layer.
00:25:55
Speaker
I think it's still really early to say how that is going to work in practice, because this is a protracted kind of time horizon. And I'm using one very specific example, but in general, I think another challenge of using autonomous agents within decentralized systems is, first of all: is this autonomous agent simply like a helper droid in Star Wars? Or does this thing have an identity, agency? Can it attest to things, can it trigger payments?
00:26:23
Speaker
Can it operate a validator node? Is it a delegator? These are going to be questions that will require a completely different rethink of how we're looking at liability and how we're looking at agency under EU regulations. Like, at a basic level, you know, a reappraisal of the eIDAS framework to include autonomous agents, I think, is going to be extremely important.
00:26:46
Speaker
Within the Gaia-X ecosystem, you know, you have moveID, which is already implementing the use of autonomous agents in mobility systems, right? They can build on decentralized Web3 platforms using smart contracts, or specifically within decentralized marketplaces, like what Kai was already discussing. Great example. So I think we're always in this situation where what developers can cook up in a lab
00:27:10
Speaker
is now here, but the regulation is always playing catch-up. And I fear that we're running out of time in terms of closing the gap on these things, because smart contracts and blockchain took 15 years, roughly, to get to where they are now. But the exponential growth of the artificial intelligence ecosystem is, I think, much more hyperbolic.
00:27:37
Speaker
And we're talking about a hockey stick versus just, you know, a slope. And I think that this is the fundamental challenge that we are, we're racing against time. Yeah, great. Thanks for that, Erwin. So as we think about autonomous agents interacting with the blockchain, becoming part of the blockchain,
00:28:02
Speaker
One thing that I think is certainly on regulators' minds and is talked about a lot is having a human in the loop. And, you know, Anna, I know you have thought a lot about issues around artificial intelligence. Is part of the solution having a human in the loop? How do you view that and its interaction with blockchain?
00:28:32
Speaker
That's such a good question, and I think it's a really tricky one, because in some respects, I mean, I think it gets back to this point that Kai raised earlier, which is it's hard to make a general rule with respect to so many different technologies.

Human Involvement in AI Processes

00:28:47
Speaker
And so my perhaps controversial take on having a human in the loop is that sometimes you don't necessarily want one or need one. In certain contexts, it's really the speed of the transaction or the speed of the decision that's going to be helpful.
00:29:02
Speaker
And the level of human oversight and where human oversight is executed or implemented is going to depend on the risks versus the benefits of having something happen quickly. So one of the ways to think about this is really outcome oriented. What is the right outcome for the system? And going back from that, does a human decision at point A or point B make sense?
00:29:31
Speaker
Or is it really the case that you want to have the process unfold and humans to be able to go back and undo it later or stop it if it seems to be going far enough off the tracks and have some sort of monitoring in place for that? I think all of these are going to be a combination, you know, we can talk more about this, a combination of a human and a machine system with the humans and machines working in tandem. And so, you know, we're really beginning to define a future
00:30:00
Speaker
in which we're all going to be doing that in some way. I mean, whether it is, you know, Microsoft Word containing generative AI add-ins, in some respects what seems like a pretty easy use case, where I can just say, okay, I like this text or I don't like this text, I'm a fluent user of Microsoft Word; to more complex robotic systems; to even more complex, potentially, trading systems.
00:30:25
Speaker
at every point, we're going to start having a world in which we have humans and machines, and we're going to have to define what that interaction looks like. And so I don't know if I have a general answer to that, even with respect to AI agents, because tomorrow I could create my own AI agent, and, you know, the kid in
00:30:44
Speaker
the garage next door could create a completely different AI agent for a completely different set of purposes. But I do think regulators on the one hand, but also companies and society more broadly are going to have to think about what kinds of tools we want to put into people's hands in the first place and whether we really need to think about leveling up education actually.
00:31:06
Speaker
to make sure people understand the power of these tools and how to use them responsibly, because so much of this is going to come down to individual use and individual deployment when we start having tools that are customizable. So that's my view. But again, I think others would take a different approach. And I'm curious for the thoughts of the other panelists, because I do think it's a tricky question.
00:31:31
Speaker
Yeah, and you know, Kai, I'm interested in hearing from you in particular. I know the AI Act contains provisions around kill switches and circuit breakers, and I'm interested in where your head is at on this issue. Yeah.
00:31:49
Speaker
So, there will not be a dispute between Anna and me because I completely agree with what she said. And luckily, from our political side, also the rest of the parliament agreed after lengthy debates on it. Because as Anna was saying at the very beginning, the AI Act originally was drafted in a way
00:32:14
Speaker
that there is a horizontal legislative framework with rules that are applicable to every sector, to every use case, in the same way. And this would mean, yes, there is indeed a kill switch in Article 14, paragraph 2(e), so on human oversight, that would apply to everything. So to smart contracts, to connected cars, to AI-
00:32:43
Speaker
driven vacuum cleaners, to whatsoever. And sometimes a kill switch doesn't make any sense. Anna was already mentioning this a little bit. There is, for example, one AI-driven robot that is doing surgeries on the eyes. This is always the example that I'm using. Because in this specific case, there are studies that if you
00:33:11
Speaker
don't interfere with the robot, most of the surgeries go well. The accident rate is really, really low. But if you allow the doctor to just stop the operation or really interfere in an active way, the number of accidents skyrockets. So in this case, it's one of the examples Anna was giving: there shouldn't be any
00:33:37
Speaker
human in the loop, at least if this human is able to interfere, because it's actually making it more risky. It's increasing the risk. And this is why, as Parliament, again, after lengthy debates, we changed the complete approach of the AI Act with a huge change in Article 8, which is kind of an umbrella
00:34:04
Speaker
for all other high-risk obligations. And we are now saying there that basically all those high-risk obligations, like human oversight, need to consider the context of the deployment, the technical harmonized standards, for example, from CEN-CENELEC, that are specifying those articles, and so on. And by doing that, we have now a kind of law with general principles, which is good because then
00:34:34
Speaker
we have a kind of minimum standard for all those use cases, but then it's really, yeah, you really need to do an assessment, okay, what is the AI system, how it's used, who is using it, and so on and so on. I think this is really the best way forward and it's diffusing a lot of the problems that we would otherwise have. Yeah, it's interesting.
00:35:00
Speaker
I make a somewhat controversial prediction when I talk about artificial intelligence where I make the statement that I think the state of California in my lifetime will outlaw human drivers.
00:35:14
Speaker
because it's irresponsible to allow a human to drive a car. When a Tesla crashes because of autopilot, it makes the front page of the news. Well, why is that? It's because it hardly ever happens. But it doesn't make the front page of the news when the drunk guy crashes into a building, because
00:35:38
Speaker
that happens every day. I mean, humans are prone to mistakes, and so the idea of having the human in the loop as being the answer, I think, is an interesting one. Maybe, just because it's really interesting what you are saying: I think this is a difference between the United States and, for example, a country like Germany, where
00:36:04
Speaker
You also need to take into account a lot the general mood among the citizens a little bit, or also cultural differences. Because in my country, most people, even though they know that machines are less prone to errors,
00:36:22
Speaker
they wouldn't trust connected cars, at least for the next decades. Well, I think that's true. I think that's true in the US too, Kai. I think at some point the statistics are going to become overwhelming here. Erwin, a question for you. We've talked a lot about how blockchain may actually be able to help
00:36:48
Speaker
AI and what some of those interactions are. Talk to me a bit about scalability.
00:36:55
Speaker
Because certainly in the early days of blockchain applications, scalability was a real problem.

Technical Challenges and Trust Building

00:37:04
Speaker
I remember hearing the outrageous amount of processing power that CryptoKitties was taking up on the Ethereum blockchain. And it sort of blew people's mind about, hey, is this technology ever going to be scalable? And now this panel is talking about having blockchain
00:37:26
Speaker
record every interaction of artificial intelligence in order to make it sort of transparent. It doesn't seem like blockchain would be able to do that, and I'm interested in your view. So I'm just going to backtrack quickly on the issue of the kill switches, and then I'll get to your question. Just like you have kill switches that are being discussed within the context of the Data Act, you have so-called safe and robust termination
00:37:53
Speaker
under Article 30 of the Data Act versus the AI Act, which is what Kai was talking about. And I think it's interesting, because when you're speaking about safe and robust termination in the context of the Data Act, so looking at smart contracts narrowly within the context of IoT devices, this kind of ecosystem is one where IoT devices, blockchain, and artificial intelligence should exist in some sort of melange, because you're going to need sophisticated tools to be able to make sense of that telemetric data, impute it, and then
00:38:22
Speaker
timestamp it in a permissioned ledger that can secure data fidelity and provenance. That being said, a lot of the problems regarding using kill switches in blockchain have been developer fat fingers, like when OptiFi locked up 600,000 USDC and tanked their entire protocol with a Solana upgrade function. Again, this is an issue where, just like in the case of the doctor,
00:38:47
Speaker
if you let the person in, sometimes, eventually, statistically, a fat finger could lead to something. And in this case, that usually means that a lot of money gets flushed down the drain. With regards to standards, I mean, the Commission is also doing the same thing with respect to the Data Act and smart contracts, right? So you have CEN-CENELEC developing, or going to propose, a harmonized European norm. You have a lot of work being done in ETSI PDL 6 and PDL 11 with respect to permissioned ledgers. So I think that
00:39:17
Speaker
these two things are happening in parallel, and there's going to come an inflection point when both the way that safe and robust termination is defined in the Data Act and the way that kill switches are looked at in the AI Act are going to have to come to some sort of harmonization for these things to communicate in the future and cross-pollinate. So I just wanted to briefly discuss that. Regarding scalability,
00:39:39
Speaker
it's a big problem right now in general. Just like you have the trilemma in traditional capital markets, you have the blockchain trilemma of scalability, immutability, and Sybil resistance. Anna, maybe I can ask you to comment quickly on the scalability issue.
00:39:59
Speaker
Sure. On scalability, I mean, you're asking whether we think that's likely to be an issue in the future in terms of recording, for example, all of the decisions of AI on a blockchain? That's incredibly difficult to do. And I think even outside of
00:40:15
Speaker
the scalability challenges posed by blockchain, that is incredibly difficult to do with respect to AI more broadly in any system. And so finding the right way to record all of the different decisions and inputs to an AI system is, you know, a big technical challenge, and figuring out how to preserve that information is a big technical challenge. I'm not sure that I think it's going to be solved right now, but
00:40:43
Speaker
there are some upcoming proposals in the EU that might make that more important for companies to consider, and Kai may want to weigh in on this, but that's a place where the EU AI Liability Directive is going to potentially have an impact. Because there, you know, I think the concept right now, at least in its early stages, is that
00:41:04
Speaker
you know, if you didn't preserve and make that information available, for example, if someone was hurt, then there could be a rebuttable presumption, for example, that you had caused harm. And so, you know, that's going to shake out in a huge amount of additional legislative work. But it is to say that companies may begin to look at that more
00:41:24
Speaker
significantly. On scalability, the other thing to keep in mind, I think, is that AI scalability right now is going in the direction of scaling to larger and larger language models. That may also change as the computing power required becomes a scarce resource and the value of being able to compute in large language models on mobile devices, for example, becomes a business interest. And so we are actually seeing experimentation with smaller language models.
00:41:51
Speaker
smaller data sets, for example. The scalability pendulum may swing in the other direction, but it is certainly an issue that I think many people are watching, both from a regulatory perspective but also a competitive landscape perspective. I agree with everything Anna said, and honestly,
00:42:09
Speaker
what I was trying to say before was that you have the problem of scalability, Sybil resistance, and immutability. And this is the famous blockchain trilemma. But there are three ways, I mean, right off the bat, where you could say that AI might make a difference. The first is looking at efficient resource allocation.
00:42:25
Speaker
So AI could potentially predict transaction patterns, and then it could adjust those resources accordingly to optimize network throughput. Another one is through data pruning and compression. So different AI techniques could be used to minimize the amount of data stored on a blockchain, without actually losing any critical information, to improve scalability. And another one, I mean, is just improving consensus mechanisms. Honestly, having AI algorithms design more efficient consensus mechanisms that reduce the need
00:42:53
Speaker
for computationally intensive processes like proof of work. And we already see this with respect to what's being done even without AI, like in heterogeneous sharding, for example, or the way that zero-knowledge proofs are being used to take a lot of that heavy computation off-chain while also being privacy-preserving. So I think it's going to be really interesting to see how you can apply those AI tools, for example, with respect to designing more efficient consensus, data pruning and compression, but also combining that
00:43:22
Speaker
with zero-knowledge technologies. I think that's going to be a real game changer with regards to how that space pushes forward and how we could potentially overcome those issues with respect to large or small language models going on chain.
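To illustrate just the data-pruning point Erwin raises, here is a hedged Python sketch: a scoring function (a stub where a learned model could sit) decides which cold transaction payloads to drop from local storage, while their hashes are kept so commitments such as an on-chain Merkle root still verify. Everything here is illustrative.

```python
# Illustrative sketch of AI-assisted data pruning: drop cold payloads from
# local storage but keep their hashes, so ledger commitments still verify.
import hashlib

def importance_score(tx):
    """Stub for a learned model predicting which payloads will be needed."""
    return 1.0 if tx["recent"] or tx["value"] > 1000 else 0.1

transactions = [
    {"id": "tx1", "value": 5,    "recent": False, "payload": b"blob" * 100},
    {"id": "tx2", "value": 2500, "recent": False, "payload": b"blob" * 100},
    {"id": "tx3", "value": 40,   "recent": True,  "payload": b"blob" * 100},
]

store = {}
for tx in transactions:
    leaf_hash = hashlib.sha256(tx["payload"]).hexdigest()
    if importance_score(tx) > 0.5:
        # Keep hot payloads locally along with their commitment.
        store[tx["id"]] = {"hash": leaf_hash, "payload": tx["payload"]}
    else:
        # Prune the cold payload but keep its hash, so proofs still verify.
        store[tx["id"]] = {"hash": leaf_hash}

pruned = [tx_id for tx_id, rec in store.items() if "payload" not in rec]
print("pruned payloads:", pruned)  # e.g. ['tx1']
```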
00:43:43
Speaker
All right, well, we're about out of time here, team. But I guess as a final question, I'd ask you to look into your crystal ball a bit and predict if you think blockchain can help solve the trust problem that we seem to have with artificial intelligence.
00:43:57
Speaker
So Anna, do you want to get started? Yeah, sure. I'll jump in and say I think there are ways in which it will help and ways in which it probably won't solve everything, certainly in part because AI systems are designed to be human run, even if they have autonomous elements. I think we're going to see a lot of collaboration between humans.
00:44:18
Speaker
and machines going forward. And some of that is about governing humans and not just the machine part of it. So I think we need to remember that the humans are part of both the promise and the challenge of AI. And on the technical piece, I do think blockchain will offer some very interesting options in terms of at least mitigating a few of the risks we've identified in AI so far. Great. Great. Kai, what's your thought?
00:44:45
Speaker
Yeah, I just want to focus my answer on one point. So I think, again, what makes me feel really excited about blockchain and AI is this huge potential of the open-source community. And so if we are really letting them engage with both technologies, and also other technologies,
00:45:10
Speaker
I think it will help us to make what is happening much more transparent and maybe address a lot of the concerns and problems that have existed until now. I see a huge potential from this rather civil-society perspective, but also, when I'm now looking at the European industry, for example, there are also huge opportunities if
00:45:38
Speaker
all those actors work together in the future in a much better way, and also for companies to draw on certain results or models and so on that have been developed by the open-source community. So yeah, this makes me very enthusiastic, let's say. That's great. Erwin, what are your thoughts about
00:46:08
Speaker
blockchain helping artificial intelligence in this space? I think that there's definitely a bright future if we can calibrate at an early stage the way that these two technical substrates will interact and I think we need to move
00:46:24
Speaker
towards what I call centaur regulation. So in the same way, you know, in the early days of playing against machines in chess, you had this brief period where people and machines working in consort, like centaurs, were actually winning against the machines. And I think that as AI becomes more and more sophisticated, and it's supplanting a certain percentage of the marginal productivity of human labor, I think that
00:46:50
Speaker
blockchain will by necessity be some way to secure data fidelity through this. And I think that we need to start recognizing that the workers of the future are going to be partially centaurs and also fully machines. So when we're transitioning our regulatory framework, we need to start considering both the centaurs and the machines as part of the consumer economy. Otherwise, all we're really doing is making rules for a snapshot in time, be they for blockchain or for AI. And we're
00:47:19
Speaker
We're missing the forest for the trees, so to speak. Well, that's great. Well, thank you all for the interesting discussion. Anna, Kai, Erwin, it's always a pleasure. Thank you for your insights.
00:47:35
Speaker
We hope you enjoyed our Hootenanny. Thank you for listening. For more hootful and hype-free resources, visit owlexplains.com. There, you will find articles, quizzes, practical explainers, suggested reading materials, and lots more. Also, follow us on Twitter and LinkedIn to continue wising up on blockchain and Web3. That's all for now on Owl Explains. Until next time!