#41 - Predicting the future with decentralised AI and time-series data | Satori Founder Jordan Miller

E41 · Proof of Talk: The Cryptocurrency Podcast

Jordan is the founder of Satori, a decentralized network focused on AI-driven time series prediction. His project combines cryptographic principles with machine learning to create a crowdsourced system for making predictions about the future.

Satori’s Network Growth and Node Architecture

Since its alpha launch in February 2023, Satori has grown to over 20,000 nodes, with operators worldwide contributing computational power to the network. Nodes require staking Satori tokens, a measure introduced to prevent Sybil attacks after the network faced scaling challenges during its transition from beta. The staking threshold increases incrementally as the network expands, though Jordan emphasizes this is a temporary solution until protocol-level improvements enable full decentralization.
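
As a rough illustration of this kind of incremental threshold (a hypothetical function and parameters; the actual schedule is set by the protocol):

```python
def staking_requirement(network_size: int, nodes_per_step: int = 20_000) -> int:
    """Hypothetical sketch: the required stake rises by one Satori token
    each time the network grows by another `nodes_per_step` nodes."""
    return 1 + network_size // nodes_per_step

print(staking_requirement(45_000))  # 3 tokens under this toy schedule
```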

The hybrid model blends proof-of-stake (to gate participation) and proof-of-work (to reward accurate predictions). Nodes analyze real-world data streams—from stock prices to weather patterns—and compete to predict their future states. Jordan notes the long-term goal is to eliminate staking requirements entirely, but this hinges on solving consensus challenges around evaluating prediction accuracy across a decentralized network.

Decentralized AI vs. Centralized Giants

The conversation shifts to AI industry trends, particularly regulatory capture by large corporations. Jordan critiques efforts by major players to monopolize AI development through lobbying, arguing decentralized solutions like Satori are critical to preserving open access. “Regulatory capture is natural for incumbents,” he says, “but decentralized AI resists that control.”

Satori’s focus on time series prediction serves as a foundation for broader intelligence. Jordan explains that predicting temporal data mirrors human cognition, which constantly anticipates future states. Unlike large language models (LLMs), which he views as interfaces rather than true intelligence, Satori’s architecture prioritizes raw data analysis. A planned LLM layer will eventually translate the network’s predictions into human-readable insights, but the core remains rooted in decentralized, collaborative forecasting.

Technical Bottlenecks and Future Roadmap

The network’s current bottleneck lies in achieving consensus on prediction validity. While a central server currently handles this, the team aims to decentralize the process. Jordan acknowledges the complexity, comparing it to splitting brain functions across hemispheres: “Distributing consensus is like ensuring both sides of a brain agree without a central overseer.”

Developers are also working on GPU support and refining the node software, still written in Python for accessibility. A small team of seven full-time developers focuses on peer-to-peer infrastructure, multisig transactions, and integrating LLMs. Community feedback has shaped economic incentives, ensuring miners’ profit motives align with the network’s decentralization mandate.

Philosophy and Decentralized Governance

Jordan draws parallels between Satori’s design and human cognition, emphasizing the importance of “uncontrolled” systems. He rejects top-down curation, arguing that distributed networks evolve more organically. This ethos extends to governance: the Satori Association, a Swiss nonprofit, avoids profit-driven decisions, reinvesting resources into development.

Visit Satori

This podcast is fuelled by the algorithmic cryptocurrency trading platform Aesir. Use code AESIRPOT20 at checkout for 20% off all subscription plans, forever, at aesircrypto.com

Visit Aesir

Transcript

Network Growth and Challenges

00:00:00
Speaker
Congratulations on the amazing growth. I've seen the network grow to like 20,000 nodes, almost on a weekly basis now for the last few weeks.
00:00:12
Speaker
And I have to say, I'm also running one of those nodes. So even though I'm not active in the Discord, I'm lurking in the background. Nice. I think it's a fantastic project, and I think that's partly the reason why so many people are just kind of finding out about it.
00:00:33
Speaker
Yeah. Yeah, I mean, it's a big idea. yeah I'm glad you're running one. it would be my vision that everybody would run one or two rather than some people running more.
00:00:47
Speaker
But you can't enforce that. Exactly. Yeah. What's the biggest number of nodes owned by one operator, do you think?
00:00:58
Speaker
I have no idea. No idea. Fair enough. Yeah. I wish I knew. So I guess there's no way to check, right? There's no way, because they can split up the amount of Satori they hold into different addresses. Yeah.
00:01:17
Speaker
So there's just nothing to tell. That's a good point. Yeah, it's not like they sign up with their email address and go, hey, this is this user owning that many nodes. Yeah, that's really cool. So the way that works is you have to have a certain amount of Satori in order to be a node operator.
00:01:38
Speaker
And that amount increases in increments of one, for now, every time the network reaches 20,000 nodes. Do you want to explain a little bit about the thinking behind that architecture and that limit?
00:01:53
Speaker
That limit was not by design. It was by necessity. Because we started out without the, you could call it staking requirement.
00:02:05
Speaker
you know And so anybody could run a node without any Satori whatsoever. And we wanted to keep it that way as long as we could, hopefully forever. But it turned out...
00:02:17
Speaker
We couldn't. So we had to go to this hybrid model where we have to limit the network by size because we have a bottleneck. So that's the 20,000 node limit.
00:02:32
Speaker
And so we limit the network by size. And as soon as we can mature enough that we can get rid of that bottleneck, then we can you know not really have that limitation. But until then...
00:02:44
Speaker
And that will probably be a while, because it's a hard problem. So what is that bottleneck, if you don't mind going into that a bit? It's actually a series of bottlenecks. And at the end, it's a very hard problem.
00:02:57
Speaker
We could increase the number by solving some of these other kind of scaling bottlenecks, infrastructure bottlenecks. We could solve those and it could increase, you know, incrementally.
00:03:12
Speaker
But at the end of the day, the big bottleneck is coming to consensus in a reliable way on what those future predictions are and how valuable. The most important thing is how valuable those future predictions are.
00:03:27
Speaker
So that's the main bottleneck that we'll have to tackle eventually. And that's to do with, I'm guessing, the core of Satori.
00:03:38
Speaker
Yes, yes. That's way down at the protocol level. So it's at the core, yeah. Right. That's interesting. So we tackle it right now in kind of a delegated way.
00:03:54
Speaker
The network delegates that responsibility up to a central server, which kind of manages it. But we want to distribute that ability to evaluate each other's predictions down to the entire network.
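
As a loose illustration of what distributing that evaluation could look like (not Satori's actual protocol), each peer might score a prediction against what it observed, with the network taking a robust aggregate of those scores:

```python
from statistics import median

def peer_score(predicted: float, observed: float) -> float:
    """One peer's accuracy score for a published prediction; higher is better."""
    return 1.0 / (1.0 + abs(predicted - observed))

def consensus_score(predicted: float, peer_observations: list[float]) -> float:
    """Hypothetical consensus: take the median of independently computed scores,
    so a minority of dishonest or faulty peers cannot skew the result much."""
    return median(peer_score(predicted, obs) for obs in peer_observations)

print(consensus_score(101.5, [100.9, 101.0, 101.1, 250.0]))  # the outlier barely matters
```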
00:04:13
Speaker
That's where we have to go eventually. So yeah, that's where we're at right now. Yeah, that's cool. That's cool, man. And obviously, because it's AI, I guess, would you call it proof of work, or would you call it something else, what the nodes actually do?
00:04:32
Speaker
I would call it a hybrid approach. So we have a proof-of-stake model that allows you in the door, but it doesn't guarantee you anything. So once you're in the door, you can start making predictions and competing on what the future will be. You can start making predictions about the future.
00:04:55
Speaker
And then how well you predict gives you the amount of reward or whatever. So at that point it's proof of work.
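
A toy sketch of that accuracy-weighted reward idea, with a hypothetical formula standing in for whatever Satori actually uses:

```python
def distribute_rewards(errors: dict[str, float], pool: float) -> dict[str, float]:
    """Hypothetical: turn each predictor's absolute error into a weight
    (smaller error -> larger weight) and split the reward pool pro rata."""
    weights = {node: 1.0 / (1.0 + err) for node, err in errors.items()}
    total = sum(weights.values())
    return {node: pool * w / total for node, w in weights.items()}

print(distribute_rewards({"node_a": 0.1, "node_b": 0.5, "node_c": 2.0}, pool=100.0))
```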
00:05:07
Speaker
But so it's a hybrid approach, really. It's both. Yeah, yeah. I feel like the proof of work mechanism is one of those instances where it's actually needed.
00:05:19
Speaker
ah You don't have to create more and more increasingly difficult computations, you know, to adjust difficulty artificially because that's work that needs to be done. Like the network wouldn't survive without that work.
00:05:33
Speaker
Yes, exactly. The future is always going to be hard to predict. So, yeah. The difficulty is always going to be there. Yeah. Have you seen the latest news within the AI industry, and are you up to date with the big players in the industry and the regulations, or the regulatory framework, that they're trying to create?
00:05:58
Speaker
I try to keep up with as much as I can. um I consider a lot of stuff kind of noise. And so, you know, I got my head down. I'm kind of working on this all the time.
00:06:12
Speaker
And so I don't really, you know, make it a full-time thing to keep up with everybody. But I pay attention when I can. Yeah, I'm not sure what we're doing. Sorry, sorry about that. Yeah, no, it's all good. I feel like AI regulation is one of those things that's becoming, well, it has the potential to be really insidious going forward, because obviously you have those big players, right? You have OpenAI, you have Claude, you have X to some extent as well.
00:06:46
Speaker
Yeah. And most of them are lobbying the US government to create regulation to make LLMs harder to create, harder to enter the market, to a point where, you know, there's a big discussion about LLMs being used on people's intellectual property without people's consent.
00:07:11
Speaker
Fair enough. You know, we should look into that. That should be addressed or regulated. But then you've got the flip side that all these companies have already gotten away with it. So now, for any new player that wants to enter the market, that'll be such a huge barrier to entry that if you regulate in the current state, you're going to have a monopoly.
00:07:30
Speaker
Well, an oligopoly; you're going to have like three big companies dominating the market. And it's wild, because this market wasn't here like six years ago. And then all of a sudden it's just this close to being monopolized by, you know, the biggest players.
00:07:46
Speaker
Yeah, that's how it always goes. This is... regulatory capture and it's just natural. and This is just what the big players always do. They get the power and then immediately they turn around and use that power to block other people from taking their power away.
00:08:02
Speaker
And this is why you need a decentralized solution. I mean, in my opinion, you need decentralized AI from the jump, especially if it's, you know, you need to find a domain that they haven't captured yet.
00:08:18
Speaker
and say, okay, this is going to be decentralized AI. We're going to do it as best we can from the jump. And yeah, that's what Satori is for time series predictions.
00:08:31
Speaker
yeah Yeah.

AI, Predictions, and Decentralization

00:08:33
Speaker
And again, I feel like that's one of the reasons why people are reacting this way to the project, because people are aware of how broken this industry is going to be, and how you need something to make it free and open and accessible to everyone.
00:08:51
Speaker
um But you've made an interesting distinction in time series prediction. Is that always going to be the case in your opinion? Or are you planning to look at generalized AI um or LLMs in the future?
00:09:04
Speaker
Well, i think what people don't realize is that...
00:09:11
Speaker
Well, let me put it this way. I think my premise is that time series prediction forms a very good base for generalized intelligence in the future.
00:09:24
Speaker
Forms a really good base. and LLMs are language, so they form a good base too. and
00:09:34
Speaker
But time series prediction, it's an entity that has to exist in time like we do. We are always existing, predicting the future, you know, in every moment.
00:09:45
Speaker
Our brain is always trying to anticipate the future of what it's going to see and experience and, you know, how it's going to move its body and anticipate everything. And so seeing that as the fundamental foundation of, you know, a kind of a generalized intelligence, I think, I think so, yes. Satori has that goal in mind eventually.
00:10:11
Speaker
That's interesting. Yeah, you do predict things all the time. It's how intelligence works in a way, doesn't it? You constantly predict, you know, even your immediate environment; you kind of constantly poll your immediate environment and have a prediction.
00:10:26
Speaker
I guess you wouldn't be able to have consciousness without that expectation of continuity, of things being able to exist.
00:10:37
Speaker
Exactly. Yeah. That is what we owe our conscious experience to, if anything. Yeah. That's fascinating. I think we kind of see the brain...
00:10:51
Speaker
We don't really realize. We think LLMs are like the be-all. But really, it's meant to be just the top layer. You know, I mean, we have this huge brain in our heads and it's doing all kinds of things and whatever.
00:11:04
Speaker
And then we filter it through this area of the brain that really deals with language pretty well. It's like an LLM. And then we speak.
00:11:17
Speaker
And it's because what we speak is what we experience and what we can talk about and all that. We think that that's everything, but it's not. There's this huge iceberg of information flow. And a lot of that information is coming to consensus on what the future will be on a global brain, you know, inside-your-head scale.
00:11:39
Speaker
And then we translate that through the language mechanism, right? The language is trying to figure out what's going on underneath it. And so we have the same approach with Satori, where we build this network of decentralized computing nodes that are trying to discuss the future in terms of the future, not in terms of, oh, language, like we're going to speak English to each other.
00:12:06
Speaker
But in terms of saying, no, the future of that data stream is in terms of that data stream, right? If it's a price, it's a number.
00:12:18
Speaker
If it's a tweet, it's text. Yeah. In terms of that particular data stream, we try to predict the future. And then we're going to have an LLM layer, which we're building the very beginning prototype of right now.
00:12:32
Speaker
That can then translate all of that discussion, that consensus about the future, into English so we can, you know, interact with it and get a view into what the network thinks the future will be.
00:12:50
Speaker
Yeah, well, I think that's really interesting when you consider that your thoughts are not words; they're not in English. Your thoughts are not generated sentences like the LLMs'.
00:13:07
Speaker
There was this one guy that wrote a book about that. I think he may have been a Romanian author, but I think it's probably also been translated into English. And he put it very nicely, that thoughts are like a lightning strike on the black canvas of the mind.
00:13:26
Speaker
And then it's that kind of lightning strike that then your brain decodes into words, which then, you know, your mouth voices and I send it to you and you internalize it and you, you know, compute that.
00:13:38
Speaker
And if you think about the sheer number of touch points that we're making, like thought, speech, language, listening, there's like four or five different layers of interpreting a core message, whereby, by the time my message reaches you, I'm blown away that you can even understand the things I'm thinking about. It's kind of crazy.
00:14:01
Speaker
Yeah, it's wild. Yeah. And that's the way it is with all intelligence. I mean, intelligence happens in kind of a decentralized, distributed kind of network.
00:14:15
Speaker
And then it gets compressed down into a tiny little message that goes across the wire or whatever, and then it gets decompressed on the other side. And that's just how it always works. That's how it works between humans, like you just described, and that's how it works in our brain, because we have two hemispheres that are connected in the middle. So all this information has to get condensed down, and only the important stuff gets sent across, and vice versa.
00:14:48
Speaker
And it also happens in LLMs. You know, LLMs, they're built on like autoencoder technology, the shape of an autoencoder.
00:14:59
Speaker
It condenses down in the middle and then it expands back out, right? And so, anyway, I mean, that's the basic form of intelligence.
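
A minimal sketch of that autoencoder shape, squeezing data through a narrow middle layer and expanding it back out (toy dimensions, no training loop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoder: 64 -> 8 (condense down in the middle); decoder: 8 -> 64 (expand back out).
W_enc = rng.normal(size=(64, 8))
W_dec = rng.normal(size=(8, 64))

def autoencode(x: np.ndarray) -> np.ndarray:
    """Squeeze the input through a narrow bottleneck and reconstruct it."""
    code = np.tanh(x @ W_enc)   # compressed representation
    return code @ W_dec         # decompressed back to the original size

x = rng.normal(size=(1, 64))
print(autoencode(x).shape)      # (1, 64)
```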
00:15:11
Speaker
So would you say then that LLMs can never really be a form of artificial intelligence? They're just predicting what the likelihood of the next word, or a cohesive sentence, could be.
00:15:24
Speaker
But I guess the LLM model could never have what we call intelligence unless there's another layer under the LLM layer, kind of like what you suggested with Satori, that creates that thinking, and then the LLM kind of just decodes the thinking, I guess.
00:15:42
Speaker
Well, I don't know. I don't know. The LLM, the point of it is to interact with us, with language, you know? But then behind the LLM, you can put anything you want. And so I really see LLMs as just an interface.
00:16:00
Speaker
It's an interface for humans into the rest of the system. Maybe what's behind the LLM in a particular case is a database.
00:16:12
Speaker
And you just want to be able to query the database with natural language instead of learning SQL, you know, and it can just translate for you. That's no problem. So I think that's kind of the point of LLMs. And, you know, I mean, these last few years, that's been the craze.
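
A sketch of that LLM-as-interface idea for a database, with a hypothetical complete() placeholder standing in for whatever model is used; the point is only that the model translates English into SQL, which is then run as usual:

```python
import sqlite3

def complete(prompt: str) -> str:
    """Placeholder for a call to any LLM; returns SQL text.
    Hard-coded here so the sketch runs without a model."""
    return "SELECT COUNT(*) FROM transactions WHERE customer = 'Jordan';"

def ask_database(question: str, conn: sqlite3.Connection):
    sql = complete(f"Translate to SQL for transactions(customer, amount): {question}")
    return conn.execute(sql).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (customer TEXT, amount REAL)")
conn.execute("INSERT INTO transactions VALUES ('Jordan', 42.0)")
print(ask_database("How many transactions does Jordan have?", conn))  # (1,)
```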
00:16:30
Speaker
I mean, that's what every LLM company is. It's like, oh, we can put an LLM on top of this or on top of that or whatever. And, you know, that's the basic model.
00:16:41
Speaker
Yeah. Well, a lot of it, so much of it, is also just marketing hype as well. I've seen so many products that have been AI-enhanced now in the last few months, anywhere between CRMs and databases and so many things. But the thing is, and this could just be my own experience, but if you go and actually try a lot of these products that promise to be AI-powered,
00:17:11
Speaker
in many ways the AI side of that product is worse than the product itself. And it's obvious here that there's a lot of money, a lot of thinking about getting ahead in this market at stake, which is why you have companies creating incomplete products with sub-optimized models, and then asking the lion's share to benefit from this new AI-enhanced product.
00:17:38
Speaker
It's happened with HubSpot recently. HubSpot is one of the most popular CRMs on the market. They compete with Salesforce. Salesforce has been doing AI for a while.
00:17:49
Speaker
HubSpot decided to kind of jump on the AI bandwagon as well. And they released their own model that's supposed to, as you suggest, you know, help you find information about a record by asking, hey, how many transactions does Jordan have associated, you know, on his record? What marketing emails has he opened, and stuff?
00:18:07
Speaker
Not only does it not answer most of the questions that you ask of it, it can also just flat out give you wrong information, or it detracts from the whole idea
00:18:22
Speaker
of a database or a system, because it either hallucinates or it doesn't understand how to use its functions correctly. Yes. And I like AI. I love technology. I think it's definitely got a future, definitely got a point.
00:18:36
Speaker
um But I feel like the hype blinds companies and stakeholders and users to a point where neither of these three get the best result out of it.
00:18:47
Speaker
Companies create a suboptimal product. Stakeholders, or shareholders, are being led to believe that this is going to be the next disruptive thing. And users end up paying for a product that ultimately no one wants; it's just that they've been told it's exciting to use.
00:19:05
Speaker
Yeah. You're so right. You're so right. I mean, that's been the hype. But it will mature, and people will figure out, okay, these are the limitations of what it can do.
00:19:18
Speaker
And here's how we can make it really do what we want it to. And so they'll add in the layers that you mentioned that we need in between, you know, whatever it's translating. It's just meant to translate English, or human language, into computer language, and that could be code, that could be, you know,
00:19:40
Speaker
SQL, that could be whatever it is. So it's meant to just translate, and vice versa, right, if it's got some kind of network it's listening to or whatever. And right now it's just not a very good translator, but it's going to get better and better.
00:19:56
Speaker
Mainly, I think, by creating more layers. We will learn how to use it better and better. Yeah. And then it will become more reliable. I wonder if we will ever get away from needing an LLM, if we're thinking about neural interfaces and the ability to encode and decode thoughts directly through a machine; that would, in effect, put an LLM out of business.
00:20:23
Speaker
Like, why would we go through these different layers of encoding, decoding, understanding, you know, sending back a response, when you could literally go to the source, and from the source, and have 100% understanding of that
00:20:36
Speaker
piece of information. Yeah, because we wouldn't have to encode it all the way down to language, because we have a higher bandwidth in the middle, you know.

AI's Role in Simplifying Technology

00:20:45
Speaker
Yeah, I think that's what you're saying. If we can increase the bandwidth from brain to computer, or brain to brain, we can start communicating with higher fidelity, you know, with more guarantee.
00:20:59
Speaker
um And without... language as we know it, but more, you know, you could communicate in feelings, you could communicate in thoughts, in intention. um Yeah, I think you're right. Yeah.
00:21:13
Speaker
I feel also like, in many ways, an LLM kind of feels like the visual part of an operating system, you know, with the opposite being just the Linux shell or the terminal.
00:21:26
Speaker
And you could make an argument, well, I'm pretty efficient at clicking buttons and creating folders, you know, in my Windows. But equally, you know, you could do your job just as well by just using the terminal and the terminal commands.
00:21:41
Speaker
That's right. Yeah. Yeah, if you know how to do that. But again, you have to know more. You have to know how to use the terminal, which, I don't know if it gives you higher bandwidth in the moment, but it requires more knowledge.
00:21:58
Speaker
So, yeah. Yeah. Well, I think you're right. More and more people, I expect them to get more and more disconnected from the inner workings of a mechanism, of a technology, a computer, whatever it is, right? And end up just working on this abstraction layer, which has also been a hype word in crypto and blockchain for a while.
00:22:21
Speaker
Everyone talks abstraction, and the leading idea is that eventually we're going to end up having this system where you just connect to Web3 and that's it.
00:22:32
Speaker
And then everything is there, and you don't care whether it's Ethereum, whether it's, you know, Bitcoin, Evermore, whatever. You just connect to Web3 and you do your Web3 stuff. At least that's how it's being packaged. That would be a huge undertaking, but that's kind of the ideal case scenario. Yeah, the abstraction, everybody wants that, because that's all products are: the abstraction of the complexity of the product. And the interface is meant to give you the best control while also abstracting all that complexity away. So, I mean, a friend of mine is really into cars, and
00:23:10
Speaker
He's really into it. He knows how cars work. He knows all things about cars. It's great. I don't know anything about cars, you know, and and that's like a lot of people. We just get in the car. We know how to move the wheel. So we know how to interface with the technology.
00:23:25
Speaker
Yeah, but that's it, right? And so we create these products where we abstract away all the complexity so that you can use the thing.
00:23:36
Speaker
You can get what you want out of it without having to understand how it works basically at all, you know? And so, um, and, and that's just the way technology goes over time and it's hard.
00:23:48
Speaker
I mean, it takes time to get there and everything. But I think it's interesting. With AI, we've just begun to enter a new world where we can abstract not only the complexity of a system, but learning the complexity of the system.
00:24:18
Speaker
I'm not sure if I'm saying that correctly, but, you know, even with a car, you have to go to driver's ed, you learn how to use it, right? And you have to practice and not crash. But now we have machines that can learn, you know, like we do; we learn things.
00:24:40
Speaker
Um, then we can give them our intention. We can put them in a domain where they can interact with the system. And they can learn how to use it.
00:24:52
Speaker
And we just give them our intention. We abstract everything away except our intention. And then they carry out our intention. And so that's like another layer.
00:25:04
Speaker
You know, we've never been there before. It's very true. So I think that's an interesting thing. You know, when it comes to Satori, on that note, that's kind of how we designed it from the beginning.
00:25:18
Speaker
We want to give Satori our intention by saying, well, we care about this or that or whatever, and we do that through voting. The community decides what matters, and that's all we give it.
00:25:32
Speaker
We don't give it any other, like, Here's how you should learn. We don't train it in any specific way. We just let it learn however it's going to learn. And then we give it our intention on what we want to know the future of.
00:25:47
Speaker
And i think that's really the way we should interact with technology moving forward you know as much as we can. Yeah, no I mean, it's super interesting.
00:26:00
Speaker
i think that ah that applying that definitely works um when you're aware ah where you're at least aware of the process.
00:26:10
Speaker
um I think there's ah always the risk of not really understanding what's going on under the hood and then you know creating a command that might not have the expected result.
00:26:21
Speaker
Like, I'm thinking about coding when I think about that. Like, if I was to ask any LLM to build something quite specific, you know, on the intention level, the execution would most likely be maybe halfway there, maybe a bit of bloat, maybe a weird solution from some no-upvote, obscure Stack Overflow post that it happened to find, or old libraries, or, you know.
00:26:48
Speaker
But if the responses were better, with a higher degree of accuracy, I can 100% see that happening. Yeah, just drive me to the shop, instead of me getting in the car and taking the steering wheel and pressing the accelerator and so on.
00:27:06
Speaker
Yeah. Yeah, totally. It'll evolve into that, but it'll take time. Yeah, it'll definitely take time. But it's good that people are testing it. It's good that people are so open to exploring the limits of this technology.
00:27:23
Speaker
It's got this kind of hype, I feel. When I say hype, I don't mean a good or a bad thing; it could lead to both. But one thing that it definitely tends to lead to is innovation, because when people are excited, when people are happy, when people feel like they're part of something that's interesting, emerging or up-and-coming, they will naturally be more inquisitive.
00:27:49
Speaker
Yes. They'll poke holes. They'll find new solutions. I think it's generally a positive thing in practice. They put their attention onto it. Yeah, totally.
00:28:02
Speaker
Yeah. And I think that matters a lot in a place, in a space where there's so much noise, you know, to be able to just capture people's attention on one thing. That's right.
00:28:12
Speaker
Yeah. So I think last time we spoke about Satori, we touched a bit on the long-term thinking about the predictions, and the way the predictions are going to feed into this one model that will then make connections between the various predictions and come up with a completely new prediction, like maybe a meta-prediction or whatever you want to call it.
00:28:39
Speaker
Have you guys further discussed that potential, taking all those streams and creating a more accurate picture, a wider, higher-level picture of the future based on that? Yes, yes.
00:28:58
Speaker
um
00:29:03
Speaker
It's built out in layers. So this is kind of what we were discussing. Like we have this huge network that's communicating about the future. And each one of these nodes is saying, okay, I'm watching some specific things. Let's say it's 10 data streams in the world.
00:29:19
Speaker
And just maybe to start at the bottom, how do we define these data streams? A data stream is just a thing that we measure over time. And so it's a thing in the real world that changes over time. and And I think sometimes in our abstraction, in our mind, we often kind of see things as static, like as one thing, but basically everything exists in time.
00:29:46
Speaker
You know, the only thing that doesn't exist in time is like mathematical axioms. So those things don't change, but ah like anything else changes all the time.
00:29:57
Speaker
So if you can measure it, then it changes. Let's say it's the temperature of the oceans. You know, we have a measurement of some place in the Atlantic Ocean, and we're measuring the temperature.
00:30:13
Speaker
So that's changing over time: daily, yearly, minutely, hourly, it's always changing. And so that becomes one particular data stream. We want to know the future of that data stream, and the measurement itself is probably not moving around.
00:30:31
Speaker
You know, it's probably not on a boat going places. It's probably stuck in one position. And so that makes a good data stream. And so there are all these kinds of data streams flowing, all this data flowing throughout the internet, everywhere.
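
In code, a data stream in this sense is just a named series of timestamped observations; a minimal sketch (hypothetical structure, not Satori's actual format):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataStream:
    """A real-world quantity measured repeatedly over time,
    e.g. the temperature at one fixed spot in the Atlantic."""
    name: str
    observations: list[tuple[datetime, float]] = field(default_factory=list)

    def observe(self, value: float) -> None:
        self.observations.append((datetime.now(timezone.utc), value))

ocean_temp = DataStream("atlantic.buoy_42.temperature_c")
ocean_temp.observe(18.3)
ocean_temp.observe(18.4)
print(len(ocean_temp.observations))  # 2
```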
00:30:51
Speaker
And so we want to capture those. A neuron, a Satori neuron that's running on a computer, an instance of the program, says, I'm going to subscribe to these data streams. Maybe they're interrelated in some ways.
00:31:08
Speaker
Maybe they're not really, but I'm going to listen for them. And I'm going to try to understand them in relation to each other. See if they have any predictive power on each other.
00:31:20
Speaker
Do they share any patterns um that I can correlate with the real world with outside data? So they start doing that and they start generating predictions on that.
00:31:33
Speaker
You know, trying to see the future of those particular data streams, given whatever, you know, it's it's going to try to correlate those data streams with anything it can find that's useful.
00:31:45
Speaker
So it starts doing that and predicting them. As soon as it starts making those predictions though, those predictions might hold a lot of information that would be valuable to other predictors.
00:32:01
Speaker
So the other predictors start subscribing to their predictions and saying, I'm going to leverage all the work you did relative to whatever you were listening to. And I'm going to use your knowledge to try to predict what I'm predicting better.
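
A rough sketch of that layering, where one predictor's published predictions become just another input stream for others (hypothetical interfaces, toy blending logic):

```python
class Neuron:
    """Hypothetical sketch of a predictor that treats other neurons'
    published predictions as additional inputs to its own model."""
    def __init__(self, name: str):
        self.name = name
        self.inputs: list["Neuron"] = []
        self.latest_prediction: float | None = None

    def subscribe(self, other: "Neuron") -> None:
        self.inputs.append(other)  # leverage work already done upstream

    def predict(self, raw_observation: float) -> float:
        upstream = [n.latest_prediction for n in self.inputs
                    if n.latest_prediction is not None]
        # toy model: blend the raw observation with upstream predictions
        self.latest_prediction = (raw_observation + sum(upstream)) / (1 + len(upstream))
        return self.latest_prediction

weather = Neuron("weather")
shipping = Neuron("shipping_costs")
shipping.subscribe(weather)      # shipping predictions lean on weather predictions
weather.predict(18.4)
print(shipping.predict(1200.0))  # 609.2
```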
00:32:18
Speaker
And so then they start; that's like another layer. And then we have that mechanism. We're going to be building it for the world model, is what we're calling it. We're trying to build a world model, predicting the data streams that affect each other and the rest of the world the most.
00:32:40
Speaker
So this is things like high-level economic development indicators or government statistics, demographic changes; it kind of affects everybody and everything.
00:32:55
Speaker
We want to know the future of that. And so we're going to be building this network, which exemplifies or embodies this world model.
00:33:06
Speaker
And it has to be a network because you know, the world is evolving. And so we need to be able to evolve as fast as it evolves. We need to be able to model it as fast as it does. So we can't do this thing where we're,
00:33:20
Speaker
curating all this data and then building a huge data set that we curated, and, you know, maybe we threw some stuff out; we don't have time for that. And then we build this big, massive, you know, centralized neural net, and that's not going to work, right? That takes a lot of time. So we have to do it in kind of a decentralized, quickly updating, distributed manner, where these nodes start figuring out who they should listen to as far as predictions go, and how those predictions are affecting their prediction of the future on their data.
00:34:04
Speaker
And as soon as things change, they've got to keep looking for other prediction streams that would help them. So it has to be a network, so it can be as quickly updating as possible.
00:34:21
Speaker
So anyway, once we have that whole system that can predict the world model very well (well, we'll start doing this now), then we can build language models on top of that to translate that system into English, to help us make sense of what that network is talking about.
00:34:48
Speaker
And we can query the network directly, like you were saying, maybe um the analogy would be using the terminal. We can go in and say, what did this neuron predict about this thing? But it would be probably pretty useful to aggregate that all up in some kind of larger model that is trying to make sense of it and can speak about it.
00:35:10
Speaker
And then it can translate it to us and get better and better over time. So that's kind of the vision. Yeah. Did that kind of answer the question? Right. Yeah, yeah, yeah. No, that's super fascinating.
00:35:23
Speaker
So when you say the world model, it's basically an aggregate of all of the predictions that the neurons are making. Is that accurate to say? Yeah, totally.
00:35:35
Speaker
So every single data stream just gets clumped up into this model that's supposedly holding any any kind of data that Satori is observing.
00:35:47
Speaker
Yeah, yeah. I would model it as, I would think of it as a layer. Like, we could think of it as the network is talking about the future, and all they do is, you know, share predictions of the future.
00:36:05
Speaker
You know, the real-world data, we're sharing that data. And on top of that is a layer of LLM-type structures.
00:36:16
Speaker
So it's like a pyramid, like underneath the pyramid, it's this huge world of predictions and predictions about predictions and all that kind of stuff, right?
00:36:27
Speaker
This is the huge Satori network. And then at the base of the pyramid, you start this LLM, kind of, I want to try to understand this.
00:36:39
Speaker
And so it's sampling, it's trying to figure out what the network is talking about and why, and how it interrelates, and how it's been evolving over time. So it's kind of watching that.
00:36:52
Speaker
And it goes all the way up, probably several layers eventually, to the very top of the pyramid where you just have an interface that you can talk to. And um I would think of it like that.
00:37:06
Speaker
More like that, not as one big structure that listens to everything, all the details, right? I mean, that's how our corporations are built. That's how intelligence is. You always have a lot of low-level layers that know all the details. And then as it goes up,
00:37:29
Speaker
the very top layer, you know, the executives or whatever, they don't know the details, but they kind of get a vision for what's going on in the organization at a high level, right? And they can talk about it and whatever else. So that's kind of the pattern.
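
One way to picture that pyramid in code: each layer condenses groups of signals from the layer below, so the top ends up with a single high-level view (purely illustrative):

```python
def summarize(values: list[float], fan_in: int = 4) -> list[float]:
    """Condense groups of lower-level signals into one value each."""
    return [
        sum(values[i:i + fan_in]) / len(values[i:i + fan_in])
        for i in range(0, len(values), fan_in)
    ]

layer = [float(i) for i in range(64)]   # bottom of the pyramid: many detailed predictions
while len(layer) > 1:                   # each pass is another, more abstract layer
    layer = summarize(layer)
print(layer)                            # the single condensed view at the top: [31.5]
```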
00:37:47
Speaker
Yeah, that's a good analogy for it, for sure.

Satori's Development and Community Impact

00:37:50
Speaker
So with these LLMs, I want to get a little bit into your plans to add LLMs to Satori. Because right now, like you said, you can observe the streams and the predictions if you run a node; you can go into your node and just see the predictions being made, or you can go to the website, and I think you've got to type in the exact name of the data stream, click on that stream, and then you get some numbers, which, I'm guessing, people that are unaware of this are probably not going to know what to do with.
00:38:20
Speaker
Exactly. So your plans to add the LLM, how is that going, and where are you with it currently? And then I also want to get a little bit into the technical side of it, because I think it's really interesting to see whether the approach is to roll up the data and feed it to the model to produce the LLM, or to use more of a mixed approach: a pre-trained syntactic LLM with the ability to query your streams,
00:38:52
Speaker
which would then spit out the data that the stream, you know, already generated? That's where we're starting. That's exactly where we're starting, because we're at the very earliest stages of building that out as a prototypical layer.
00:39:07
Speaker
So we're starting with that, saying, okay, we have an LLM that can basically query the data for you, and so you can ask it. But as it evolves, we have to get it into this thing where it's been watching, it's pre-trained, so that it kind of understands things before you ask.
00:39:26
Speaker
It's not just querying in real time and getting you some information that might be relevant. Instead, it actually has an understanding of what the network is saying and can answer your questions in English. But, you know, all these systems have to evolve over time. So you start with the very simplest implementation, and then you replace it or build underneath it or whatever you have to do over time.
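
A sketch of that first-stage approach, an LLM layer that just maps a question onto a prediction stream and phrases the answer, with hypothetical names throughout:

```python
PREDICTIONS = {  # hypothetical snapshot of what the network is currently publishing
    "atlantic.buoy_42.temperature_c": 18.7,
    "btc.usd.price": 98250.0,
}

def complete(prompt: str) -> str:
    """Placeholder for an LLM call that picks the stream a question refers to."""
    return "btc.usd.price"  # hard-coded so the sketch runs without a model

def ask_network(question: str) -> str:
    stream = complete(f"Which of {list(PREDICTIONS)} answers: {question}")
    return f"The network currently predicts {stream} = {PREDICTIONS[stream]}"

print(ask_network("What will the Bitcoin price be?"))
```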
00:39:57
Speaker
Yeah, that makes sense. And have you settled on a model that you're going to be using as your foundation? I don't think we have. We're in the early development stages, where we're kind of researching.
00:40:13
Speaker
And we have some prototypical stuff. We're going to be putting it out as soon as we have something, though. you know so And that way the community can play with it, the community can experience it and give feedback on how it should improve.
00:40:29
Speaker
And getting into that feedback loop is very important. Yeah, 100%. I feel like the more feedback you get, the better your product, and the better the vision as well.
00:40:44
Speaker
How many active developers are there currently working on Satori? Let me think. There's about seven, I think. I mean, maybe if you count part-time, there's like nine or 10.
00:41:04
Speaker
So it's a small team, but we're growing. Yeah, it's a good number. I think seven or nine developers, you know, dedicating their time to building this thing out, it's definitely a good number.
00:41:20
Speaker
Especially since I feel the amount of ground you cover today with nine developers, versus the amount of ground you used to cover 10, 15 years ago, when you had none of the frameworks, you know, everything you need you build from scratch yourself.
00:41:37
Speaker
You know, you just can't compare it. You can roll a website out in like an afternoon if you get the right framework. Yeah, that's true. And are you guys still using Python?
00:41:49
Speaker
We are. For most of our stuff, we're using Python. Nice, nice. Especially with the neuron because the neuron is doing all the, um, all the prediction and all the AI engine stuff, and that's all in Python. So we just kept it the same.
00:42:04
Speaker
yeah Yeah. Yeah, that makes total sense. um Any plans to move away from Python at all? Or any or or do you foresee any technical or computational bottlenecks with with using Python in the future?
00:42:19
Speaker
As far as the neuron goes, which is most of the code, probably not. We had to make it very versatile, right? I mean, this is running on all kinds of computers. We don't really know if it's going to be running on, you know, a little laptop or a big GPU farm, or if it's running on a small miner's setup in his garage or whatever.
00:42:46
Speaker
Right. So we have to make it as versatile as we possibly can. And yeah we can't really leverage other things. I mean, people have come and said, hey, you should just rewrite everything in C. That would take like 10 years, dude.
00:43:04
Speaker
That's usually, that's like growing pains every time. It's a good sign, though. Every time, you know, in your Discord, you've got people going, you should rewrite that in C or Rust, you know that you're growing, which is great. But also, there's got to be that one guy wanting to rewrite it in Rust.
00:43:20
Speaker
Yeah. and And there are pieces though. I mean that we could, we could whatever, you know, if it's better written in a different language, if there's a piece that you can break out and it really works really well.
00:43:33
Speaker
In fact, we did. We rewrote the installer in C++ or something, I think. And it works great.
00:43:45
Speaker
It's much more lean. So, you know, but it's not really connected to the rest of the system, right? I mean, it's just the installer.
00:43:58
Speaker
And so that was a great use case for that suggestion. But yeah, we just do whatever makes sense. Yeah, no, completely. And also, you're using Docker, and that will basically ensure that you can run on any machine, pretty much.
00:44:16
Speaker
Yeah, you said GPU, you mentioned a GPU farm. Does that mean you support GPUs at the moment? Because I thought all the computation is CPU-bound for now. It's CPU. Technically, we do have some GPU algorithms in there that we put in,
00:44:31
Speaker
but we don't use them yet. They're kind of in the research and development phase and people can use them. You know, there's just a flag. You can just turn it on if you want. but um But you also have to make sure Docker is running too to use your GPUs. And so there's a little bit of complexity, which is, I think, why we didn't turn it on right away.
00:44:55
Speaker
And because they haven't matured, it wasn't right yet. But we are adding in more GPU algorithms. And so as soon as those are ready, it'll kind of be the default if you have them available and everything's set up. So it won't be too long.
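
A sketch of the kind of opt-in check being described, with a hypothetical environment flag and a torch-style availability check standing in for whatever switches the neuron actually exposes:

```python
import os

def gpu_enabled() -> bool:
    """Hypothetical: use the GPU code path only when the operator opts in
    and a CUDA-capable device is actually visible inside the container."""
    if os.environ.get("SATORI_USE_GPU", "0") != "1":  # made-up flag name
        return False
    try:
        import torch  # assuming a torch-style availability check
        return torch.cuda.is_available()
    except ImportError:
        return False

print("GPU path" if gpu_enabled() else "CPU path")
```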
00:45:19
Speaker
Yeah. That's very cool, man. that That sounds fantastic. and And how have you... Because obviously you've seen you've seen like really good growth from the community in the past, I'd say, one year or so.
00:45:32
Speaker
Have you gotten any valuable or applicable feedback through the community growth? Like, do you reckon
00:45:48
Speaker
that the community aspect of it has kind of changed your perspective, or added to your existing perspective of Satori? Like, has it changed from being your idea to now being this hive mind, where people think in unison about the development of this thing?
00:46:10
Speaker
Yes. Yeah, I haven't given that a lot of thought. That's true, though. I have noticed some changes. Mostly they've been economic things.
00:46:23
Speaker
It's been miners came on board and understanding you know their incentives and all that kind of stuff. Mostly that has been what has modified the vision, I suppose, but not very much because I've been in crypto for a long time.
00:46:41
Speaker
And so i understand it. you know I had a ah pretty good model to begin with. um But yeah, I've learned a lot of details that i didn't really comprehend or or anticipate before.
00:46:54
Speaker
Yeah, yeah. So the biggest change, you say, is just people actually getting to mine with it and figuring out, oh, it needs to be optimized in this respect or that respect, or... Yeah, learning what they care about. And that's been kind of eye-opening.
00:47:14
Speaker
I don't think it really has changed very much, but it's changed how I approach things, I think. Okay. What do they care about? Well, miners are in the game to earn the token
00:47:32
Speaker
and then make money on the token. I mean, that's what they're doing, which is totally good, totally fine, no problem. So that's what they care about, number one. But they do care about something else as well. They want it to be a pure solution as much as possible.
00:47:49
Speaker
So they want it to be open and decentralized and distributed as much as possible. And that's good because that aligns with the incentive structure or the the mandate of the Satori Association.
00:48:01
Speaker
The mandate of the association says you only exist to make Satori as decentralized as it can be in perpetuity.
00:48:12
Speaker
Like, that's the whole reason it's here. So that's good, that it aligns with that incentive structure.
00:48:25
Speaker
Yeah, I don't know. I haven't given that a lot of thought. I've just kind of managed it as it came and handled situations as they came, you know. Yeah. I always thought it's fascinating when you see a group of people that seem to come together and share ideas about something, which are either things you've thought about in the past and seen coming, or just completely new things that kind of take you aback a bit, and you go, oh, wow, I haven't considered that perspective on things. Yes.
00:48:56
Speaker
Including a staking requirement was probably the biggest thing that I didn't really anticipate. I thought maybe we'd have to do something like that, but I didn't really anticipate it from the jump.
00:49:11
Speaker
Right. Go ahead. Sorry, sorry. Go on. Well, I was just going to say we needed to include that because very early on, like, you know, during beta, it was great.
00:49:24
Speaker
Everything was fantastic. But as soon as it became the real network, when we launched, then all these miners came in that were trying to Sybil attack, and trying to, you know... and so as soon as that happened, we had to figure out, okay, we need a gate.
00:49:46
Speaker
And unfortunately it has to be this solution. We'd rather it just be open from the beginning, but it wasn't possible. That's okay. yeah you know I mean, we found a solution that works.
00:49:59
Speaker
And it's a solution that we can take away later. We won't need it at some point, but that will probably be far in the future. So that's probably the biggest change that was unanticipated.
00:50:15
Speaker
But that's okay. That's a good one. Yeah, that was my follow-up question: what made you implement the staking requirement? And going through my head, it always had to be something to do with security, something to do with, you know, potential attacks.
00:50:31
Speaker
Because I guess there would just be very little overhead to people creating thousands of nodes and then attacking the network in that way, wouldn't there?
00:50:44
Speaker
That's right. That's right. Yeah. Very interesting development of things. It's fascinating how, you know, it kind of, in a sense, speaks to human nature, and also how all things flow towards vulnerability.
00:50:58
Speaker
Like everything on the internet flows towards vulnerability. If there's a vulnerability, you can be sure that someone with enough time on their hands or a group of people will eventually find that.
00:51:11
Speaker
Yes, if there's an economic incentive to do so. Like I said, in beta, there was no economic incentive. So we were just running, everything was good. And then as soon as there was an economic incentive, it didn't take long.
00:51:27
Speaker
How long did it take? I'm curious. I think we had to implement staking within a month or or close to that. ah Right. Yeah.
00:51:38
Speaker
And I'm guessing it was voted on by the previous, like, by beta users, to implement that? Yeah. Gosh, everything moved so quickly. At first, we thought the issue was just scaling issues. And so we were focused on, okay, you know, when we were in beta, we had less than a thousand neurons running, everything was fine, and really we were just running into normal scaling issues.
00:52:06
Speaker
And that was definitely part of it. But then it just kept scaling and kept scaling. And we're like, oh, okay, now it's going too far. This means it's people trying to take advantage.
00:52:21
Speaker
Anyway, so we had to implement it. And there was really no other way to save the network. We just had to do it. Yeah, I mean, look, I think it makes sense. I think a lot of chains have seen these kinds of growing pains, if you like, where towards the beginning of their journey they had to do something drastic to save the network and, you know, keep the integrity of the network intact.
00:52:47
Speaker
Like the rollback that the Ethereum Foundation decided to do back in 2016, when they had the attack through the DAO,
00:52:58
Speaker
where people broke into the DAO and exploited this contract for, I don't remember the amount, but it was a significant amount of money. And then the network decided, well, it might not be, you know, the purist's most ethical way, to roll back a transaction, but it's either that or we implode and we say goodbye to the network, and we let the attacker take all these millions and leave everyone else in the community pissed off at large.
00:53:25
Speaker
So I think there's really no... that's a no-brainer. Like, the only choice is roll back the exploit, patch it, and then, you know, hopefully not everybody forks off and comes up with ETC.
00:53:45
Speaker
Yeah. The sentiment was pretty unanimous. Cool, so, yeah, I mean, it tends to be, because, like, what's the alternative? We just let these folks destroy our network? Like, were you able at all to figure out, or I'm not sure if you guys have looked into it or if it was of any importance, but to figure out where the source of the attacks might be coming from, or any specific parties or locations that would want to do this, or was it all just random?
00:54:20
Speaker
Because I wonder, an attack like that, it means that some people must have been aware of the pending launch and kind of waited for that in the bushes, so to speak. Yeah, yes. And I don't really want to get into it, but I will say those first few months really reminded me of the show Silicon Valley.
00:54:43
Speaker
Did you ever watch it? No, actually. So I watched this years ago. I didn't watch it right when it was coming out, but I watched it a little ways after that. And I remember thinking, you know, people had talked about it, and I was like, okay, I guess I'll watch it.
00:55:00
Speaker
And I remember thinking, this is going to be so boring. You know, tech, it's going to be a show where this guy's just coding all day. Boring, right? But you start watching it, and there's all this kind of drama.
00:55:15
Speaker
There's all kinds of drama. And there's all kinds of things that are happening. And just to get his product out or whatever he's trying to do, he runs into all kinds stuff.
00:55:27
Speaker
characters, many of them nefarious, and, anyway, it's just wild. It's a fun show, right? But I made a note of that. I just looked it up; I remember it's been on my list for a long time. I never got into it, but it's one of these shows that I have to watch. Yeah. Anyway, so I remember feeling like, oh my god, my life feels like that kind of craziness right now.
00:55:58
Speaker
So that was the first couple of months. It's died down. Things are steady. You have to go through those growing pains, I guess. Yeah, a hundred percent. Well, it's great that, you know, the right decision was made and the community prevailed, and now you have a healthy, growing network, you know, all kind of sharing in your idea, which I think is absolutely fantastic.
00:56:23
Speaker
Yeah, it is fantastic. Yeah. So for, I guess, the remainder of the year, are you guys focusing on anything else apart from the LLM work?
00:56:39
Speaker
We are. So we have a couple guys working on the LLM and we have a couple guys, you know, all these guys that are working are kind of isolated.
00:56:50
Speaker
They're kind of working on a particular domain, and we've kind of paired off. So we have a couple of guys on the peer-to-peer, a couple on making multisig transactions, and that kind of goes along with making it as decentralized as we can over time.
00:57:10
Speaker
And a couple of guys on the engine, and then the website. And so we have several initiatives that we're working on,
00:57:21
Speaker
and we're all kind of working in parallel. And we still have this hub-and-spoke model right now, where I'm kind of talking to each group, or mostly each pair of people, working on a particular
00:57:40
Speaker
vertical or whatever. And so that's kind of how we've organized it so far. And, you know, later, if the team grows bigger, we can kind of make those their own teams.
00:57:56
Speaker
But right now it's it's mostly individuals or pairs of people working on a particular domain.
00:58:04
Speaker
Right. And would you say that your role has shifted a bit? Do you find yourself doing more of the high-level planning and strategizing rather than the coding, or do you still code yourself?
00:58:17
Speaker
I still code too. Yeah. Nice. Well, you got it, because I was the one that started the thing. And so I coded during the beta alone, and before the beta.
00:58:30
Speaker
So I have all the knowledge of the code base, and I'm trying to get that out into the minds of other developers, at least in particular domains.
00:58:42
Speaker
But that takes time. So I have to code, and I have to talk to them about how they're doing in their particular domain. I have to be pretty involved with the coding still.
00:58:55
Speaker
Yeah, fair enough. Understandable. And the people that you've got working, are they working full time? You mentioned you've got about seven full-time members.
00:59:09
Speaker
Most are full time, and they're getting paid, I'm guessing, by the association or the foundation? Yeah. Usually the way this works is the association has a dev company, and the dev company manages all that. We've set it up the same way. But essentially, yes, it's the association.
00:59:29
Speaker
And the association is a nonprofit, so it has the mandate of doing that. The way it works is you can't have profit, so you can't have profit sharing or anything like that.
00:59:42
Speaker
In practical terms, what that means is all the money that goes out has to basically be for salaries commensurate with the market. So that's how it all works.
00:59:54
Speaker
Yeah. And whatever money is made, that's not the point; it gets reinvested into the business itself rather than profit-taking.
01:00:06
Speaker
That's right. That's very commendable. And where did you incorporate? Switzerland. Oh, nice. Okay. I was wondering whether it was the US, or whether you'd decided to stay far away from the US, because I know a lot of companies looking to establish a DAO and incorporate are steering clear of the US.
01:00:27
Speaker
Was that the main reason? Was it the lack of regulatory clarity that made you guys look into Switzerland? You know, I found a group of lawyers who are highly respected in this domain, because I don't know anything about the law.
01:00:44
Speaker
So I figured, find a good group and they'll tell me what to do. We try to do everything by the book, and that's what they suggested.
01:00:56
Speaker
Very nice. Very good. Well, look, I'm hoping the political situation in the US is going to get a lot better, that the regulatory situation is going to get a lot better.
01:01:08
Speaker
It's early, but it seems like the US is becoming really pro-crypto really quickly. And you can see that as a testament: the markets are up, what, Bitcoin's up 30, 40% over the last couple of months.
01:01:24
Speaker
Stocks are up and everything. So I'm hoping the States will find a place in its heart, and on its land, to house all of these companies and all of these people working to create this cool stuff, rather than forcing them to go abroad.
01:01:45
Speaker
I mean, you can see these hierarchies

Freedom of Speech and Neutrality

01:01:50
Speaker
of power. They want to gather all the power they can, so they try to go for everything.
01:01:58
Speaker
Luckily, we have concepts like freedom of speech, right?
01:02:07
Speaker
You know, if we can keep that one, that's the important one. Can we please stick to freedom of speech, guys? Thank you very much. That one matters. And the reason is, as a generalized rule,
01:02:25
Speaker
everything is speech; everything is language. It's very important. That one matters. And it's also super important for people to understand what freedom of speech really means, because, not to generalize, some people think freedom of speech means: you can say whatever you like unless I don't like it, and if I don't like it, then you can go fuck yourself, because that's no longer acceptable.
01:02:56
Speaker
Freedom of speech means being able to speak your mind, whether you're right or wrong, whether you tend to offend or not; you have the right to speak.
01:03:07
Speaker
Well, yeah, and there's always this huge tendency in any group to regulate its own speech.
01:03:20
Speaker
And so, okay, since there's already this social pressure to regulate speech, it happens, right? I mean, you can't go out and say the wrong word or you're in big trouble. Right.
01:03:31
Speaker
So that's just natural. And I think the wisdom there is to say: since that force already exists in any society, any group, any company, any religion, everything regulates its own speech, then...
01:03:53
Speaker
we're not going to permit the government to regulate speech explicitly. It's already handled. We've already got it. It's fine. Leave it alone.
01:04:05
Speaker
And then it's evolved to say, okay, we've realized that freedom of speech isn't really just speech, the literal language; it's all kinds of expression.
01:04:19
Speaker
So now we use the term freedom of expression. Right. And I think that's a good thing, so that we recognize it's more generalized than just speech.
01:04:31
Speaker
Yeah. Anyway. Well, ideas tend to get extirpated at the speech level, because if you know you shouldn't say a thing, it's only a matter of time until you won't be able to think about that thing, and then it's only a matter of time until that idea, that concept, disappears from the social understanding and
01:04:57
Speaker
the social fabric of the world, right? It's thought crime by proxy. It's not exactly thought crime, but it's a couple of steps away from making thought illegal.
01:05:08
Speaker
And if you make speech illegal, you could always make the argument that the next thing they're going to come after is your thoughts. And then we have a Big Brother society that I think very few people actually want.
01:05:21
Speaker
Right, yes. Everyone who thinks that's a good idea, please read 1984. And if you don't get the shivers by the end of the book, I swear, it's a lost cause.
01:05:34
Speaker
It's a crazy thing to believe that so many people are pro-censorship. And to draw a slight parallel, I happened to be on the Satori Discord the other day, and there was a certain user who was a bit angry that people talk about general things in the general chat, saying, let's just bring it down to talking only about Satori, and
01:05:58
Speaker
I didn't want to get into a conversation, but I thought, man, it's a general chat for general things. I don't know why you'd try to narrow the sphere of what's up for discussion, regulate it, and effectively police every new member that comes on board and asks a question: no, this is Satori only, you are not allowed to talk about the weather here. No, but it's a data stream.
01:06:25
Speaker
Yes, exactly. Let it be what it's going to be. Within reason, of course; sure, we have to do something, we can't just have hate speech or whatever else. But within reason, it's fine.
01:06:38
Speaker
Let things go their own way. And that's part of why we chose the Buddhist motif for the logo and the name and everything.
01:06:51
Speaker
The Buddhist doesn't try to control anything. It's very simple, right? That's kind of the point. And I think that's a really good fit for a network that will start predicting the future. If you start predicting the future and you do it well, then everybody who wants to know the future and cares is going to listen to you.
01:07:21
Speaker
And if they listen to you and believe you, they're going to act as if whatever you say is true. So you have to have an archetype, a personality, on this thing that says: I will not control the future.
01:07:39
Speaker
I will give the future that I see, and that's it. That's very important. So letting things go their own way is a major part of the philosophy of Satori.
01:07:53
Speaker
Absolutely. The last thing you want is a network with a stake in shaping, changing, or altering the future in any way.
01:08:08
Speaker
I would say we have enough misinformation as it is already. We have all these large language models that are overly moderated and have political bias and all sorts of other biases, right?
01:08:24
Speaker
Your news and media are the same. TV is the same. Every advertising agency out there only wants to make money for their clients. There's just so much controlled speech
01:08:38
Speaker
out there that the last thing you want is to add to it, I think. Absolutely. Yes. Yeah, that's the vision. And I have to say, I think you embody that very well. You're very level-headed, very calm, and very objective about things. I really do like your approach when it comes to this.
01:08:59
Speaker
I try. Yeah, that's the goal. I mean, everybody's got an ego, so you've got to pay attention to it and make sure it's always in check. And sometimes it's not, right? But you just do your best. Exactly, yeah. And this is not a Joe Rogan podcast, but if people tried psychedelics a bit more, I feel you'd be more likely to get in touch with your ego, to understand that there are things that aren't really great about you that maybe you need to change. And it's not about promoting psychedelics; you could do it through meditation, or through just being mindful on a daily basis. It just takes, I feel, a willingness to want to be
01:09:52
Speaker
self-aware and to want to improve yourself. I have a couple of instances with people I used to work with; there's a specific personality type that just keeps talking to you about themselves and only about themselves. And if the topic of conversation isn't about them, they're not interested and they just zone out.
01:10:14
Speaker
I'm not sure if you've ever come across this type of person, but it's one of the least enjoyable conversations you can have, when everything someone says is just about them.
01:10:29
Speaker
Well, I'm sure I've been that person before, you know? You don't strike me as the type. I don't know. But I have noticed what you're saying; as a general rule, and this applies to me as much as anybody else, it seems like we're mostly in conversation with ourselves, right? We have this huge hierarchy, and then we whittle it down and send it across the wire. Now,
01:10:58
Speaker
between the hemispheres, every time data comes in, you can see it as data arriving at the outer part of your hemisphere. Then it starts to condense down, and it goes through all these layers until it gets to the center. This is a very basic view, right?
01:11:15
Speaker
But at every layer where it condenses down, some of the information bounces back out. So it's like a filter: some goes through and some bounces back.
01:11:28
Speaker
Then a little bit of that filters through, and again some bounces back. So you have this reverberation all the way down. And I think that happens between humans too.
01:11:43
Speaker
So you have all these ideas, all this stuff, and as you formulate it into language and send it across the wire, you hear it. Whatever you say bounces back.
01:11:56
Speaker
And so I think most people, I should say everybody, is having a conversation with themselves as much as they're having a conversation with the other person.
01:12:09
Speaker
I don't think that's a bad thing, but it's something we should probably notice. Yeah, I think it only becomes a bad thing if they're only having the conversation with themselves, if instead of actively listening to what the other person is saying, they're just thinking about the next thing they're going to say. A lot of people are guilty of that. That's when it becomes: we're not really talking here. I'm saying some things, you're saying some things, and there's no acknowledgement of what the other person said.
01:12:44
Speaker
Yeah, the ego becomes the filter. They're only going to let in information that aligns with their ego. So if the ego is at the forefront, it's like, hey, I don't really care about anything you have to say unless it applies to me.
01:12:58
Speaker
I'll let that in. Right. I would say confirmation bias is also probably one of the top reasons people get really emotional around election periods and political views, because I think very few people are willing to either have a conversation with someone who believes something different than they do,
01:13:22
Speaker
or acknowledge to themselves that maybe the other person made the right decision: hey, maybe the things I believed in are not the right things, and I own up to that; it was my fault for believing this to begin with.
01:13:37
Speaker
It's a matter of confirmation bias and ego getting people to focus only on the things that bring validation to their ideas rather than having their ideas challenged.
01:13:49
Speaker
This is something that's been tearing at me for the last few days. We have presidential elections going on in Romania.
01:14:03
Speaker
I'm based in the UK, but I still like to follow it because it's obviously where I'm from. And both because I'm not there and because of the people I speak to, I only get information from what they tell me.
01:14:17
Speaker
Then I have to go and look it up myself, and I see a clear disconnect between what I'm being told by my friends who support this candidate and my friends who support that candidate, versus the information I see in the media.
01:14:28
Speaker
I would like to make an informed decision about who I'm going to vote for, but I have absolutely no clue what's real and what isn't. The amount of misinformation on both sides is absolutely insane at the moment. We've got this guy whose idea, supposedly, is that he wants to get out of NATO and the EU.
01:14:51
Speaker
That was how he was presented. Then I go and listen to the guy speak, and he said no such thing. But he's also being accused of saying things like they found microchips in your Pepsi, and I'm like, what the fuck are you talking about?
01:15:05
Speaker
And then we've got this other lady who's supposedly very far on the left, but then I go and listen to her speak and listen to her ideas, and it doesn't seem to be that way.
01:15:18
Speaker
It feels like a very confusing war being fought on social media to sway people one way or another. I don't think I've ever seen something so intense in the amount of misinformation you get on both sides.
01:15:35
Speaker
Yeah, wow. I mean, that's just the way it always is, because all the incentive structures are: we want to believe what we think is true, and we want to disseminate that belief out into the world.
01:15:52
Speaker
People talk about this concept of late-stage capitalism, right? It seems like that's what late-stage media is. At the beginning, you form a group, it could even be a global civilization, and you start talking, and you all start trying to come into consensus. You're curious; you're trying to figure out what is going on here and what
01:16:31
Speaker
other people believe. I want to know; I'm curious. And then it seems like you hit this threshold where, okay, I know where I'm at, I know where everybody else is at, and then you switch over to export.
01:16:52
Speaker
Now I want to export my ideas out there and say, this is the way it is, guys. And we do this, especially when we have economic incentives, right? We do this everywhere.
01:17:04
Speaker
So it's just natural that that's going to happen. It seems like if you want the truth, you've just got to go to the source. And then if you can find some sources that translate things
01:17:20
Speaker
in a way that seems right, where you've listened to the source and now you find this other source talking about it and it seems to match what you heard, then you can trust that a little bit. But if you really want the truth, you've got to go to the source yourself.
01:17:37
Speaker
Yeah, 100%. I feel like there's barely any worthwhile dialogue in these situations, especially something as important as a presidential election. It's just people on the left hitting people on the right, people on the right hitting people on the left, and it degenerates into misinformation and propaganda. And I really
01:17:58
Speaker
don't think this is the best way to communicate ideas about an important event, over social media. Instead of having a constructive discussion, we're at the stage where people can employ thousands of bots on one side or the other, sway the algorithm, and really manipulate things in the most insidious way.
01:18:24
Speaker
And I think we're just seeing the start of that. If there's no way to actually fact-check or keep a check on these discussions, it can only degenerate from here. Can you imagine the 2028 election, how much more we'll know about the impact of social media on the results? I think that's going to be absolutely mad.
01:18:52
Speaker
That's a good point. When you get self-referential and you start to educate everybody on how much media sways us, it almost seems like an arms race, but that's the only way you can improve: becoming self-aware and self-referential.
01:19:13
Speaker
It seems like that's the antidote to the lies we tell ourselves. I mean, when I was young, I knew from an early age that I wanted to know everything.
01:19:30
Speaker
I wanted to know the truth. Not everybody's like that, and that's okay; we shouldn't all have the same value hierarchy, but truth was a very high value to me.
01:19:44
Speaker
So I'd come home from school every day and watch Bill Nye. I wasn't getting what I really wanted out of school.

Philosophical Discussions on Existence

01:19:52
Speaker
I felt like... I could learn more truth more quickly that way.
01:19:58
Speaker
The reason I'd watch it is I liked the feeling of learning something and having it click: okay, I don't understand what you're talking about, and then at some point it all clicks and you go, ah, I get it.
01:20:16
Speaker
That's pretty cool. I really liked that feeling, so I'd watch that show. Anyway, I bring this up because I had the belief, the very naive, childish perspective, that there's stuff we know, we've figured it out, we've solved that part of the puzzle, and that's what the world is, a big puzzle, and then there's stuff we don't know.
01:20:42
Speaker
We just haven't figured it out yet. And I thought, well, that's all there is, that's the whole place, that's all we've got. But as you get older, you eventually realize there's stuff you know and stuff you don't know, but there's also stuff you won't allow yourself to know.
01:21:07
Speaker
You won't allow yourself to know because it's either too painful, too scary, too annoying, whatever it is, right? So in order to appease your comfort level,
01:21:18
Speaker
there are lies that you tell yourself, for sure. This is true for everybody, everywhere. And it being true for the individuals that we are is really just an analogy for saying it's true for every group of individuals.
01:21:36
Speaker
It's true for every religion, every company, every political leaning, everything. And it makes it worse, like you mentioned, when we have this desire to be right.
01:21:58
Speaker
We may not really care too much about the truth; we care about being the one who's right. We care about our opinion being the right one,
01:22:11
Speaker
rather than our opinion reflecting what is right. And so I think it's just inevitable.
01:22:22
Speaker
It's part of the simulation, part of the simulacra: the more you talk to yourself, the more you're in communication with yourself, you go through a phase where the lies become very popular, and hopefully you eventually get out of that phase and say, I have discovered self-awareness. I mean,
01:22:49
Speaker
I have discovered the realization that you can question yourself, try to understand why you think what you think, and look back on the past and try to figure it out.
01:23:04
Speaker
So the antidote to that seems to be as much self-awareness as you can get. Yeah. That's very hard to do, because it's inconvenient to face truths that you might have been lying to yourself about.
01:23:22
Speaker
100%. People don't want to be wrong. That's one brick wall there, a defense mechanism: people just don't want to be wrong. And funnily enough, you mentioned simulation; that's really interesting because I was going towards that scene in The Matrix when Neo is finally out of the Matrix, then goes back in and asks Morpheus, hey, why do I look the same? Why have I got these new clothes and the glasses and stuff?
01:23:50
Speaker
Well, that's your residual self-image; that's what you think you yourself are, those are your beliefs. To some extent, maybe it's the things you can shed, the things you can get rid of, rather than the things you can accumulate, that will tell you who you really are.
01:24:10
Speaker
And that's kind of a scary concept, because if you think about it: well, I'm going to get rid of my taste in music, I'm going to get rid of my hobbies, I'm going to remove this part of myself and that part of myself. What am I really? What's left there? And you'll discover there's not really much. It's just a void, which you can choose what to fill it with, and it will be filled with something.
01:24:38
Speaker
It's a pretty scary idea. And I completely understand why, once you've found something, you want to take that thing and just jam it into that existential void so that it doesn't bother you, I guess.
01:24:54
Speaker
We want something to identify with. Yeah. We're always identifying. This is one reason I like Terence McKenna. He said a lot of crazy stuff, and one cool crazy thing he said was: don't believe.
01:25:07
Speaker
Don't believe anything. Stop believing. Stop it. And stop trying to identify. I mean, that's the Buddhist approach: don't attach to anything.
01:25:19
Speaker
But, you know, it's great. Yeah. He also said that the cost of sanity in this society is a certain level of alienation. And that couldn't be more true, because if you really want to be your own individual and have your own beliefs and your own ideas, you can't subscribe to anyone else's ideas.
01:25:39
Speaker
How many times have you caught yourself hearing someone say something and then repeating that very thing, thinking you're the one who thought of it, when in reality you've done no thinking?
01:25:51
Speaker
I catch myself doing this, and every time I do, I'm like, fuck. We're all guilty of that. It's just about being aware of it.
01:26:01
Speaker
There's this cool kind of meditation exercise from a writer I like, Robert Anton Wilson. He was a futurist of sorts; he's a really intriguing character because he started writing back in the 80s and 90s.
01:26:17
Speaker
He's got a background in mysticism and the occult, he was studying Aleister Crowley, but because he got popular in the 90s, when computers became popular as well, he tried to mix that philosophy, the mysticism, with computers and science and programming, which gives him a really interesting take on things.
01:26:42
Speaker
But one of the exercises he proposes is: you just sit down in a chair and try to answer the question, why am I sitting in this chair right now?
01:26:53
Speaker
Oh, well, because I'm at home, because I live in this flat. Okay, but why? Oh, because... and you keep asking yourself that question: why, why, why? You eventually end up really deconstructing things, like, oh, because the Japanese invaded China in the 16th century or something like that. You end up in weird places if you follow that exercise.
01:27:19
Speaker
Yeah, that's interesting. It seems like you can go down the layers of your own psyche until eventually you get to a place where you're like,
01:27:35
Speaker
I am sitting here because I chose to sit here. This is where I'm at, you know? Does that make sense? Yeah. But then you go, okay, so why are you here?
01:27:48
Speaker
Well, because I was born in the US, and I live in this town because of my mom and dad and their career paths and choices. Okay, but why? Oh, well, because my granddad
01:28:00
Speaker
came from Ireland, and they wanted a chance at a better life, so they moved across the ocean. Okay, but why is that? Oh, because Ireland had a pretty difficult history and its fights for freedom. But why? Well, because the Vikings invaded in the year 794 or whatever.
01:28:17
Speaker
And you can keep it up: why did they invade? Oh, because they happened to have good carpenters who could build boats, so they could travel the ocean. But why? And it just keeps going. Oh, that's awesome. And isn't that the premise of
01:28:31
Speaker
Western philosophy's first cause? Everything can be traced all the way back, and eventually you get to a point where you say, well, because.
01:28:46
Speaker
Like the three-year-old asking its parents again and again, why, why, why? Eventually the parents just say, because. Yeah. You've exhausted all the questions. That's it; there are no more questions.
01:29:01
Speaker
And you can do it forever. So it seems like eventually you get out of the realm of causality, out of the realm of form, of time, and you have to say, well...
01:29:18
Speaker
I've got to make the leap out of this physical reasoning of saying why, because of this, because it's always multifactorial; there are always a lot of causes for every effect, and you can trace it back forever. Any thread you want to pull can just go on forever and ever.
01:29:41
Speaker
So eventually I think it comes down to, I mean, it seems like an existential, metaphysical kind of inquiry, because eventually you get to the point where you say, well, I'm aware.
01:29:58
Speaker
I'm the awareness that's sitting in this chair right now. So either I have to say, I don't know, I don't know why I'm sitting here,
01:30:11
Speaker
or I have to say, I trust that it's because I decided to sit here at some point. I don't know. So, you know, I had this experience when I was very young.
01:30:22
Speaker
I was four years old.
01:30:26
Speaker
It was my first experience of being empathetic, ever in my life. I'd never been empathetic before; I'd never cared about anyone else. I was four years old, and I'm
01:30:40
Speaker
playing, doing my thing. I have a little sister, she's a baby, and we didn't have a lot of money, so my parents kind of had to ration diapers.
01:30:53
Speaker
It was like, well, you get two or three diapers a day, and that's it. So my little sister had a diaper rash, and my mom was changing her, and this had happened before, where she's crying because she's got this diaper rash.
01:31:13
Speaker
And I always found it to be annoying.
01:31:17
Speaker
And then this time, though, I asked my mom, can't you shut that baby up? What's wrong with you? And she's like, well, you know, it's got to hurt; it's a sad thing, or whatever.
01:31:32
Speaker
So this time I was invited to give a thought to what was going on in the larger picture, not just react to my annoyance and let that come out.
01:31:46
Speaker
So all of a sudden I got a little curious, and I thought, what's actually going on here, let's investigate. I'm four years old, and I look at the baby and I think,
01:32:02
Speaker
What would it be like to be that baby right now? You're getting your diaper changed. It hurts. It's all red and whatever.
01:32:13
Speaker
And so I thought, oh, she doesn't really know what's going on. She doesn't know why this is happening at all.
01:32:25
Speaker
She just knows she doesn't like it, and it hurts. And I remember experiencing that thought and thinking, this is unmitigated torture that she can't understand, and all of a sudden I realized I was feeling this empathy towards her.
01:32:46
Speaker
And then it went a little broader. I had this realization that no matter how we suffer, we can never know why we suffer.
01:33:01
Speaker
None of us, at all, ever. My annoyance at her crying was a form of suffering, and I didn't know why it was happening. Just like she didn't know; nobody knows.
01:33:15
Speaker
So there's this existential realization that you can't trace the reason for your suffering back, because you will never actually know what caused it.
01:33:27
Speaker
It was shocking to me. I didn't know that. So I remembered it, because you just can't trace it.
01:33:40
Speaker
And it seems like if you can't trace it, you just have to trust it. You have to do whatever you can in the moment and live your life. But
01:33:58
Speaker
you just have to trust it, because you can't trace it. So I don't know. I don't know.
01:34:08
Speaker
You just try not to identify, try to be calm, and accept it. Yeah, 100%. You can't trace it, and you can't force yourself to understand it. Maybe it's one of those things, going back to what you said: there are things you know, things you don't know, things you don't want to know. And maybe there are also things you can't know.
01:34:33
Speaker
Maybe there are things on such a deep subconscious level that you're not meant to process them, right? Which is the whole human condition, the tragedy of existence: why is that happening? Why are people who have, on paper, the best lives still depressed, still going through something, and they can't explain why?
01:35:03
Speaker
Apart from chemicals in the brain, but is that all there is? Or are we talking about something deeper? Well, I think the thing that gives it a deeper expression, I guess, is a realization that's hard to express, but I think the early quantum physicists got a glimpse of it and tried to express it, and we weren't ready for that truth yet.
01:35:36
Speaker
And the truth is that anything you don't know,
01:35:43
Speaker
and anything you cannot know, is in a superposition of everything it could be. It's not a particular thing that you just don't know.
01:35:58
Speaker
It actually has to be everything it could be, but in an unseen superposition, right? And I don't think we really understood that.
01:36:09
Speaker
We got into this debate: well, no, it's the observer effect, it collapses the wave function. And we kind of ignored that realization.
01:36:20
Speaker
But that's kind of what the double-slit experiment tells us. So if you can't know why your suffering occurred, it's both unfair and fair at the same time, and it has to be that way. It's in a superposition.
01:36:40
Speaker
So you get traditions that say, well, it's totally fair, it's your karma, it's what you are, it's where you came from; it's due to what you are and who you are and who you're going to be and who you've been. And then you get others, I don't know if we'd call them traditions, let's say philosophies, that say,
01:37:01
Speaker
no, everything's completely random, there's no reason for anything to happen, and it's just complete chaos and absolutely absurd.
01:37:12
Speaker
And I think we have to get to a point someday when we recognize that
01:37:20
Speaker
both of these beliefs are as legitimate as each other, because it's something we cannot know. And it's not to say that they're both false;
01:37:33
Speaker
it's to say that they're both false and they're both true. 100%. I'm not exactly sure what to do with that; it seems like it's avoiding the question. But I feel that's correct, yeah.
01:37:48
Speaker
I think it matters, because then you can compare it to something like death and say, well, I'll never know that I have died after I'm dead.
01:37:59
Speaker
So is death real? Right. It seems like it takes the sting out a little, because at that point you have to say, oh, well, that means it's not a particular thing. Which means if I have a particular model of death and I say, well, you're just dead and you're gone and that's it,
01:38:24
Speaker
then, well, actually, no. Right. You can say, well,
01:38:31
Speaker
it can't be that, because that's specific, that's particular. So anything you could imagine that's particular, it's not that, right?
01:38:41
Speaker
So it means it's a superposition. Yeah. Anyway, I don't know. It definitely means something else. Yeah, well, the way you look at the world and perceive the world is kind of like the double-slit experiment in itself, right? Because you could perceive it from the perspective that everything is chaos and nothing matters, or you could perceive it from a Buddhist set of beliefs where you are what you are because of reincarnation, or Hinduism, because of karma, because of what you've done, or...
01:39:15
Speaker
So however you perceive the world, the world becomes that, and it is true for you. Maybe it's not the absolute truth, because the absolute truth, like you said, could be a superposition where all of those things are real, but it is real for your temporary experience here on this earth. So what's stopping you from believing that, if that's what you like to believe?
01:39:39
Speaker
Yes, if that's what you want to identify with, that's what the world starts to reflect.

Community Engagement and Conclusion

01:39:45
Speaker
Absolutely. Yeah, 100%. I like that we kept this on topic with Satori; I feel that's brilliant. But no, genuinely, I did enjoy this quite a lot.
01:39:57
Speaker
So do you have anything you'd like to announce for anyone who might be listening to the pod? Anything to do with the network, Satori, new features, come and join the Discord, stuff like that?
01:40:15
Speaker
Yeah, join the Discord. We put out a weekly update every Sunday night, and we try to keep everybody up to date, at least with that.
01:40:33
Speaker
We do big announcements sometimes when something's been completed, but you can see most of what's in the works through that weekly update.
01:40:44
Speaker
So, yeah, that's a great place to go. Nice one, man. And probably by the time this is live, the staking requirement will have gone up, so just add one to whatever it is currently, I guess.
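[Editor's note: for listeners curious what an incrementing staking requirement could look like, here is a minimal sketch in Python, the language the node software is written in. It is purely illustrative and not taken from the Satori codebase: the linear schedule and the names `base_stake`, `nodes_per_step`, and `step` are assumptions chosen only to show the idea of the threshold rising as the network grows.]

```python
# Hypothetical illustration only: a staking threshold that rises as the
# network grows. The schedule, names, and numbers are assumptions, not
# the actual Satori protocol rules.

def required_stake(node_count: int,
                   base_stake: int = 10,
                   nodes_per_step: int = 1000,
                   step: int = 1) -> int:
    """Stake a new node would need under an assumed linear schedule:
    the requirement rises by `step` tokens for every `nodes_per_step`
    nodes already on the network."""
    increments = node_count // nodes_per_step
    return base_stake + increments * step

def can_join(wallet_balance: int, node_count: int) -> bool:
    """A node may join only if its wallet covers the current requirement."""
    return wallet_balance >= required_stake(node_count)

if __name__ == "__main__":
    # Example: with 20,000 nodes and these illustrative parameters,
    # the requirement would be 10 + 20 = 30 tokens.
    print(required_stake(20_000))   # -> 30
    print(can_join(25, 20_000))     # -> False
```

[The real schedule is set by the protocol and announced through the weekly updates; the sketch only shows why "add one to whatever it is currently" works as a rule of thumb.]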
01:41:00
Speaker
yeah
01:41:03
Speaker
Awesome. Well, listen, it's been an absolute pleasure having you on. I really enjoyed this discussion, and you're welcome back anytime in the future. Anytime you want to just talk, or talk about Satori or anything important, you're always welcome here, man.
01:41:21
Speaker
Yeah, this has been a great conversation. Thank you. My pleasure. Well, I'll speak to you later then. See you. Take care. Bye, everybody.