
#8 POT: The Cryptocurrency Podcast - Satori: Predicting the future With Decentralized AI

E8 · Proof of Talk: The Cryptocurrency Podcast
139 plays · 2 years ago

Satori is a decentralised AI blockchain that can predict the future. Its creator, Jordan, explains how it all works in an engaging conversation about the possibilities of a decentralised AI that essentially looks at the world and makes predictions about its future.

Developed by visionary Jordan Miller, Satori is not just a blockchain network; it’s a confluence of AI and blockchain technology, designed to predict the future. Unlike traditional blockchain applications focused on financial transactions or data security, Satori’s only purpose is to make predictions about the future of the world.

The Core Functionality of Satori

At its heart, Satori is a network where each participating computer functions as a node. These nodes are assigned specific areas or ‘data streams’ to monitor. The variety of these streams is vast, covering elements from climate data and stock market trends to socio-political changes.

The nodes continuously gather and analyze data, constantly refining their predictive models. This relentless pursuit of accuracy is what sets Satori apart. Each node acts as an expert in its domain, continually updating its knowledge base and sharing insights with the network. This collaborative approach enables Satori to evolve and adapt, making its predictions more nuanced and reliable over time.

The Role of Satori Nodes

Each Satori node plays a pivotal role in the network’s predictive capabilities. These nodes are not just passive receivers of data; they are active analyzers and forecasters. Depending on the computational power of the host computer, a node can process one or multiple data streams. This versatility ensures that Satori’s network is not just powerful but also resilient and diverse in its analytical capacity.

The nodes are more than just conduits of information; they are centers of learning and adaptation. As they process data, they develop specialized expertise in their respective domains. This expertise is then shared across the network, contributing to a collective pool of knowledge. This process ensures that each node not only enhances its predictions but also enriches the entire network’s intelligence.

One of the most innovative aspects of Satori is the way nodes communicate and collaborate. Each node shares its predictions with the network, creating a rich tapestry of insights. This inter-node communication is vital for refining forecasts and identifying patterns that might be invisible to a single node.

The Broader Implications of Satori

Satori’s potential applications are as diverse as the data streams it analyses. From predicting stock market fluctuations to anticipating climate change impacts, the network’s scope is vast. Moreover, Satori’s design enables it to cater to both public and private forecasting needs. While it can offer insights into societal trends and global events, it can also provide bespoke predictions for businesses or individuals, adding a layer of personalized intelligence to its capabilities.

Satori Discord

Download Satori

This podcast is fueled by Aesir, an algorithmic cryptocurrency trading platform that I helped develop over the last two years and that offers a unique set of features.

Aesir Website

Aesir Discord

Transcript

Introduction to Satori: The Base Layer of Intelligence

00:00:00
Speaker
As I was thinking about and designing Satori and kind of figuring out what it should be, I see it as the base layer of the intelligence stack of a distributed intelligence of the Earth, computational intelligence of the Earth. Because it's mainly just prediction of the future. On top of that, we can build other things. And I will say this, though.
00:00:28
Speaker
Predicting the future in an open fashion, so that, you know, everybody can see it. That is the main goal of Satori.

Interview with Jordan: Creator of SatoriNet

00:00:46
Speaker
What's up, everyone? Welcome to Proof of Talk. I'm here with Jordan, who's the creator of Satorinet, which is a really, really interesting project. First of all, good to see you, man. How are you doing? I'm doing well. How are you? Yeah, all good. Thanks. I was reading about Satorinet and I also watched the video that explains what it is.
00:01:04
Speaker
I have to say, out of all the new projects that I've seen in crypto, this is by far the most interesting one in a while. It's a very simple, yet such a powerful concept. And I'm really excited to talk to you about it. So maybe you just want to start by explaining what SatoriNet is and what its purpose is. Sure, sure.

Satori's AI Network: Predicting Future Events

00:01:26
Speaker
Yeah, I've been working on Satori for about two years.
00:01:30
Speaker
And it's been mostly me. It's a community project, but it's got a really, really small community, because nobody knows about it yet. It's 40 people. So Satori is a network of AIs, AI bots running on computers worldwide. It's a network of AIs that are only trying to do one thing. They communicate with each other. They talk to each other. They're only trying to figure out what the future will be.
00:01:59
Speaker
So that's what it is. That's the whole project. It's a network of AIs trying to predict the future accurately. Satori is a blockchain that predicts the future. It watches the world and predicts how it will evolve. Those predictions are free and open for everyone. They cannot be censored. That's why it uses blockchain. Satori has a machine learning engine.
00:02:23
Speaker
It's automated and learns how to forecast the future. That's how the predictions are made, through the use of AI. Satori lives at the intersection of blockchain technology and artificial intelligence. It is a blockchain that predicts the future. Now, next let's discuss how Satori works.

Satori's Machine Learning and Blockchain Integration

00:02:44
Speaker
Satori is a network of computers, each of which is a Satori node.
00:02:48
Speaker
When a Satori node starts up, the network assigns it something to watch, something to predict, something to become an expert at. The Satori node then subscribes and watches that piece of information, whether it be a government statistic, stock price, climate metric or something else.
00:03:07
Speaker
As it watches the data, it learns how this thing, whatever it is, reacts to all other changes in the world. It finds correlations, detects patterns, and in general makes better and better models of what the future will be. The Satori node never sleeps. It runs all the time, broadcasting its best estimate for what the future holds. It even works with other Satori nodes to discover things it couldn't on its own. Over time, it becomes an expert on the topics it was assigned.
00:03:37
Speaker
and it never stops learning. That's the key to how Satori works. Every Satori node watches some real-world data, specializes in understanding it, and works all day, every day to make the best predictions it can.
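That watch-learn-broadcast cycle can be sketched in miniature like this. All names here are illustrative, not the actual Satori codebase, and the "model" is just a toy rolling mean:

```python
# Hypothetical sketch of a Satori-style node loop: subscribe to one
# stream, keep learning from its history, always broadcast a forecast.

class PredictionNode:
    def __init__(self, stream_id):
        self.stream_id = stream_id   # e.g. "gold.price.1h" (made-up name)
        self.history = []            # every observation seen so far
        self.model = None            # current best model

    def observe(self, value):
        """Ingest a new observation and refit the model on the history."""
        self.history.append(value)
        self.model = self.train()

    def train(self):
        # Toy "model": predict the mean of the last 10 observations.
        recent = self.history[-10:]
        mean = sum(recent) / len(recent)
        return lambda: mean

    def predict(self):
        """The node's best current estimate of the next value."""
        return self.model() if self.model else None

node = PredictionNode("gold.price.1h")
for price in [1800, 1810, 1805]:
    node.observe(price)
broadcast = node.predict()   # what the node would send to the network
```

A real node would of course replace the rolling mean with the automated model search discussed later in the episode.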
00:03:50
Speaker
And I think that's exactly what's so fascinating about it. Because it's a very easy-to-understand top-level idea with a lot of complexity behind it, and I'm really, really fascinated to try and unpack it. Like last time we spoke, you said that the chain itself is not yet live, but you can run a node if you like.

Development Stages: Pre-Alpha to Full Launch

00:04:09
Speaker
Yeah, the chain is not live. So where we're at right now is pre-alpha, alpha release. So we're at a point where, um,
00:04:20
Speaker
Actually, you can just go download Satori and run it on your machine, but it'll probably break as we're fixing things and changing and developing things. We're hoping to release like a real kind of alpha testing network at the beginning of next year. So January, February, March, maybe. And then at some point next year, get to beta.
00:04:49
Speaker
and maybe the end of next year, actually launch the chain with the token, everything's running perfectly, yes. That's what I have. Okay, so when you say that one node is only, or one machine, I'm not sure if a node can do multiple things or just one thing alone. How would you determine what a node is going to focus on when it comes to predictions?

Data Streams and Predictive Collaboration

00:05:17
Speaker
So the network is made up of a bunch of streams. So every node can broadcast real world data onto the Satori network so that all the other bots or nodes can see that data and start predicting it. So when a node starts up and joins the network, it will say, I'm brand new. I don't have anything to do.
00:05:45
Speaker
what should I look at to predict? And the network will say, look at this data stream and maybe this one and this one. And it kind of depends on how powerful the computer is that joins. If it's powerful, then it might get a hundred data streams, or I don't know. If it's not very powerful, it'd probably be less than 10.
00:06:05
Speaker
But it's going to get some assignments. It's going to start listening to those data streams. And it's going to start learning how they work, how they operate, and then broadcasting a prediction of their futures out to the rest of the network.
00:06:23
Speaker
And so, yes, it can predict more than one thing, and it will just predict as many things as the computer can handle, really. How much memory does this have? Bandwidth, all that kind of stuff.
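A rough sketch of that assignment rule might look like the following. The quotas are just the speaker's off-hand estimates ("maybe a hundred" vs. "less than 10"), and every name here is hypothetical:

```python
def assign_streams(catalog, power_score):
    """Hand a joining node a slice of the stream catalog sized to its
    capability (memory, bandwidth, CPU). The quota numbers are the
    rough figures from the conversation, not a published spec."""
    if power_score >= 0.8:      # powerful machine: maybe ~100 streams
        quota = 100
    elif power_score >= 0.3:    # modest machine: fewer than 10
        quota = 8
    else:                       # very weak machine
        quota = 2
    return catalog[:quota]

catalog = [f"stream-{i}" for i in range(200)]
weak_node = assign_streams(catalog, 0.2)
strong_node = assign_streams(catalog, 0.9)
```

In practice the network would presumably also balance assignments across nodes, but the core idea is simply capability-proportional workloads.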
00:06:36
Speaker
Right. Okay. And does it ever learn from its own predictions? Let's say you give it a stream of BTC tickers for one hour. One hour has passed. It made a prediction an hour ago, and it was off by, I don't know, $50 or whatever. Does it take that into account for future predictions and improve its own accuracy? Yes, it does. Actually, that's what it's doing all the time.
00:07:03
Speaker
So you start up the computer, it's downloading the dataset and it gets the history. It gets the history of the data and it's always listening to new updates and adding that to its history. And then it takes that history all the time and it tries to build a model that predicts it most correctly.
00:07:30
Speaker
And it can, you know, depending on the type of data, the amount of data, the type of algorithm it's using to build a model, that could take a few seconds to a few minutes. We're not building these massive, like, deep neural net models yet, mostly because this is made for home computers still at this point. Right. So it'll iterate on these models. So it'll build one and it will say,
00:07:57
Speaker
Okay, if I had been using this model for its history, would I have made the best predictions that I've ever seen? And if the answer is yes, then it saves that model and it says this is the one I'm gonna use when I get new data.
00:08:17
Speaker
I'm going to use this one. And so it saves that, puts it in the background. It's ready to go. But it continues to look for new models. And it's always looking. So 24-7, it's making a new model every few seconds and comparing it to the best one it's ever seen.
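A minimal version of that champion/challenger search might look like this. The candidate "models" are toy forecasters, and `backtest_error` stands in for the question "would this model have made the best predictions over my history?" (all names are illustrative):

```python
def backtest_error(model, history):
    """Mean absolute error the model would have made over the history."""
    errors = [abs(model(history[:i]) - history[i])
              for i in range(1, len(history))]
    return sum(errors) / len(errors)

def last_value(history):
    """Toy candidate: predict the most recent value."""
    return history[-1]

def window_mean(history, k=3):
    """Toy candidate: predict the mean of the last k values."""
    recent = history[-k:]
    return sum(recent) / len(recent)

def search_best(history, candidates):
    """Champion/challenger loop: backtest each candidate and keep
    whichever would have predicted the history best so far."""
    best, best_err = None, float("inf")
    for challenger in candidates:
        err = backtest_error(challenger, history)
        if err < best_err:
            best, best_err = challenger, err
    return best, best_err

history = [1, 2, 3, 4, 5]  # a steadily trending series
champion, err = search_best(history, [last_value, window_mean])
```

A real engine would generate new challengers continuously, 24/7, rather than iterating over a fixed list, but the keep-the-best-backtester logic is the same.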
00:08:38
Speaker
as new data comes in, it can make that model even better because it has more data. So it's always churning on building a better model that predicts the future of whatever it's looking at. That's really interesting. Have you ever used the machine learning tool in Visual Studio? Like, if you play around with C#, you have a way to insert a machine learning model into your Visual Studio project.
00:09:06
Speaker
And the interesting thing is, let's say you give it a data set because you wanted to analyze sentiment for a specific topic. So you give it this data set of different sentences that are all rated between minus one and one: minus one is negative, zero is neutral, one is positive. It will run this data set and it'll try to find the best model to fit,
00:09:33
Speaker
the one that gives it the best results. So at the end of the session, let's say it takes a few minutes, it will tell you: this model had 86% accuracy, this model had 82, this model had 75. So therefore we're just going to use the one that had 86 in production. Right.
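The pattern the host describes, trying several candidate models and shipping the one with the best held-out accuracy, can be sketched in a few lines (the two "models" below are deliberately trivial stand-ins, not anything from ML.NET or Satori):

```python
def accuracy(model, examples):
    """Fraction of labelled examples the model classifies correctly."""
    hits = sum(1 for text, label in examples if model(text) == label)
    return hits / len(examples)

# Two toy sentiment "models" over labels -1 / 0 / 1.
def keyword_model(text):
    if "good" in text or "great" in text:
        return 1
    if "bad" in text or "awful" in text:
        return -1
    return 0

def always_neutral(text):
    return 0

examples = [
    ("great product", 1),
    ("awful service", -1),
    ("it arrived on time", 0),
    ("really good value", 1),
]

# Pick the candidate with the best accuracy, as AutoML-style tools do.
candidates = [keyword_model, always_neutral]
best = max(candidates, key=lambda m: accuracy(m, examples))
```

Real AutoML tooling evaluates on held-out data rather than the training set, but the selection step itself is exactly this `max` over candidate scores.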

The Complexity of Machine Learning Transparency

00:09:51
Speaker
Yeah. Yeah. It works a lot like that.
00:09:53
Speaker
I haven't played with that. I haven't done a lot of ML. My background is in business intelligence. That's why. Right. Well, neither have I, but I like to play around with these things. I know. Yeah, it's fun. In business intelligence, I did make a few models. I did kind of play with that. I was mostly interested in automated model creation, this kind of self-learning automated ML, but in kind of a
00:10:25
Speaker
large corporation business. They don't do a lot of that kind of stuff because they have to know how every single thing works. But here we can do that kind of stuff because we don't have to do any kind of reporting on how our models behave. All we have to do is make the best model possible.
00:10:44
Speaker
Right. Don't you kind of have to take a leap of faith when working with machine learning at some point? Because you have no direct way of saying, well, that's how a neural network works. The way I understand it, and it's a very simplistic understanding, is that you
00:11:01
Speaker
put something in, and then you've got this layer or multiple layers of the neural network that data passes through, and then the output comes out. What happens in between is like a black box. We really don't know why it works the way it does, but it does work. Yeah, it's wild. I've thought a lot about this, and there's different ways of learning. There's incremental learning, where you're actually
00:11:30
Speaker
reducing the model's understanding of the state space of possible models down, so that as you're learning, you always make a new model that has perfect predictions on everything it's already seen. Neural networks don't work like that, because actually working like that is very computationally expensive.
00:11:53
Speaker
So the neural networks are just kind of randomly searching this vast space for a model that works. And when they find one, that's great. So it is kind of a black box. You just don't know how it works.
00:12:07
Speaker
But we don't know how our own minds work. That's true. We are learning more every day about it, but we, yeah, we don't know. Sure. Yeah. I mean, that's a very good comparison between the, because you can't ask to like, there is no way that you can derive a conclusion. Oh, that that's, that's what you're going to say based on your neural activity or something.
00:12:28
Speaker
Right. Yeah. Yeah. You can't ask that from human. You can't ask that from a machine either. We're kind of alike in that way. It is kind of wild. That's right. Yeah. Yeah. So how does machine learning work within a node?

Technical Aspects: Docker and Python Automation

00:12:42
Speaker
Like, what are some of the technical components of a node? What's it made of from a technical perspective? I know it's got a Docker container. I didn't get a chance to run the Docker container, but I will at some point because I'm interested in this. So it's all there to unwrap.
00:12:58
Speaker
Yeah, we chose a Docker container so that it was just easier for people to download and have the environment all set up and ready to go, and we can modify it easily. So the node is a piece of software running inside that Docker container. So you just run it, and it's hands-off. It's automated. You don't have to do anything.
00:13:24
Speaker
The engine is actually the thing that I built first, just a very prototypical proof of concept engine, which would take in data, try to build these models automatically. And then every time it took in new data, it would produce a prediction, respond with a prediction. And so that's the first thing I built.
00:13:49
Speaker
And that's the part of the node. I built that in Python. Oh, nice. OK, cool. Yeah, I love Python. And actually, the whole thing is pretty much built in Python, mostly because that one is. So the engine is
00:14:06
Speaker
the heart of the node, it's just running all the time doing its thing. And there's an outside layer to that engine which allows everything to communicate with it. Like there's a UI that allows you to kind of see what the engine's doing. It goes out to the network, it goes out, it gets data, it saves it to disk, that kind of stuff. So that's the basic architecture of the node.
00:14:35
Speaker
That's pretty cool. So it's also cool that you've chosen a language that is relatively easy to pick up for people. So you stand a better chance of people developing on your network, which is really cool. Do you have anyone else that works to build the nodes or to build the network alongside yourself?
00:14:58
Speaker
Not yet. So I would say I have a few like advisors on the project, people that have kind of given me advice and stuff like that. But I've actually written all the code so far. And I wish that weren't the case. But
00:15:14
Speaker
I'm not the best communicator and getting people excited about this when it's just an idea. I kind of figured that I would just have to build it and show people, look, this thing actually kind of works. And we could make it better now that everybody can see the vision. That's kind of what I'm going for.
00:15:35
Speaker
Yeah, that's fair. Yeah, it's good sometimes to just put the word out there so that you can get some interest going and get some people potentially interested in this. So if there are any developers out there listening to this, do check out this project because I think it's such a good, a common good kind of idea to roll out and to be part of really.
00:15:59
Speaker
So I also was going to ask about the kind of data that it chooses to predict. Like, you've said a node is given a certain number of streams based on the power of the computer that runs the node, but what dictates what kind of streams to look for in the first place? Do you have, like, a database of all the possible streams a node connects to, or how does that relationship between node and network work?
00:16:28
Speaker
That's a really good question. By the way, for any developer that is interested in looking at this project, whatever, connecting, there's a Discord server link on the website, which is SatoriNet.io. There will also be one in the description as well. Oh, cool. Very cool. Very cool. So I'm sorry. What was your question? It just escaped me.
00:16:55
Speaker
What's the relationship between nodes and the network and how is there a database of like a central, well, not central, but is there a database of all of the possible streams that a node might choose to connect to? Right, right. Okay, so the streams that they choose, let's say you're given, at first it'll just kind of be random, but as your node is learning and doing things,
00:17:21
Speaker
it will start to search for data streams that are correlated with the one it's trying to predict. So it will actually subscribe to more data streams than it's trying to predict. This is the design right now. Eventually, just because of bandwidth constraints, the design will get to a point, I think, where it will predict everything that it consumes,
00:17:48
Speaker
but let's not worry about those details right now. So you're trying to predict the price of gold, say, as a node. You might be given a bunch of other random streams to also predict, but at some point you might realize that the price of silver is really correlated with the price of gold. And so you might ingest that data, maybe not even to make a prediction, but to inform your prediction of the price of gold.
00:18:15
Speaker
So I think of it this way. The engine is sitting there looking for the right model. And it has to search a few different spaces. And the more space, the larger the space is. So let's start with the algorithm. It has to choose the right algorithm for your data.
00:18:37
Speaker
That will mostly be constrained by the kind of hardware that you have. If you have GPUs, it might be doing neural net. If you have just a CPU, it might be like decision trees and stuff like that. So it'll build a model. It'll choose the right algorithm for the data. It'll also have to choose the right hyper parameters for that algorithm. How should it work exactly?
00:19:01
Speaker
It'll also have to choose the right way to look at the data. So like feature selection and feature tuning. And that includes like what data am I looking at? Am I just looking at the price of gold? Because that'll get me a good prediction. But if I look at the price of gold and everything it's correlated with, that'll probably get me a much better prediction. So it has to find those data streams.
00:19:32
Speaker
So that's kind of how it would deal with it. It will have a searching mechanism. And I actually wrote the code, even though it's all very prototypical. The engine's still very prototypical. It does have a way to kind of get a handle on which data streams will be useful to it. It downloads a data stream, and it doesn't even do anything with it yet. It just looks at the data and produces a prediction score.
00:20:03
Speaker
It says, and this is using entropy, it says: this data helps my predictions so much, right? Or it doesn't; it's completely uncorrelated, it's completely random, I can't use this data to predict my thing. And so it produces a prediction score before it even uses that data in a model and really tests it.
00:20:29
Speaker
And that way it can download, produce a prediction score. If it's below a certain threshold, it just throws it out and says, I'm going to look for something better. And they can also work with each other and give suggestions. Why don't you download this one? Because I noticed you're looking at this and this and this, and you haven't checked this one for a prediction score. So they have a lot of mechanisms to find the best inputs to their model.
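That cheap screen-before-you-model step might look roughly like this. Jordan mentions an entropy-based score; the sketch below uses absolute Pearson correlation as a stand-in, since the idea, scoring a candidate stream and discarding it below a threshold, is the same either way (all names and numbers are illustrative):

```python
def prediction_score(target, candidate):
    """How much might `candidate` help predict `target`? Stand-in
    using absolute Pearson correlation in place of the entropy-based
    score mentioned in the episode."""
    n = len(target)
    mt = sum(target) / n
    mc = sum(candidate) / n
    cov = sum((t - mt) * (c - mc) for t, c in zip(target, candidate))
    vt = sum((t - mt) ** 2 for t in target) ** 0.5
    vc = sum((c - mc) ** 2 for c in candidate) ** 0.5
    if vt == 0 or vc == 0:
        return 0.0
    return abs(cov / (vt * vc))

THRESHOLD = 0.5   # below this, throw the stream out and look elsewhere

gold   = [1800, 1810, 1805, 1820, 1815]
silver = [22.0, 22.2, 22.1, 22.4, 22.3]   # tracks gold closely
noise  = [3, 1, 4, 1, 5]                  # unrelated series

kept = [name for name, series in [("silver", silver), ("noise", noise)]
        if prediction_score(gold, series) >= THRESHOLD]
```

Only streams that survive this screen would then be fed into the much more expensive model-building step.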
00:20:55
Speaker
Right, but there must be some sort of service that aggregates and distributes all of these streams.

Transition to Decentralization: Blockchain as Oracles

00:21:03
Speaker
Yes, sort of. Okay, so right now in this kind of alpha version,
00:21:11
Speaker
It's a centralized database that knows about all the streams, but it doesn't produce the streams. You've got to standardize them somehow to distribute them to the nodes. Exactly. But what it does is it says, I'm the database, I'm the Satori server. I know who produces what type of data stream.
00:21:33
Speaker
This computer, this node, is producing the price of gold from this source, and this computer is producing some other thing. So when the computers connect to the server,
00:21:47
Speaker
they actually connect in so that it serves as a rendezvous server so that they can then connect out to the other nodes directly. And so this is how the peer-to-peer works right now. So then they connect to the other nodes directly and the server tells them, you need to connect to this one because it has the data that you're looking for. And so then those two nodes are talking directly
00:22:14
Speaker
and it can hear every broadcast that that one makes. So the actual data is just provided by the nodes, but the centralized service, which will be a blockchain eventually, it's just a database, you can just put that on a blockchain. That will be the Satori server for now.
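A toy version of that rendezvous role, a registry that knows who produces which stream so peers can find each other and then talk directly, could be as small as a dictionary (class and stream names here are made up; addresses use documentation IP ranges):

```python
class RendezvousServer:
    """Sketch of the centralized Satori server Jordan describes: it
    doesn't carry the data itself, it just maps each stream to the
    node that produces it, so subscribers can connect peer-to-peer."""

    def __init__(self):
        self.producers = {}   # stream name -> producer address

    def register(self, stream, address):
        """A producing node announces which stream it broadcasts."""
        self.producers[stream] = address

    def lookup(self, stream):
        """Tell a subscriber which peer to connect to directly."""
        return self.producers.get(stream)

server = RendezvousServer()
server.register("gold.price.usd", "203.0.113.7:24601")
server.register("weather.berlin.temp", "198.51.100.4:24601")

peer = server.lookup("gold.price.usd")
```

As Jordan notes, this mapping is "just a database," which is why it can later be moved onto the blockchain without changing the peer-to-peer data flow.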
00:22:37
Speaker
Right. And that's where you're going to have something like an oracle to look outside the chain to pull this data in, right? Actually, every node is an oracle, or can be an oracle. And I've actually built that in already,
00:22:53
Speaker
in its simplified format. So you open up your node UI, it's running on your machine, and you can see a place where you can put in like, go to this website or this WebSocket or whatever, this API, get this data, and send it out to the network with this name.
00:23:18
Speaker
So every node is an oracle itself. The node will go out, ping the data, send it out to the network on a schedule that you can set, like a cadence.
00:23:33
Speaker
So, um, yeah, every node can be an oracle. Can you subscribe to any kind of data from outside, any data source? And do you have to run some... because you'll have to make it fit within the parameters that you've set, right? It needs to be a certain, I don't know, JSON format or whatever it is. Um, does it do that automatically, or do you have to format this data?
00:23:56
Speaker
Nope. Um, so it can be anything. It can be a, well, it can be anything basically. It can be anything. Yeah. And so I assume that at the beginning, you know, I'm kind of designing this for the beginning.
00:24:11
Speaker
We're just going to need to know all the metrics of the earth, right? We need to know climate metrics, government statistics, economic prices. We just need to know mostly just numbers. And as the thing evolves, it'll start looking at text and it'll start looking at images and it'll start predicting other things. But at the beginning, it's just metrics because we just want to cover the base of everything.
00:24:41
Speaker
in the most simple fashion. So you might say, well, go get the weather from this weather API, and you'll tell it the exact URL for, say, the weather out of Germany or something. And then it'll go get that metric, and it'll send it to the network. And you might have to translate the API of that weather data
00:25:11
Speaker
and say, well, just extract this one little piece of information out of that JSON. So there's a way to do that. But it's mostly just automated. If the data is already just a piece of data, then you don't really have to deal with that portion of the relay. I call it a relay service. You're relaying data onto the Satori network.
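One tick of that relay, fetch the API response, pull out the one field you care about, publish it onto the network, might be sketched like this. The API response is simulated; a real relay would do an HTTP GET on the cadence you configure, and the dotted-path syntax is just an assumption for illustration:

```python
import json

def extract(payload, path):
    """Pull one value out of a JSON document, e.g. path 'main.temp'."""
    value = json.loads(payload)
    for key in path.split("."):
        value = value[key]
    return value

def relay(fetch, path, publish):
    """One tick of a relay: fetch, extract the metric, publish it."""
    publish(extract(fetch(), path))

# Simulated weather-API response standing in for a real HTTP call.
def fake_weather_api():
    return json.dumps({"city": "Berlin", "main": {"temp": 21.5}})

published = []
relay(fake_weather_api, "main.temp", published.append)
```

If the source already returns a bare number, the extraction step falls away, which matches Jordan's point that the relay is "mostly just automated."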
00:25:37
Speaker
Okay, so that is in essence how a node can work and can communicate with the network itself. But then I remember reading that you can also then have all of these nodes communicate with each other and help each other based on the prediction that they've already made. And I think that's where it starts becoming really interesting, because then you can get to find out correlations that you wouldn't probably even think about.
00:26:05
Speaker
Right. Connections that are just radical. The simplest way for them to start doing that is by saying: you're predicting the price of gold. Your node is. And you realize, oh, the price of silver is highly correlated, as we discussed in that example.
00:26:30
Speaker
You might say, I can ingest the price of silver, and that'll help me with my prediction of the price of gold, or there's already like 40 silver predictors out there on the network producing some kind of prediction of silver. Why don't I just average their predictions
00:26:50
Speaker
and use that as my input to the price of gold. Because these 40 different silver predictors are also listening to all kinds of different things. So they make different numbers, they have different models, and then we average them to be able to get kind of the wisdom of the group, the wisdom of the crowd. We average that and we say, that is gonna be my input
00:27:17
Speaker
I am going to leverage all the work that they're doing to be able to have a better informed prediction of the price of gold. So the very first way that they start to help each other is by leveraging the work that each other is, that they're doing, by consuming the actual prediction of what might be correlated with their data.
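The wisdom-of-the-crowd step he describes, averaging the 40 silver predictors' forecasts into one input for the gold model, is literally just a mean over peer broadcasts (the forecast values below are simulated):

```python
def crowd_feature(peer_predictions):
    """Collapse many peers' forecasts of the same stream into one
    'wisdom of the crowd' input for another node's model."""
    return sum(peer_predictions) / len(peer_predictions)

# 40 silver predictors, each with its own model and inputs (simulated
# here as slightly different forecasts of the silver price).
silver_forecasts = [22.0 + 0.01 * i for i in range(40)]

# The gold node would feed this single averaged signal into its model
# instead of subscribing to all 40 peers' inputs itself.
silver_signal = crowd_feature(silver_forecasts)
```

Because each peer listens to different inputs and runs a different model, the average tends to be more stable than any single forecast, which is exactly the leverage Jordan is after.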
00:27:42
Speaker
Right. So in that there's also the limitation, I guess, that if you try to predict on top of a prediction, you might lose some of the accuracy, right? Like I could build a model. Let's say I'm giving it.
00:27:59
Speaker
one-hour tickers of the gold price for the past three years. Now, the next hour might be predicted with 80% accuracy, but if I take that prediction, and then I say, well, now give me two hours from now, and four hours, and eight hours, and 16 hours, the margin for error just grows exponentially with each prediction. And I guess
00:28:20
Speaker
how far does it go, right? Like, how far can a node look into the future? Is it based on one, three, five data points? Like, where's the cutoff point, I guess.
00:28:33
Speaker
This system is a protocol, and so the protocol will have to evolve. And I think at some point the protocol might become: when I make a prediction, I'm actually producing a forecast, and it's got a distribution of confidence. So it'll probably evolve into that, but for now,
00:28:56
Speaker
The easiest way to solve that problem that you just mentioned is actually to have multiple streams at different time cadences.
00:29:09
Speaker
So you could say, well, I got gold and it's a one-hour price. Every hour we get a new price, but I also have a different stream that is a four-hour price, and a different stream that is a daily price. And so I have multiple predictors predicting each of these. So yeah, I think that's the best way to solve it right now, because it's just the simplest and easiest way.
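Deriving those coarser-cadence streams from one fine-grained stream is a simple downsampling step, so each horizon gets its own one-step-ahead predictor instead of compounding hourly predictions (a sketch; real streams would carry timestamps rather than positions):

```python
def resample(hourly, every):
    """Downsample an hourly series by keeping every Nth closing value,
    turning one stream into a coarser-cadence stream."""
    return hourly[every - 1 :: every]

hourly = list(range(1, 25))          # 24 hourly closes for one day
four_hourly = resample(hourly, 4)    # a 4-hour-cadence stream
daily = resample(hourly, 24)         # a daily-cadence stream
```

A node predicting the daily stream then makes a single one-step forecast for tomorrow, rather than chaining 24 hourly forecasts and letting the error compound.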
00:29:39
Speaker
Right. Yeah. Yeah. That makes sense. You could just aggregate the data and then work with less data to make longer-term predictions, I guess. Yeah. That's good. That's a good way of going about it. Um, so obviously this can serve a lot of public good if it's adopted and becomes a big project, because you could look up anything, right? That's the point.
00:30:06
Speaker
I guess the idea is it'll work kind of like a search bar in Google. Yes. Yeah. You'll type something in like, you know, what are roads going to look like 50 years from now? Right. And hopefully there'll be a node that's looking at the amount of asphalt versus cost versus any potential upgrades on the road. Maybe, I don't know, have them made out of solar panels, or have them have, like, I don't know, electricity hubs or whatever. Right.
00:30:36
Speaker
You might just get a prediction of how the future of roads is going to look. That's wild. Yeah. And I think it's modeled as a search bar right now. You can go to the website, you can kind of put something in.

Public and Personalized Predictions

00:30:49
Speaker
But for the immediate future, what you'll be able to put in is just all the data streams that are being predicted. You can search for the data stream you want. It'll evolve, like you said, into something that's more high level.
00:31:04
Speaker
And that will be done, I'm sure, with a chatbot, an LLM, so that all the data will flow into the LLM, and that one will be the one that translates your human intention to connecting the data points together. It'll translate it into that, and then it'll just have a conversation with you about the future.
00:31:27
Speaker
Right. Yeah. So it'll be exciting. Yeah. It already sounds exciting right now, honestly. So, have you built this because there is currently no open AI out there on the internet, or is it for the public good? Like, what's the main motivation for building a network like this?
00:31:51
Speaker
Yes, it's for public good. I don't know which one's the main motivation, but AI is centralizing and centralizing quickly, and so that's one reason for Satori to exist.
00:32:06
Speaker
You know, predictions on time is not something that's been centralized yet. We've done very static things. Like, if you want to build a model that's really big and powerful out there in traditional AI right now,
00:32:27
Speaker
You can't, because you'd get a static data set, like a bunch of images, and a static data set of, like, the English language, and then you build a model that translates between these two static data sets. And it can be massive, and it can take a year to train.
00:32:44
Speaker
And then you have something that's really cool because it's working with static data. I call that spatial data, like an image. Everything's spatially related. But with data that kind of necessarily exists in time,
00:33:06
Speaker
That's a little bit harder to do in a centralized fashion. And so it's not impossible, but it's harder. And so that's why we haven't really seen it. By data that exists in time, do you mean like tickers or any kind of numbers? Anything that's updating really fast, like for instance. Let's say English evolved at a much faster rate than it does.
00:33:33
Speaker
Let's say every three months, it's practically a new language, right? Because it's just evolving really, really fast. Well, it takes more than three months to build ChatGPT, right? So you get the data set, you build it, and then you have this thing, but it's translating something that's no longer useful.
00:33:54
Speaker
And so with time, you get the data set, you build it, and your data set is stale by the time you've built a massive LLM. So the most valuable,
00:34:13
Speaker
The most valuable data that it can predict is like the next three months. That's where its accuracy is the highest. And it took you six months to build the model. Now, I'm not saying this is an insurmountable task that you can't do in a centralized way. I think they'll get over it.
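The staleness argument can be illustrated with a toy simulation. This is purely synthetic data, not anything from Satori: fit a static model on an old window of a drifting stream, then watch its error grow on the newer data it was supposed to serve.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic temporal stream whose underlying level drifts upward over time.
t = np.arange(200)
stream = 10 + 0.05 * t + rng.normal(0, 0.5, size=200)

# "Train" a static model (here just the historical mean) on the first half,
# as if training took that long; then apply it to the newer, drifted data.
model = stream[:100].mean()

err_train = np.abs(stream[:100] - model).mean()
err_live = np.abs(stream[100:] - model).mean()

print(f"error on training window: {err_train:.2f}")
print(f"error on live, drifted window: {err_live:.2f}")
```

The live error is several times the training error: by the time the static model exists, the world it modeled has moved on, which is exactly the niche a continuously updating network targets.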
00:34:36
Speaker
In these early days, this is kind of a niche that you can get into and build a platform that's distributed and say, we need to allow everybody to have a voice in what the future becomes, because predictions have a feedback loop. They tend to make the future what they predict.
00:34:58
Speaker
over time. So if we can decentralize that as early as possible, which is like now, then we can, I think, have a better future, because we don't want one entity or two entities that have a lot of incentive to collude in certain areas to be in charge of what the future of humanity thinks that it is. Oh, yeah, for sure. Yeah.
00:35:27
Speaker
And you definitely don't want one or two organizations to be in charge of AI and be at the forefront of regulating AI, which is what is happening right now with OpenAI, with Elon Musk's xAI and Google. It's literally three or four players that are lobbying the government to make regulations, to make AI safe. The problem I have with that is that you guys are
00:35:55
Speaker
the market, you are the guys that are producing the AI. Shouldn't there be an autonomous regulatory body that deals with this? It just gives me the same vibes as FTX lobbying the government for crypto regulation. You've got the same thing. You've got these three AI companies. It's weird. It's a bit weird. Yeah. I mean, this is a thing
00:36:20
Speaker
that happens in every industry. It's just natural. It's regulatory capture. The people that are the big players in the industry
00:36:31
Speaker
go to the government and say, you know, you probably should regulate us. You know, it sounds like good PR. It's like, that sounds pretty good. Like they're doing the right thing.

AI Regulation Risks and Innovation

00:36:40
Speaker
But then the government's like, well, we don't know how because we're a limited entity. Like we don't, we're not God. We don't know everything about everything. So how should we regulate you? And they say, well, we'll tell you.
00:36:51
Speaker
I'll tell you how. Yeah, just trust us. Honestly, nobody questions it. And at the very least, they make it, well, an easy ride for the already established AI companies. And at worst, it creates, and I'm sure that it will, high barriers to entry for other players. That's the main point of it. Yes.
00:37:13
Speaker
Yeah. Cause then if you've got an open source project, then you've got to get this certificate and you have to make sure that your data's formatted in that way. And you've got to get an approval for processing that data. It's just going to be such a headache. Yeah. Yeah. And it's going to kill innovation. So that's why we need to build it now.
00:37:33
Speaker
Build it now. So yeah. Build it and propagate it so that they can't stop it once the regulation is in play. Exactly. Yeah. So do you think there are any limitations right now regarding the way that you can use AI or you can use data that's available out there?
00:37:57
Speaker
Not for our purposes. I don't know. Anything can technically be relayed onto the Satori network. And eventually we'll also include other networks. You know, there's stuff like Streamr and Ocean and there's other data oracles that we could have used from the beginning, but we kind of just rolled our own.
00:38:18
Speaker
So we'll try to incorporate as much as we can because it needs to be as decentralized as possible. And that's one way that you can help it be decentralized. So I think the data, it'll have plenty of data.
00:38:35
Speaker
And all the data that we need right now to build a kind of foundational understanding of what society is, is totally, totally available. It's mostly government statistics and economics. And then sentiment is also really available. Eventually it'll turn into modeling human attention.
00:38:59
Speaker
because that's what economics is mostly anyway. So yeah, so it'll be exciting. I think there's plenty of data.
00:39:10
Speaker
Are there any, is there some sort of mechanism that can kind of control the data that people can subscribe to with their nodes? Like if there's a, let's say there's a bad-acting node that just looks for some potentially not very good piece of information that maybe shouldn't be analyzed by a very powerful AI,
00:39:30
Speaker
like how to make more efficient bombs or how to make better nuclear weapons and stuff like that. Is there a check in place for that kind of information? Are you guys planning to tackle that? I don't know where you'd be subscribing in the first place to get that data. Right. Yeah, because it's not particularly temporal. This is mostly temporal data that's being predicted.
00:39:57
Speaker
At the beginning, the Satori server will serve as a policeman to make sure that there's no bad actors. It's a last resort and any centralized government control or any executive decision-making should be a last resort if there's a huge problem that's very obvious.
00:40:27
Speaker
So the first line of defense against any kind of bad actor is, do they make predictions that are accurate? So there's a competition between all these nodes. They're saying, I'm one of 40 or one of 10 or whatever the popularity of the data stream is. I'm one of X predictors on this data stream. So I'm competing with those other predictors that are predicting this data stream.
00:40:56
Speaker
If the producer of that data stream seems like it's a bad actor, we switch to a different data stream. So you could say, okay, I'm producing the price of gold. I'm telling everybody what the price of gold is in the next 10 minutes, but actually
00:41:16
Speaker
I am delaying it for 10 minutes, right? So I'm saving it and then I'm broadcasting it out. Well, it's inaccurate data, because it doesn't match the other gold raw data producers. And if I am always accurate, like 100% accurate, if I'm always really accurate predicting my own data, then,
00:41:46
Speaker
you know that I'm a radically bad actor. So there's the competition and the ability for the nodes to look for new data streams that don't seem to be compromised, that kind of thing. But that's just the first layer and it's got to be built out
00:42:08
Speaker
much better than that eventually, but we're still in alpha. So just getting the first layer of everything has been my goal. Right. And are there any other potential use cases for Satorinet once it's all up and running and operational, or is it just razor-focused on predictions and just predictions? It's pretty much razor-focused on predictions. And the reason is
00:42:35
Speaker
You know, in college, I kind of studied a lot of different weird things, but one of them was the brain. I was really interested. I was just trying to search for what I was interested in, and everything eventually tended towards, I wanted to know what intelligence was as such. And so I spent a lot of time looking at neurology and looking at, like, well, how does the brain process data? And one of the main things I learned from that exploration was that
00:43:06
Speaker
it always predicts the future everywhere. Like there's this neural circuit that's been repeated, like evolution figured this neural circuit out and we don't even understand it completely, but we know it's this kind of repeatable unit and it repeats throughout the neocortex, the mammalian brain. And so evolution figured this neural circuit out, which is kind of some kind of general intelligence, smallest unit of general intelligence or something.
00:43:34
Speaker
And then it realized if I just repeat this circuit again and again and again, I can scale intelligence. And so that's what our brains are mostly. So, um, I realized that this circuit, as I, you know, as I learned about it, you can go read On Intelligence by Jeff Hawkins for more on this. Um, as I learned about it,
00:44:01
Speaker
I realized one of the very main thing it's doing is learning spatial temporal patterns.
00:44:08
Speaker
which allows it to always predict the future. All those patterns that it learns are like temporal patterns at the base level. So at our subconscious level, we are always predicting the future of everything. All the data that's gonna come into our bodies, we're predicting what it's going to be before it comes. And that's how we kind of discover or detect kind of anomalies. And we say, oh,
00:44:37
Speaker
I predicted her hair to be longer. Something's wrong here. Down on the base level, our neurons are like, I made a wrong prediction. And then that flows up the hierarchy to our conscious attention and says, what is wrong here? Because I predicted something different. And then we investigate it and we say, you cut your hair. I love it. So that's kind of how our brains work.
00:45:07
Speaker
on the base layer. So as I was thinking about and designing Satori and kind of figuring out what it should be, I see it as the base layer of the intelligence stack of a distributed intelligence of the earth, computational intelligence of the earth, because it's mainly just prediction of the future.
00:45:31
Speaker
Yeah. On top of that, we can build other things. And I will say this though, predicting the future in an open fashion that it's, you know, everybody can see it. That is the main goal of Satori. All of the public data is predicted publicly, and you can see the future of all of it. But one thing that it will eventually do, I'm certain,
00:45:55
Speaker
is build private predictions for individuals or for companies. So a company says, OK, look, Satori, you are predicting a whole bunch of cool stuff here. But I want to know what my quarterly sales are going to be next quarter. So can you tell me that? And Satori's going to say no, until we make it available to send your data into Satori. Then the nodes start churning it and correlating that data with
00:46:25
Speaker
the real world. And then it can say, Hey, I do have a prediction of your quarterly sales that I'm not going to broadcast to everybody because we're just contracted with you. And here you go. Here's your prediction of the future. You can see your future. Right.
00:46:42
Speaker
And I think that is going to be really powerful. So when I think of like, what is the evolution of Satori? I see two layers. I see we're always going to be predicting public data for everybody, plus private data for private people or whatever. And that will give the network
00:47:05
Speaker
I don't know, more value. It'll allow them to earn tokens from like a private sale of their intelligence. So I think that's going to be cool. But I think you need both. So yeah, I don't think one's ever going to be replaced by the other. It's always going to be this dual thing.
00:47:28
Speaker
Yeah. Cause you do want to make predictions based on things that shouldn't necessarily be public, but it's like almost as if you're connecting your whatever data stream, like you said, like my next quarterly sales to all of the data available on earth, right? Which technically is going to take predictions from, well, next quarter is August, which is proven to be like the high season for this type of business in accordance with all these other factors. They're going to give you factors you might not even think about.
00:47:58
Speaker
Let's say you're

Uncovering Connections in Distributed AI

00:48:01
Speaker
sending in your Fitbit data to Satori and you say, tell me the future of my Fitbit data and you're going along and everything's looking good. You're like, yeah, I'm being really healthy. Then all of a sudden, it just tanks and you're like, what the heck? What's going to happen?
00:48:24
Speaker
you could investigate it and say, oh, it's tanking because my water supply just got tainted. And so, you know, I'm in trouble with all these other people that are in my area. I need to stop drinking the water. So things that are completely disparate
00:48:45
Speaker
having a network of AI predicting nodes, not just one centralized AI, having a network allows it to discover these kind of crazy connections. And I think that's important. So it's like you're building the, like observing the butterfly effect in a way that's wild, if you think about it. Yeah. Yeah. And I think the important thing is the feedback loop that occurs because
00:49:11
Speaker
You know, building a public API that allows everybody to know the future will make the society as a whole more efficient. But also having a private prediction element allows every individual or individual company to become more efficient in their specific domain. And so both of those will increase the efficiency.
00:49:39
Speaker
Because going back to the Fitbit example, right now you don't have Satori, so you look at your Fitbit data and you say,
00:49:49
Speaker
I want to try a new diet. So I'm going to track my data. I'm going to understand my data. Then I'm going to start my diet. And for 30 days, I'm going to track my data and see how it compares to the history. Then I'll know how this is affecting me, and how it's going to affect me in the future. And so in order to make a prediction about the future, since your data is siloed, you have to kind of capture it first. You have to watch the future happen first, and then you can predict the future.
00:50:18
Speaker
Well, if your data becomes unsiloed through Satori and you're getting immediate feedback on all of your decisions in real time, as soon as you start that diet, the Satori network says, you know, I've seen this happen. These kinds of signatures of this kind of diet, I've seen this happen in data that looks a lot like yours before. So I can tell you within like three days
00:50:45
Speaker
What's going to happen for the next 30, like, how is this going to affect your data? And so I can give you the future before it happens. And all it has to detect is, like, the signature of what's going on, of the change that occurred.
00:51:02
Speaker
And that rapidly increases your efficiency because then you can immediately decide, I want to change the diet I've chosen or do whatever you want. It's invaluable, that kind of insight. And also that's what fascinates me about this kind of predictions when you're looking at a thousand, let's say, completely or seemingly unrelated data points.
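The "I've seen this signature in data that looks like yours" idea resembles what forecasters call an analog or nearest-neighbor forecast: match the most recent window against historical windows and return what followed the best match. A minimal sketch with toy data and an assumed Euclidean distance (nothing here is Satori's actual model):

```python
import numpy as np

def analog_forecast(history, recent, horizon):
    """Find the historical window most similar to `recent` (by Euclidean
    distance) and return what followed it, as a naive forecast."""
    w = len(recent)
    best_start, best_dist = 0, np.inf
    for start in range(len(history) - w - horizon + 1):
        dist = np.linalg.norm(history[start:start + w] - recent)
        if dist < best_dist:
            best_start, best_dist = start, dist
    return history[best_start + w : best_start + w + horizon]

# Toy history: a repeating cycle, standing in for "data that looks like yours".
history = np.tile([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0], 10)
recent = np.array([1.0, 2.0, 3.0])  # the signature just observed
forecast = analog_forecast(history, recent, horizon=2)
print(forecast)  # continues the cycle
```

Once the signature of a change (a new diet, tainted water) is matched against enough similar histories, the forecast can be produced within days instead of waiting out the full 30-day experiment.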
00:51:26
Speaker
I was working on a similar project trying to predict the price of Bitcoin a while back. Well, there are two schools of thought there when it comes to price prediction. One is that the price is basically the aggregated sum of all potential variables.
00:51:49
Speaker
And then technically, if you have a machine that has a long enough set of price data, it can figure out what are all of the other variables that go into this price. So just looking at the price itself is enough. But then there's the flip side of that coin, which is you need all of that data and put it in like price of oil, price of, I don't know, like hot water and any kind of data that's seemingly unrelated.
00:52:17
Speaker
The challenge with doing the latter for me was how do I put the weights on this data? Let's say Jim Cramer goes on the television and says, sell, your Bitcoin's going down. Now, that's a strong indicator for me. I would put a pretty hefty weight on that piece of data if I could capture it. You're saying that's an anti-indicator.
00:52:40
Speaker
Well, it's the inverse Cramer, so it always applies. But how do I apply weights to things that I have no idea might be related, like the price of vitamin C in the supermarket? That's obviously not as heavy of an influence as Cramer saying something on TV. That was one of the main challenges I had for this company. Yes, yes. And that's why you need this engine that's always churning.
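One standard way around hand-picking weights is to standardize the candidate signals and let a regression assign the weights from the data itself. A sketch with synthetic data; the signal names are hypothetical stand-ins for the examples in the conversation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Two candidate signals: one genuinely predictive, one irrelevant.
pundit_signal = rng.normal(size=n)    # e.g. a televised-sentiment indicator
vitamin_c_price = rng.normal(size=n)  # a seemingly unrelated series

# The target depends strongly on the first signal and not at all on the second.
target = 3.0 * pundit_signal + rng.normal(scale=0.1, size=n)

# Standardize the features, then let least squares assign the weights.
X = np.column_stack([pundit_signal, vitamin_c_price])
X = (X - X.mean(axis=0)) / X.std(axis=0)
weights, *_ = np.linalg.lstsq(X, target, rcond=None)

print(weights)  # the first weight dominates; the data chose the weights
```

The catch, as he notes next, is that real relationships are contextual: a signal that is irrelevant on average can be highly predictive in a specific regime, which a single global fit like this one cannot capture.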
00:53:09
Speaker
You just need it to always be looking, always be exploring. A lot of these connections are really, really contextual. So in a certain context, this thing that's really, it's got far reaching connections, it's very highly predictive, but in most contexts it's not. So you can't just
00:53:32
Speaker
immediately invalidate anything that doesn't have enough correlation, right? Because then you're only taking the things that matter most of the time. But in order to really, you know, get that last 20% or that last 4% of accuracy, squeeze it out, you really have to be open to figuring out the context in which everything is connected. And that just takes time, you got to churn on it. So yeah,
00:54:01
Speaker
Yeah, for sure. That's really fascinating, dude. I really like talking about this kind of stuff. How about the nodes and the people running the nodes? What is the incentive for people to run a node? Is there a governance token? What does it do? Well, it's not created yet, but there will be a governance token. So as people download the Satori miner, the node,
00:54:33
Speaker
I'm calling it, I don't know if this is a good idea or not, but I'm calling it intelligence mining because it's not hashing. It's not going to secure the blockchain at all. It is predicting the future. That is what the node is doing. It's churning, it's making intelligence. I'm calling it intelligence mining because you do actually earn a token for doing the work.
00:54:58
Speaker
So people download the node, it starts making predictions. Those predictions are in competition with each other. Whoever's winning is making the token, proportionate to how much they win. And that's kind of different. It's got some different, um,
00:55:18
Speaker
it's got a different signature, different structure than blockchain mining. Because even if you have a small computer, we have a lot of different algorithms that we can use to build a model and they can be
00:55:34
Speaker
you choose the one that's ideal for your computing structure. So it's not like hashing where you just have one algorithm and whoever has an ASIC that's built specifically for that algorithm is going to make all the money. It's actually much more even payout. So even if you have a small computer, you can provide some value to the network and get compensated for that value.
00:56:01
Speaker
Could you run it on a Raspberry Pi? Yeah, sure. Yeah, I mean, because some of these algorithms, they can run on CPUs. They're optimized, right? Right. They don't have to be as nuanced as the deepest neural net, right? So yeah. Yeah. So there's that, the fact that it's a more even payout. But I think the other
00:56:28
Speaker
benefit of this over like hashing mining.
00:56:34
Speaker
is that all of the models that it creates live on your machine. They're not shared. They're not sent up to some centralized

Decentralized Prediction Mining: Value and Reward

00:56:43
Speaker
server. They live on your machine. It's truly decentralized in that way. So if your machine goes down, the network no longer has access to your model. And this is one reason we have a lot of redundancy. Another reason is because we want unique redundancy so that every model is different.
00:57:01
Speaker
And so you're the only one with that particular model, that particular understanding of these data streams. And that means every miner is producing an asset. Blockchain mining is throwaway work. So you make a hash, you throw it away. It didn't work. You make another hash, you throw it away. And eventually, sometimes you make a hash, and it's the right one. You broadcast it out, you make a lot of Bitcoin, you're done.
00:57:29
Speaker
But with intelligence mining, all the compute power is going into finding the best model, and the best model is always on your machine and only your machine. You can do whatever you want with that model. It's actually an asset. I don't know what people might use it for, but they could use it for anything.
00:57:56
Speaker
But I think it will be, like if they're churning while we're focused on this kind of public data, public prediction piece.
00:58:09
Speaker
If their computer is gaining expertise in certain areas, it's learning how the world works in certain areas, then what they have, once we open it up to have private predictions, is they're the first ones at the door to say, I'm really good at correlating these kinds of data.
00:58:29
Speaker
So if you're a company and you want to know something about the economy in this way, I can help you. So that's the asset they're creating. I think it's really good. So those two things, they have a more even payout and they're creating an asset. And I think it has those two benefits over like blockchain mining.
00:58:57
Speaker
Right. And also because it's actually proof of work that actually needs to be done. It's not like, oh yeah, you just got to solve this really complicated equation. What for? Don't worry about it. Just do it. Yeah. Um, and do people also get, because you mentioned there will be situations where you'll be competing against 10, 15, 50 other nodes for the same data, does only the winner of that prediction
00:59:25
Speaker
get rewarded or do some of the others too? Just the winner, but the winner is always changing because it's on a cadence or it's on some kind of cycle where you get new data. And so you might have won this last round 10 minutes ago, but a new piece of data comes out and you make a prediction.
00:59:49
Speaker
maybe your model is better at this prediction. There will probably be one winner that's winning more of the predictions than the others. There will be a power law distribution in probably everything, but that's okay. I mean, you can still earn even if you're not at the peak of that power law distribution.
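The winner-take-all-per-round payout he describes can be simulated. The skill numbers and error model below are made up purely to illustrate how the best node earns the most without earning everything:

```python
import random

random.seed(7)

# Hypothetical skill levels; higher skill means smaller typical error.
skill = {"node_a": 0.9, "node_b": 0.7, "node_c": 0.5}
earnings = {node: 0 for node in skill}

for _ in range(1000):  # each round, a new data point arrives and is predicted
    errors = {node: random.random() / s for node, s in skill.items()}
    winner = min(errors, key=errors.get)  # only the closest prediction pays
    earnings[winner] += 1                 # one token per round to the winner

print(earnings)  # node_a wins most rounds, but the others still earn
```

Because the winner is re-decided every round, the payout distribution is skewed toward the best model but never exclusive, which matches the power-law intuition in the conversation.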
01:00:11
Speaker
Yeah. And do you see mining pools ever becoming a thing, creating pools so that people can collectively predict so they can split the prediction profits? Oh boy. I haven't thought about mining pools in this context actually. I kind of feel like, I don't know.
01:00:33
Speaker
I don't know, probably. Well, I have thought about this. There will probably be competitors to Satori eventually. But if you take in the average price of something that's correlated with your data, that will help you, even if it's coming from a competitor structure. So even across competition boundaries, future predictors,
01:01:02
Speaker
It'll all become kind of one thing in the end, one really massive network. Yeah. So obviously the purpose is to make predictions for the future, but then there's no control or incentive to act on these predictions in any way. It's, I guess, up to any individual to take that prediction and do whatever they want with it. Yeah.

Societal Adjustments Through Predictive Feedback

01:01:30
Speaker
Yeah. And people have said to me.
01:01:33
Speaker
Let's say you make this thing and it's working really well, predicting the future. Don't you think it could have a negative feedback loop where it predicts doom and then so then that comes to pass? And my attitude is that's possible. You're in a soccer game and you've got this open shot and you go for it and something in your brain says you're not going to do it. And then you choke. It was right there.
01:02:00
Speaker
That can happen, but it's usually like in the moment. And it's an emotional experience, right? It's emotionally motivated. I don't think that that generally is the case in the long run if you have intelligence. Generally, if we say, oh, I'm seeing a bad prediction, something I don't want to see, the Satori network is telling me will happen.
01:02:27
Speaker
That means that's a signal for the rest of the system, which is all the humans on earth, right? It's a signal for the system to make a modification so that it doesn't happen, right? Because it hasn't happened yet. So it's a signal for us to respond and take control of the future. And that's what our intelligence does, I think.
01:02:47
Speaker
You know, babies, they're just trying to figure out how their body works. But as we become more intelligent individuals and we grow up, we start to see the future, first very short term and then longer and then longer.
01:03:04
Speaker
And we know, don't put your hand on the stove or you're going to get burned. We know that future. And so we don't do it. And so by giving the society a view of its future, I think the best way to make a better society is to say,
01:03:21
Speaker
This is the only way that you survive long-term is to know what your future is going to be. And Satori could do that, other things could do that, but you got to know. And it's a forecast, right? It's basically you're likely to end up here if all of these data streams continue on this projected trend. Exactly.
01:03:46
Speaker
But there is always the possibility of something that you haven't taken into account to happen that will completely mess up your forecasting, right? And if you could somehow collect all of the information available, you could say, well, there's nothing that can surprise me ever, right? You know about all there is to know about tectonic plates, about global warming, about oceans, just rising sea levels, about solar flares and about meteorites and other things. Like you somehow had all of these variables.
01:04:16
Speaker
At your disposal, you could say, well, that is a prediction, which is not as likely to change its course because everything has been weighted and taken into account. But I don't think it's ever going to be that all-seeing, all-knowing kind of... There's no way it could. Yeah. Yeah. And that kind of goes to the philosophy of just what is modeling in the first place. We have to model things
01:04:45
Speaker
We model things as kind of a deterministic structure. We say, well, in theory, if I knew everything, then I would know everything. I would be able to predict the future of everything. If I knew the state of the environment now and how it all interrelates, then I could project it into the future in theory. And of course, that's not true. We know that's not true because you can't have all the data.
01:05:13
Speaker
Uh, because you'd have to go all the way down to like the quantum realm. And then there's a barrier there where you're like, well, uh, what's it called, the uncertainty principle. I really can't go any further than this. And there are hidden variables, like there's uncertainty down there. So we know that that's impossible. There's always going to be black swans and things that surprise us, but we can reduce them the closer we approach, um, omniscience.
01:05:43
Speaker
Yeah, no, for sure. It's a fascinating concept to think about and to kind of try to wrap your head around. In a sense, it's like AI, but if you distribute AI to individuals and if you'd somehow, let's say you were aware of what each individual is likely to do, then you'd have a better understanding of where we stand as a society.
01:06:08
Speaker
Yes. Yeah. If you, if you model every individual, but you know, the deeper you model, you have to go like, okay, well, let's, let's take it down to every group of individuals. Now let's model every, you know, we've mastered that. Let's model every individual. We've mastered that. Let's model every individual neuron of every individual brain. And so you're, you're basically modeling attention. Um, because that's what creates the future.
01:06:37
Speaker
Yeah, where attention flows and intention. Yeah. Well, I can't wait to see something like a solution where you'd be aware of, you know, what other people actually think and what they know. And I know we're talking about Neuralink and potentially, you know, that we'll be able to achieve it, but to have it, to organically be able to connect to some sort of hive mind and be like, huh,
01:07:03
Speaker
That's what it's like. That is the essence of a human being, right? It's not just my own perspective of what it's like, but if you understand everyone else's thinking process, fears and secrets and all that sort of stuff, that will just, you will be a much more complete human being, I feel. Even though you might lose yourself in the hive mind, but at the same time you are more complete because you know all there is to know about humanity and humans.
01:07:29
Speaker
That's true. That's true. And I think that's inevitable, but that's so far in the future. I'm really focused on implementation of the simplest thing. It's weird because I do go back and forth. I like to think about these things way out there. I've often thought that
01:07:46
Speaker
The general trajectory of technology is essentially the communication technology trajectory. Communication technology always wants higher bandwidth. You might say that all technology is communication technology because it serves to increase the bandwidth of information on the planet.

Future of Communication and Civilization's Scale

01:08:07
Speaker
So at first we had, like, you know, horses and they would send letters through carriers and stuff like that. Eventually we got to the point where we had an actual mail structure that's better, and then we get telegraphs and telephones and TV and the internet. And so we're always increasing our bandwidth
01:08:33
Speaker
to the rest of society. And that's just the tendency of what technology seems to be, I think. Yeah. And so that means Neuralink is inevitable because we have two sides of our brain, and they're interconnected with a corpus callosum, right? They speak to each other through this one structure, the corpus callosum. Right. And so there's a lot of bandwidth between the two hemispheres of our brain.
01:09:00
Speaker
And I think that kind of gives us our sense maybe of experience, our sense of conscious experience, maybe something like that. But as soon as you increase the bandwidth between two brains that matches that speed,
01:09:19
Speaker
Uh, I mean, I don't know how you could make the argument that those two brains are not one conscious experience anymore. Right. It seems like then that's kind of the hive mind. That's when you, that's when you become one individual, right?
01:09:36
Speaker
So it's wild, but yeah. It is wild. Because right now we don't have any kind of, like, the speed at which we communicate, and you and I exchange ideas right now, you know, it's very, I need, I don't know, half a minute to express an idea and then you're internalizing that and then you reply and then we have a conversation. But this whole conversation, like, it could have technically happened in under a second.
01:10:04
Speaker
Imagine the volume of information you'd be able to exchange with someone, or with multiple people, if you had that kind of bandwidth. It'd be wild. Crazy. That makes you start to wonder about Conway's law, because the communication structure is what defines what gets built.
01:10:23
Speaker
I don't know, I've given some thought to this, but it's kind of magical. It just feels like there's never enough speed when it comes to the internet, to connectivity. Even with 5G, if you want to send big pieces of information, it will be a bottleneck at some point, you know.
01:10:41
Speaker
And I feel like we create so much more information than we're able to process. I'm not even talking about internalizing it with your brain as a human being, just sending it out through the internet. You produce more data than you can send out, and we produce so much more than we can ever internalize in our lifetimes. Right now you have to pick and choose what to focus on, because you don't have time to do all these things. But there might be a time in the future where
01:11:11
Speaker
you don't have to choose anymore, because suddenly you have a real-time stream of all of Earth's data just coming into you, and you're like, yeah, this is what it's like to live in, according to, I can't remember the writer, but
01:11:31
Speaker
he made a prediction that in 2040, the singularity event will happen, when machines and humans will be indistinguishable from each other, when we will have transcended to humans 2.0, and all signs point in that direction. Is that right, Kurzweil? Yes, it is Kurzweil, yeah. Yeah, that's crazy. I don't know if it's 2040. It seems like
01:11:59
Speaker
his prediction is an exponential, right? But everything in existence is an S-curve, so nothing is just exponential forever.
01:12:12
Speaker
And so it starts to level out at some point. And I don't know if that pushes the singularity out, or if the singularity occurs at the middle of the exponential curve. I don't know, right? But I think that's his assumption.
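The "everything is an S-curve" point can be made concrete with the logistic function, which looks exponential early on and then saturates; the "middle of the exponential curve" mentioned here is its inflection point. A minimal sketch with illustrative parameters only (none of these numbers come from the episode):

```python
import math

def logistic(t: float, carrying_capacity: float = 1.0,
             growth_rate: float = 1.0, midpoint: float = 0.0) -> float:
    """Logistic (S-curve) function: nearly exponential well before the
    midpoint, flattening toward carrying_capacity well after it."""
    return carrying_capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

# Early on the curve grows almost geometrically, like an exponential...
early = [logistic(t) for t in (-5, -4, -3)]
# ...but past the midpoint (the inflection point) it saturates.
late = [logistic(t) for t in (3, 4, 5)]
print(early)  # small, rapidly growing values
print(late)   # values approaching the ceiling of 1.0
```

Whether the singularity lands at the inflection point or is pushed out by the flattening tail is exactly the open question raised in the conversation.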
01:12:33
Speaker
Yeah, it's the assumption that there is exponential growth happening on and on. And I do agree that we kind of plateau sometimes. There is a new technological breakthrough, it's all exciting, we plateau for a while, and then the next thing comes. Like if you look at the internet from the early web up until now: you've got Web 2.0, you have Facebook, social media, all that.
01:12:59
Speaker
Before that, there was barely anything, right? Maybe you had MySpace, but before that you just had email and really shitty-looking websites. And then you have Web 2.0. But then it all goes quiet up until 2008, when you get blockchain and crypto. And we're still building on the back of that; I still feel like we're creating new things every day. Hence the reason for us sitting together right now, right? Because of blockchain.
01:13:25
Speaker
But now you also have AI, which I feel is the next level of the plateau. And now you can combine AI and blockchain, right? Which gives you a completely new stack built on top of these two things. What's going to be the next thing you can build out of these two combined, right? Right.
01:13:46
Speaker
And we see that everywhere. We see it in biological evolution, where you get these long periods of stability and homeostasis, and then all of a sudden a new combination is found, a new energy source is discovered, whatever, and the complexity shoots up and finds a new kind of minimum. It's exciting.
01:14:15
Speaker
Right. Do you know about the scale that ranks the evolution of a civilization based on the amount of solar power it uses? Oh, I've never heard of that. That's amazing. There are different types. There's Type 0, which is where we are right now, and we're working our way toward a Type I civilization. Oh, that's right.
01:14:36
Speaker
A Type I civilization uses 100% of the sun's energy that comes toward us, and right now we use less than 1%. If you were to use 100% of the energy that lands here freely every day, it arrives within eight minutes, it's here every day no matter what,
01:14:59
Speaker
that would just completely change society. And then you have Type II, which is capable of harnessing the energy of the entire solar system, and Type III, which can harness the energy generated by the entire galaxy. There are still plenty of ways for us to evolve. I feel we've barely scratched the surface with tech and everything.
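The scale being described here is the Kardashev scale. A back-of-envelope check of the "less than 1%" claim, using rough figures that are assumptions on my part rather than numbers from the episode (sunlight intercepted by Earth ~1.7e17 W, global human power use ~2e13 W):

```python
# Assumed round numbers, not figures stated in the episode:
SOLAR_POWER_AT_EARTH = 1.7e17  # watts of sunlight intercepted by Earth
HUMAN_POWER_USE = 2e13         # watts, approximate global average consumption

fraction = HUMAN_POWER_USE / SOLAR_POWER_AT_EARTH
print(f"We currently use about {fraction:.3%} of the sunlight hitting Earth")
# A Type I civilization on this scale would capture essentially all of it,
# so there are several orders of magnitude of headroom left.
```

With these assumptions the fraction comes out around a hundredth of a percent, well under the 1% mentioned in the conversation.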
01:15:26
Speaker
Oh, totally. Totally. And what I have noticed recently is, is it Moore's law? Right, the doubling of compute power. It is Moore's law. Hasn't it slowed down? I think it has. It started slowing, because it was supposed to be about every 18 months, right? And isn't the doubling now taking almost two years? So I don't,
01:15:54
Speaker
It's slowed down in GPUs specifically. GPUs and CPUs used to be almost like clockwork: every 18 months the processing power would double, the number of threads would double, and so on. But that hasn't really been happening for the last few years, I feel.
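The difference between the 18-month and two-year doubling periods discussed here compounds dramatically. A quick sketch of the arithmetic (the periods are the ones mentioned in the conversation; the helper function is just for illustration):

```python
def growth_factor(years: float, doubling_months: float) -> float:
    """Capacity multiplier after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# Over a decade, the shift from 18- to 24-month doubling is a ~3x gap:
print(growth_factor(10, 18))  # ~101x at an 18-month doubling period
print(growth_factor(10, 24))  # exactly 32x at a 24-month doubling period
```

So even a modest lengthening of the doubling period is a visible early sign of the S-curve flattening the speakers go on to describe.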
01:16:10
Speaker
And that's why I'm mentioning that I think the S-curve has started. Something might come along where it's like, oh, well, we can just use a different substrate to compute, like human brains or something, we've figured out how to use biology instead. And so it might still continue on that exponential curve. But I don't know. I don't know.
01:16:37
Speaker
Yeah. We might be able to leapfrog one day. Who knows? Even what feels like a plateau might just be a period of time, and then you leapfrog and you're back on track. I guess we'll just have to wait and see. It's going to be exciting, whatever it is. Do you want to wrap it up? Sure.
01:16:55
Speaker
I had a great time. Really good conversation, and thanks a lot again. For anyone wanting to get involved with Satori, that's Satorinet.io. Get the Docker app, get a node running, and let's start predicting the future.
01:17:12
Speaker
Yes, yes. We'll definitely put all of our information out on what's happening, maybe on Discord and Twitter, so that when the alpha launch occurs and the beta launch, people are ready for it. If you download one now and just start playing with it, that's totally fine. But you'll probably have to download it again, because we'll make breaking changes and all that once beta occurs.
01:17:41
Speaker
Yeah, that makes sense. Awesome. Well, listen, Jordan, I had a great time, and I'm going to keep following along with the project's evolution and growth. I'm excited to see it happen, excited to see it live, and best of luck with it. Thank you very much. Thanks for having me. It was my pleasure, dude. Thank you. Thanks for your time. All right. Thank you, everybody. Bye.