Introduction and Co-Host Change
00:00:01
Speaker
You're listening to the Archaeology Podcast Network. Hello, and welcome to the Archaeotech Podcast, Episode 204. I'm your host, Chris Webster, with my temporary co-host, Rachel Rodin. Paul's on a spy mission in the Middle East, or at least I'm convinced he
AI Replacing Humans in Archaeology
00:00:17
Speaker
is. Today we talk about three ways AI is being used to replace the little humans trying to do archaeology. Let's get to it. All right. Hey, everybody. Welcome to the show. How's it going? Not Paul. Is that my new name? Not Paul.
00:00:31
Speaker
So if any of you guys happen to listen to The Archaeology Show, you would recognize Rachel. But I think you've been on Archaeotech before, too. Yeah, I think so. I've talked about Wildnote before, and we did a couple other episodes together. I'm Paul's stand-in. Indeed. Well, Paul is not on the show today because I'm pretty convinced that he's a secret agent. And he's just called to the Middle East occasionally and can't talk about why. Oh, man. Did you just, like, out his CIA status to the entire world right here? I think I did. Wow.
00:01:01
Speaker
Anyway, I mean, he probably speaks some Middle Eastern languages. He kind of fits in over there. It just kind of all makes sense. It does. It does. I believe it.
Privacy Implications of AI Transcription
00:01:11
Speaker
Archaeologists would make the best secret agents because we travel a lot. That's true. Yeah. We have lots of stamps in our passports. And we often speak other languages. And we blend in with cultures. Yeah. You should definitely make that a life goal right there. In case anyone from the CIA is listening, I also have no tattoos or distinguishing marks. So there you go.
00:01:29
Speaker
You're a little old though. Yeah. Yeah. Yeah. Well, hopefully not. I'm making these statements because, like, Google, I know, for a long time they were auto-transcribing podcasts that came through the Google service. Really? Yeah. I don't know if they're still doing it, but that was when they were more into podcasts, and they kind of started dialing that back. Okay. But the fact that that
00:01:48
Speaker
transcription is on the internet and searchable by Google, that's why they did it, means you can find stuff.
AI Enhancements in Archaeological Research
00:01:55
Speaker
So if the CIA's quantum computer, which I'm sure they have, is searching for different words and things like that that are interesting to them, then I might get a phone call.
00:02:05
Speaker
So either good or bad. Yeah, I was going to say like a good phone call or a bad phone call. Like what are you talking about? That's going to capture his attention. So that's one of the disturbing and frightening things that AI can do. Yes. But other things AI can do are identify sites and translate languages.
00:02:21
Speaker
Yeah, totally. Yeah, so we found one article. Actually, Rachel found an article about AI identifying Nazca geoglyphs, which we're going to talk about here in a minute. And when I happened to find a second one, I was like, there's got to be a third one. There must be, yeah. So I started looking. Well, AI is in the news a lot lately, it feels like. So it just makes sense that it would be used in an archaeological setting as well. Yeah.
00:02:44
Speaker
This episode of Archaeotech might be a little bit different because I don't know that we've ever done like a news story episode on this show, like we do for RTS. But that's what we're going to do: we're going to just talk about AI in the news in three different instances. Yeah, exactly. So I do have some interviews coming up. They're in the process of getting planned right now. Paul is going to join us if we can, but there's some stuff from the most recent
00:03:11
Speaker
I think one just came out a few weeks ago, the June edition or whatever they're calling it. Maybe it's July, the way publishing works. So look for that coming up here soon. But in the meantime, Rachel's filling in and, like she said, we're going to talk
AI Discoveries: Nazca Lines
00:03:24
Speaker
about this. So we have the original article for this one and it's from the Journal of Archaeological Science.
00:03:29
Speaker
called Accelerating the Discovery of New Nazca Geoglyphs Using Deep Learning. We also have a link from Live Science. And sometimes, this one seems to be OK, but sometimes you have to log into Live Science. This one's not making you, but there's another one that we have. Yeah, this one's not. There are some that they've kind of opened.
00:03:45
Speaker
Some publications let you read and some make you have a subscription. I don't know anymore, but yeah. The thing I like about having both is that the actual journal articles are sometimes hard to read because there are so many citations they have to put in. I just wish there was a remove-citations filter; it would be really easy to read. But if you do want to read the journal article, because it is open access,
00:04:07
Speaker
then do what I've always been told and just get kind of a rough overview of it. If you really need to dig into it and you're trying to replicate the results, that's one thing. But if you just want to know what it means, first read the abstract, and that's probably all you need to do. Scan through and look at the pictures and read the captions. Sounds stupid to say that, but it's true. Just do that. And then go down and read the conclusions. Because that's where
00:04:30
Speaker
you really get the meat of the article. And if you want to know more, if you want to dig into it, then go ahead. Yeah. And that's what reading an article like this Live Science article does. That's exactly what they do. Yeah. They essentially just do that. They don't really dig into it, and you get the meat of it without the fluff. You get like the CliffsNotes version, but that's really all you need. It's not like you're trying to go out and repeat the study. You just want to know what they did and what they found. And that's all you really need.
00:04:56
Speaker
So let's talk about Nazca lines. First off, the researchers in this article found three new Nazca line figures in Peru that were created up to 2,400 years ago. And that's the date of the Nazca lines within Peru generally, not just these, but like all of them. There's a bigger range. We'll talk about that in a minute.
00:05:12
Speaker
What they found, though, is just astonishing to me that nobody actually noticed these before. Or maybe they did and thought, ah, you know, somebody must have noticed those and then didn't report it. But people who are doing actual research on these things, when they say nobody's noticed these before, that's actually who they're talking about. Like I said, I have no doubt that some, you know, local armchair warrior on Google Earth
00:05:33
Speaker
has found these Nazca lines, right? These additional ones. Or the local people that live in the area are probably like, oh yeah, there's probably something out there. We've never seen it from the sky, but there's definitely something there. Maybe, but it's tough to see some of these older ones, or the ones that are more ephemeral, and that's where the AI comes in.
Preserving Nazca Lines with AI
00:05:51
Speaker
Erosion is really wreaking havoc on them. That's for sure.
00:05:54
Speaker
But of the things they found, three that were notable are a pair of legs more than 250 feet across. I'm like, just the legs? Like there's nothing else? Yeah. Okay. I don't like it. Maybe it's like that, uh, what was it? Some image I saw a long time ago where you've got the Earth and it's like the Rapa Nui, you know, heads from Easter Island. And then all the way through the Earth is Stonehenge. It's like the feet of those heads.
00:06:18
Speaker
Anyway, so you've got those feet, 77 meters across, and then a fish measuring 62 feet across or 19 meters, and a bird measuring 56 feet or 17 meters wide. Yeah. Super cool. Those second two are definitely a little smaller, so I could see how those would be missed. But man, the first one, giant pair of legs. Yeah. Yeah, that's crazy, but really cool that they found them.
00:06:40
Speaker
Yeah, so they found a humanoid figure by the same means back in 2019. So this is just kind of more of that research and probably better because I mean four years is a long time in the field of AI and how these models are made. The Nazca lines are actually made most of the time by
00:06:56
Speaker
essentially just moving the black stones that are out there, the stones that have this like desert varnish on them, this patina, and they reveal the white sand underneath. So it's literally just moving them out of the way to reveal the underneath layer. I've heard too that they can be flipped over because the stones are like a different color underneath. So maybe some of it's that, but this is more plausible that they're actually moved out of the way and you make kind of a pathway because one of the possible
00:07:21
Speaker
purposes of these is obviously ritual and ceremony, and some researchers think that the creators of these, and their descendants, would have run ceremonial processions tracing the figures. Oh, like actually walking along? Yeah, like walking the path. Using it like a path, yeah, okay. That's possible for sure, right?
00:07:37
Speaker
There are, as far as we know, more than 350 geoglyphs in total. A geoglyph is just a shape made out of rocks, basically, by humans. That's too big to, like, carry with you. Right. Right. Otherwise it's an artifact. Yeah. They were first spotted by pilots in the 1920s. I can't imagine just flying over there for the first time and going, what is that?
00:07:58
Speaker
And some of them are so cool looking and really, like, super elaborate for a giant picture on the ground. You know, like you can see fully formed birds and other animals and shapes. So it's really cool. Yeah. Among the figures that have been found are hummingbirds, monkeys, whales, spiders, flowers, geometric designs, and tools. Yep.
00:08:19
Speaker
So, and they're not just found in the Nazca desert where these are, they're found in other places in Peru too, but they're mostly found in the Nazca desert, probably because of the environment and how you can set those up. Yeah. And that's on the southern half of Peru. So I worked on the northern coast of Peru when I went there back in my undergrad days, but this is all on the southern side. So I unfortunately didn't get a chance to go see them, but man, they were on my list. Maybe one day if we make it back to Peru. Yeah, for sure. Yeah.
00:08:45
Speaker
I mentioned the 2,400-year-old dates. They think that they range in date from about 400 BCE to 650 CE for their creation. Obviously, they could have been used all the way up until, you know, 100 years ago, or now. I don't think they're really being used now by locals, but, you know, they could be. The professor and archaeologist from Yamagata University in Japan, Masato Sakai,
00:09:12
Speaker
Again, Yamagata University in Japan. He's been searching for Nazca geoglyphs since 2004. He's been basically obsessed with it. And as you go forward in time, he's just been using newer and newer technologies. He's used satellite imagery, of course, which we've had for a long time. Aerial photography, which is usually a lot higher resolution than satellite imagery. Oh, that makes sense. Unless you have military satellite imagery. But even satellite imagery is getting a lot better these days. Airborne scanning LiDAR and drone photography. Yeah.
00:09:41
Speaker
And if you're a long-time listener to the Archaeotech podcast, that's the first time you have to take a drink, because we said drone. We did say drone. So have a sip of coffee over there. Yeah, I'm drinking coffee. We're recording early. Yeah.
00:09:53
Speaker
They identified the new glyphs after about five years of study. So it took a long time to really dial these models in. It's like so much time and effort to only find a handful though. I wonder if this is going to get better and better and they'll be able to find more or if there just aren't more to find. That's a question that is always interesting to me with this kind of thing.
00:10:15
Speaker
Well, we'll talk about that in a second, because the model they're using, they're training it pretty heavily. Yeah, yeah. In 2016, and this is probably what helped them find the one in 2019, the humanoid figure, they obtained some high-resolution images of the area. And that's when they started using AI and what's called deep learning to train a computer to find more glyphs. Right. They actually partnered with IBM of Japan and, in the US, IBM's Thomas J. Watson Research Center to conduct the research. You might recognize Watson as the computer they did that...
00:10:45
Speaker
No, for Jeopardy. Yeah, that's the one that beat Ken Jennings and Brad Rutter. So anyway, deep learning is essentially training a computer system on thousands or even millions of known objects. So when you show it just so many instances of a thing and you say, this is this, this is this, this is this, it just starts to really... it's pattern recognition is what it is. And the more patterns you give it, the more it understands and learns, and then it can start finding its own patterns, and then it can start finding variations of those patterns.
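The "show it labeled examples until it learns the pattern" idea described here can be sketched with a toy nearest-centroid classifier. All names and feature values below are invented for illustration; the actual study used a deep learning object detector on imagery, which learns far richer features, but the basic shape of supervised training is the same.

```python
# Toy sketch of supervised pattern recognition: learn a "pattern" per class
# from labeled examples, then label new inputs by the closest learned pattern.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_examples):
    """labeled_examples: list of (feature_vector, label) pairs."""
    by_label = {}
    for vec, label in labeled_examples:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, vec):
    """Return the label whose learned centroid is nearest to vec."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Hypothetical 2-D features, e.g. (elongation, edge density) of an image patch.
examples = [
    ([0.9, 0.2], "line"), ([0.8, 0.3], "line"),
    ([0.2, 0.8], "figure"), ([0.3, 0.9], "figure"),
]
model = train(examples)
print(predict(model, [0.85, 0.25]))  # a line-like patch -> "line"
```

The more labeled examples you feed `train`, the more representative each class centroid becomes, which is the (greatly simplified) sense in which more training data means better identification.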
00:11:15
Speaker
So I imagine since we have 350 glyphs, geoglyphs that we know about, they were able to use some of those to basically train it on what to look for. Yeah. And that's what they did. So they didn't actually have thousands of elements, right? But they did break these up into, like, head, torso, arms, legs. So they have these pieces, right? Okay. Yeah.
00:11:35
Speaker
And they only used about 21 known Nazca geoglyphs, but broken up into these elements, to actually train this computer initially. Yeah, and this is just a preliminary kind of thing. I'm sure by now they've started to give it a lot more info. That's how AI is, right? The training just gets more and more, and so the identification gets better and better with more time and more training.
00:11:56
Speaker
Yeah, they've said that the AI can identify possible figures, and they usually train these things on known figures. They'll punch in an area, and if it doesn't identify the ones we know about, you know you have a problem. But that's how they verify what they've done. That's right: they punch in known stuff, and they say, okay, did you find everything? And the AI identified possible figures about 21 times faster than trained archaeologists. It doesn't say more accurately,
00:12:22
Speaker
Like the archaeologists still get it, but the computer just did it faster. Yeah. And did it find all of them, or was it missing anything, would be my question. Yeah, exactly. Yeah. So anyway, it's important to find as many of these as we can because that area is suffering a lot from erosion and climate change. And the climate change is bringing in more water, and it's bringing in heavier winds and lots of stuff that is just damaging for this type of environment. So it's important to identify these so we can learn as much from them as we can.
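The verification step described above, running the model over areas with known geoglyphs and asking "did you find everything?", boils down to computing recall and setting aside the extra detections as candidates. The grid-cell IDs below are made up for illustration; the article does not describe its bookkeeping in this form.

```python
# Sketch of verifying a detector against known sites: what fraction of the
# already-recorded geoglyphs does the model re-detect (recall), and which of
# its detections are new candidates that would need ground-truthing?

def verify(known_sites, detections):
    found = known_sites & detections          # known glyphs the model re-found
    missed = known_sites - detections         # known glyphs it failed to find
    candidates = detections - known_sites     # possible new glyphs to check
    recall = len(found) / len(known_sites)
    return recall, missed, candidates

# Hypothetical grid-cell IDs standing in for glyph locations.
known = {"A1", "B4", "C2", "D7"}
detected = {"A1", "B4", "C2", "E9"}           # model output for the same area

recall, missed, new = verify(known, detected)
print(recall)   # 0.75 -- it missed D7
print(new)      # {'E9'} -- a candidate for ground-truthing
```

This is exactly the question Rachel raises: recall tells you whether the model "found all of them," separately from how fast it ran.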
00:12:52
Speaker
Yeah, these kinds of geoglyphs, I just don't think that they're the kind of thing that can last for the ages. You know, it's not a pyramid that's going to be there as long as we conserve it. Over 2,000 years is a long time, but with erosion and everything, I could see them starting to disappear.
AI in Translating Ancient Texts
00:13:13
Speaker
Yeah. In the last hundred years or so, it's the human induced climate change that's really accelerated the process. Yeah, for sure. Yeah.
00:13:18
Speaker
Now I wonder too if this AI will be very helpful in scenarios where it's hard for the human eye to see the geoglyph, maybe? Like some of them can be very faint lines. So as this erosion is happening and the lines are fading and getting harder to see, I wonder if this AI can help.
00:13:38
Speaker
still make that connection and still identify them even though it's hard to see with the human eye. That would be a really great use of this technology. I think that's really kind of what it's trying to do. Once we can pump in the LiDAR data and other data that it can just
00:13:53
Speaker
like cross-reference them. Yeah. To see all the different things that are hard for a human to put together, like this satellite image and this aerial image and this LiDAR data. But I guess a computer can probably just combine all that and get a better conclusion, and a quicker one, than we can.
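The data-fusion idea the hosts are reaching for, combining satellite, aerial, and LiDAR views of the same ground, amounts to stacking co-registered layers into one feature vector per cell that a model can examine all at once. The grids and values below are invented; real pipelines handle projection, resolution matching, and much larger rasters.

```python
# Sketch of fusing co-registered data layers: for each ground cell, stack the
# satellite, aerial, and LiDAR values into one per-cell feature vector.

satellite = [[0.2, 0.8], [0.3, 0.7]]   # e.g. brightness per cell (invented)
aerial    = [[0.1, 0.9], [0.2, 0.6]]   # higher-resolution brightness (invented)
lidar     = [[0.0, 0.5], [0.1, 0.4]]   # e.g. local relief per cell (invented)

def fuse(*layers):
    """Combine same-shaped grids into a grid of per-cell feature vectors."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[tuple(layer[r][c] for layer in layers) for c in range(cols)]
            for r in range(rows)]

fused = fuse(satellite, aerial, lidar)
print(fused[0][1])  # all three sources for one cell, seen together
```

A faint line that is ambiguous in any single layer can stand out once all three values for a cell are considered jointly, which is the advantage being described.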
00:14:11
Speaker
Yeah, I mean, humans are pretty good pattern recognition machines, but computers are just way better at it on a more massive scale. All right, well, that's enough for that one. Let's head over to the other side of the world and see how AI is being used to translate ancient Sumerian and Akkadian straight into English off of cuneiform tablets, which is just baffling. Very cool. Back in a minute.
00:14:36
Speaker
Welcome back to Episode 204 of the Archaeotech Podcast, and we're talking about artificial intelligence. And this time around, we're going to go to an article from PNAS. It's actually PNAS Nexus. There's a lot of different versions of this, but it's called Translating Akkadian to English with Neural Machine Translation. And actually, one of the articles you read talked about Akkadian and Sumerian, actually. And it even says that in their abstract, so I'm not really sure why they don't mention Sumerian in the title of the article. But there it is.
00:15:04
Speaker
There's another article from Archaeology magazine called Researchers Use AI to Read Ancient Mesopotamian Texts. Yeah. So check that one out too, because there's a pretty cool image of a cuneiform tablet on there. So cuneiform, if you're trying to picture what that means and you don't have the ability to click on the links right now, is that reed-created text in soft clay tablets. Yeah. Where they're like...
00:15:26
Speaker
almost like puncturing the... Kind of, but they take the end of a reed, which kind of, I don't know, kind of looks like a squinty eye. Yeah. And then they punch it in, and sometimes they'll, like, twist it or drag it or do a little thing. And that's how they made letters. Letters, and probably more like syllables.
00:15:43
Speaker
It really is worth looking at the image in the Archaeology article because they show the cuneiform and then the transliteration, I'm not sure what language that is, and then the translation into English below it. It's really cool to see that. Yeah, it's scrape, scrape, punch, punch, scrape, scrape, whatever it is. And that translates to dis, I'm going to read this, um, dis tuksu, dag dag, uh, umesu,
00:16:10
Speaker
Gid Damas, which means if he cleans his garments, his days will be long. Oh, wow. Talking about personal hygiene. I love it. Apparently. Yeah. That's awesome. That's amazing. So anyway, researchers from Tel Aviv University and Ariel University, which studies mermaids, use AI to translate. Nope, nope, nope. That is completely wrong. Ignore that.
00:16:32
Speaker
Go on. Yeah, they used AI to translate ancient cuneiform texts from Mesopotamian languages basically into English. And it's not like into ancient Greek or something and then English; it's into straight-up English, which is a little bit crazy. Yeah, that's insane. The use of the AI is not actually intended to replace humans but, just like we talked about last time, to speed up the process, which kind of replaces humans.
00:16:56
Speaker
No, it frees up human time to do the things that computers can't do. Like make coffees at Starbucks. No. Like doing more advanced research into whatever it is that they're studying. Like say, do you want fries with that? No.
00:17:12
Speaker
what you do with your social sciences degree. All right. God, you're terrible. Yeah. So anyway, they're trying to speed up the process because there are, I mean, hundreds of thousands of bits of fragmentary text. And that's the one thing humans have a hard time doing: you know, there's no context for a lot of this stuff. So you're just trying to piece together these things, and it's really difficult. And the computer is able to do that a little better because it understands, when you feed it enough sources. Again, it's all about feeding the algorithm
00:17:39
Speaker
and teaching it. Yeah. Yeah. And saying, hey, this is often found in association with this, and something like that. And then it can mostly get it right. Yeah. Yeah. When you see a line coming off at a certain angle, there's probably only so many figures or shapes that it could be. And then the computer can kind of narrow it down. Yeah. Yeah, exactly. So the computer can narrow it down way faster and easier than our little baby human brains can. Yeah.
00:18:04
Speaker
Cuneiform is one of the earliest writing systems in the world. And we're not talking about, like, rock art and stuff like that, which some people could say is a form of communication, but it's not necessarily seen as a writing system. Right. But cuneiform is one of the earliest, like, legit writing systems in the world. And it dates to about, or was used from, about 3400 BCE to 75 CE. That's over 5,000 years ago. That's been a long run. Yeah. Yeah. That's insane. Yep.
00:18:28
Speaker
There have been hundreds of thousands, like I mentioned, of cuneiform texts found over the last 200 years. And most of those are in Sumerian and Akkadian, both Mesopotamian languages. The AI used, the artificial intelligence, was basically what they call a natural language processing method. And there are a number of those that can be used. But one of the more common ones that we talk about on this show, that we hear about all the time, is called a convolutional neural network.
00:18:51
Speaker
So essentially you're taking bits and pieces of information and when they say, anytime they say neural network, you think of a human brain and your neurons have up to 10,000 plus connections for each neuron to the things around it. That's what a neural network is. It's basically making these associations and saying, well, this is associated with this and this and this.
00:19:10
Speaker
Okay, it's picking up all the potential connections. And then recognizing when those connections go together in most circumstances, you know, known circumstances, and then using that to make inferences and translations. It's pretty cool. Yeah, that's really neat.
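The "this is associated with this" idea can be sketched with simple co-occurrence counts: tally which signs appear next to which in known texts, then rank candidates for an unclear sign by those counts. To be clear, the actual system is a neural machine translation model, not this; the English stand-in tokens below are invented purely to show association-based narrowing.

```python
# Sketch of association-based narrowing: count which tokens co-occur in known
# texts, then rank candidates for a damaged spot by how often each candidate
# follows the surviving neighbor.

from collections import Counter

# Invented stand-in "translated lines" playing the role of known texts.
known_lines = [
    ["king", "build", "temple"],
    ["king", "build", "wall"],
    ["king", "conquer", "city"],
]

# Count right-neighbors for every token in the training lines.
neighbors = {}
for line in known_lines:
    for a, b in zip(line, line[1:]):
        neighbors.setdefault(a, Counter())[b] += 1

def best_guess(token):
    """Most frequent token seen immediately after `token` in known texts."""
    return neighbors[token].most_common(1)[0][0]

print(best_guess("king"))  # "build" -- seen twice after "king", vs once for "conquer"
```

A neural network does something far richer (long-range context, learned embeddings), but this is the kernel of "narrowing down what a fragment could be" from known associations.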
00:19:26
Speaker
So then it's able to translate the glyphs directly into English. So it's like bypassing anything in between, which is probably how it has been done in the past and just going straight from Akkadian or Sumerian into English. That is so, so cool.
00:19:43
Speaker
Yeah, it's pretty crazy. It doesn't actually do super well with apparently like longer sentences either, for whatever reason. The best results come with short and medium length sentences of approximately 118 characters or less. So I'm not really sure why that is, if that's some quirk of the language or something like that. But I can tell you right now that it's just going to get better. Yeah,
AI's Need for Human Input
00:20:01
Speaker
for sure. The more they teach it and the more that they say, yep, that one was right, this one was wrong, the more it learns.
00:20:06
Speaker
I wonder, if they were to break the longer sentences up into shorter pieces, if it would do okay with just, like, fragments. Yeah. But again, I wonder too, they've probably taught it by feeding it things where we already know the translation, right? So it's learned from that, and then it's using that knowledge to apply to things that haven't been translated before. Maybe. Yeah.
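The chunking idea Rachel floats here can be sketched directly: split a long line into pieces of at most 118 characters (the length below which the paper reports its best results), breaking only at word boundaries. Whether this actually helps the model is an open question in the conversation; this just shows the preprocessing step.

```python
# Sketch of breaking long input into <=118-character fragments at word
# boundaries. Assumes no single word is longer than the limit.

def chunk(text, limit=118):
    chunks, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

long_line = "word " * 60          # a stand-in for one long sentence
pieces = chunk(long_line)
print(all(len(p) <= 118 for p in pieces))  # True: every fragment fits the limit
```

No words are lost, only the boundaries change, so the fragments could be translated piecewise and rejoined afterward.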
00:20:31
Speaker
But like anything with AI, you know, it's a first pass, right? Like the computer is always going to be a first pass and then you need human eyeballs on it to kind of confirm what it's saying. But that would be hard in this case because you just end up having to look at every single piece over again. Well, at some point, your confidence level gets higher and you don't have to look at every single piece. In the early stages, yeah, that's a real pain in the ass. But the more you do it, the better. This reminds me.
00:20:57
Speaker
you know, I was thinking about this, like, as these artificial intelligence programs get put online, and maybe they're told to search for other stuff. I just briefly read the first part of an article that said AI is not going to get any better if we keep using AI to teach AI. Basically, having these neural networks and things learn from other things that were put together like that, it's not good enough yet.
00:21:22
Speaker
Right. You know, it's an interesting concept too, because people might be saying, well, I've got this whole database over here that was actually put together by, you know, some sort of neural network learning system. Uh-huh. Let's just feed it into this one. Yeah. Gosh, that's such a good point. Yeah. The only way to really do it with super-high confidence is to feed it absolutely 100% known things. Yes. And tell it, you got that right, you got that wrong, and here's why.
00:21:45
Speaker
Yeah, that's super interesting. Yeah. I wonder, with a technology like this one specifically, when you get to the point where you can just fully trust whatever it's telling you, or if you're always going to have to double-check. I wonder what the future of it looks like.
00:22:02
Speaker
I don't know. Yeah, that's crazy. This feels a little different to me than the last article because in the last article you could kind of almost use the AI as like a filtering, right? Like it
AI's Future in Archaeology
00:22:13
Speaker
could just scan an area that hadn't been looked at before for Nazca lines and then
00:22:18
Speaker
pick out the ones that it thought might be it. And then the human goes in and verifies. This is different because you can't go in and verify every single thing it translates. So you have to get to a point where you have a certain level of confidence in it. And I wonder how long it takes before it gets to that point or if it ever does. I mean, I wouldn't say if it ever does, it will. It will. Yeah, for sure. Yeah. All right.
00:22:41
Speaker
We're going to go now on the other side of the break. We're going to stay in Mesopotamia because, man, we're using satellite imagery. And soon enough, these satellites are just going to be able to say, yep, there's a site. And oh, yeah, I saw a clay tablet. And here's what it means. Yeah, I just went ahead and translated it for you. Here you go. Here's a nice little package. You know what? Don't even bother going out there. We got this. We're done. We don't need you little human people. I'll take a fan and another coffee. Thank you very much. Back in a minute.
00:23:06
Speaker
Welcome back to the Archaeotech podcast, the all-news edition, the AI podcast. I think the AI podcast. Yeah. Uh, you can tell by our halting speech and really poorly enunciated things that ChatGPT actually did not write this. So I think that's going to be the thing in the future: like, wow, they're really terrible. It's authentic.
00:23:25
Speaker
Oh God. Until ChatGPT can be taught to, like, you know, write like a valley girl or something like that, you know. And it probably can. But the thing is, is that scientists write so... I don't know, there's that science language. I feel like a computer could replicate that pretty easily because it's very dry. You know, it's very technical. I think ChatGPT could probably do that.
00:23:50
Speaker
I might try to find some news articles for another episode and have ChatGPT summarize them for me. Oh my God. You should totally do that. And then I can use, like, there's a... I use Audition, Adobe Audition, to record and edit. And there's this thing in there where you can type in text and have it read out the text in a computer voice.
00:24:07
Speaker
Yeah. We might have an all-ChatGPT episode. I think that's a brilliant idea. Have you and Paul talked much about ChatGPT? Only a little bit, but not specifically. It is time to have that conversation on Archaeotech. Come on, it could change the way, you know, future articles are written. Maybe, or it'll just sound fake. Who knows? We'll see. Anyway, that's a little tangent right there.
00:24:31
Speaker
Yeah. So anyway, this one, again, we're staying in Mesopotamia here and this time they're using satellite images, again, satellites to train basically an AI algorithm to find sites in Iraq. Yeah.
00:24:46
Speaker
Yeah, so this is archaeologists from the University of Bologna, and they have developed a system of AI algorithms that can identify previously undiscovered archaeological sites in the Southern Mesopotamian Plain. Now, that is the subtitle of the article. It's a little bit misleading, I think, actually, because they haven't used this yet to actually find undiscovered sites. They're just using it as a test, and they're training, and the results are pretty good. So they're hoping it can be used that way in the future.
00:25:14
Speaker
Right. I don't think they've ground-truthed it, but the model did predict some places where they didn't have recorded sites but where it thinks there are sites. Yes. Yeah. Yeah. So, yeah, they do need to go and actually look at it and see if that is true or not. Yeah. And unlike the first article with the Nazca lines, where you could look at an aerial image and be like, oh yep, that is totally a Nazca line that we've never found before, you can't do that with a site like this.
00:25:37
Speaker
I mean, maybe you can kind of see the tells, they call them, which are like the hill-shaped mounds. You might be able to see that on the aerial image, but more likely than not, you're not going to really be able to tell, and you have to just go out there in person and actually verify whether or not it's a site. You won't be able to tell the tell? You won't be able to tell the tell. Wow.
00:25:56
Speaker
So they were testing this algorithm in the Maysan province of Iraq. Paul's listening to this going, that is not how you pronounce that. No, it's not how you pronounce it. I'm so sorry, Paul. Probably Maysan. Maysan, okay. M-A-Y-S-A-N. And the dataset was a bunch of already identified sites, like we said, and the archaeologists knew exactly where to look and what they look like. They were able to feed all that data into the algorithm because they have all of it already.
00:26:21
Speaker
Yeah, through the feeding of these sites, and where they are, and the characteristics of these sites and things like that, they were able to essentially fine-tune the program and have it identify those known sites with up to 80% accuracy, which is really good. 80% is good. Yeah. I mean, when we do, like, a pedestrian survey, 80% is way more than we're able to get or cover in that kind of a survey, right?
00:26:46
Speaker
Yeah. If we were able to train an AI on the terrain and culture and things that have been found in the past... you know, because when we do pedestrian survey, like you said, we're usually at 25 to 30 meter spacing, sometimes a little closer, but usually not. And we're doing a known sample of about 5% or less, because you can't see the entire distance between you and the next person. So you know that you're only sampling the area. But if a computer system could get 80% accuracy and you could just go ground-truth those, I mean, you'd find a lot more stuff.
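The ~5% sample mentioned here falls out of simple arithmetic: the fraction of ground a pedestrian survey actually inspects is roughly the swath each surveyor reliably sees divided by the transect spacing. The 1.5-meter swath below is an assumed value chosen to match the hosts' figure, not something stated in the episode.

```python
# Back-of-envelope for pedestrian-survey coverage: inspected swath width
# divided by transect spacing gives the sampled fraction of the ground.

def survey_coverage(swath_m, spacing_m):
    return swath_m / spacing_m

# If each surveyor reliably inspects ~1.5 m of ground on 30 m transects:
print(survey_coverage(1.5, 30))   # 0.05 -- the ~5% sample mentioned above
```

Against that baseline, a model that re-identifies 80% of known sites from imagery is covering an order of magnitude more than a field crew can see on foot, which is the comparison being drawn.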
00:27:14
Speaker
You really would. And I didn't even think about that, but yeah, like our normal survey methods just simply can't even get to 80%. So this is already leaps and bounds better than that. That's amazing. So the way they taught it was the researchers used a data set of vector shapes that represented the shape of the sites that they knew about, these known sites that had been recorded in the Southern Mesopotamian floodplain. And they had thousands of satellite images
00:27:41
Speaker
from various different archives, and those images, it would have taken a person hours, just hours upon hours, to look through them all. It just really wouldn't be possible for a person, or even a team of people, to take the amount of time it would to look through all that.
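[Editor's note: a toy sketch of the training-data step described here, turning vector site outlines into per-pixel labels of the kind a segmentation model learns from. This is not the researchers' actual pipeline; it is pure Python with an invented 4x4 tile, where a real workflow would use GIS tooling like rasterio or GDAL.]

```python
# Rasterize a vector site outline into a binary label mask:
# 1 where a pixel center falls inside the polygon, 0 elsewhere.

def point_in_polygon(x, y, poly):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def rasterize(poly, width, height):
    """Binary mask over a width x height grid, sampling pixel centers."""
    return [[1 if point_in_polygon(c + 0.5, r + 0.5, poly) else 0
             for c in range(width)] for r in range(height)]

# Hypothetical 4x4 image tile with a square "site" in its corner.
mask = rasterize([(0, 0), (2, 0), (2, 2), (0, 2)], 4, 4)
```

Pairing masks like this with the corresponding satellite tiles is what lets the model learn what a recorded site looks like from above.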
00:27:58
Speaker
And also, the other thing they had is that there were multiple images of the same location. They were of varying quality, and sometimes they were satellite, sometimes they were aerial, and there's just all kinds of variability in the types of images that you're looking at. Any person who sits down to look at these images is going to take a while to figure out what they're seeing. Is it the same location? Where is this overlap happening? But they were able to train this AI to look at all of that together and immediately know and understand
00:28:27
Speaker
what they're looking at, which is just another way that AI is going to be faster and better at this sort of
Technological Advancements and Predictions
00:28:31
Speaker
work. Yeah, for sure. So this 80% is great. But like we had said in earlier segments, they're basically proposing a human AI collaboration, right? Yeah. Where the computer can do the initial pass-through, it can tag anything that's a possible site, or they can decide how sure the AI needs to be at a site, like what that percentage is, and then anything
00:28:56
Speaker
you know, above that percentage is what they look at to ground truth it, essentially. You know, in the future, they're not going to have to do that either. They'll just send out their robot dogs. Or the drones. Send the drones. Well, send the drones first. The drones will drop the dogs off.
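[Editor's note: the human-AI triage workflow described here, where the model scores candidates and only those above a chosen confidence threshold go to a field crew, can be sketched as below. The tile IDs and scores are invented for illustration.]

```python
# Minimal sketch of confidence-threshold triage: the model does the
# initial pass, and humans only ground-truth the high-confidence hits.

def triage(candidates, threshold=0.8):
    """Return candidates confident enough to send a crew to verify."""
    return [c for c in candidates if c["score"] >= threshold]

candidates = [
    {"id": "tile_017", "score": 0.93},   # likely tell
    {"id": "tile_042", "score": 0.55},   # ambiguous, hold back for now
    {"id": "tile_108", "score": 0.81},
]
to_verify = triage(candidates)           # tile_017 and tile_108
```

Raising or lowering the threshold is exactly the knob mentioned above: how sure the AI needs to be before a human goes out to check.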
00:29:13
Speaker
I don't drink. The drones will drop the dogs off and the dogs will dig because that's what dogs do. Right. But they've got sensors on them. They'll be like, you know, you could even watch through their eyes probably, but they'll know what they're looking for because they've got the AI on board. Right. So and I really do think that I mean, we're not.
00:29:28
Speaker
We could probably do this today, to be honest with you. We could probably do this today: fully automated survey, excavation, and detection, every phase of the project. It's just nobody's putting the money into that. They're putting it into military applications and other stuff. So when all that trickles down to common society, I mean,
00:29:45
Speaker
the stuff we could get done, and the way we could do it in a slightly more affordable way, because it's expensive to send people. It is. And it's dangerous to send people to do things. I mean, we think we want to do that because, you know, humans have to look at this, they're just better at it. Humans are not going to be better at it for much longer.
00:30:00
Speaker
Yeah, I do wonder though, like when it comes to making the connections between things and drawing the conclusions, is the computer going to be able to do that piece of it? I think it will, because you look at human history, and entire theory textbooks for anthropology are written on the fact that humans are pattern-recognizing and pattern-making machines.
00:30:20
Speaker
We do the same thing across the world, time after time, again and again and again. And sure, there are some cultures that just go outside the box and they for some reason evolved culturally a way to do something drastically different. But agriculture, the bow and arrow, swords and other weaponry and different eating techniques and clothing and all that stuff's been invented multiple times across the planet. And the reason for that is, well, it's predictable. You know what I mean?
00:30:50
Speaker
It's not predictable that humans were going to, you know, develop a brain that's going to figure these things out the same way, but those things can only be done in so many ways. So, you know, if you've got somebody that can figure it out, then they're going to do that. And I think, I think based on those predictable elements of human nature that, you know, with enough information and eventually these computer systems are going to have all the information.
00:31:11
Speaker
And if you develop this in conjunction with quantum computing, which does exist now, but when it gets better and more affordable and they're everywhere, a quantum computer can do hundreds of thousands, if not millions, of times more computations per given unit of time than a regular, binary computer can. It's just orders of magnitude faster and it can process more data. And once we can do that, it's going to be able to do stuff that we can't even ask the questions about now. We don't even know what it can do.
00:31:39
Speaker
I think the only thing that trips me up about that idea is that, yes, you're right, it's patterns. A computer will be able to identify and recognize patterns better than humans can at a certain point when it learns enough. But what about the things that don't fit the patterns? That's the stuff that you still are going to need people to go in and look at.
00:31:58
Speaker
If you have a society that is, I don't know, maybe doing something totally different that has never been seen before. I can't even think of something off the top of my head, but it does happen. Right. And those kind of things, like the computer is just going to look at that and try to assign a pattern or a thing to it that it already knows. And it won't be right in that case.
00:32:17
Speaker
Sure, but if it can't do that, then it should be smart enough to say, hey, I got something different here. Yeah. And that's going to be the thing is it has to be smart enough to know when it has something different so that it brings in the humans to do that last bit of analysis or whatever.
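[Editor's note: the "smart enough to say, I got something different here" idea maps onto a standard novelty-detection pattern: if an input is too far from everything the model knows, route it to a person instead of forcing it into a known category. The feature vectors, labels, and distance threshold below are invented for illustration.]

```python
# Hedged sketch of flagging unfamiliar finds for human review instead
# of mislabeling them: nearest-prototype classification with a cutoff.

import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_or_flag(sample, prototypes, max_dist=2.0):
    """Nearest-prototype label, unless nothing is close enough."""
    label, d = min(((name, distance(sample, proto))
                    for name, proto in prototypes.items()),
                   key=lambda t: t[1])
    return label if d <= max_dist else "UNKNOWN: route to human analyst"

prototypes = {"tell": [5.0, 5.0], "canal": [0.0, 9.0]}
familiar = classify_or_flag([4.5, 5.2], prototypes)    # close to "tell"
strange = classify_or_flag([20.0, 1.0], prototypes)    # flagged for a human
```

The cutoff is what keeps the system honest: below it, automate; above it, bring in the archaeologist for that last bit of analysis.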
00:32:35
Speaker
I think it'll be able to do all those things. I can't see a world where it won't be able to completely eliminate the human element and then go use ChatGPT 9 to write the report. I wonder if excavation will be a thing of the past at a certain point, like with the various ways we have to basically see under the ground that are constantly getting better and developed more. I wonder if, using all of those things, you won't even have to.
00:33:04
Speaker
Just with the march of technological progress, the answer to that question is always yes. It will get better. I constantly think of, I know I bring this up a lot, but Star Trek.
00:33:14
Speaker
There was an episode, and probably lots of them, where from the enterprise, they were able to look under the ground and say, oh, there's catacombs and all kinds of stuff under there. Yeah, yeah, yeah. That's a little bit out there. It's a little simplistic, too. Well, sure. But they were able to see from a remote distance in space, basically consider it like a satellite, underground and determine these things. Now, whether or not we'll actually be able to do that in the future, I don't know. But doing that from a ground-based station,
00:33:40
Speaker
I just think it's inevitable that we'll be able to do that kind of thing. I don't know what the mechanism is. I don't know how it would look, but I don't think we can say with any confidence anymore that some sort of technological thing you can think of won't be possible. I mean, even 90 years ago, they were saying we're going to have flying cars and hoverboards and we're not too far off of that.
00:33:58
Speaker
Yeah, that's true. We were just watching that episode of Grand Tour where they had a flying car on it. They're commercially available soon. The car has four wheels on it.
Collaboration in AI and Archaeology
00:34:09
Speaker
It drove itself to the airport, unfolded its wings, and then flew to another airport and then folded its wings back up and drove away. It's nuts. Yeah, that was crazy.
00:34:18
Speaker
Yeah, so when that becomes commercially viable, first off, stay out of the skies, because if my grandma's flying a car, you know there's going to be hell. So anyway, yeah. So a little bit of a shorter episode today, but hopefully we can get some of those interviews scheduled and hopefully Paul can join in on some of those because we normally record on Thursdays and he said they don't work on Fridays, which means he can record at two o'clock in the morning, which is when it'll be.
00:34:46
Speaker
I don't know about that. Paul's a trooper. That's a lot to ask of this human anyway. I would be like, bye, I'm sleeping. I'm not asking Paul to do it, but if he can, I really appreciate it. Anyway, with that, if you are working on anything, well, anything really, to be honest with you, give us a call. Not a call, it's not 2004. This is a technology podcast. Are you opening the lines? The lines are open.
00:35:12
Speaker
Call us at 1-800-ARCHAEOTECH. Oh my god. Do not do that ever. Yeah. Anyway, send an email, chris at archaeologypodcastnetwork.com, or use the contact form on the website. And just let us know what you're doing. And if you go to the Archaeotech page on the Archaeology Podcast Network, which is just archpodnet.com forward slash archaeotech, and that link is down in the show notes,
00:35:34
Speaker
You can click on the schedule right on the right-hand side and you can see the Thursdays that we record; we record every other Thursday. So check on one of those. If none of those times work out, let me know. I can send you a different link and we'll make something work. But we try to fit everything into that scheduled recording time because we're trying to plan schedules from around the world and it gets tough.
00:35:53
Speaker
So try that, especially if you're working on something in AI or these convolutional neural networks or something like that. I really want to hear about it because that's cutting-edge stuff. Yeah, we want to see it. But again, nothing is off the table when it comes to technology. We won't be able to talk about all of it.
00:36:07
Speaker
And if you liked the way that I gave Chris crap all through this episode and want to hear me disagree with him more, you should start listening to The Archaeology Show podcast, which is the show that he and I do together every week. Every Sunday. Yep. All right. Well, with that, we will see you guys next week. And hopefully we'll have Paul back soon. But if not, I'll probably rope Rachel into it. Yep. All right. It turns out I live with this guy. So. Indeed. Yep. All right. See you next week. Bye.
00:36:40
Speaker
Thanks for listening to the Archaeotech Podcast. Links to items mentioned on the show are in the show notes at www.archpodnet.com slash archaeotech. Contact us at chris at archaeologypodcastnetwork.com and paul at lugol.com. Support the show by becoming a member at archpodnet.com slash members. The music is a song called Off Road and is licensed free from Apple. Thanks for listening.
00:37:05
Speaker
This episode was produced by Chris Webster from his RV traveling the United States, Tristan Boyle in Scotland, DigTech LLC, Culturo Media, and the Archaeology Podcast Network, and was edited by Chris Webster. This has been a presentation of the Archaeology Podcast Network. Visit us on the web for show notes and other podcasts at www.archpodnet.com. Contact us at chris at archaeologypodcastnetwork.com.