
Using Artificial Intelligence to Translate Ancient Text and Find New Sites - Ep 228

E228 · The Archaeology Show

On today's episode we're playing an episode of the ArchaeoTech podcast, one that Chris and Rachel took over to cover some tech-related news stories. We talk about artificial intelligence and how it's being used to translate ancient text and find new sites.

Links

Contact

ArchPodNet

Affiliates

  • Motion
  • Motley Fool
    Save $110 off the full list price of Stock Advisor for your first year: go to https://zen.ai/archaeologyshowfool and start your investing journey today!
    *$110 discount off of $199 per year list price. Membership will renew annually at the then current list price.
  • Laird Superfood
    Are you ready to feel more energized, focused, and supported? Go to https://zen.ai/thearchaeologyshow2 and add nourishing, plant-based foods to fuel you from sunrise to sunset.
  • Liquid I.V.
    Ready to shop better hydration, use my special link https://zen.ai/thearchaeologyshow1 to save 20% off anything you order.
Transcript

Introduction

00:00:01
Speaker
You're listening to the Archaeology Podcast Network. You're listening to The Archaeology Show. TAS goes behind the headlines to bring you the real stories about archaeology and the history around us. Welcome to the podcast.

AI in Archaeology Overview

00:00:16
Speaker
Hello and welcome to The Archaeology Show, episode 228.
00:00:20
Speaker
On today's show, we play an episode of the ArchaeoTech Podcast that Chris and I took over. We talk about artificial intelligence and how it's being used to translate ancient text and find new sites. Let's dig a little deeper and then go listen to other great shows on the Archaeology Podcast Network and the ArchaeoTech Podcast. All right.

Meet the Hosts

00:00:41
Speaker
Hey, everybody. Welcome to the show. How's it going, not Paul? Is that my new name, not Paul?
00:00:48
Speaker
So if any of you guys happen to listen to The Archaeology Show, you would recognize Rachel. But I think you've been on ArchaeoTech before, too. Yeah, I think so. I've talked about Wildnote before, and we did a couple other episodes together. I'm Paul's stand-in. Indeed. Well, Paul is not on the show today because I'm pretty convinced that he's a secret agent. And he's just called to the Middle East occasionally and can't talk about why. Oh, man. Did you just, like, out his CIA status to the entire world right here? I think I did. Wow.
00:01:17
Speaker
Anyway, I mean, he probably speaks some Middle Eastern languages. He kind of fits in over there. It just kind of all makes sense. It does. It does. I believe it. Archaeologists would make the best secret agents because we travel a lot. That's true. Yeah. We have lots of stamps in our passports. And we often speak other languages. And we blend in with cultures. Yeah. You should definitely make that a life goal right there. In case anyone from the CIA is listening, I also have no tattoos or distinguishing marks.
00:01:44
Speaker
There you go. You're a little old though. Yeah. Yeah. Well, hopefully... I'm making these statements because, like, Google, I know, for a long time they were auto-transcribing podcasts that came through the Google service. What? Really? Yeah. So we searched for it. I don't know if they're still doing it, but that was when they were more into podcasts and they kind of started dialing that back. But the fact that that transcription is on the internet and searchable by Google, which is why they did it, means you can find stuff. So if the CIA's quantum computer,
00:02:14
Speaker
which I'm sure they have, is searching for different words and things like that that are interesting to them, then I might get a phone call. So, either good or bad. Yeah, like a good phone call or a bad phone call. What are you talking about that's going to capture their attention? So that's one of the disturbing and frightening things that AI can do.

AI Impact on Archaeology

00:02:33
Speaker
But other things AI can do
00:02:35
Speaker
are identify sites and translate languages. Yeah, totally. Yeah. So we have, we found one article, actually Rachel found an article, about AI identifying Nazca lines, which we're going to talk about here in a minute. And when I happened to find a second one, I was like, there's got to be a third one. There must be. Yeah. So I started looking. Well, AI is in the news like a lot lately, it feels like. So it just makes sense that it would be used in an archaeological setting as well. And yeah,
00:03:00
Speaker
This episode of ArchaeoTech might be a little bit different because I don't know that we've ever done, like, a news story episode on this show, like we do for TAS. But that's what we're going to do: we're going to just talk about AI in the news in three different instances.
00:03:16
Speaker
Exactly. So I do have some interviews coming up. They're in the process of getting planned right now. Paul is going to join us if we can, but there's some stuff from the most recent edition of Advances in Archaeological Practice as you're listening to this. I think one just came out a few weeks ago, the June edition or whatever they're calling it. Maybe it's July.
00:03:32
Speaker
That's the way publishing works. So look for that coming up here soon. But in the meantime, Rachel's filling in and, like she said, we're gonna talk about this. So we have the original article for this one and it's from the Journal of Archaeological Science, called Accelerating the Discovery of New Nasca Geoglyphs Using Deep Learning. We also have a link from Live Science. And sometimes this one seems to be okay, but sometimes you have to, like, log into Live Science. This one's not making you, but there's another one that we have. But they've kind of opened...
00:04:01
Speaker
Some publications let you read for free and some you have to have a subscription for. I don't know anymore. The thing I like about having both is that the actual journal articles are sometimes hard to read because there are so many citations they have to put in. I wish there was a remove-citations filter. It would just be really easy to read. But if you do want to read the journal article, because it is open access,
00:04:24
Speaker
then do what I've always been told and just get kind of a rough overview of it. If you really need to dig into it and you're trying to replicate the results, that's one thing. But if you just want to know what it means, first read the abstract, and that's probably all you need to do. Scan through and look at the pictures and read the captions. Sounds stupid to say that, but it's true. Just do that. And then go down and read the conclusions. Because that's where
00:04:46
Speaker
you really get the meat of the article. And if you want to know more, if you want to dig into it, then go ahead. And that's what reading an article like this Live Science article does. That's exactly what they do. They essentially just do that. They don't really dig into it. And you get the meat of it without the fluff. You get the CliffsNotes version, but that's really all you need. It's not like you're trying to go out and repeat this study. You just want to know what they did and what they found. And that's all you really need.
00:05:12
Speaker
So let's talk about Nazca lines.

Nazca Lines & AI Discovery

00:05:14
Speaker
First off, these researchers in this article found three new Nazca line figures in Peru that were created up to 2,400 years ago. And that's the date of the Nazca lines within Peru generally, not just these, but like all of them. There's a bigger range. We'll talk about that in a minute.
00:05:28
Speaker
What they found though is just astonishing to me that nobody actually noticed these before. Or maybe they did and thought, ah, you know, somebody must have noticed those and then didn't report it. You know what I mean? But people who are doing actual research on these things, when they say nobody's noticed these before, that's actually who they're talking about. Like I said, I have no doubt that some, you know, some armchair warrior on Google Earth
00:05:49
Speaker
has found these Nazca lines, right? These additional ones. Or the local people that live in the area probably are like, oh yeah, there's probably something out there. We've never seen it from the sky, but like there's definitely something there. Maybe, maybe, but it's tough to see some of these older ones, or the ones that are more ephemeral, and that's where the AI comes in. Yeah. Erosion is really, like, wreaking havoc on them. That's for sure.
00:06:10
Speaker
But one of the things, three things they found that were notable, are a pair of legs more than 250 feet across. I'm like, just the legs? Like there's nothing else? Yeah. Okay. I don't like it. Maybe it's like that, uh, what was it, some image I saw some time ago where you've got the earth and it's like the Rapa Nui, you know, heads from Easter Island. And then all the way through the earth is Stonehenge. It's like the feet of those heads.
00:06:35
Speaker
Anyway, so you've got those feet, 77 meters across, and then a fish measuring 62 feet across, or 19 meters, and a bird measuring 56 feet, or 17 meters wide. Yeah. Super cool. Those second two are definitely a little smaller, so I could see how those would be missed. But man, the first one, giant pair of legs. Yeah. Yeah, that's crazy, but really cool that they found them.
00:06:56
Speaker
Yeah, so they found a humanoid figure by the same means back in 2019 so this is just kind of more of that research and probably better because I mean four years is a long time in the field of AI and how these models are made. The Nazca lines are actually made most of the time by essentially just moving the black stones that are out there, the stones that have this like desert varnish on them, this patina, and they reveal the white sand underneath.
00:07:20
Speaker
So it's literally just moving them out of the way to reveal the underneath layer. I've heard too that they can be flipped over because the stones are like a different color underneath. So maybe some of it's that, but this is more plausible that they're actually moved out of the way and you make kind of a pathway because one of the possible
00:07:37
Speaker
The purposes of these is obviously ritual and ceremony. And some researchers think that the creators of these, and their ancestors, would have run ceremonial processions tracing the figures. Oh, like actually walking along? Like walking the path. Using it like a path, yeah, okay. That's possible for sure, right?
00:07:53
Speaker
There are, as we know, more than 350 geoglyphs in total. A geoglyph is just something that's a shape made out of rocks, basically, by humans, that's too big to carry with you. Otherwise, it's an artifact. They were first spotted by pilots in the 1920s. I can't imagine just flying over there for the first time and going, what is that?
00:08:14
Speaker
And some of them are so cool looking and really, like, super elaborate for a giant picture on the ground. You know, like you can see fully formed birds and other animals and shapes. So it's really cool. Yeah. Among the figures that have been found are hummingbirds, monkeys, whales, spiders, flowers, geometric designs, and tools. Yep.
00:08:35
Speaker
So, and they're not just found in the Nazca desert where these are, they're found in other places in Peru too, but they're mostly found in the Nazca desert, probably because of the environment and how you can set those up. Yeah. And that's on the Southern half of Peru.

History of the Nazca Lines

00:08:48
Speaker
So I worked on the Northern coast of Peru when I went there back in my undergrad days, but this is all on the Southern side. So I unfortunately didn't get a chance to go see them, but man, they were on my list. Maybe one day if we make it back to Peru. Yeah, for sure. Yeah.
00:09:01
Speaker
I mentioned the 2,400-year-old dates. They think that they range in date from about 400 BCE to 650 CE for their creation. Obviously, they could have been used all the way up until 100 years ago, or now. I don't think they're really being used now by locals, but they could be. The professor and archaeologist from Yamagata University in Japan, Masato Sakai,
00:09:28
Speaker
Again, Yamagata University in Japan. He's been searching for Nazca geoglyphs since 2004. He's been basically obsessed with it. And as you go forward in time, he's just been using newer and newer technologies. He's used satellite imagery, of course, which we've had for a long time. Aerial photography, which is usually a lot higher resolution than satellite imagery. Oh, that makes sense. Unless you have military satellite imagery. But even satellite imagery is getting a lot better these days. Airborne scanning LiDAR and drone photography. Yeah.
00:09:57
Speaker
And if you're a long-time listener to the ArchaeoTech podcast, that's the first time you have to take a drink, because we said drone. We did say drone. So have a sip of coffee over there. Yeah, I'm drinking coffee. So we're recording early. Yeah.
00:10:09
Speaker
They identified the new glyphs after about five years of study. So it took a long time to really dial these models in. It's like so much time and effort to only find a handful though. I wonder if this is going to get better and better and they'll be able to find more or if there just aren't more to find. That's a question that is always interesting to me with this kind of thing.
00:10:31
Speaker
Well, we'll talk about that in a second because the model you're using, they're training it pretty heavily. Yeah, yeah. In 2016, this is probably what helped them find the one in 2019, the humanoid figure, but they obtained some high resolution images of the area.

AI Training for Geoglyph Identification

00:10:45
Speaker
And that's when they started using AI and what's called deep learning to train a computer to find more glyphs. Right. They actually partnered with IBM of Japan and in the US, IBM's Thomas J. Watson Research Center to conduct the research. You might recognize Watson as the computer they did that...
00:11:01
Speaker
Oh, for Jeopardy. Yeah, that's the one that beat Ken Jennings and Holzhauer. Yeah. So anyway, deep learning is essentially training a computer system on thousands or even millions of known objects. So when you show it just so many instances of a thing and you say, this is this, this is this, this is this, it just starts to really... it's pattern recognition is what it is. And the more patterns you give it, the more it understands and learns, and then it can start finding its own patterns, and then it can start finding variations of those patterns.
00:11:31
Speaker
So I imagine since we have 350 glyphs, geoglyphs that we know about, they were able to use some of those to basically train it on what to look for. Yeah. And that's what they did. So they didn't actually have thousands of elements, right? But they did break these up into like head, torso, arm, legs. So they have these pieces, right? Okay. Yeah.
00:11:51
Speaker
And they only used about 21 known Nazca geoglyphs, but broken up into these elements, to actually train this computer initially. Yeah, and this is just a preliminary kind of thing. I'm sure by now they've even started to give it a lot more info. That's how AI is, right? Like the training just gets more and more, and so therefore the identification gets better and better and better with more time and more training.
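To make that idea a bit more concrete, here is a minimal sketch of the kind of deep-learning setup being described: a pretrained image classifier fine-tuned on small labeled patches of aerial imagery, where the labels are geoglyph elements like heads or legs versus plain desert. The folder layout, patch labels, and hyperparameters below are illustrative assumptions, not the research team's actual pipeline.

```python
# A rough sketch of fine-tuning an image classifier on labeled patches.
# Assumes a hypothetical folder layout: patches/train/<label>/*.png,
# where labels might be "head", "legs", "torso", or "background".
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("patches/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

# Start from a network pretrained on ordinary photos and retrain only
# what is needed to separate geoglyph elements from bare desert.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

The same trained model can then be slid across new imagery, patch by patch, to flag anything that looks like part of a geoglyph.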
00:12:12
Speaker
Yeah, they've said that the AI can identify possible figures, and they usually train these things on known figures. They'll punch in an area, and if it doesn't identify the ones we know about, you know you have a problem. But that's how they verify what they've done. That's right, they say: they punch in known stuff, and they ask, okay, did you find everything? And the AI identified possible figures about 21 times faster than trained archaeologists. It doesn't say more accurately,
00:12:39
Speaker
Like the archaeologist will still get it, but the computer just did it faster. Yeah. And did it find all of them, or was it missing anything, would be my question. Yeah, exactly. Yeah. So anyway, it's important to find as many of these as we can, because that area is suffering a lot from erosion and climate change. And the climate change is bringing in more water, and it's bringing in heavier winds, and lots of stuff that is just damaging for this type of environment. So it's important to identify these so we can learn as much from them as we can.
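That verification step, punching in an area whose geoglyphs are already known and checking whether the model recovers them, amounts to measuring recall on known figures. A tiny sketch, with invented names and numbers purely for illustration:

```python
# Toy example of the "did you find everything we already know about?" check.
# The figure names and detections below are made up for illustration.
known = {"hummingbird", "monkey", "spider", "condor", "legs", "fish"}
detected = {"hummingbird", "monkey", "spider", "fish", "legs", "candidate_17"}

recall = len(known & detected) / len(known)
print(f"recovered {recall:.0%} of the known geoglyphs")  # 83% in this toy case
# Anything the model flags that is not in the known set ("candidate_17")
# becomes a new lead for archaeologists to ground truth.
```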
00:13:08
Speaker
Yeah, these kinds of geoglyphs, I just don't think that they're the kind of thing that can last for the ages. You know, it's not a pyramid that's going to be there as long as we can conserve it, you know, over 2,000 years. It has lasted that long, and that's a long time, but with erosion and everything, I could see them starting to disappear. Yeah. In the last hundred years or so, it's the human-induced climate change that's really accelerated the process. Yeah, for sure. Yeah.
00:13:34
Speaker
Now I wonder too if this AI will be very helpful in scenarios where it's hard for the human eye to see the geoglyph, maybe? Like some of them can be very faint lines. So as this erosion is happening and the lines are fading and getting harder to see, I wonder if this AI can help.
00:13:54
Speaker
still make that connection and still identify them even though it's hard to see with the human eye. That would be a really great use of this technology. I think that's really kind of what it's trying to do. Once we can pump in the LiDAR data and other data that it can just
00:14:10
Speaker
like cross-reference them, yeah, to see all the different things that it's hard for a human to put like this satellite image and this aerial image and this LiDAR data and like put it all together. But I guess a computer can probably just combine all that and get a better, kind of a better conclusion and a quicker conclusion than we can.
00:14:27
Speaker
Yeah, I mean, humans are pretty good pattern recognition machines, but computers are just way better at it on a more massive scale. All right, well, that's enough for that one. Let's head over to the other side of the world and see how AI is being used to translate ancient Sumerian and Akkadian straight into English off of cuneiform tablets, which is just baffling.

Translating Ancient Texts with AI

00:14:50
Speaker
Very cool. Back in a minute.
00:14:52
Speaker
Welcome back to Episode 204 of the ArchaeoTech Podcast, and we're talking about artificial intelligence. This time around, we're going to go to an article from PNAS. It's actually PNAS Nexus. There's a lot of different versions of this, but it's called Translating Akkadian to English with Neural Machine Translation. Actually, one of the articles you read talked about Akkadian and Sumerian. It even says that in their abstract, so I'm not really sure why
00:15:16
Speaker
They don't mention Sumerian in the title of the article, but you know, there it is. There's another article, from Archaeology magazine, called Researchers Use AI to Read Ancient Mesopotamian Texts. Yeah. So check that one out too, because there's a pretty cool image of a cuneiform tablet on there. So cuneiform, if you're trying to picture what that means and you don't have the ability to click on the links right now, is that reed-created text in soft clay tablets. Yeah. Where they're like...
00:15:42
Speaker
almost like puncturing the... Kind of, but they take the end of a reed, which is kind of a, I don't know, kind of looks like a squinty eye. Yeah. And then they punch it in, and sometimes they'll like twist it or drag it or do a little thing, and that's how they made letters. Yeah. Letters, and probably more like syllables. It really is worth looking at the image in the Archaeology article because they show the cuneiform and then the translation into
00:16:08
Speaker
I'm not sure what language that is, but then translation into English below it. It's really cool to see that. Yeah, it's scraped, scraped, punk punk scraped, scraped, scraped, whatever it is. Yeah. And that translates to dis. I'm going to read this. OK. Dis tuxu, dag dag umesu gid dames, which means if he cleans his garments, his days will be long. Oh, wow. Talking about personal hygiene. I love it. Apparently. Yeah, that's awesome. That's amazing.
00:16:37
Speaker
So anyway, researchers from Tel Aviv University and Ariel University, which studies mermaids, use AI to translate. Nope, nope, nope. That is completely wrong. Ignore that. Go on. AI to translate ancient cuneiform texts from Mesopotamian languages, basically, into English. And it's not like into ancient Greek or something in that English. It's into straight up English, which is a little bit crazy. Yeah, that is insane.
00:17:03
Speaker
The use of the AI is not actually intended to replace humans, but just like we talked about last time, speed up the process, which kind of replaces humans. No, it frees up human time to do the things that computers can't do. Like make coffees at Starbucks. No, like doing more advanced research into whatever it is that they're studying. Like say, do you want fries with that? No, also wrong.
00:17:28
Speaker
what you do with your social sciences degree. All right. Oh my God, you're terrible. Yeah. So anyway, they're trying to speed up the process because there are, I mean, hundreds of thousands of bits of fragmentary text. And that's the one thing humans had a hard time doing is, you know, there's no context for a lot of this stuff. So you're just trying to piece together these things and it's really difficult. And the computer is able to do that a little better because it understands when you feed it enough sources. Again, it's all about feeding the algorithm.
00:17:56
Speaker
and teaching it. Yeah. Yeah. And saying, hey, this is often found in association with this, and something like that. And then it can mostly get it right. Yeah. Yeah. There's only, like, when you see a line coming off at a certain angle, there's probably only so many figures or shapes that that could be. And then the computer can kind of narrow it down. Yeah. Yeah, exactly. So the computer can narrow it down way faster and easier than our little baby human brains can. Yeah.
00:18:20
Speaker
Cuneiform is one of the earliest writing systems in the world. And we're not talking about, like, rock art and stuff like that, which some people could say is a form of communication, but it's not necessarily seen as a writing system. Right. But cuneiform is one of the earliest, like, legit writing systems in the world. And it was used from about 3400 BCE to 75 CE. That's over 5,000 years ago. That's a long run. Yeah. Yeah. Yeah. That's insane. Yeah.
00:18:44
Speaker
There have been hundreds of thousands, like I mentioned, of cuneiform texts found over the last 200 years. And most of those are in Sumerian and Akkadian, both Mesopotamian languages. The AI used, the artificial intelligence, was basically what they call a natural language processing method. And there's a number of those that can be used. But one of the more common ones that we talk about on this show, that we hear about all the time, is called a convolutional neural network.
00:19:08
Speaker
So essentially you're taking bits and pieces of information and when they say, anytime they say neural network, you think of a human brain and your neurons have up to 10,000 plus connections for each neuron to the things around it. That's what a neural network is. It's basically making these associations and saying, well, this is associated with this and this and this.
00:19:26
Speaker
Okay, it's picking up all the potential connections. And then recognizing when those connections go together in most circumstances, known circumstances, and then using that to make inferences and translations. It's pretty cool. Yeah, that's really neat. So then it's able to translate the glyphs directly into English.
00:19:47
Speaker
So it's like bypassing anything in between, which is probably how it has been done in the past, and just going straight from Akkadian or Sumerian into English. That is so, so cool. Yeah, it's pretty crazy. It doesn't actually do super well with longer sentences, apparently, for whatever reason. The best results come with short and medium length sentences of approximately 118 characters or less.
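For anyone curious what "neural machine translation" looks like in practice, here is a minimal sketch of the general approach the paper describes: a sequence-to-sequence model fed transliterated Akkadian and trained to emit English. The base checkpoint (t5-small), the toy sentence pair, and the single training step are illustrative assumptions, not the researchers' actual model or data.

```python
# Minimal sequence-to-sequence sketch: transliterated Akkadian in, English out.
# The training pair below is a made-up placeholder, not a real aligned example.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

src = "translate Akkadian to English: szum-ma a-wi-lum ..."  # transliteration
tgt = "if a man ..."                                          # English gloss

inputs = tokenizer(src, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids

# One gradient step; a real run would loop over many thousands of aligned lines.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# After training, generation gives a first-pass English translation that a
# human Assyriologist would still need to check.
pred = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(pred[0], skip_special_tokens=True))
```

Feeding the model one short line at a time like this also fits with the point above about it doing best on sentences of roughly 118 characters or less.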
00:20:11
Speaker
So I'm not really sure why that is, if that's some quirk of the language or something like that, but I can tell you right now that it's just gonna get better. The more they teach it and the more that they say, yep, that one was right, this one was wrong, the more it learns.
00:20:22
Speaker
I wonder, if they were to break the longer sentences up into shorter pieces, if it would do okay with just, like, fragments. Yeah. But again, I wonder too, if they've created this by... they've probably taught it by feeding it things where we already know the translation, right? So it's learned from that. And then it's using that knowledge to apply to things that haven't been translated before. Maybe. Yeah.
00:20:48
Speaker
But like anything with AI, you know, it's a first pass, right? Like the computer is always going to be a first pass and then you need human eyeballs on it to kind of confirm what it's saying. Right. But that would be hard in this case because you just end up having to look at every single piece over again. Well, at some point you, your confidence level gets higher and you don't have to look at every single piece in the early stages. Yeah, that's a real pain in the ass. Yeah. But the more you do it, the better. Yeah. This

Challenges in AI Data Training

00:21:13
Speaker
reminds me.
00:21:13
Speaker
You know, I was thinking about this, like, you know, as these artificial intelligence programs, you know, get put online, and maybe they're told to search for other stuff. I just briefly read just, like, the first part of an article that said AI is not going to get any better if we keep using AI to teach AI. Basically having these neural networks and things learn from other things that are put together like that. It's not good enough yet.
00:21:38
Speaker
Right. It's an interesting concept too, because people might be saying, well, I've got this whole database over here that was actually put together by some sort of neural network learning system. Let's just feed it into this one. Yeah. Gosh, that's such a good point. Yeah. The only way to really do it with super high confidence is to feed it absolutely 100% known things and tell it, you got that right, you got that wrong. And here's why.
00:22:02
Speaker
Yeah, that's super interesting. Yeah. I wonder, with a technology like this one specifically, when you get to the point where you can just, like, fully trust whatever it's telling you, or if you're always going to have to double check. Like, I wonder what the future of it looks like. I don't know. Yeah. That's crazy. This feels a little different to me than the last article, because in the last article, you could kind of almost use the AI as like a
00:22:27
Speaker
filtering, right? Like it could just scan an area that hadn't been looked at before for Nazca lines and then pick out the ones that it thought might be it. And then the human goes in and verifies. This is different because you can't go in and verify every single thing it translates. So you have to get to a point where you have a certain level of confidence in it. And I wonder how long it takes before it gets to that point or if it ever does.
00:22:50
Speaker
I mean, I wouldn't say if it ever does; it will. Yeah. Yeah. All right. Well, we're going to go now. Uh, on the other side of the break, we're going to stay in Mesopotamia, because, man, we're using satellite imagery. And soon enough, these satellites are just going to be able to say, yep, there's a site. And oh yeah, I saw a clay tablet, and here's what it means. Yeah. I just went ahead and drew it for you. Here you go. Don't even bother going out there. We got this. We're done. We don't need you little human people.
00:23:18
Speaker
I'll take a fan and another coffee. Thank you very much. Back in a minute. Welcome back to the ArchaeoTech podcast, the all-news edition, the AI podcast. I think the AI podcast. Yeah. Uh, you can tell by our halting speech and really poorly enunciated things that ChatGPT actually did not write this. So I think that's going to be the thing in the future. It's like, wow, they're really terrible. It's authentic.
00:23:41
Speaker
Oh God. Until ChatGPT can be taught to, like, you know, write like a valley girl or something like that, you know. And it probably can. But the thing is, is that scientists write so... I don't know, like there's that science language. I feel like a computer could replicate that pretty easily because it's very dry. You know, it's very technical. I think ChatGPT could probably do that.
00:24:06
Speaker
I might try to find some news articles for another episode and have ChatGPT summarize them for me. Oh my God. You should totally do that. And then I can use, like a... there's a... I use Audition, Adobe Audition, to record and edit. And there's this thing in there where you can type in text and have it read out the text in a computer voice.
00:24:23
Speaker
Yeah. We might have an all-ChatGPT episode. I think that's a brilliant idea. Have you and Paul talked much about ChatGPT? Only a little bit, but not specifically. It is time to have that conversation on ArchaeoTech. Come on, it could change the way, you know, future articles are written. Maybe. Or it'll just sound fake. Who knows? We'll see. Anyway, that's a little tangent right there.
00:24:47
Speaker
Yeah. So anyway, this one, again, we're staying in Mesopotamia here and this time they're using satellite images, again, satellites to train basically an AI algorithm to find sites in Iraq. Yeah.

AI Predicting Archaeological Sites

00:25:02
Speaker
Yeah, so this is archaeologists from the University of Bologna, and they have developed a system of AI algorithms that can identify previously undiscovered archaeological sites in the southern Mesopotamian plain. Now, that is the subtitle of the article. It's a little bit misleading, I think, actually, because they haven't used this yet to actually find undiscovered sites.
00:25:24
Speaker
They're just using it as a test and for training. And the results are pretty good, so they're hoping it can be used that way in the future. Right. Now, I don't think they have ground truthed it, but the model did predict some places where they didn't have sites, where it thinks there are sites. Yes. Yeah. Yeah. So, yeah, they do need to go and actually look and see if that is true or not. Yeah. And unlike the first article with the Nazca lines, where you could look at an aerial image and be like, oh, yep, that is totally a Nazca line that we've never found before, you can't do that with a site like this.
00:25:53
Speaker
I mean, maybe you can kind of see the tells they call them, which are like the hill shaped mounds. You might be able to see that on the aerial image, but more likely than not, you're not going to really be able to tell and you have to just go out there in person and actually verify whether or not it's a site. You won't be able to tell the tell? You won't be able to tell the tell. Wow.
00:26:12
Speaker
So they were testing this algorithm in the Mazin province of Iraq. Paul's listening to this going, that is not how you pronounce that. No, it's not how you pronounce it. I'm so sorry, Paul. It's probably Maysan. Maysan, okay. M-A-Y-S-A-N. And the dataset was a bunch of already identified sites, like we said, and the archaeologists knew exactly where to look and what they look like. They were able to feed all that data into the algorithm because they have all of it already.
00:26:37
Speaker
Yeah, through the feeding of these sites and where they are and the characteristics of these sites and things like that, they were able to essentially fine tune the program and have it identify those known sites with up to 80% accuracy, which is really good. 80% is good. Yeah. I mean, when we do like a pedestrian survey, like 80% is way more than we're able to get or cover in that kind of a survey, right?
00:27:02
Speaker
Yeah. If we were able to train an AI on the terrain and culture and things that have been found in the past... you know, because when we do pedestrian survey, like you said, we're usually at 25 to 30 meter spacing, sometimes a little closer, but usually not. And we're doing a known sample of about 5% or less, because you can't see the entire distance between you and the next person. So you know that you're only sampling the area. But if a computer system could get 80% accuracy and you could just go ground truth those, I mean, you'd find a lot more stuff.
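A rough back-of-the-envelope version of that coverage point: with transects spaced 25 to 30 meters apart, each surveyor only closely inspects a narrow strip of ground. The 1.5-meter effective sweep used below is an assumed figure purely for illustration.

```python
# Back-of-the-envelope pedestrian survey coverage versus transect spacing.
sweep_m = 1.5  # assumed width of ground a walker actually inspects closely
for spacing_m in (25, 30):
    coverage = sweep_m / spacing_m
    print(f"{spacing_m} m spacing -> about {coverage:.0%} of the ground inspected")
# Both land in the roughly 5-6% range mentioned above, compared with the
# ~80% detection rate quoted for the trained model on known sites.
```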
00:27:30
Speaker
You really would. I didn't even think about that, but yeah, like our normal survey methods just simply can't even get to 80%. So this is already leaps and bounds better than that. That's amazing. So the way they taught it was the researchers used a data set of vector shapes that represented the shape of the sites that they knew about, these known sites that had been recorded in the Southern Mesopotamian floodplain. And they had thousands of satellite images.
00:27:57
Speaker
from various different archives, and those images, it would have taken a person hours, just hours upon hours, to look through them all. It just really wouldn't be possible for a person, or people, to take the amount of time it would take to look through all that.
00:28:14
Speaker
And also, the other thing they had is that there's images, multiple images of the same location. They were of varying quality, and sometimes they were satellite, sometimes they were aerial, and there's just all kinds of variability in the types of images that you're looking at, which any person who sits down to look at these images is just going to take them a second to figure out what they're seeing. Is it the same location? Where is this overlap happening? But they're able to kind of train this AI to sort of look at all of that together and immediately know and understand
00:28:42
Speaker
what they're looking at, which is just another way that AI is going to be faster and better at this sort of work. Yeah, for sure. So this 80% is great. But like we had said in earlier segments, they're basically proposing a human AI collaboration, right?
00:28:58
Speaker
where the computer can do the initial pass-through, it can tag anything that's a possible site, or they can decide how sure the AI needs to be at a site, like what that percentage is, and then anything above that percentage is what they look at to ground truth it, essentially.
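Here is a minimal sketch of that human-AI triage loop: score satellite tiles with an already-trained classifier, keep only the ones above a chosen confidence threshold, and hand that short list to a field crew for ground truthing. The saved model file, tile folder, and 0.8 threshold are illustrative assumptions, not the Bologna team's actual setup.

```python
# Score image tiles with a trained two-class (site / not site) classifier and
# keep only the confident hits for ground truthing. Paths and the threshold
# are hypothetical.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

model = torch.load("site_classifier.pt")  # hypothetical trained model
model.eval()

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
THRESHOLD = 0.8  # "how sure the AI needs to be" before a human looks at it

candidates = []
with torch.no_grad():
    for tile_path in sorted(Path("tiles").glob("*.png")):  # hypothetical tiles
        tile = tfm(Image.open(tile_path).convert("RGB")).unsqueeze(0)
        prob_site = torch.softmax(model(tile), dim=1)[0, 1].item()
        if prob_site >= THRESHOLD:
            candidates.append((tile_path.name, prob_site))

# Only these make it onto the field crew's list.
for name, prob in sorted(candidates, key=lambda item: -item[1]):
    print(f"{name}: possible site, confidence {prob:.2f}")
```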
00:29:17
Speaker
In the future, they're not going to have to do that either. They'll just send out their robot dogs. Or the drones. Send the drones. Well, send the drones first. The drones will drop the dogs off. I don't drink. The drones will drop the dogs off and the dogs will dig because that's what dogs do. But they've got sensors on them. They'll be like, you could even watch through their eyes probably, but they'll know what they're looking for because they've got the AI on board.
00:29:40
Speaker
So, and I really do think that, I mean, we could probably do this today, to be honest with you. We could probably do this today fully automated, you know, survey, excavation, and detection, every phase of the project. It's just nobody's putting the money into

Future of AI in Archaeology

00:29:54
Speaker
that. They're putting it into military applications and other stuff. So when all that trickles down to common society, I mean,
00:30:00
Speaker
the stuff we could get done and the way we could do it in a slightly more affordable way, because it's expensive to send people. And it's dangerous to send people to do things. I mean, we think we want to do that just from a, you know, humans have to look at this because, you know, they're just better at it. Humans are not going to be better at it for much longer.
00:30:16
Speaker
Yeah, I do wonder, though, like when it comes to making the connections between things and drawing the conclusions, is the computer going to be able to do that piece of it? I think it will, because you look at human history, and entire theory textbooks for anthropology are written on the fact that humans are pattern-recognizing and pattern-making machines.
00:30:36
Speaker
We do the same thing across the world, time after time, again and again and again. And sure, there are some cultures that just go outside the box and they for some reason evolved culturally a way to do something drastically different. But agriculture, the bow and arrow, swords and other weaponry and different eating techniques and clothing and all that stuff's been invented multiple times across the planet. And the reason for that is, well, it's predictable. You know what I mean?
00:31:06
Speaker
It's not predictable that humans were going to, you know, develop a brain that's going to figure these things out the same way, but those things can only be done in so many ways. So, you know, if you've got somebody that can figure it out, then they're going to do that. And I think, I think based on those predictable elements of human nature that, you know, with enough information and eventually these computer systems are going to have all the information.
00:31:27
Speaker
develop this in conjunction with quantum computing, which does exist now, but when it gets better and more affordable and they're everywhere, a quantum computer can do hundreds of thousands, if not millions of times more computations per given unit of time than a regular computer can, than a binary computer can. And it's just orders of magnitude faster and it can process more data. And once we can do that, it's gonna be able to do stuff that we can't even ask the questions about now. We don't even know what it can do.
00:31:55
Speaker
I think the only thing that trips me up about that idea is that, yes, you're right, it's patterns. A computer will be able to identify and recognize patterns better than humans can at a certain point when it learns enough. But what about the things that don't fit the patterns? That's the stuff that you still are going to need people to go in and look at.
00:32:14
Speaker
If you have a society that is, I don't know, maybe doing something totally different that has never been seen before, I can't even think of something off the top of my head, but it does happen, right? And those kind of things, like the computer is just going to look at that and try to assign a pattern or a thing to it that it already knows. And it won't be right in that case. Sure. But if it can't do that, then it should be smart enough to say, Hey, I got something different here.
00:32:38
Speaker
And that's going to be the thing is it has to be smart enough to know when it has something different so that it brings in the humans to do that last bit of analysis or whatever. I think it'll be able to do all those things. I can't see a world where
00:32:55
Speaker
It just won't be able to completely eliminate the human element and then go use ChatGPT 9 to write the report. Totally. I wonder if, like, excavation will be a thing of the past at a certain point, like with the various ways we have to basically see under the ground that are constantly getting better and developed more. I wonder if, using all of those things, you won't even have to.
00:33:19
Speaker
Just with the march of technological progress, the answer to that question is always yes. It will get better. I constantly think of, I know I bring this up a lot, but Star Trek.
00:33:30
Speaker
There was an episode, and probably lots of them, where from the enterprise, they were able to look under the ground and say, oh, there's catacombs and all kinds of stuff under there. Yeah, yeah, yeah. That's a little bit out there. It's a little simplistic, too. Well, sure. But they were able to see from a remote distance in space, basically consider it like a satellite, underground and determine these things. Now, whether or not we'll actually be able to do that in the future, I don't know. But doing that from a ground-based station,
00:33:56
Speaker
I just think it's inevitable that we'll be able to do that kind of thing. I don't know what the mechanism is. I don't know how it would look, but I don't think we can say with any confidence anymore that some sort of technological thing you can think of won't be possible. I mean, even 90 years ago, they were saying we're going to have flying cars and hoverboards, and we're not too far off of that.
00:34:14
Speaker
Yeah, that's true. We were just watching that episode of The Grand Tour where they had a flying car on it, so. They're commercially available soon. Yeah. You know, the car has four wheels on it. It drove itself to the airport, unfolded its wings, and then flew to another airport, and then folded its wings back up and drove away. Yeah. It's nuts. Yeah, that was great.
00:34:34
Speaker
Yeah, so when that becomes commercially viable, first off, stay out of the skies because if my grandma's flying a car, you know there's going to be hell. So anyway. Oh man. Yeah. So a little bit of a shorter episode today, but hopefully we can get some of those interviews scheduled and hopefully Paul can join in on some of those because we normally record on Thursdays and he said they don't work on Fridays, which means he can record at two o'clock in the morning, which is when it'll be.
00:35:02
Speaker
I don't know about that. Paul's a trooper. That's a lot to ask of this human anyway. I would be like, bye, I'm sleeping. I'm not asking Paul to do it, but if he can, I really appreciate it. Anyway, with that, if you are working on anything, well, anything really, to be honest with you, give us a call. Not a call, it's not 2004. This is a technology podcast. Are you opening the lines? The lines are open.
00:35:28
Speaker
Call us at 1-800-ARCHIOTEC. Oh my god. Do not do that ever. Yeah. Anyway, send an email, chris, at archaeologypodcastnetwork.com, or use the contact form on the website. And just let us know what you're doing. And if you go to the Archaeotech page on the Archaeology Podcast Network, which is just archpodnet.com forward slash archaeotech, and that link is down in the show notes,
00:35:50
Speaker
You can click on the schedule right on the right-hand side, and you can see the Thursdays that we record; we record every other Thursday. So check on one of those. If none of those times work out, let me know. I can send you a different link and we'll make something work. But we try to fit everything into that scheduled recording time, because we're trying to plan schedules from around the world and it gets tough.
00:36:08
Speaker
So try that, especially if you're working on something in AI or these convolutional neural networks or something like that. I really want to hear about it, because that's cutting-edge stuff. Yeah, we want to see it. But again, nothing is off the table when it comes to technology. We want to talk about all of it.
00:36:23
Speaker
And if you like the way that I gave Chris crap all through this episode and want to hear me disagree with him more, you should start listening to The Archaeology Show podcast, which is the show that he and I do together every week. Every Sunday. Yep. All right. Well, with that, we will see you guys next week, and hopefully we'll have Paul back soon. But if not, I'll probably rope Rachel into it. Yep. All right. It turns out I live with this guy. So. Indeed. Yep. All right. See you next week. Bye.
00:36:51
Speaker
Thanks for listening. I hope you consider subscribing to the ArchaeoTech podcast and checking out the other great episodes on that show. On the next episode of The Archaeology Show, we talk about Pompeii, ancient biblical kingdoms, and a badass female warrior. See you then. Bye.
00:37:12
Speaker
Thanks for listening to The Archaeology Show. Feel free to comment and view the show notes on the website at www.archpodnet.com. Find us on Facebook, Instagram, and Twitter at archpodnet. Music for this show is called I Wish You Would Look, from the band C Hero. Again, thanks for listening and have an awesome day.
00:37:35
Speaker
This episode was produced by Chris Webster from his RV traveling the United States, Tristan Boyle in Scotland, DigTech LLC, Culturo Media, and the Archaeology Podcast Network, and was edited by Chris Webster. This has been a presentation of the Archaeology Podcast Network. Visit us on the web for show notes and other podcasts at www.archpodnet.com. Contact us at chris at archaeologypodcastnetwork.com.