
Episode 25: NASA Data Scientist Chris Mattmann

S1 E25 · CogNation

Chris Mattmann, Principal Data Scientist at NASA's Jet Propulsion Laboratory, talks about bridging the gap between lab scientists and data scientists, his work with DARPA unearthing the dark web, machine learning in autonomous planetary rovers, and other cool stuff he's been doing.

Chris Mattmann's page at NASA

More information about the Memex program at DARPA can be found here.

Chris's forthcoming book, Machine Learning with TensorFlow (2nd ed.), will be available soon. CogNation listeners can get 40% off all Manning products by using the code "podcogn20" when ordering from Manning Publications (manning.com).

Special Guest: Chris Mattmann.

Transcript

Introduction to Chris Mattmann, NASA JPL Data Scientist

00:00:09
Speaker
Welcome to Cognation. I'm your host, Rolf Nelson. And I'm Joe Hardy. And on this episode, we talk to Chris Mattmann, who is a data scientist at the NASA Jet Propulsion Laboratory in Pasadena. So welcome to the show, Chris. Rolf and Joe, thanks for having me. Great. Well, why don't we just jump

Chris's Journey to NASA and Early Challenges

00:00:30
Speaker
in? So maybe you want to tell us a little bit about your background and how you ended up being a data scientist at NASA.
00:00:37
Speaker
Sure. I'm Southern California, born and raised. I like to tell people that, you know, I went from the trailer to the PhD. I grew up in a trailer in Santa Clarita about an hour north of LA. In school, I was interested in sports. I was good at math. I was good at science. Basically,
00:00:59
Speaker
where I grew up in Santa Clarita, you know, it's known nationally in the news or media recently because, unfortunately, my high school, Saugus High, is where one of the recent school shootings happened. But yeah, I grew up there in the 80s and 90s. And so when I graduated Saugus High, we were a pipeline to the local universities, Cal State Northridge, and also UCLA. And so growing up,
00:01:26
Speaker
a couple of my friends' parents started taking me to USC games and, um, I couldn't afford it. I probably shouldn't have gone there considering, you know, everyone was going to UCLA or CSUN. So of course I decided, and I didn't have the money, but I decided, hey, let's go. You know, I want to go to USC. Yeah. So we're assuming, we're assuming that you took your own SATs.
00:01:49
Speaker
I did. I did take my own SATs. There you go. And I could have a whole podcast complaining about those standardized tests for various other reasons. But yeah, I did that and I got in and so I went to USC.

First Experiences with Computers

00:02:05
Speaker
And I think my only experience with computers before is I had an Apple IIe. I did try and pull the boards out back in the day.
00:02:12
Speaker
So I also used to play the adventure game on the Apple IIe. And my other experience with computers is that I figured out if you press Control-C, it would break into the BASIC code, which is an old programming language. And you could change the strings for what the characters said to one another. And that was my first foray into testing out my skills at swear words. I would make the knights cuss at each other, or I'd make the various characters say bad things.
00:02:40
Speaker
So anyways, that's, that was my basic hack. Yeah. That was my, uh, that was my experience with programming and stuff before I got to school, which wasn't much. So I got to USC.

Adapting to USC Challenges

00:02:52
Speaker
I felt kind of out of place a little bit, um, and taking all the computer science courses there, uh, mostly because I felt everyone was smarter than me and kind of knew what was going on. I've always believed that I wasn't the smartest person, but that I could outwork people.
00:03:06
Speaker
So yeah, I just hunkered down, you know, I applied myself. I just decided, well, you know, other people need sleep. I don't. And, you know, whatever I'm deficient in, or I don't understand, I could work my way through it. And that's basically what I applied to everything at that point in my undergraduate, turning my grades around, doing well my sophomore year at USC. Since I didn't have any money,

Landing a Job at NASA JPL

00:03:32
Speaker
I was sitting in a computer lab late one night and scanning the job offer boards. And there was an offer to go work with this earth scientist gentleman named Rob Raskin at a place called JPL, the Jet Propulsion Laboratory in Pasadena. And they were looking for programmers to help them with their science stuff. It was an earthquakes project in collaboration with Caltech. And I was looking for any job. And to be honest, that was like the second posting in a big dry spell.
00:04:00
Speaker
This is during undergrad, my sophomore year. I started at USC in 1998. You had your foot in the door at NASA for a while, because of just life situation and other things.

Internship at iWin.com and Internet Games

00:04:19
Speaker
I had to work my freshman year at USC and
00:04:22
Speaker
I had done about, I don't know, an eight month, almost a year internship at a place called iWin.com. It was just a startup, basically for internet games. They were looking to keep people online, playing Java applets and poker and things like that, and then to advertise. So, you know, I worked there for about eight months. And then after that, like I said, I needed a job.
00:04:44
Speaker
you know, that JPL opportunity came up. I interviewed for the position. Dr. Raskin was a born-again atmospheric scientist who was reinventing himself as what at the time they called an informaticist, you know, an informatics person, someone who realized, in doing his atmospheric science, that computers could help

JPL's Role in Space Exploration

00:05:03
Speaker
him. Databases could help him. That programming languages and, you know, processing and computation could help him.
00:05:09
Speaker
and was looking to mentor and have some students to help him in, you know, the goals and the projects he was working on. So yeah, I passed that interview, I basically got the job, and I started at JPL. Yeah, I started at JPL, I think, in 2000.
00:05:23
Speaker
So maybe just to interject here, so again, NASA's Jet Propulsion Laboratory in Pasadena. So maybe you can describe overall what kinds of things they do there. And it's not just jets, right? It's not just making faster jets. So I mean, JPL, the super fast history on that is it grew out of basically
00:05:48
Speaker
people at Caltech and other places testing rockets in the Arroyo, the Arroyo Seco here, basically around the time that Wernher von Braun, in the post-World War II era, was trying to advance the study of rockets. But around that same time, in the 50s, they had the National Defense Act and they had the National Aeronautics and Space Act. They created NASA as a civilian, well, it wasn't a civilian agency to start, it started out as,
00:06:17
Speaker
really, a sub-department in the military and Department of Defense. And they saw, around the same time as they were creating what they called the national labs around the country, that there was an opportunity to leverage the close proximity to Caltech for those that wanted to do rockets and to test them in the beautiful area that's the Arroyo Seco here, in between Pasadena and La Cañada.
00:06:39
Speaker
And so JPL was born. Its, you know, initial applications, again, did support the space agency, but it also had, you know, military and defense applications. Today, JPL is sprawling over hundreds of acres, tucked right into the San Gabriel Mountains,
00:06:56
Speaker
in this beautiful sprawling facility in which there are deer and everything else. It has become what I would argue to be NASA's center of excellence in robotic space exploration. So when you see NASA Mars rovers and things like that, there's a huge probability that most of the work was done at the Jet Propulsion Laboratory. We've got the Mars program. We build robots. We have entire divisions and organizations around autonomous exploration.
00:07:25
Speaker
So not just for Mars, but also for the Earth. If it's an Earth-orbiting satellite and it involves a lot of autonomy or robotic space exploration, there's a big chance that it was done at JPL. Of all the NASA centers, and there are nine-plus NASA centers, JPL is NASA's only federal lab, or what they call a federally funded research and development center.
00:07:44
Speaker
We're supposed to do first-of-a-kind missions for the agency, and other agencies have their national labs. The Department of Energy has a number of national labs, one at Berkeley, one in Livermore. JPL is NASA's national lab, and we're supposed to do first-of-a-kind things. And in robotic space exploration, in my opinion, we're the center of excellence.
00:08:08
Speaker
Ames is a big center of excellence in Silicon Valley, with computer science experts. They also do a lot of work in aviation studies, like studying planes and suborbital things, you know, things like that. And so usually all the NASA centers have their various areas of expertise, but we're all kind of like one NASA.

Career Progression at JPL

00:08:29
Speaker
Okay, so you started out at an entry-level position and you've kind of worked your way up to a great position. Do you mind if I describe your full title? Oh, sure. Because it is an impressive title. It's great. So Principal Data Scientist and Deputy Chief Technology and Innovation Officer in the Office of the Chief Information Officer at the NASA Jet Propulsion Laboratory in Pasadena.
00:08:54
Speaker
So this means that your primary position is in data science, and you oversee a good deal of the data science that gets done there.
00:09:03
Speaker
Yeah, let's see. I always got to catch myself. My wife says I'm long-winded. She's right. But I guess it's the professor in me. So basically, real quick, fast-forward history on myself from the time I left off: yeah, I started out doing entry-level things. I did that for a number of years. And then circa 2005, I worked on my first NASA mission. I kind of got my first gig as like a
00:09:30
Speaker
kind of lead developer for this mission called the Orbiting Carbon Observatory, OCO. Unfortunately, OCO was famous for basically failing to launch in 2009. So my first NASA mission, where I led the team and staffed it and put together the people to build the ground element to it, the ground data processing system, which is the thing that
00:09:50
Speaker
takes the data once it comes down from orbit down to a ground station, all that kind of raw unprocessed data is sent to what they call a data processing center or an instrument data processing center. So JPL is one of them amongst the many academic universities and other non-governmental organizations that we use to do that. So basically I led the team that was going to do all the data processing for that mission. And so anyways, yeah, that unfortunately it fell out of the sky
00:10:18
Speaker
And very quickly, the administration and JPL and NASA worked, you know, with them to basically put together a plan to build the same spec, the same OCO instrument, and put it back up in the sky. And so very quickly in government terms meant by 2014 they were able to do that. So OCO-2 launched then.
00:10:38
Speaker
And they were able to leverage all of our system and time.

Data Challenges in Science Missions

00:10:41
Speaker
And actually there's an OCO-3 now that'll be on the International Space Station. But they're all using the same system, basically, that I led the team to create. And so yeah, during that time, that was where I grew up a lot. If there was something I could give you as a kind of cool tech or future-looking indicator from that time, it's that I
00:11:01
Speaker
led an effort to basically take us from using C and C++ to build software for those ground data systems, to move to Java, which was a programming language that we were learning at the time, and something that got me involved in a lot of the outside activities that you might see if you ever read a bio of me, like
00:11:22
Speaker
Chris has done open source. Chris helped invent Hadoop. Chris was on the board of directors at the Apache Software Foundation. So the reason that I was doing that is I was really into Java. You know, so one other parallel thread to this is I ended up staying at USC. I stayed on to get a master's there. I took six months after my bachelor's off, just worked at JPL. But then I was like, you know, I've got the bandwidth to do this. And to be honest, at the time, if you got a master's, you made more money.
00:11:47
Speaker
And so, yeah, back then, I was like, let me get a master's. Let me knock this out. I think I could do it. And during my master's, I got really interested in research. I met a person who eventually was my advisor. His name was Dr. Nenad Medvidovic, a Serbian. And he was also hungry, early-stage faculty heading into tenure, didn't have it yet. And I was one of his first five students. And basically, he convinced me to stay on and do a PhD.
00:12:12
Speaker
And so the only way I could do it is if it related to all the work that I was doing at JPL and NASA. And so, yeah, so basically while I was like designing the next generation of these earth science processing systems and ground systems, I was getting my PhD at the same time.
00:12:26
Speaker
I was married, I was paying for a house, and I was just doing the thing. So basically, okay, why were these missions really important? And why did they require someone like me to go get a PhD to help do it, you know, or to think about new, novel ways to do it?
00:12:42
Speaker
And this is related to maybe kind of a broader interest for your podcast listeners, or people just in the community of science. I think that the way you work with science changed in the decades post-1990s, you know, into the 2000s, mainly in the sense that the instruments that we had started taking way more data,
00:13:01
Speaker
that the instrument resolutions were bigger and the systems weren't just closed loop. I mean, people have taken comparatively large data for a long time, but a lot of those systems were closed loop. In science, you know, it's open loop. You've got to make the data available to people, especially with all this NASA data and these science systems. So OCO also marked not just a really cool way to measure carbon, and we need to measure CO2, you know, for the atmosphere, to help us understand the climate. It didn't just mark that, but it marked a fundamental shift in our missions,
00:13:31
Speaker
which basically took us from the era of SeaWinds QuikSCAT, which was a scatterometer, which was like the last major mission in our science that we did before OCO, where at the end of 10 years they had 10 gigabytes of data, and their regular workload on a daily basis for processing jobs was tens of jobs per day.

Open Source Software and Innovations

00:13:48
Speaker
OCO was tens of thousands of jobs per day and within the first three months, 150 terabytes of data.
00:13:55
Speaker
So you're observing this as it's happening, too, because you're getting in there as data is really starting to get big. Absolutely. Absolutely. And I was trying to convince people, you know, at the time: data is getting big, instruments are changing. And the real thing for me, I ended up getting my PhD in software architecture, software design. You know, I've done a lot for data science, in like search or information retrieval or applying it to science.
00:14:17
Speaker
But I really got a PhD in software design. Like, that's Nino's area. If you've ever heard of the REST architecture, Representational State Transfer, basically modern web services and software and things like that, that architecture came from a guy named Roy Fielding, who, and I'm going to relate this whole little Illuminati thing here, Roy started Apache.
00:14:38
Speaker
He started the Apache Software Foundation. So, okay, now Roy is my academic uncle, because he and Nino went to UC Irvine at the same time, and they were in Dick Taylor's group getting their PhDs. And this group in the mid-90s was a powerhouse software group. Richard Taylor had Fielding, who defined REST and helped start the W3C, the World Wide Web Consortium. He had Nino, who really defined the modern component-connector software architectural style,
00:15:05
Speaker
the way of thinking about interactions in software design. He had a guy, Jim Whitehead, who defined this thing called ArgoUML, which is a modern software tool, like the de facto open source modeling tool. And he also had a guy,
00:15:21
Speaker
I'm sorry, that was Jason Robbins. Jim Whitehead defined the standard called WebDAV, Web-based Distributed Authoring and Versioning, which is like the modern backbone of things like Dropbox. So anyways, this group was amazing at Irvine, you know, in the mid-nineties, and Nino came out of that group. Through Nino, and through my relationship with his, I would call, academic brethren and their students, I met people like Roy Fielding, and I met a guy named Justin Erenkrantz,
00:15:51
Speaker
who was the president of the Apache Software Foundation at the time. So I got really into open source and Apache and all of that, through Nino and through academia and through my job. Because basically, even independent of study and all that, around that time Java was becoming so popular, and Apache was where you implemented Java. You know, it's where you implemented Java technologies. It's where Tomcat came from, which was like the major Java web server.
00:16:17
Speaker
And so yeah, I started to get into like search and information retrieval, especially during my PhD. How do we search? How do we design data systems and software systems so that they can handle this big data? And so that's how I got involved.
00:16:32
Speaker
in that, and got involved in open source at the same time, and started making contributions to open source, and then getting involved in Apache and really what became big data. I finished my PhD in 2007, and then I hung out two more years on OCO, and I helped implement these other missions, called the NPOESS Preparatory Project, or NPP, which today is called JPSS. I can't even expand the acronym, I'm sorry, but
00:17:01
Speaker
basically, it's all the replacement polar-orbiting satellites, joint NOAA, NASA, DOD. I think it's just NOAA and DOD now. But anyways, we helped do the sounder data. So yeah, my last two years post-PhD were like finishing delivering these missions, building the teams, making data science and open source first-class citizens in these missions. And by 2009, I was burned out.
00:17:24
Speaker
I basically was like, hey, I need off. I'm sick of sitting up in the thermal vac chamber babysitting the instrument. And by then, my wife and I wanted to start a family and I wanted to slow down a little bit. And so I went into technology development.
00:17:42
Speaker
at JPL. I moved over into the research and technology side. And then I basically had all these great ideas of how to do better at building software because we had just done it for all these next generation missions at JPL. And we did do a lot of work to re-architect
00:18:01
Speaker
old software, you know, applying modern search engines, not just database Boolean searches, but more Google-like searches and search engines and things like that. You know, we did go through this big technology refresh and development, but it was still isolated to our science missions. It hadn't propagated everywhere, and not just at JPL; my goals were bigger than that. I wanted to do it everywhere, because I had been involved in the academic community, and I was like, all science needs this. You know, we need to do this.
00:18:29
Speaker
This must be what prompted, so you did a Nature article in 2013, which is a commentary. And I think the gist of it is that you want to get scientists more involved in the process of data science, boots-on-the-ground scientists involved in the creation of the way that data gets processed and used.

Importance of Coding for Scientists

00:18:52
Speaker
That's exactly the gist of it. The reason for that was, what I realized was that you could be someone like me, who had his training in computer science all the way to the PhD or whatever, and then, you know, at that time, that was like my first decade at JPL, and get them interested in science first, you know, hyperspectral remote sensing, whatever, Mars science, planetary science. And those people are rare.
00:19:22
Speaker
You know, there were certain people. We have this cliff at JPL, and I think a lot of places have this, where if in the first five years they don't get interested in science, it's really hard to keep people there. Right. You know, why are they there? I mean, if you're not interested in... Yeah, it must be quite a draw to industry, you know, the salaries that
00:19:41
Speaker
you get at a Google or Facebook. Absolutely. And that's not secret knowledge I'm revealing to you guys. The promptness, the quickness with which you guys just immediately threw down on this conversation with your commentary, is exactly because it's just common knowledge, especially in government or science, or even national labs, which can be a little bit more competitive. We're not civil servants, so we're not bound exactly by all the
00:20:05
Speaker
you know, labor and salary and other requirements there, we can be a little bit more flexible. But yeah, the draw to industry is huge. I mean, they can double and triple salaries. It's a challenge. It's definitely a challenge. The people that are, you know, at JPL, you've got to be interested in space and science. We talk about that. You've got to ask yourself the big questions. You know, why are we here? How can we protect Mother Earth? How did the galaxy form? You know, if you're interested in the answers to those questions, you're going to be at JPL. And so, yeah, back to the Nature article and your question,
00:20:34
Speaker
Basically, what I realized was that there were a lot more of those types of people at JPL and NASA than there were people like me.
00:20:41
Speaker
And so how do we take those people and make data and software and open source and things like that first class for them? How do we just leverage them? And so it wasn't just my realization, you know, that that was happening. There was also a bottom-up bubbling-up that was happening. And maybe you guys have seen this, you know, just based on your own backgrounds: the senior professors, you know, that have been tenured for 30 years, many of them believe that you need people to do their data analysis for them,
00:21:11
Speaker
for their experiments and things like that. Whereas today, anyone that comes out through STEM, and I would say the nation's and the world's awakening that coding is important, that technology is important, they're all coming out of high school, college and things like that where, even if you got your degree and you're looking at dust properties in snow because it makes it melt faster, you have programmed in MATLAB or Python. It's just a fact, you know.
00:21:41
Speaker
This is a great plug for my students too, because oftentimes I try to convince students that, you know, whatever your main interest is, you may be interested chiefly in the brain or you may be interested in the environment, whatever it is, getting a coding background and having some idea how coding
00:22:01
Speaker
and your topic of interest match up can be essential. And I see a lot of undergraduate students that come through who only see this point, you know, a couple of years after they graduate. So I'm going to replay that last clip to students, if I may. Absolutely. Yeah. Yeah. No, I think that's a really good point. I mean, Rolf and I both, coming from, you know, we did a PhD together at Berkeley in psychology, but we both wrote a lot of code, set up our
00:22:32
Speaker
vision science experiments. So a lot of stimulus design, and then also on the statistics back end. So we're coming from that side. We're the scientists in this story. We know about developing software in MATLAB and Python, languages like that. And it's interesting. I had the experience of then going off into industry and working to integrate other scientists into the system of, how can we
00:23:01
Speaker
leverage their skills and abilities to develop software. And so I actually had the opportunity to train people to be data scientists, essentially coming from a neuroscience background. So I appreciate what you're saying here about how to integrate these systems and how to leverage the skills that people are bringing with them from the science background into more effective data science and software systems.
00:23:30
Speaker
That's exactly right. That's exactly right, Joe. And so the point of that Nature article was that, you know, I've seen people in my own experience come from both paths and kind of meet in the middle. Like, I've seen the data scientists like myself that kind of came from software, but
00:23:49
Speaker
learned to appreciate cosmology, the study of the stars, working with people whose advisor was Carl Sagan, working with people who wanted to measure snow in the Western US and the snowmelt. I've learned to appreciate all that, but I've also met the you-guys of the world,
00:24:05
Speaker
who, after writing their third or fourth Python script, were like, you know, I'm a smart guy or gal. I can probably refactor this. Or, isn't there a way that I don't have to cut and paste the same code over and over again? Welcome to software refactoring. Let's talk about it. That's a component, you know, and things like that. And so,
00:24:26
Speaker
we all meet in the middle. But yes, that was sort of the point of it. So by the way, given your comment on your background, I have to ask a question back to you, or tell you guys a vision I had in my head of you. I'm envisioning Dr. Venkman with you guys, right? In Ghostbusters, when you were doing your psychology experiments. I'm envisioning that, am I right?
00:24:54
Speaker
Dan Aykroyd, Bill Murray, Harold Ramis, you guys in that lab getting kicked out, you know. It's like a lot of 80s comedy. I have to relate life to sports or movies. And yes, 80s and 90s comedies are well in there. Ivan Reitman.
00:25:16
Speaker
Yeah, that hits home with me too. Okay, so we have a good sense of your background and a lot of areas of expertise that you've been in. I wonder if we could apply these to some things that maybe are more speculative or some things that you're thinking about.

Machine Learning and DARPA Memex Project

00:25:34
Speaker
These days, you know, in the last year or so, what's been really grabbing your interest? Obviously machine learning has; you've become very interested in machine learning, so much so that you revised a textbook on machine learning. What are the most interesting applications that you're seeing for that kind of stuff now? And, you know, what's the exciting stuff that's going on in the field? Yeah, in my post-2014 era to today, I worked on a lot of DARPA projects.
00:26:04
Speaker
DARPA is the Defense Advanced Research Projects Agency.
00:26:07
Speaker
They're kind of, you know, if you look up their history, I mean, they helped create the internet, as everyone knows. And they've got this office there called the Information Innovation Office. And a lot of their programs and projects that I've been involved in lately have been, like, one of them was called Memex. It was a play on Dr. Vannevar Bush's memex. Yes, yes. I ran across that, and that's great. Yeah, Vannevar Bush being the famous head of science for a number of big projects during the 30s, 40s and 50s.
00:26:37
Speaker
Yeah, that's right. It all relates back to that era, man. I tell you. So basically like the gist of Memex was if you look at the web, it's unstructured, but infinite.
00:26:50
Speaker
And out on the web, because it's so unstructured and infinite, the vast majority of the web that we search and see, what they call the surface web, is actually only 3% of the actual information that's out there. Most of the information that's out there is in what they call the deep web: it's behind forms, web forms, Ajax, JavaScript, interpreted languages that display things on a browser.
00:27:12
Speaker
And also the content is so heterogeneous. In that vast space of information, the interest was sort of twofold. There was a big defense and law enforcement interest, because people were doing bad things. They were selling people. This is why everyone's so interested in human trafficking nowadays, because it was a big problem. And proliferation: guns and weapons trafficking, illegal arms sales,
00:27:35
Speaker
counterfeit electronics, all of the things that are big in the news today. We basically realized during that time that people were using the deep and the dark web to put that information out there. And so Memex developed a capability to observe that, and to really create the next generation of search engines to take unstructured ugliness and turn it into labeled, structured data. So why is labeled, structured data important? And back to your question on machine learning. And yes, I am working on a new book.
00:28:03
Speaker
I revised a book called Machine Learning with TensorFlow, the first edition, for Manning Publications. I'm writing the second edition of that book now, based on a lot of this knowledge and some of these new trends that we're going to talk about. But basically, one of the trends is that machine learning, the process of doing things like grouping that wild west of data into structured classes and labeling them, or the process of making predictions
00:28:31
Speaker
based on a bunch of data that you've seen in the past, and now making predictions for the future, or learning and interpreting structure the way that our biology, our eyes, do, and doing computer vision things like convolutional networks. All of those techniques in machine learning make the initial assumption that the data that you're using or learning from is clean, that it's structured like a big table, like a big Excel sheet. And I heard by your groan
00:29:01
Speaker
that you know exactly what I'm talking about, which is the big secret in all of that: the data ain't clean. It's not clean. It never looks like that. It's always ugly and messy. And we spend most of our time cleaning it to do the 20% of fun crap with it.
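As a concrete taste of that cleaning step, here is a minimal, illustrative sketch in Python with pandas. The column names, sentinel value, and rules are invented for the example; they are not from any JPL pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical messy instrument log: a duplicate row, a bad date,
# a missing value, and a 9999 "no reading" sentinel (all invented).
raw = pd.DataFrame({
    "timestamp": ["2019-01-01 00:00", "2019-01-01 00:00",
                  "bad-date", "2019-01-01 00:10", "2019-01-01 00:20"],
    "co2_ppm": ["410.2", "410.2", "409.8", None, "9999"],
})

df = raw.drop_duplicates().copy()

# Coerce types; unparseable entries become NaT/NaN instead of crashing the run.
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df["co2_ppm"] = pd.to_numeric(df["co2_ppm"], errors="coerce")

# Map the sentinel to NaN, then drop rows that are unusable for learning.
df["co2_ppm"] = df["co2_ppm"].replace(9999.0, np.nan)
df = df.dropna(subset=["timestamp", "co2_ppm"])

print(df)  # the small, clean table that learning algorithms actually assume
```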
00:29:16
Speaker
You know, and so Memex, the reason it was important, is it built software, and we open-sourced it, and it's all out there. There's a repository on GitHub called NASA-JPL-Memex. But it fundamentally advanced our capability to turn ugly data into clean data. That was step one. So I'm very excited that that's, you know, happening, and it's more possible today because of programs like that.
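For a flavor of what turning unstructured text into labeled, structured data means in practice, here is a minimal sketch using the open source spaCy library. This is a generic illustration of entity extraction, not code from the NASA-JPL-Memex repositories, and the sample text is invented.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Invented scrap of unstructured page text.
text = "Contact Alex in Los Angeles and wire $400 to Acme Corp by Friday."

doc = nlp(text)
# Free text in, labeled structured records out.
rows = [(ent.text, ent.label_) for ent in doc.ents]
print(rows)  # e.g. [('Alex', 'PERSON'), ('Los Angeles', 'GPE'), ('$400', 'MONEY'), ...]
```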
00:29:43
Speaker
So just to understand the Memex project, so this started in around 2014, is that right?
00:29:55
Speaker
Okay. So it started around 2014. The goal is to increase search capabilities, with the practical application of turning over information, say, to law enforcement about activities on the dark web. Right. And it built the initial catalog. There didn't exist a Google for the dark web before that program. And, you know, six years later, how would you gauge the success of the program?
00:30:23
Speaker
The way that I would measure the success of the program: there's a technology success, in that today anybody can build a domain-specific vertical search using the technologies and the software. You don't even have to get ours; you're probably already using things without knowing it. If you've ever used a content management system nowadays like Drupal or Plone, if you've ever gone to an Atlassian Confluence
00:30:47
Speaker
site, or a wiki, or made a ticket in Jira, and you've done a search on any of those sites, you've benefited from Memex. Okay, that's just a strong statement that I can make. All of these modern software technology things that do search are benefiting from Memex. They're using capabilities that we built during Memex to reach into content in ways that weren't done before, to pull out names, people, places, things,
00:31:11
Speaker
to do entity recognition, do machine learning to get that information, and then to acquire the data that's necessary to train those systems to do that. That's on the technology side. On the national interest side, on the human side, what I can tell you about Memex that's in the public domain is that it's led to hundreds of arrests
00:31:31
Speaker
of people that were doing very bad things, human trafficking. It's helped deter terrorism. It has helped to fundamentally create downstream awareness
00:31:47
Speaker
and methodologies and means for dealing with people that have been trafficked. And it's helped to uncover and foil plots related to weapons and arms trafficking, and also to create awareness in the area of counterfeit electronics and supply chain issues. You know, about how
00:32:07
Speaker
falsified parts and other things make their way into the supply chain of everything from consumer electronics to household things to bigger-ticket items. So that would be the way that I would measure it, in both technology and the national interest. What other stuff out there are you excited about for new tech and machine learning?
00:32:28
Speaker
Yeah. So building off of that Memex thing, now we have unstructured-to-structured labeled data. The biggest challenge that I see today is that there's not enough of the types of labeled data we need. We're getting more of it, but there's not enough of the very valuable labeled data
00:32:50
Speaker
that we need to train our machine learning algorithms, to do a Memex, or to have the computing and the storage and the systems to even actively go out and use those tools to curate and get labeled, structured data. You need a hundred million dollars, or five years, or you need a big data center, or capabilities like Facebook and Google. And in science, for our machine learning,
00:33:15
Speaker
we are, you know, absent beaucoup data. We want it. We want lots of labeled training data, but it's hard to get and it's expensive. And so because of that, there are a lot of techniques now that people are looking at. It's called zero-shot learning or one-shot learning, or more broadly, learning with less labels, which is the name of another DARPA program that I'm involved with, that I'm really excited about, that we're learning how to apply in the government and other places nowadays.
00:33:44
Speaker
But basically, yeah, for tasks other than labeling cat videos, like, for instance, I want to build or use machine learning to make the rover smarter when it lands on Mars, because there's an eight-minute light-time round trip between Earth and Mars. And so literally any command I type, we've got to wait eight minutes to get, you know, things back.
00:34:11
Speaker
We'd like the rover to be more autonomous, and so we'd like to put machine learning on the rover.

Future of Space Missions with AI

00:34:16
Speaker
We have simulated that in the past, and I call it simulated for the following reason: it's because there's a human in the loop. And the challenge is that the computing power, at least for us in space, on assets or on surface assets on other planets, is limited by something called the RAD750, which is a radiation-hardened version of a PowerPC chip that ran in the iPhone 1.
00:34:39
Speaker
Okay. So when you see those big, bad Mars Curiosity rovers, and our next one, Mars 2020, that's the size of a Volkswagen Bug, and, you know, powered by all these amazing things, with all these amazing instruments on it, just imagine that the computing power on board is an iPhone 1 processor. And it's crazy how you have to future-proof these things when you send them out into space. They're going to be out there for a long time, and they have to survive all these
00:35:08
Speaker
technology upgrades that we're doing on Earth. You can send new software up there, but you can't send a new processor. That's exactly right, Joe. And I'm also glad you touched on this too, because you anticipated a question that I had. So I have a colleague at Wheaton College who is a planetary geologist, and he does some work with NASA, focused on, I think, Europa, his favorite.
00:35:37
Speaker
And he had talked about the real need for machine learning in remote missions, and I guess that implies exactly that: you want more processing at the source, so you can do things like visual recognition, and
00:35:53
Speaker
You don't have to send back all of the data. You don't have to get all of that junk data. You just send back the absolute most crucial stuff. You can save a lot of bandwidth that way, right? So instead of having an iPhone 1 there, you want to just constantly be able to have the most efficient processing on site.
00:36:11
Speaker
That's exactly right. And that's the human-in-the-loop element of it too, Rolf. Today, machine learning is used, using basic statistical techniques, basically to do what they call data triage, or prioritization of
00:36:27
Speaker
that pipe, that thin, thin pipe that we have, to get the most efficient science back to the humans, so that they can use the capacious networks, computing, storage, and power down here, terrestrially, to do the analysis, and then to send further instructions and commands. So there's limited autonomy today, but let me envision the future with you both. Tomorrow there won't be.
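The triage idea is simple enough to sketch. Here is a hedged, illustrative example in Python: score each captured frame with some onboard model and spend a fixed downlink budget on the highest-value frames first. The scoring function and numbers are invented stand-ins, not JPL flight software.

```python
from typing import Callable, List

def triage(frames: List[bytes],
           score: Callable[[bytes], float],
           downlink_budget_bytes: int) -> List[bytes]:
    """Greedily pick the highest-scoring frames that fit the downlink budget."""
    chosen, used = [], 0
    for frame in sorted(frames, key=score, reverse=True):
        if used + len(frame) <= downlink_budget_bytes:
            chosen.append(frame)
            used += len(frame)
    return chosen

# Toy stand-in for an onboard classifier: pretend bigger frames are more
# scientifically interesting (a real system would use a trained model).
frames = [b"x" * n for n in (120, 40, 300, 80)]
science_value = lambda f: float(len(f))  # hypothetical per-frame score
print([len(f) for f in triage(frames, science_value, downlink_budget_bytes=400)])
# -> [300, 80]: the two most valuable frames that fit in the pipe
```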
00:36:51
Speaker
By the way, the reason that we use this chip, the RAD750, is that it's got to be radiation hardened, and because NASA is risk averse, unlike Elon Musk, who is one of my heroes, by the way. Another hero of mine: Iron Man, Tony Stark, shout out to Robert Downey Jr., a hero of mine. You know, I'm way into Marvel now. I've got three kids at those ages, and so I
00:37:14
Speaker
Well, Elon is the embodiment of what Tony represented. But yes, at Elon's companies, SpaceX and all those things, he can be a little bit more risky, right? Because if he has a failure, he doesn't have to stand up in front of a congressional committee and defend it, right? Or have a bunch of boards. And we are subject to that.
00:37:34
Speaker
Yeah, I think there was something with tiles at some point, right? Exactly, exactly. There was, you know, amongst other things on the shuttle, in the broader NASA program and things like that, where they basically discerned that some of those things were preventable. So given that, because we're risk averse and we're NASA, that's why we only use radiation-hardened chips in our flight hardware and things like that. So anyways,
00:37:57
Speaker
today we've got the RAD750; that's what we have in the form of a radiation-hardened chip. Tomorrow, we will have what they call the High Performance Spaceflight Computer, or HPSC. That will be a multi-core, GPU-like chip that's radiation hardened. And given that, future rovers may have that chip installed in them.
00:38:17
Speaker
My future prediction right now is that in the next five years, we will be able to do complex deep learning, machine learning, neural nets, and things like that on board, on surface assets in space. And so given that, in that environment... I'm sorry, it just came through that you paused; were you going to ask something? No, I was just going to say, I mean, I guess the importance there, from what you were saying, of being able to do the machine learning on the device in space is
00:38:46
Speaker
that if that rover needs to make a decision in real time, it can't wait for the round trip of information processing, sending information from Mars to Earth and back. By the time that result comes back on the question of whether to go left or right, it may already be in a hole or something like that.
00:39:10
Speaker
You nailed it. That's exactly right, Joe. And this gets back to sort of the point of the whole learning-with-less-labels thing. Given that in the future, surface assets like rovers, orbital assets like smallsats, CubeSats, or even bigger flagship-type missions will have that capability,
00:39:32
Speaker
we need to collect ground truth for them, because in the process of doing machine learning, you have to have something to measure against. Whatever type of machine learning you're doing, its core food and input is labeled training data. And so given that we're going to have a lot more killer applications in space and elsewhere that can use it, we need more labeled training data. But unfortunately, we're devoid of it today. And to get it, like I said, even on the ground, terrestrially,
00:40:00
Speaker
it's expensive. Let me give you an analogy. If the three of us, me, Joe and Rolf, want to go label cat videos, and we want to build the next best cat-video machine learning classifier today, it is cheap and we can do it fairly quick. It doesn't require, apologies to all those who are expert at labeling cat videos, a ton of skills to be able to do it.
00:40:21
Speaker
You know, it doesn't require, say, lots of training. We can set up a survey real quick, and teach people how to do it real quick, and what the points are that we're looking for. And that's it. And we can do it for cheap, basically quantified at cents per label, each label on the order of cents. Now contrast that with, now put yourself on Mars. We would like the rover to be able to recognize a plateau with bedrock with an inlet
00:40:51
Speaker
on it, of sedimentary rock, because that indicates to us that there might have been, you know, an inlet and water, or flowing water, based on that type of rock. For someone... So this is a great question. Yeah, go ahead. So, I mean, before you tell us what the answer is, or what the best way of collecting this data is, the first thing that springs to mind is you just kind of build a small Mars.
00:41:18
Speaker
Two things spring to mind. You build a small Mars on Earth and have some sort of model that goes around on it, or you send it out to whatever the closest geological terrain you can find is. I don't know, Nevada, or maybe there's something on an island somewhere that works out to be the best location. So what's the best way to collect this kind of data?
00:41:40
Speaker
That's absolutely the best way to do it, what you just said, Rolf, and that's kind of what people do today. We've got a Mars Yard at JPL,
00:41:49
Speaker
which, if you guys are ever in the Pasadena area, swing by and I'll show it to you. Basically, that's a simulated Mars terrain. Or yes, they go out and they'll take it up into the Mojave, or, you know, on the way to Nevada; they will do things like that. Now, we can do that, but what's the challenge? The challenge is, it's costly. Not everyone can have a Mars Yard.
00:42:11
Speaker
Now, not everyone's going to be putting rovers on Mars, but will that still be true in the future, in 10 years, with commercial space, with academia, universities and institutions participating? I would argue in the next 20 years, if we're being visionary, a university will help put things on Mars, you know.
00:42:29
Speaker
This is great. I like that. This is a good prediction. And so given that, not everyone has the ability to collect, curate and grab said labeled training data. And even if we do, everyone that's involved in that labeling activity that we just talked about, I'd argue, needs upper-education, graduate-level training, master's or on-their-way-to-a-PhD level. And to be honest, if you quantified the cost per label, we're talking dollars per label, or tens of dollars per label. Right.
00:42:59
Speaker
So an expertise that goes beyond visual recognition. Exactly. And so the goal of this DARPA Learning with Less Labels program, and why I think it's so visionary and why I think we need it, is that if you multiply those numbers out to get a kind of commercial-grade cat-labeling machine learning
00:43:20
Speaker
model, you know, that's state of the art, we're talking cents per label, but then we're actually talking, say, hundreds of thousands to millions of dollars to get a really good one, where it's seen all the variations of cats, even at the cents-per-label category. And we're talking about maybe months to build, if we put this on Mechanical Turk or whatever. So from a commercial perspective, even though those are big numbers for you and me and Joe, they're not big numbers for companies. And so they do this all the time. That's why those models are so valuable. In the government sense,
00:43:49
Speaker
we are talking 15 years and billions of dollars to generate the same, or to have that supply chain of people and skills, you know, to be able to use those facilities and to collect such a thing. So obviously that's costly, and they want to reduce the cost by an order of magnitude, really by six orders of magnitude: the cost to simulate those environments and to get good, clean, labeled training data for machine learning that will achieve comparable accuracy and results.
00:44:17
Speaker
And so there are techniques today that people are building, on this program and elsewhere, to do that, to reduce the cost of machine learning. One way you could think about it: if you've heard about these things in machine learning called GANs, or Generative Adversarial Networks, what they do, it's a deep learning technique. You give it a bunch of data that you have, labeled or not, and what it does is it learns a representation of the data.
00:44:44
Speaker
And what it can do is, it creates what they call an encoder and a decoder step. The encoder learns the representation of the data, all the data that you gave it. And on the decoder step, what they do in these GANs is they take the representations of the different classes of data that they learned. Let's say, again, in our
00:45:02
Speaker
Mars example, it learns about bedrock, it learns about sedimentary rock, it learns about whatever. And what the GAN does on the decoder step is it actually combines different classes to achieve new simulated things that did or didn't exist before. In other words, it doesn't just reproduce training data from a bunch of existing training data; it can actually generate new training data. So at essentially zero cost, it can generate new, labeled training data that didn't exist.
00:45:32
Speaker
The way I think about it is, I think about us closing our eyes and visualizing. You don't remember things exactly the same way, or you close your eyes and you think of something new, and you think of a person, possibly, that didn't exist, right? What is it doing? It's taking your internal representation, your learned model of various things and classes and people. Sometimes our brains
00:45:55
Speaker
add things together; we add the weights together from a certain representation. And then when it comes out in our decoder, on the other end, we get something that didn't exist before, or we don't remember it the same way. And so the same technique can be used to do what they call one-shot or zero-shot learning, to synthesize and generate labeled training data in a way that's fake, but that's good enough, right, to give us really good training at no more cost. And so all of these techniques
00:46:24
Speaker
are really amazing, and that's what I'm excited about with this Learning with Less Labels thing. But there's also another element that we should all be kind of concerned about, and I think people are. These are also the exact same techniques that people are using to generate what they call deep fakes.
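Since GANs come up again below, here is a minimal, illustrative sketch in Python with TensorFlow/Keras (TensorFlow being the subject of the book mentioned above). The canonical formulation pairs a generator with a discriminator; the encoder/decoder framing above is the speaker's intuition for the same generate-new-data idea. The toy ring-shaped data and layer sizes are invented for the example.

```python
# pip install tensorflow
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

LATENT = 8  # size of the random "imagination" vector fed to the generator

# Generator: noise in, fake samples out (here, 2-D points).
generator = tf.keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(LATENT,)),
    layers.Dense(2),
])

# Discriminator: sample in, probability "real" out.
discriminator = tf.keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(2,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model trains the generator to fool a frozen discriminator.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def real_batch(n):
    # Toy "real" data: points on a ring (a stand-in for real labeled imagery).
    theta = np.random.uniform(0, 2 * np.pi, n)
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

for _ in range(500):
    noise = np.random.normal(size=(32, LATENT))
    fake = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real_batch(32), np.ones((32, 1)))   # real -> 1
    discriminator.train_on_batch(fake, np.zeros((32, 1)))            # fake -> 0
    gan.train_on_batch(np.random.normal(size=(32, LATENT)),
                       np.ones((32, 1)))  # push generator toward "looks real"

# "Free" synthetic samples that never existed in the training set.
print(generator.predict(np.random.normal(size=(5, LATENT)), verbose=0))
```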

Existential Threats and AI Regulation

00:46:40
Speaker
Why don't we take a quick break right now? Sounds awesome.
00:46:58
Speaker
All right, we're back. All right. So, a lot of interesting topics here. I think to tie it together, I wanted to ask you this question, Chris. Which of the following major problems is the one that's most threatening to humanity? Is it going to be conscious robots, climate change, or a wildcard, some sort of other issue?
00:47:27
Speaker
Yeah, so for me, the way I would like to answer that is, you know, as fearful as I am, like Elon, of the robots taking over and things like that, my recommendation on that end is that they all have an off switch; just remember that. But for me, at least in my experience, the climate and things like that are kind of front and center, and those are the types of concerns I think we need to be paying attention to, because they have the most
00:47:58
Speaker
I would say direct impact. I'll give you an example.

Airborne Snow Observatory Project

00:48:02
Speaker
My involvement in this project called the Airborne Snow Observatory, ASO. Basically, the gist of that is, in the Western US, even though it seems like we have a lot of water in the ocean, it's too expensive to desalinate it. And so we have to rely on the mountains and the snow where it accumulates,
00:48:19
Speaker
and the snowpack, and the water runoff, and water in dams, and releasing it out of dams, and water management, water rights and things like that. It's a big, big deal in the Western US: California, Arizona, Nevada, Utah, things like that.
00:48:32
Speaker
And so right now, today, in the Western US, to measure the snow, because we need to know how much snow is left, we've got people going up into the mountains and sticking big sticks in the snow to get the snow depth, to see how much is left. And that's actually costly, both in terms of danger to human life,
00:48:50
Speaker
because people can get killed doing things like that. It's also inefficient; we can't cover as great an area doing it that way, and it also just costs too much to mount big expeditions of people to do it. So Airborne Snow Observatory, ASO: in 2013, the goal of that project was basically to create an airborne, suborbital mission to fly over the Sierra Nevadas and the Rockies
00:49:18
Speaker
to measure the snowpack in two ways, to measure how much snow is there using a lidar instrument to get snow depth, and to use a spectrometer to get the rate of snow melt. Because if we know how much snow is there and accumulated and how fast it's melting,
00:49:33
Speaker
the water managers can better plan how much water to dam, and when and how to release it, and make everyone's lives sort of better out West. And so its challenges related to all the things that we're talking about with technology and AI and machine learning, data science and things like that. ASO was very interesting: its data was only useful if we could deliver it the same day.
00:49:57
Speaker
So we had to turn all of the processing around in a day, in 24 hours. And to do that, we literally moved our compute system to Mammoth Lakes, the Sierra Nevada Aquatic Research Laboratory, or SNARL, there. And during the flight campaigns, the flight people got off the plane, took a terabyte brick over to SNARL, plugged it into our system, and it had to work lights-out, autonomously. So a lot of the things that we're talking about for machine learning,
00:50:23
Speaker
for labeled training data and so on and so forth, some of those techniques were sort of pioneered a little bit there, in thinking about data processing and automation. How do we take the next steps beyond OCO and the things that I was talking about, which brought us into big-data volume and variety concerns? Now, how do we start to apply automation? Because one of the challenges with SNARL is it was in the mountains, and you couldn't just call someone up on a cell phone,
00:50:47
Speaker
a snail phone, a SNARL phone, I guess. But anyways, you couldn't do that. So we had to use the internet. We had to use things like chat rooms, chat bots; we built some intelligent agents. We actually built an intelligent agent that sat in a chat room and helped us do data processing remotely there for ASO at SNARL. So yeah, so for me,
00:51:11
Speaker
given just the challenges, you know, with understanding better water and the climate and things like temperature, I probably have more experience in that. I don't want to throw off the people that are worried about the forthcoming robot apocalypse, but I want to quell your concerns by saying, again, there's an off switch. I'll show you.

AI's Future Implications

00:51:28
Speaker
Well, we would certainly like to instill more of a fear, a healthy fear, of evil robots in you. But I think we're glad to have people like you working on some of the other problems. Yeah, no, for sure. I mean, just on the robots thing, I think they do have an off switch. But as soon as they figure out how to prevent you from getting access to that off switch is when it becomes a problem.
00:51:58
Speaker
First, it'll be hidden on their body, and then it'll be covered up with tape, and then it'll be harder and harder to find. Exactly. You're saying this metaphorically. Actually, if Marvel movies have taught me anything, and I just think about Ultron and stuff, you guys are probably right. No, but in all seriousness, these computer systems that you're developing and that technologists are working on,
00:52:26
Speaker
are getting really smart, really fast. And it's interesting to think about the ways in which autonomous systems can start to think and solve real problems. I mean, obviously, computers already solve tons of problems, much, much better than us, that human brains are terrible at solving; but increasingly these systems are getting good at solving problems that human brains are good at, and they will be better than us at solving lots of problems that we
00:52:55
Speaker
pride ourselves on being able to solve, in pretty short order. And I think also it's interesting, the particular application here, which is trying to develop autonomy on another planet, in which case you really have an incentive to develop as much autonomy as you possibly can. And here, if we know anything from our science fiction, that's where things go the worst. We're not trying to be negative.
00:53:23
Speaker
So I don't know if you guys have seen recently the draft Office of Management and Budget memo on the regulation of AI. It was put out by a guy, Russell T. Vought. Sorry, Russ, I'm totally mispronouncing your name; I'm from Southern California. It's called... Sorry, just to interrupt. So this is, you've anticipated another question, which is how policy, or, you know, the government, could address some of these issues.
00:53:49
Speaker
Oh, yeah, yeah. And I think it's all related. You know, this is called the Guidance for Regulation of Artificial Intelligence Applications. And basically what it's recognizing is, you know, there are lots of concerns along the lines of what you're talking about. Let's take one and make it real, from the little example that we were just using from Mars.
00:54:04
Speaker
So let's go back and relate it to the learning-with-less-labels thing. Like I told you, if we use these generative adversarial networks, we can generate sort of new, imagined data. The computer is thinking like we do: it's closing its eyes and it generates new things. And so that can be, like where we cut off at the break, used for deepfakes. It could be used to simulate, you know, President Obama trashing Trump, or politics. It could also be used
00:54:26
Speaker
you know, in a beneficial way, one that also has a concern related to this OMB memo I'm talking about. The beneficial way could be: what if I told you that we could use a generative adversarial network to generate fake Mars terrain that never really existed, but is close enough that it would solve the problem of not having enough bandwidth to get all the labeled training data that we need to do really, really good Mars autonomy?
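(For readers following along: here is a minimal sketch of what the GAN-based augmentation Chris describes looks like in code, written in TensorFlow. The architecture, patch size, and hyperparameters are hypothetical stand-ins for illustration, not the actual JPL models.)

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100
IMG_SHAPE = (64, 64, 1)  # stand-in "terrain patch" size; real rover imagery is larger

def build_generator():
    # Upsample a random latent vector into a fake terrain patch.
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 128, input_dim=LATENT_DIM),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    # Score a patch as real (from the rover) or fake (from the generator).
    return tf.keras.Sequential([
        layers.Conv2D(32, 4, strides=2, padding="same", input_shape=IMG_SHAPE),
        layers.LeakyReLU(0.2),
        layers.Conv2D(64, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),  # real-vs-fake logit
    ])

generator = build_generator()
discriminator = build_discriminator()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fakes, training=True)
        # Discriminator learns to separate real terrain from generated terrain.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator learns to fool the discriminator.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss

# Stand-in for a batch of real terrain patches, scaled to [-1, 1] to match tanh.
real_batch = tf.random.uniform([16, 64, 64, 1], -1.0, 1.0)
train_step(real_batch)

# After training converges, sample synthetic patches to augment the labeled set.
synthetic = generator(tf.random.normal([16, LATENT_DIM]), training=False)
```

Once the generator produces patches the discriminator can no longer tell apart from real ones, those synthetic patches can be labeled and mixed into the training set, which is the bandwidth-saving trick described above.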
00:54:50
Speaker
Right. And so the challenge: how can we tell that these images are fake or incorrect? Well, when you do a GAN, the computer doesn't know, for instance, that the Mars Curiosity rover only has one arm, not two. And yet in some of our experiments with this GAN that we created, Brandon Rothrock and Masahiro Ono at JPL basically found that the GAN was generating images of the rover that had two arms.
00:55:21
Speaker
Now, there's nothing that told it it didn't have two arms, right? That is impressive. Yeah, it's a semantic inconsistency. Everything else about it looks like a legitimate image of Mars, as good as you would see from any of the Navcam and other instruments, right? And so we see this a lot. This is another DARPA program; it's not here yet, it's just starting. It's called Semantic Forensics.
00:55:43
Speaker
The idea is that when you use these GANs to generate new labeled training data, they won't generate things with pixel-level inconsistencies. To the human eye, there won't be bad artifacts like in the first generation of all these things, where it was just, oh, that doesn't look like Obama, it's all choppy and grainy. It actually does look like him now. The challenge is, he's going to have an earring, right? Or there's going to be...
00:56:12
Speaker
The higher-level features, object-level... Bingo. There's going to be some semantic inconsistency about it. And it's going to be hard to spot, because there's going to be so much of it. And so, envisioning the future, we need techniques to catch these semantic inconsistencies, because it's not going to be that your eye can immediately detect it's fake anymore. You're going to have to realize something's out of place.
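(To make that concrete, here is a toy sketch of a semantic-consistency check: instead of hunting for pixel artifacts, run a detector over the image and compare object counts against known-world constraints. The detector output and the constraint table are hypothetical placeholders; DARPA's actual Semantic Forensics work is far more sophisticated.)

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the (hypothetical) detector saw
    confidence: float  # how sure it is

# World knowledge an analyst can assert: Curiosity has exactly one arm;
# this particular portrait should have no earring. Purely illustrative entries.
MAX_ALLOWED = {"rover_arm": 1, "earring": 0}

def semantic_inconsistencies(detections, min_conf=0.8):
    """Flag object counts that violate known-world constraints."""
    counts = {}
    for det in detections:
        if det.confidence >= min_conf:
            counts[det.label] = counts.get(det.label, 0) + 1
    return {label: n for label, n in counts.items()
            if n > MAX_ALLOWED.get(label, float("inf"))}

# A GAN image where the detector fires twice on "rover_arm":
flags = semantic_inconsistencies([Detection("rover_arm", 0.93),
                                  Detection("rover_arm", 0.88)])
print(flags)  # {'rover_arm': 2}, i.e., semantically inconsistent
```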
00:56:36
Speaker
And so anyways, back to the government part, there's this OMB memo that they have. It's called, again, it's a draft, the Guidance for Regulation of Artificial Intelligence Applications. And the government is trying to pay attention to this in the current administration. They say we need to have awareness of public trust in AI. I completely agree with that. There needs to be public participation.
00:56:59
Speaker
You know, in other words, people have basically got to know that this is going to be a part of their life. There's got to be awareness, widespread availability of standards and information to inform people about that. There's got to be some scientific integrity and quality to the information that's being generated and being decided upon. We've got to be fair and non-discriminatory. Let me just touch on that real quick.
00:57:23
Speaker
The early versions of the computer vision algorithms in smart cars, autonomous vehicles with the ability to do autonomous driving, like Tesla and things like that, used massive public datasets, as much as they could, to train on. And then they also did go out and do driving, autonomous driving, and they collected and curated a lot of training sets. But it happened in particular places. It didn't happen all across the US and the world, because it's obviously going to happen where it's cost-effective and
00:57:52
Speaker
where the technology is and things like that, right? So guess what happened, and maybe you guys already know the answer. What they found out is that they introduced bias into their training data, because, for example, they didn't have enough data of people in wheelchairs, or they didn't have enough people of color, or they didn't have enough people who didn't fit the traditional mold or demographic in the places they trained on. So, given the training data they were training on, those cars couldn't recognize some things.
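(The kind of bias Chris describes can at least be surfaced with a simple coverage audit over the labeled data, as in this sketch. The category names and the count threshold are illustrative assumptions, not anything from an actual autonomous-driving pipeline.)

```python
from collections import Counter

def coverage_report(labels, required, floor=500):
    """Return required categories whose labeled-example count falls below the floor."""
    counts = Counter(labels)
    return {cat: counts.get(cat, 0) for cat in required
            if counts.get(cat, 0) < floor}

# Hypothetical label stream from a curated driving dataset.
required = ["pedestrian", "wheelchair_user", "cyclist", "deer", "cow"]
labels = (["pedestrian"] * 12000) + (["cyclist"] * 3000) + (["wheelchair_user"] * 40)

print(coverage_report(labels, required))
# {'wheelchair_user': 40, 'deer': 0, 'cow': 0} -> underrepresented categories
```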
00:58:19
Speaker
Well, those are the kinds of things that are hard to catch, too, because you have to know what you're looking for if you're going to figure out that there's some sort of bias you weren't expecting to introduce in the first place. And so that's what they're talking about in that regulation. They're saying, hey, we've got to be fair and non-discriminatory in the way that we do AI, and think about it and be cognizant of that, because it can have real effects. Go ahead, I think you were going to say something. No, I was just going to say that.
00:58:49
Speaker
Those self-driving cars work great in Palo Alto and San Francisco. But if they go somewhere else, they're not trained on that. That's right. How many cows have they seen? Or jumping deer or things like that, right? Or Philadelphia drivers, right? Or New York cabbies. So Chris, it's the beginning of a new decade here, and we just did our
00:59:15
Speaker
episode on predictions, which are all 100% correct, by the way, if you want to go back and listen to that. So far. So we would love to hear from you: what do you think is exciting about the upcoming decade? Do you have any bold predictions?

Predictions for Mars Landing

00:59:31
Speaker
Yeah, my bold prediction is that I don't think it's gonna take until the mid-2030s to get humans on Mars. Oh wow. Yeah, I believe, you know,
00:59:42
Speaker
My prediction there comes from looking at the advancements in propulsion, the advancements in computing, the advancements in AI and machine learning and things like that, our ability to understand big data and environments, and just the people and the vision to get there.
01:00:07
Speaker
I talk to too many people of my generation, and I just get the feeling, Joe and Rolf, that you guys are also my generation. I talk to too many people where we did not experience, you know, boots on a planet or on some celestial body. You know, you talk to people who grew up in the 40s and the 50s, or even the 60s, who could remember that era and, you know, the space race.
01:00:32
Speaker
And you talk to too many people, and it's like, oh yeah, robots and things. I love robots. I mean, we want to send more; it's in my interest to do it. But, you know, to have the vision, to see explorers on another planet... for me, I really believe there are many, many, many of us, you know, and entrepreneurs, public-private partnerships, people like Elon and others, who want to see it in their lifetimes, and even as soon as possible.
01:00:59
Speaker
And so that's a prediction I have. There's a lot to be figured out; I don't want to make a claim that's unfounded. And actually, it's only five years ahead of when NASA is planning to do it, in the mid-2030s. But I think we could, if we set our minds to it, get there by the early 2030s. That's a prediction I have.

Conclusion and Thanks

01:01:20
Speaker
Well, OK. Chris Mattmann, thanks a lot for being with us today on Cognation.
01:01:25
Speaker
Thanks for having me, Rolf and Joe. What a fabulous time.