Introduction and Guest Welcome
00:00:12
Speaker
Hello Basement Programmers and welcome. This is the Basement Programmer Podcast. I'm your host, Todd Moore. The opinions expressed in the Basement Programmer Podcast are those of myself and any guests that I may have and are not necessarily those of our employers or organizations we may be associated with.
00:00:29
Speaker
Feedback on the Basement Programmer podcast, including suggestions on things you'd like to hear about, can be emailed to me at todd at basementprogrammer.com. And I'm always on the lookout for people who would like to come on the podcast and talk about anything technology related. So drop me a line. And now for this episode.
Guest's Background and Recent Activities
00:00:49
Speaker
Welcome to the latest episode of the Basement Programmer podcast. The podcast has been quiet for the last few months, mainly due to a lot of changes going on. But I'm back and up and ready to record and bring you regular content. It's only fitting that I re-kick off the podcast with a friend and fellow .NET developer, Ty Augustine. Ty has been a supporter of one of my other passion projects, Hour of Code, for the last few years, and an all-around awesome guy. Welcome, Ty.
00:01:18
Speaker
So, Ty, last time we got together was when we were both presenting at Boston Code Camp. So what have you been up to since then? In between all that, you know, my son's also starting up his basketball season at school. So between work, home life, projects, and school basketball, my time has been all tied up.
00:01:43
Speaker
Well, keeping busy, I see.
AI Innovations at AWS re:Invent
00:01:46
Speaker
So it's December, which means AWS re:Invent has just happened and probably about 100 things have dropped. This year's re:Invent was kind of the year of AI. It seems like everything had something to do with AI or had an AI spin on it. So, Ty, walk me through some of the highlights from your perspective. Let me just start by giving you a little background about myself.
00:02:11
Speaker
My name is Ty Augustine. I'm a solutions architect at AWS, and I specialize in Microsoft technologies, specifically .NET and SQL Server, with a focus on migration and modernization. And, you know, before coming to AWS, I was a .NET stack architect, developing enterprise applications for the life science industry. And I did that for about 20 years.
00:02:39
Speaker
While I loved developing for all those years, I got to a point in my career where developing software can just be mentally draining. I mean, you know that, Todd. It's constant problem solving. And, you know, don't get me wrong. Programming is still one of my biggest passions, but I just got to a point where
00:03:01
Speaker
I felt like I'd enjoy it more if I were doing it as a hobby or for fun. Then an opportunity for a .NET specialist, your old role in fact, opened up on the Microsoft team, and here I am, and I'm loving my job. Funny enough, I actually interviewed Ty for his job and definitely wanted him on the team. Yes.
AI's Impact on Cloud Services
00:03:30
Speaker
So, you know, re:Invent this year talked a lot about, you know, making the cloud services faster, cheaper, easier to use, and more reliable. Surprise, surprise. Nice, but not too surprising. You know, that's pretty much the goal every year.
00:03:50
Speaker
The one thing that should really catch everybody's attention, though, is how big a deal they're making of generative AI, generating code and more from just putting in simple prompts, which is still super new. But AWS is going all in and trying to bake this into their products. And the CEO, Adam Selipsky, even said that Gen AI is still in its nascent stages.
00:04:20
Speaker
So, you know, while some of it still sounds kind of sci-fi-ish and fun for now, it's going to get crazier and it's going to be part of our everyday lives. And it's kind of wild if you think about it, right? So anyway, it's clear that AWS is making a bet on, you know, where computing is going, and they want to make sure Amazon stays on top as the best and the biggest cloud provider along the way.
00:04:50
Speaker
So, you know, before we kind of jump into some of the generative AI re:Invent announcements that I found
Migrating .NET to Linux
00:05:02
Speaker
interesting, I think there are a lot of other announcements that, you know, as a developer, I found peripherally interesting. And one of them is in the area of compute, especially, you know,
00:05:18
Speaker
as .NET developers, because part of developing, at least for me, is I always want to just kind of stay on top of the stack. And staying on top of the stack often means, you know, getting to the next version, keeping up with the latest stuff, and modernizing. And, you know, that modernization journey also includes freeing yourselves from Microsoft licenses,
00:05:45
Speaker
moving .NET Framework workloads to modern .NET, or .NET Core, and then, you know, moving that application from running on Windows to Linux. Which, by the way, gets you this huge Microsoft license freedom, which equals a huge cost savings.
00:06:05
Speaker
And then finally getting that application to run on ARM-based processors for additional, you know, price-performance savings. And, you know, the Graviton journey is really a great story. You know, back in 2018, AWS rolled out its first homemade chip, called Graviton,
00:06:24
Speaker
just to kind of show that cloud workloads could run on ARM processors, not just the, you know, old-school Intel ones. And, you know, people kind of scratched their heads at first, and they were like, yeah, okay, you know.
00:06:40
Speaker
It works pretty good, I guess. But then in 2019, a year later, AWS dropped Graviton2 on us, and it kicked way more butt, with huge improvements over the first gen. And after that, we knew that ARM wasn't just a science project anymore.
00:07:02
Speaker
And in 2021, they announced Graviton3, which, you know, was about 25% faster than its predecessor. And now, just recently, AWS revealed Graviton4 in preview as the latest member of the chip family.
Advancements in AWS Graviton Chips
00:07:21
Speaker
And this baby brings another 30% in extra performance over the already beastly Graviton3.
00:07:30
Speaker
And at re:Invent, Adam Selipsky called it the most powerful and energy-efficient chip that we've built thus far. And, you know, I'm not going to lie. I mean, it feels like
00:07:43
Speaker
with every Graviton drop, it's like it leveled up, you know, like it glowed up from the lab with all this magical cloud juice. So, I mean, pretty soon you're probably going to see ARM chips, you know, running the whole dang cloud, which is pretty cool.
00:08:04
Speaker
There's just some pretty impressive performance increases there. You're talking like 30%, 40%, you know, just iteration after iteration. That's pretty amazing. And there was, what is it, a new instance that I think came out with it too, the R8g?
00:08:30
Speaker
And with this instance, it's like triple the vCPUs and memory over the previous generation. So, I mean, you're talking about some serious... these are really beefy instances for memory-intensive workloads,
00:08:46
Speaker
like in-memory databases. It's pretty cool stuff. And then another one of the announcements was another EC2 instance, the U7i.
00:09:01
Speaker
And this thing is freaking huge. I mean, it's like 32 terabytes of DDR5 RAM. It's powered by the latest fourth-generation Intel Xeon chips. And with this, you're looking at around 125 percent
00:09:21
Speaker
faster performance versus the previous U-1 instances. And these things can scale up to 896 vCPUs, and that's the most of any instance type at AWS. In the whole AWS cloud arsenal, this thing is like massively parallel processing power. And if that's not enough,
00:09:49
Speaker
the U7i gives you up to 100-gigabit bandwidth to kind of blast that data between your storage and memory.
00:10:03
Speaker
That's astonishing. I remember when the X1s came out and thinking, wow, this is the biggest thing ever. And now the X1 is like a baby compared to these things. These things are close to supercomputers, I mean, before we know it. So these instances are good for in-memory databases like SAP HANA, Oracle, or SQL Server.
00:10:29
Speaker
So when you're thinking about crunching those transactions at scale, this instance right here is your huckleberry. I thought some of those, like the Graviton and this beefy instance, were really interesting from the developer standpoint on the compute side.
00:10:55
Speaker
Yeah, you could run my code on that and have it perform.
Aurora's Limitless Database
00:11:00
Speaker
And then on the database side, there were some really interesting announcements. There was Aurora, which got a crazy new upgrade called Limitless Database that kind of blew up their scaling game. I mean, we're talking about auto-growing a single Aurora database
00:11:24
Speaker
to process millions of writes per second and manage petabytes of data, which is pretty wild. And before, you could scale up Aurora read traffic with read replicas, but the write capacity and the storage were still capped per database instance.
00:11:46
Speaker
Limitless kind of tosses those limits out the window. Now, you know, with Aurora, you can just keep plopping down more compute and storage for your database behind the scenes. And the best part is that whatever capacity you were already using from your regular reader and writer, it kind of just adds to that. So, you know, more resources equal more power.
00:12:15
Speaker
But for real, though, this takes away the headache of trying to build your own janky solution for splitting data and workloads across multiple database instances when you outgrow one box. Now Aurora will just kind of handle that mess and automagically grow for you. So it's pretty cool.
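To make that point concrete, here's a minimal sketch (not from the episode, and with a made-up cluster endpoint, table, and credentials): the developer-facing code against a Limitless-enabled Aurora PostgreSQL cluster looks like any other PostgreSQL write, while Aurora distributes the data across shards behind the scenes.

```python
# Minimal sketch: writing to an Aurora PostgreSQL Limitless cluster endpoint.
# The endpoint, database, credentials, and table below are hypothetical placeholders;
# the point is that the application code is an ordinary PostgreSQL insert, and the
# sharding happens on the Aurora side.
import psycopg2

conn = psycopg2.connect(
    host="my-limitless-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",  # hypothetical
    dbname="orders",
    user="app_user",
    password="app_password",
    port=5432,
)

with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO orders (order_id, customer_id, total) VALUES (%s, %s, %s)",
        ("ord-1001", "cust-42", 129.99),
    )
conn.close()
```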
00:12:40
Speaker
That is cool because it takes away all that headache of trying to manage the database and just takes it to the next level of, like you said, just automatic.
00:12:55
Speaker
And then there were some interesting things that kind of happened on the serverless side as well.
Serverless Caching Solutions
00:13:00
Speaker
So, you know, AWS also launched a crazy simple serverless caching option for ElastiCache. And we're talking about spinning up
00:13:11
Speaker
an auto-scaling Redis or Memcached cluster in like 60 seconds, which is no joke, you know? And how it works is that ElastiCache Serverless kind of keeps an eagle eye on your app's compute, memory, and network usage. And as soon as your workload patterns change and the app needs more capacity,
00:13:37
Speaker
resources scale up instantly behind the scenes. No configuring the capacity ahead of time
00:13:47
Speaker
and no manual adjustments needed. And get this: with multi-AZ automatic failover, your cache can stay highly available even if a whole zone goes down, with a 99.99 percent uptime SLA. And then the best part is the pricing. Since it's serverless,
00:14:14
Speaker
you only pay for the cache storage and the compute resources that are actually used by your app, per second. So there's no wasted capacity, no unused reservations, no upfront costs. You just kind of consume now and pay later. Yes, but hopefully pay a little bit less later than it would be if you provisioned it yourself. Exactly.
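As a rough sketch of what that looks like from code (assuming a recent boto3 that includes the serverless cache operations; the cache name is made up, and in practice the cache takes a minute or so to become available before you can connect):

```python
# Hedged sketch: create an ElastiCache Serverless cache for Redis, then connect to it.
# "demo-cache" is a made-up name; in a real script you would poll until the cache
# status is "available" before connecting.
import boto3
import redis

elasticache = boto3.client("elasticache", region_name="us-east-1")

# No node types or capacity to size up front.
elasticache.create_serverless_cache(
    ServerlessCacheName="demo-cache",
    Engine="redis",
)

# Look up the endpoint once the cache is available.
resp = elasticache.describe_serverless_caches(ServerlessCacheName="demo-cache")
endpoint = resp["ServerlessCaches"][0]["Endpoint"]

# Serverless caches require TLS, so connect with ssl=True.
r = redis.Redis(host=endpoint["Address"], port=endpoint["Port"], ssl=True)
r.set("greeting", "hello from the basement")
print(r.get("greeting"))
```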
00:14:44
Speaker
So those are definitely some pretty cool additions. Personally, I love the fact that CodeWhisperer now works at the command line for the Mac.
00:14:55
Speaker
CodeWhisperer just got some new updates. That's cool. It now suggests infrastructure as code, like CloudFormation templates and CDK modules, as you type. So CodeWhisperer got some really cool updates there as well. And it got additional language support for languages like Python and TypeScript.
00:15:23
Speaker
Also, CodeWhisperer is coming directly inside Visual Studio, which is cool as well. So I'm looking forward to that. I am too. I heard that.
00:15:37
Speaker
I don't want to ask you anything that's, you know... Other than what was announced, I don't know when that's coming. It's going to be cool to have that come to Visual Studio. And there were some other kind of cool things there as well. I mean, there was this nifty service called Console-to-Code. Did you see that? I think I heard something about that.
00:16:07
Speaker
It's really, really cool. So you can quickly turn your console clicks into reusable code. You know, we all prototype stuff by manually configuring resources in the AWS console.
00:16:21
Speaker
But translating those actions into scripts for production takes some tedious grunt work. So enter Console-to-Code. It kind of watches what you build in the console and automatically generates the equivalent infrastructure-as-code templates as you click. And under the hood, it's using generative AI to spit out CloudFormation, CDK, Terraform, whatever
00:16:48
Speaker
flavor of code you want, you know, and it also follows best practices. So this means that you don't have to choose between, you know, fast console prototyping and robust infrastructure as code anymore.
Console to Code: Generative AI in AWS
00:17:03
Speaker
You kind of get the best of both worlds, with, you know, the console simplicity converted into resilient scripts, which is, I mean, pretty cool stuff. I mean,
00:17:17
Speaker
I mean, turning your prototypes into reusable code is like code magic. So, I mean, it's cool stuff. I'm sitting here laughing because the amount of time that would have saved me over the years is astonishing. Yeah. Whoever came up with that idea and decided to bake it into a product to launch, thank you.
00:17:44
Speaker
Absolutely. I also saw, you know, that CodeCatalyst is getting Amazon Q kind of built in as well. Cool. That'll be good to give us some help there.
Enhancing Coding with Amazon Q
00:18:03
Speaker
I mean, let's talk about Amazon Q. I mean,
00:18:09
Speaker
AWS also revealed this crazy new AI assistant, which we're calling Amazon Q. And this specifically will help with work stuff like coding and troubleshooting and more. The thing is, they powered Amazon Q by letting it binge-read 17 years' worth of AWS documentation,
00:18:36
Speaker
for example. So it's a master at building cloud apps. But here's the cool part: you can connect Amazon Q to your customer's or your company's data, your code base, and your systems.
00:18:59
Speaker
So, you know, you can have these smart conversations to help solve problems using your business's specifics, not just general AWS knowledge. And for developers, this means that Q can also explain your spaghetti code. It can suggest best practices as you architect your apps,
00:19:28
Speaker
upgrade versions, write tests, fix bugs, or even generate, you know, whole new features. And when stuff breaks, Q can help you track down those issues, like network problems, which is way faster than, you know, reading your logs or guessing and checking what the problem is.
00:19:56
Speaker
Okay, Ty, but we're both developers. We always blame the network anyway. It's always the network's fault, not ours.
00:20:05
Speaker
And, you know, another cool thing. Well, just recently, what I've been doing is I've been playing with SageMaker.
S3 Express One Zone for ML and Analytics
00:20:16
Speaker
SageMaker and some of the JupyterLab stuff, that's something I've just been playing with recently. And over the weekend I was doing something with SageMaker and I had to
00:20:28
Speaker
create an S3 bucket or something like that. So I went to S3, and then I noticed this new tab someplace within the interface, and I saw Amazon S3 Express. And I'm like, huh, what's this? So have you seen that? Have you seen that yet?
00:20:50
Speaker
I think I read something about it. Isn't it like a high-performance, single-AZ version of S3, something like that? So they launched a new S3 storage class called S3 Express One Zone. And this is specifically tuned for speedy data access. I mean, we're talking about 10x faster than the regular S3 Standard. And the new option delivers
00:21:20
Speaker
blazing sub-10-millisecond latency for apps that, you know, demand consistently fast performance, like machine learning, analytics, or, you know, media creation tools. And how do they make it so fast?
00:21:38
Speaker
By designing an entirely new S3 bucket type optimized for this screaming-fast data access. And like you said, it's just in one zone, but you still have, you know, the resiliency and the availability and the durability, and it's incredibly, incredibly fast, yeah.
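For the curious, here's a rough sketch of what that new bucket type looks like from code (assuming a recent boto3; the bucket name and Availability Zone ID are placeholders, and directory bucket names have to end with the AZ-specific suffix):

```python
# Hedged sketch: create an S3 Express One Zone "directory bucket" and use it with
# the familiar S3 API. The bucket name and AZ ID below are made-up examples.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "my-express-demo--use1-az4--x-s3"  # directory bucket names end in --<az-id>--x-s3
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "use1-az4"},
        "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
    },
)

# Reads and writes use the same put/get calls you already know.
s3.put_object(Bucket=bucket, Key="training/batch-0001.parquet", Body=b"example bytes")
obj = s3.get_object(Bucket=bucket, Key="training/batch-0001.parquet")
print(obj["ContentLength"])
```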
GenAI in Visual Studio Code
00:22:08
Speaker
And on the developer tools side, Application Composer in Visual Studio Code is now available as part of the AWS Toolkit. So AWS dropped a new visual app builder called Application Composer directly into Visual Studio Code. And we're talking about drag-and-drop
00:22:31
Speaker
CloudFormation stack creation, right inside of Visual Studio Code, which is really, really cool. So you can either design a whole new app from a blank canvas, or you can import an existing CloudFormation template to just kind of poke around. And Application Composer gives you all the standard components, ready to configure and connect with a simple drag and drop.
00:23:00
Speaker
But it goes way beyond the basics. This thing integrates GenAI to instantly suggest resources and settings for over a thousand CloudFormation components. So you can slap together a serverless app faster than ever, even if you're not a YAML expert, like myself. And the best part is you immediately see
00:23:28
Speaker
how the template changes impact your whole architecture while coding. So no jumping between files and diagrams.
00:23:36
Speaker
It's like a live blueprint for your application infrastructure, right inside Visual Studio Code. And the real-time visual feedback helps you spot gaps or issues in your layout super early. So, you know, you don't have to worry about these things, you know, even when you deploy. So, you know,
00:24:02
Speaker
you can put an app together in a few clicks, with really no coding or infrastructure-as-code background. That sounds really cool. It should definitely be a big help for people trying to get started with CloudFormation.
00:24:19
Speaker
Absolutely. So there were a number of new announcements for Bedrock models. And I think, you know, we'll begin to see more of those. They came out with the new Amazon Titan Text models, in the Express and Lite versions, those in different sizes.
00:24:42
Speaker
There was also Amazon Titan Embeddings. So if you want to do vector searches, you can create those vectors from these embeddings. Then Stable Diffusion XL 1.0 from Stability AI came out. And it was also announced
00:25:11
Speaker
that Llama 2 70B from Meta is now on Bedrock. They also upgraded the Claude version from Claude 2 to 2.1 from Anthropic.
00:25:27
Speaker
You kind of see that, you know, AWS is always giving customers the freedom to choose different models, because, you know, this is still in its early stages and there's not going to be one model that kind of does everything.
Diverse AI Models with Bedrock
00:25:47
Speaker
So, you know, for your use case, you're going to have to kind of use different models for different things. And it's good that AWS gives you the choice of using these different models.
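To show what that choice looks like in practice, here's a small, hedged sketch: the same Bedrock InvokeModel call pointed at two different model families. The prompts are made up, and each family expects its own request body shape.

```python
# Hedged sketch: calling two different Bedrock models through the same InvokeModel API.
# Model IDs reflect what was available around re:Invent 2023.
import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic Claude 2.1 uses the prompt/completion text format.
claude_body = json.dumps({
    "prompt": "\n\nHuman: Explain Graviton in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})
claude = bedrock.invoke_model(modelId="anthropic.claude-v2:1", body=claude_body)
print(json.loads(claude["body"].read())["completion"])

# Amazon Titan Text Express uses a different body shape for the same kind of task.
titan_body = json.dumps({
    "inputText": "Explain Graviton in one sentence.",
    "textGenerationConfig": {"maxTokenCount": 200},
})
titan = bedrock.invoke_model(modelId="amazon.titan-text-express-v1", body=titan_body)
print(json.loads(titan["body"].read())["results"][0]["outputText"])
```

Swapping models is mostly a matter of changing the model ID and the body format, which is the flexibility being described here.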
00:25:57
Speaker
Yeah, I was seeing a lot of that with my customers; they're appreciative of the fact that Bedrock is giving them the ability to make choices and not throw everything behind one specific model. So I think Bedrock is really cool in that area. So one of the things that was also announced was PartyRock, an Amazon Bedrock Playground.
Showcasing Community AI Apps
00:26:26
Speaker
Did you see this?
00:26:28
Speaker
I've seen some people on social media throwing together apps really quick. It's pretty cool. It's like a showcase for some of the coolest AI apps that are built
00:26:43
Speaker
by the community. And, you know, I didn't even know about this until I saw it this time around, and I was like, oh, wow, PartyRock? And there's a Discover page that kind of curates the most viewed and remixed and shared creations made with Amazon Bedrock. And we're talking about no-code tools.
00:27:07
Speaker
It's been kind of mind-blowing just to see that there are, you know, 20,000 unique AI-powered web apps made in just a few days since it was launched. And PartyRock lets anyone intuitively build stuff like music generators, digital pets, and more with, you know, simple drag and drop,
00:27:34
Speaker
and the Discover page surfaces the best community inventions for inspiration. And you can just go in there and kind of poke around. It's pretty cool. That sounds really cool. I mean, 20,000, and re:Invent was, what, about two weeks ago, I think? So that's really impressive adoption there.
Integrating SaaS with Step Functions
00:28:01
Speaker
Integrating Step Functions with SaaS apps just got way, way simpler. So Step Functions now supports HTTPS endpoints right out of the box. So this means that you can call your REST APIs and your webhooks
00:28:21
Speaker
from SaaS tools like Stripe and Slack directly within your serverless workflows. So this is really, really cool, that you're able to do this within your step function.
00:28:35
Speaker
There's also this TestState API that lets you inspect the raw HTTP requests and responses as you build. You know, so between the SaaS integration and the debuggability, Step Functions continues to take the pain out of, you know, stitching together services into a resilient workflow,
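Here's a rough sketch of the two pieces together (the Slack URL, connection ARN, and role ARN are placeholders, and this assumes a boto3 recent enough to include TestState): an HTTPS endpoint Task state defined as plain ASL, exercised once with TestState so you can see the raw request and response.

```python
# Hedged sketch: an HTTPS endpoint Task state (ASL as a Python dict) plus the new
# TestState API. All ARNs and the webhook URL below are made-up placeholders.
import boto3
import json

http_task = {
    "Type": "Task",
    "Resource": "arn:aws:states:::http:invoke",
    "Parameters": {
        "ApiEndpoint": "https://hooks.slack.com/services/T000/B000/XXXX",
        "Method": "POST",
        "Authentication": {
            "ConnectionArn": "arn:aws:events:us-east-1:123456789012:connection/slack/abc"
        },
        "RequestBody": {"text": "Deployment finished"},
    },
    "End": True,
}

sfn = boto3.client("stepfunctions", region_name="us-east-1")
result = sfn.test_state(
    definition=json.dumps(http_task),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsHttpRole",
    input=json.dumps({}),
    inspectionLevel="TRACE",  # TRACE surfaces the raw HTTP request and response
)
print(result["status"])
```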
00:28:58
Speaker
which is really cool. And along the lines of Step Functions, Step Functions also got some integration for Bedrock. So they dropped a native integration with Bedrock,
00:29:14
Speaker
which means that you can now visually orchestrate your Bedrock foundation models and your human review steps without requiring any glue code. You simply drag and drop the InvokeModel or, what is it, the CreateModelCustomizationJob APIs right into your workflow.
00:29:38
Speaker
So, again, it's very, very cool stuff. And with Workflow Studio, you kind of get a bird's-eye view into the end-to-end lifecycle. Step Functions continues to bring all the magic and the coolness. Let's see, SageMaker. Do you play with SageMaker?
Optimizing ML with SageMaker
00:30:02
Speaker
Have you played with SageMaker much?
00:30:05
Speaker
I have not. It's one of those things I probably should, but I never really got to it at this point. Like, just recently, I've been playing with it, because as much as the Bedrock APIs are there for .NET, you can only go so far if you're doing ML, because a lot of the ML stuff is still in Python.
00:30:34
Speaker
So I've been playing with SageMaker and Python and JupyterLab. And it's very, very cool stuff. There's this new trick for training models, which kind of makes it way faster and cheaper. It's a smart data sifting capability, in preview, that automatically allows you to filter out uninformative training
00:31:02
Speaker
samples on the fly. So essentially, this means that SageMaker takes a peek at each data point in real time
00:31:12
Speaker
and only processes the examples that are the most useful for optimizing the model. So this means that you can cut down on the noisy or redundant data bloating your training sets without extra work. SageMaker handles the selection under the hood while preserving accuracy. So early customers that have been using this have seen 35% drops
00:31:40
Speaker
in training costs and hours using this smart sifting technology, with no changes to their workflow required. All optimizations happen based on the dynamics of the model and their data. That's cool. I love the fact that AWS pulls out these things and gives them to customers to just say, hey, pay us less money, please.
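For intuition only, here is a toy illustration of the underlying idea. This is not the SageMaker smart sifting library and doesn't use its API; it just shows a training loop that scores each sample by loss and spends its compute on the harder examples.

```python
# Conceptual illustration only: score each sample by loss and skip the ones the
# model already handles well, so gradient updates focus on informative examples.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss(reduction="none")  # per-sample losses

x = torch.randn(256, 10)            # toy features
y = torch.randint(0, 2, (256,))     # toy labels

for xb, yb in zip(x.split(32), y.split(32)):
    with torch.no_grad():
        per_sample = loss_fn(model(xb), yb)   # peek at each sample's loss
    keep = per_sample > per_sample.median()   # keep only the harder half
    if keep.sum() == 0:
        continue
    optimizer.zero_grad()
    loss = loss_fn(model(xb[keep]), yb[keep]).mean()
    loss.backward()
    optimizer.step()
```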
00:32:07
Speaker
It was always one of my favorite parts of the job.
00:32:17
Speaker
I mean, that's, yeah, because you can't increase prices, right? So you always have to lower prices. And with the continuous innovation, we kind of just pass those savings on to the customer. Yeah, which is great. There have also been other improvements within SageMaker Studio,
00:32:43
Speaker
which is the SageMaker IDE, which kind of got a major glow-up and improved options for data scientists and developers. First, they amped up the IDE choices. Now you can get a Code Editor that's kind of based off of VS Code. There's a faster JupyterLab,
00:33:06
Speaker
and RStudio, and they all use the same interface now. So you use what fits your flow. There's no real difficult context switching. And this lets data scientists stick to notebooks for exploration and then pass the baton
00:33:25
Speaker
to the engineers to handle in, you know, the Code Editor. And this is all within Studio's hub, which is really, really nice. It's one connected workflow. And the IDEs open up in full-screen browser tabs. So, you know, it takes up the whole screen's real estate. No more squinting at, you know, squeezed panels or anything like that.
00:33:56
Speaker
Yeah, my eyes aren't getting any better. And another thing that's really cool that they've done
00:34:08
Speaker
with JupyterLab and SageMaker is you can now bring your EFS volumes, your Elastic File System volumes, to the JupyterLab and Code Editor spaces in Amazon SageMaker Studio. So this means that you can now mount your own EFS storage volumes directly to your JupyterLab environment.
00:34:35
Speaker
So, you know, you can now share data sets and libraries from other code bases across, you know, your whole team without constantly shuffling files around or downloading them to notebooks.
AWS Cost Management Tools
00:34:55
Speaker
Cool. That sounds like a good time saver.
00:34:58
Speaker
Absolutely. Again, everybody now gets to read and write from within their coding environments. There are no more git pulls, which is what people were doing, or manual uploads before running their experiments.
00:35:17
Speaker
On the AI/ML and vector database side, we were talking embeddings. Amazon MemoryDB for Redis now supports vector search in preview, and Amazon DocumentDB
00:35:37
Speaker
also now supports vector search. So this is kind of big news for anyone wanting crazy fast vector search with their in-memory Redis databases. I mean, this is a preview right now, but with this support in MemoryDB,
00:36:01
Speaker
their multi-AZ, durable Redis offering, you get search. So you can now store your indexes and query millions of vectors and embeddings at lightning speed, and in your durable DocumentDB clusters as well.
00:36:18
Speaker
And if you're wondering what vectors are, vectors are basically numerical representations of unstructured data like text. And they capture the deeper semantic meaning of strings to power, you know, the cutting-edge AI recommendation engines
00:36:42
Speaker
and searches, and you can generate those vectors using Amazon SageMaker or Bedrock or some other machine learning service. Cool. That really kind of drives the ability to create all this wonderful stuff that Gen AI is doing for us these days, right?
00:37:04
Speaker
You know, if you're building a RAG implementation, you know, retrieval-augmented generation, if you're using one of these systems with generative AI, what you usually do is you put something into your prompt, and that prompt, that string, gets converted into an embedding, which is this numerical vector that we were talking about. And you take these embeddings and you put them into
00:37:33
Speaker
a database store. And in this particular case, you know, you can save this data into Redis or Mongo. And then when you do a query, you're doing a query against these vectors, which makes it really, really fast and performant.
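As a hedged sketch of that flow: turn text into an embedding with a Bedrock Titan embeddings model, then rank stored vectors by similarity. A real system would keep the vectors in MemoryDB, DocumentDB, or another vector store; the tiny in-memory list here is only to show the shape of the data.

```python
# Hedged sketch of the RAG retrieval step: embed text with Bedrock Titan embeddings,
# then find the closest stored chunks by cosine similarity. The document chunks and
# query are made-up examples.
import boto3
import json
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"])

# Pretend document store: text chunks and their embeddings.
docs = ["Graviton4 is in preview.", "ElastiCache Serverless scales automatically."]
doc_vectors = [embed(d) for d in docs]

# Embed the user's prompt and rank documents by cosine similarity.
query = embed("Tell me about the new ARM chip.")
scores = [
    float(np.dot(query, v) / (np.linalg.norm(query) * np.linalg.norm(v)))
    for v in doc_vectors
]
print(docs[int(np.argmax(scores))])  # the chunk you'd stuff into the LLM prompt
```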
00:37:52
Speaker
One of the things that I... this isn't quite a developer thing, but, you know, sometimes even when I'm playing within my own personal account, costs kind of run away from you if you're not watching them. So one of the cool things that also came out was the Cost Optimization Hub.
00:38:18
Speaker
And this cost management feature helps you consolidate and prioritize your cost optimization recommendations across your AWS organization, member accounts, and regions, and it helps you figure out how you can get the most out of your AWS spend. So with it, you can easily identify, filter, and aggregate
00:38:44
Speaker
over 15 types of AWS cost optimization recommendations, like, you know, EC2 rightsizing recommendations, or Graviton migration recommendations, or idle resource recommendations, or Savings Plans recommendations, across accounts and regions, through a single dashboard. So, you know, I thought that was kind of cool.
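For completeness, a very rough sketch of pulling those recommendations programmatically. This assumes a recent boto3 that exposes the Cost Optimization Hub service; the response field names are from memory and may differ, which is why the lookups are defensive.

```python
# Hedged sketch (assumption: boto3 exposes "cost-optimization-hub" with
# list_recommendations): print the consolidated recommendations and their
# estimated monthly savings. Field names may differ from what's shown here.
import boto3

coh = boto3.client("cost-optimization-hub", region_name="us-east-1")

resp = coh.list_recommendations()
for rec in resp.get("items", []):
    print(
        rec.get("actionType"),             # e.g. rightsize, migrate to Graviton, stop idle resource
        rec.get("currentResourceType"),
        rec.get("estimatedMonthlySavings"),
    )
```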
00:39:12
Speaker
That definitely sounds like it would be helpful. For years, you've been able to look at an individual account in an individual region, but to have that consolidated view sounds really cool and really, really helpful.
00:39:28
Speaker
There were some cool announcements across cloud operations as well. Things like CloudWatch now supporting data queries across multiple sources, which enables you to gain visibility across your hybrid and multi-cloud metrics in a single view. So I thought that was really, really cool.
00:39:55
Speaker
So I gotta ask, of all the announcements that came out, what's your favorite one? What are you looking to play with the most?
00:40:03
Speaker
That's a good one. Well, I've already been playing with it. It's probably JupyterLab in SageMaker Studio, all the new things that are in there. I didn't know what was there before versus what's there today. So I know that they made all these updates within SageMaker Studio.
00:40:28
Speaker
So, I don't know what it looked like before, but right now that's my big thing: the stuff that's within SageMaker. Well, it's been an absolute pleasure to catch up with you and hear about all these announcements. I mean, I have to admit, I've probably only
00:40:47
Speaker
heard about half of them. So it's always a pleasure to talk with you about this. I've got to say, Christmas is coming up. Any plans for a Christmas break?
00:41:03
Speaker
You know, every year there's a basketball tournament that happens, a travel tournament, just a little bit after Christmas. Last year, I don't know if you remember, on the East Coast there was this huge winter storm, ice storm, snowstorm. We were in North Carolina and we were kind of stuck
00:41:27
Speaker
in North Carolina during that storm, and travel during those few days was close to impossible.
00:41:38
Speaker
But, you know, we got out of there safely and no one was injured. This year the basketball tournament is in the Albany, Troy, New York area. So, you know, we'll be going up there for a couple of days. But aside from just those couple of days, that's it. That's all I'm doing. How about yourself?
00:42:01
Speaker
Nah, I'm staying at home and doing some studying and covering for other people who are going to be taking vacation. So yeah, nothing major going on there.
00:42:19
Speaker
All right, well, thank you for being part of the podcast. As I said, it's been a while since I've produced an episode, so it's great to have you back as my first guest in what will hopefully be some regular appearances. I want to say thanks for having me here. I had a blast speaking with you. Yeah.
00:42:49
Speaker
Well, anytime you want to come back and talk about tech on the podcast, I would love to have you. Thanks for listening to this episode of the Basement Programmer Podcast. I really appreciate you tuning in. And if you have any feedback, comments, of course, send me an email. Also, please consider subscribing. It lets me know that you're enjoying this production. I'm looking forward to you joining me for the next episode of the Basement Programmer Podcast. In the meantime, take care, stay safe, and keep learning.