
#7 - Datomic with Robert Stuttaford

defn
A roving tour of Datomic with the pioneering Mr Stuttaford. For detailed show notes please check the web site https://defn.audio/2016/08/10/episode-7-datomic-with-robert-stuttaford/
Transcript
00:00:27
Speaker
Okay, well, for some reason I was laughing at the beginning of this programme and I think it must be fucking annoying.

Introductions and Birthday Revelations

00:00:35
Speaker
Anyway, hello, welcome to episode seven of defn. Ray McDermott in Belgium and Vijay Kiran in Holland. Hi, Vijay, how are you doing? Hello, Ray, I'm doing pretty good and it's my birthday. No shit. Yes. Sorry to swear at the beginning of the podcast, but fuck it, we're started now.
00:00:59
Speaker
Well, congratulations. So you've inked, have you? Yes, I have aged a bit. Yes. But that's pretty cool. Anyway, let's get on to the episode then. It's not about me. So we've been... It's partly about you. I think we got...
00:01:15
Speaker
Sorry? It's partly about you. Yeah, it's all about us. The fact that you had a birthday is awesome, mate. You're still alive. You should be happy about that. And I think all of the listeners should just take a moment to celebrate your existence. Yes, that's true. But it's not really a national holiday or anything, but I hope one day it will be.
00:01:35
Speaker
Well, at least today it's a national holiday and the entire European Union is celebrating Sundays. Absolutely. I'm very happy for that.

Addressing Technical Issues

00:01:44
Speaker
It's a sacred day in my week. It's on the 7th. Oh yeah, yeah. Yes, so it's a magical day.
00:01:56
Speaker
Let's get it over with. And first of all, I think last time we had some snafu in terms of recording the audio. So we apologize for that. And we figured it out and we fixed the bug now. We won't say whose fault it was.
00:02:12
Speaker
We're not going to issue a very long root cause analysis here. We're not going to be open about this. We've just fixed it. Try again. It's much better. So it should be way, way better now. And just we'd like to give a quick update on the news and events.

Community Events Announcement

00:02:26
Speaker
And there was a new event, or there is a new event, on the 6th of October run by the JUXT guys, who are very popular in the Clojure community.
00:02:37
Speaker
They built yada, I think, the web framework, and they've contributed lots of libraries, and of course, Jon Pither. He had amazing blogs while he was working at a bank and working at a newspaper, I think, the Daily Mail or something. So those guys are having an event on the 6th of October. That's a full-day event near London. So you guys should check it out, and we'll probably put the links in the
00:03:02
Speaker
doobly-doo of the show notes. And of course there is EuroClojure coming up on the 25th and 26th of October. I think a couple of days ago the call for proposals closed, so on the 17th they'll probably announce the whole speaking lineup. We are very excited and of course we'll be there as well. Hopefully we'll be recording some interviews or something with the speakers there, fingers crossed.
00:03:28
Speaker
And there is also the Clojure eXchange, that's in December in London, run by Skills Matter. So all these things you can find on Clojure, sorry, on defn.audio with our show notes. So let's get to the topic of the day.

Interview with Robert, CTO of Cognician

00:03:44
Speaker
And so a couple of weeks ago, I went to South Africa and I met one of the prominent Clojure developers in the Southern Hemisphere, I would say.
00:03:52
Speaker
And it is Robert, and Robert is the CTO of Cognician. And he has been one of the, I think, probably the early adopters of most of the new Clojure technologies. So we'd like to give a big, great welcome to Robert. Thanks, guys. Thanks for having me on. And happy birthday, Vijay.
00:04:18
Speaker
Thank you. So Robert, we've been doing this podcast for a long time now. Six whole times. Oh, yeah. In the internet timescale. Yeah. But first of all, I mean, we'd like to ask you a couple of questions, like a bit of understanding of your programming history, you know, where did you come from, and when did you get onto the Clojure bandwagon, so to speak, and why?
00:04:49
Speaker
Sure. So I started programming at the age of about 13 or 14 when I discovered that programming was a thing you could do. I was very fortunate to have computers as a child. I was very, very fortunate indeed. And I pretty much started working directly after high school. I never really went to college or anything like that. I jumped straight into doing things in anger. And that's pretty much 20 years ago now.
00:05:16
Speaker
And yeah, I've basically been a career programmer ever since. And in terms of Clojure, I discovered Clojure about four years ago, three, four years ago now. I'm actually not entirely sure it's been so long, it feels like it's been so long. And I'll be honest, I can't imagine moving on from Clojure. I'm just enjoying it so much. And it's just, it just works. Yeah, just works for us.
00:05:40
Speaker
But so you're the CTO of this company, so obviously you have more leeway in terms of deciding which technologies to use. Did you find any resistance? Because this is one of the problems that keeps coming up in adopting new technologies at companies.
00:05:55
Speaker
Indeed. I was very fortunate when Clojure actually entered my life. It was at a very key point in our company history. We had just successfully built our version 1 using technologies I will not mention.
00:06:12
Speaker
We were really just looking for something to take this to scale, to build this the right way in quotes, rather than the dirty and fast way we had done it before. I had looked at lots of different technologies, and basically the tentative plan was, well, we'll just do it with Rails, and we'll use MongoDB as our database. That shows you how long ago this was.
00:06:36
Speaker
and discovered Clojure and found that it just ticked so many boxes so quickly that I was able to build prototypes very quickly, get my head around the way to think about building these programs quickly. And it's been pretty much a love affair ever since.
00:06:58
Speaker
Yeah. So you were saying MongoDB, and I heard on the internet that MongoDB is the Snapchat of databases. So it basically drops your data all the time. But OK, let's not poke the Mongo guys too much. But so how big is your Clojure team?
00:07:14
Speaker
Right now, our full technical team is 13 people, but a couple of those people are more on the UX and design side and don't have to actually worry about programming so much. So we've basically got 11 people. Some folks are still studying, and so they work for us part-time. Some of us are lifers. So yeah, there's a good 10 of us that are in the mix every day. And are you doing the front end and the back end using a Clojure stack, or have you got different technologies on the front end, Robert?
00:07:43
Speaker
We've got Clojure all the way. So we're basically living the dream. ClojureScript on the front, EDN and Transit over the wire, Clojure on the back and Datomic as the database. Wow. That's pretty cool.
00:08:03
Speaker
So let's start with, I know that, as we discussed when I was in South Africa with you, your entire stack is Clojure, and maybe today's episode we wanted to focus on Datomic. We thought, because you're one of the prominent users of Datomic, or at least prominent in terms of adopting it from the beginning, and you've been using it for a long time already.
00:08:27
Speaker
Sure. So we thought you could share some ideas or how you're using Datomic. But before that, we'd like to see, let's

Adopting Clojure and Datomic at Cognician

00:08:34
Speaker
see what is Datomic. I know Ray has been into Datomic a bit as well, right? You've been trying out and...
00:08:40
Speaker
Yeah, I've looked at Datomic quite a bit, looked at the way that we'd interface with it, whether it works, how we'd operate with it, but I haven't got a full business riding on it, so I'm dangerous to speak about it, but obviously I would defer to Robert in terms of the actual hard facts around it. But it's very interesting to think about, given the fact that Datomic is,
00:09:10
Speaker
is at 0.12 or something now or 0.9. I think it's been at 0.9 some long number for quite a long time. So why did you decide to bet your business on it when things are still evolving quite rapidly? I noticed recently for instance that they've moved to a hard requirement for JDK8 for one of their latest releases.
00:09:32
Speaker
In other words, they're still quite happy to play with it. That's a strong word, but it's not 1.0 yet. They claim it to be production quality. Do you feel it's production quality as well? Are you happy with that decision?
00:09:50
Speaker
Absolutely. So to kind of help you to understand why the decision was so easy for us, when we actually adopted Datomic, our tech team was only three people. And I was one of those three people. So only two other people to convince. And it was actually interesting that one of the other guys had literally just joined our company three or four months prior and had literally just learned Ruby on the job with us. He had come to us from the PHP and the Perl world of all places.
00:10:20
Speaker
And Python was the other language he had. So he had just learned Ruby and just built a whole system with Ruby, and then, you know, I discovered Clojure, fell in love with that, and he had to kind of basically start all over again. But in terms of the problems we were solving and the way we wanted to solve them, we knew that we wanted that accumulate-only, that append-only model of gathering data and working with our data,
00:10:45
Speaker
you know, kind of dealing with your data as an immutable record of fact, rather than, you know, this update-in-place world, as Rich Hickey describes it. We wanted to use the event sourcing model for our data, you know, for our primary interface, because it supports the offline, the occasionally offline or the occasionally online model of clients so well. So there were many kind of boxes that were ticked there. But in terms of choosing Datomic,
00:11:14
Speaker
Basically, we were sitting with Clojure and ClojureScript as these awesome technologies, we were jamming with them, we were building cool stuff, and then we had to put things into a database. And then it wasn't so cool anymore. Then we were dealing with ORMs or JDBC or MongoDB. We were having a laugh at some of the very early commits in one of our oldest repos, where we were noodling around with MongoDB. And it just felt like such a disconnect.
00:11:42
Speaker
We've got this awesome, immutable, functional programming style for all of our stuff, and it's all great. But as soon as we actually record our state, we're putting it into this crazy box. And then, Datomic got released. And we found out it was by the same guys who built Clojure and ClojureScript.
00:12:02
Speaker
And we tried it out and it did exactly what it said on the tin. You know, were there bugs? Of course there were; as with any software, there are bugs. But it fit our minds, it fit our model, and it immediately solved problems for us from day one. And it's continued to solve those problems for us ever since.
00:12:22
Speaker
Yeah, I think let's get into a bit more detail on how you're using Datomic maybe after a couple of minutes. So I saw the videos by Rich Hickey explaining why Datomic is different from other databases. Fundamentally, the time concept is built into the database, so you can query it at any point.
00:12:47
Speaker
And also, the architecture really leads to a separation of concerns, so to speak: the writing part is independently scalable from the reading part. So can you give us some idea, because obviously Datomic is a closed source database, so it's very difficult to poke into it and get an understanding of it; it's primarily from the documentation, at least for me, since I didn't use it in production. So can you give us some idea about the architecture of Datomic and how it differs from other databases?
00:13:18
Speaker
So as with other databases, like MySQL and Postgres and the others, there is still a separate process that you connect to, a TCP connection to a separate process, and what's inside that process is essentially a black box. It's got a very clearly defined API, and you know what it's going to be doing when you give it instructions, but it's still essentially a closed box.
00:13:39
Speaker
Where it differs, though, is in how it splits out reads, writes, and storage. That black box that I mentioned, the Transactor, the only thing it's actually doing is handling writes. Storage is out in one of several storage engines, from one that's built into the Transactor, if you're using it on your laptop, all the way to very big, scalable databases like DynamoDB.
00:14:06
Speaker
And I think the key thing about the architecture is how reads happen. And those reads actually are processed inside your own app process. And, you know, we could spend two hours just talking about why that's awesome and what that does for your architecture. But suffice to say it basically by designing it that way, and the fact that the database is immutable, it gives you many affordances for how to think about, you know, storing long term state.
00:14:34
Speaker
and how your programs work with that data. Yeah. So the data is essentially stored in the indexes, right? I mean, that's what I understood from the documentation so far. Because in the traditional databases, you have the data stored in some binary format, and then you create indexes, and the indexes are separate from the original data. But in Datomic, the data basically is the index. So every time, it is written into four different indexes, right?
00:15:03
Speaker
Yeah, so it's actually five places. So there's the transaction log, which is the place that's kind of our source of truth. So you could lose everything else and keep the transaction log, and you could rebuild everything from there. But in terms of the query model, where reads actually occur, it comes from what are called these covering indexes, where all of the data are actually directly in the indexes, rather than the indexes having references to somewhere else.
00:15:28
Speaker
And as you say, there are four of them, although not all data is in all four indexes. I think only two of them are completely mandatory. Yeah, yeah. And for the querying part of it, there is Datalog. I know a bit of it from history. So how different is it compared to SQL-based interfaces for querying the data, for example? So, I mean, the query
00:15:53
Speaker
It's very easy to think that the read side of Datomic is just all about Datalog, but that's actually not true. Datalog is only really there to ask more complex questions, questions that involve many relationships. Very often, all you're doing is getting a reference to one entity in the database and then reading out some of its attributes, and you don't need Datalog for that.
00:16:15
Speaker
Similarly, you may just be getting a flat list of all of the entities that have a given attribute, and you don't need Datalog for that either. It really feels like you're working with this infinitely big list of datoms in your local memory. And there's a whole bunch of different APIs for dealing with that. But with the programming model, it really feels like you're just dealing with local data rather than some big separate thing out there.
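To make that concrete, here is a rough sketch of those three kinds of read from a peer. The attribute names are made up for illustration, and :user/email is assumed to be declared :db/unique so that the lookup ref works.

```clojure
(require '[datomic.api :as d])

(def db (d/db conn))   ; conn is an existing Datomic connection

;; Direct entity access: no Datalog needed.
(:user/full-name (d/entity db [:user/email "jane@example.com"]))

;; A flat list of every entity that has a given attribute: also no Datalog,
;; just a walk over the AEVT index.
(map :e (d/datoms db :aevt :user/email))

;; Datalog, for questions that join across relationships.
(d/q '[:find ?email
       :where
       [?u :user/friend ?f]
       [?f :user/email ?email]]
     db)
```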
00:16:44
Speaker
I remember some time ago, probably it was Stuart Halloway's video or something, saying that Datalog can work on any type of data. It's not necessarily tied to Datomic. And one of the things that he pointed out is that if you look at the query, the last part is the DB, and you can add multiple other data sources to it. So you can actually query from multiple places and match the data together.
00:17:05
Speaker
So how real is it? I mean, did you use that kind of stuff, do you use that in your applications? We use it and we use it in anger. So when we put our system together, we used two databases. We've since transitioned to using a single database, but for a long, long time, for a good year and a half, there were queries across multiple databases in a single Datalog query. We have done things like query against a history database
00:17:33
Speaker
and the now database in a single Datalog query, and then the speculative queries, where you create a new database that uses a transaction that you haven't committed yet, and you do queries against that and your actual database.
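A rough sketch of both of those tricks, with a made-up :user/email attribute and user-eid standing in for some entity id:

```clojure
;; One Datalog query over two database values: the current view and the
;; history view of the same underlying storage.
(d/q '[:find ?e ?v ?op
       :in $now $hist ?attr
       :where
       [$now  ?e ?attr _]           ; entities that still have the attribute now
       [$hist ?e ?attr ?v _ ?op]]   ; every assertion/retraction of it over time
     (d/db conn)
     (d/history (d/db conn))
     :user/email)

;; A speculative query: d/with applies a transaction to a database value
;; without committing anything through the transactor.
(let [speculative (:db-after
                   (d/with (d/db conn)
                           [[:db/add user-eid :user/email "new@example.com"]]))]
  (d/q '[:find ?v
         :in $ ?e
         :where [?e :user/email ?v]]
       speculative user-eid))
```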

Exploring Datomic's Architecture

00:17:50
Speaker
Basically, it does what it says on the tin. It is completely separate from storage. You can just give it the correctly shaped data and ask that data questions directly.
00:18:02
Speaker
So Robert, I think you mentioned the history and the current database. Are you referring to the same things there?
00:18:09
Speaker
Yes, so the history database is essentially a different view over the same data. Normally, when you take a Datomic database, you're looking at what was true as of a certain point in time. So you only look at the things that have been asserted, added to the database up until that point in time. Whereas if you take a history database, you're saying, give me all of the assertions and all of the retractions; I want to see the history of this entity, show me not only when things were added, but also when things were removed.
00:18:39
Speaker
And so you can do things like build up an activity stream, say, for an entity, and actually list out all of the metadata for each of the things that happened going back in time. So the two databases are aliases for the same data store. Yeah, it's essentially like a SQL view. You're basically building a view over an index, and the index is just arranged differently in the history database,
00:19:08
Speaker
in that you also get all of the retractions. Yeah, okay. So there aren't two separate databases with foo being the current view, the current database, and bar being the history database. In fact, they're just two views of the same Datomic database, the physical Datomic database.
00:19:28
Speaker
Exactly. I think there's maybe some clarification of terms that we can do there. Once you get used to using Datomic, it feels natural to talk about the database, which is the thing that is referenced by the URI, which is the whole thing, and then a database where you take a reference to a particular point in time and you query against that database as a value. If you want to think about the big, all-encompassing thing, you can think of that as the fact store.
00:19:56
Speaker
That's where everything is, the storage. And there are multiple views into that storage, one of which includes retractions, which is also known as the history database.
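A small sketch of those views, again with a made-up :user/email attribute and user-eid standing in for an entity id:

```clojure
(def db   (d/db conn))      ; the "now" view
(def hist (d/history db))   ; same storage, assertions and retractions included

;; Every value :user/email has ever had for this entity, whether each datom was
;; added or retracted, and when. The fifth position in the pattern binds added?.
(d/q '[:find ?v ?added ?inst
       :in $ ?e
       :where
       [?e :user/email ?v ?tx ?added]
       [?tx :db/txInstant ?inst]]
     hist user-eid)

;; And the related trick: a view of the database as of some past point in time.
(d/as-of db #inst "2015-01-01")
```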
00:20:06
Speaker
So Robert, as far as schema is concerned, what do you do with that kind of stuff? Because obviously in Datomic, you have to declare the schema at the beginning of your database life. You have to define what the types of the data are. And compared to Mongo and other kind of schema-less databases, that seems to be more of a barrier to entry. Obviously, you have to
00:20:32
Speaker
evolve things as you add data, as you add columns and things like that. Maybe there are attribute changes. How do you find those aspects of the Datomic schema?
00:20:43
Speaker
So some of those things are, you're right, some of those things are actually better to deal with now. When we first adopted Datomic, once you made a schema attribute and gave it all of its values, its controlling values, that was it: you could not change it, it was going to stay that way for the rest of time. A release of Datomic, I think middle of last year somewhere, started supporting schema alteration,
00:21:09
Speaker
where you can actually start to alter certain facets of schema. They did not allow you to change absolutely anything, but they did give you an escape hatch, which is to be able to rename schema.
00:21:23
Speaker
And so the absolute worst case, which would introduce a lot of transactor overhead, is that you can basically make a new schema attribute that does the things the way you need to and transact all of the values over from the old one. But of course, the disadvantage to doing that is that you lose all of the historical transaction relationships that exist.
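The rename escape hatch itself is tiny: you transact a new :db/ident against the existing attribute. A minimal sketch, assuming an existing :user/email attribute:

```clojure
;; Rename :user/email to :user/email-address. History still refers to the same
;; attribute entity, so nothing is lost; only the ident changes.
@(d/transact conn [{:db/id    :user/email
                    :db/ident :user/email-address}])
```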
00:21:46
Speaker
In terms of actually modeling data, as compared to Mongo, where you can just basically put any valid JSON into your database, the Datomic schema is actually super flexible in the sense that although it insists that when you name something you give that named thing a type and a cardinality, which is to say that an entity has only one of this or it has many of this,
00:22:14
Speaker
It doesn't impose any restrictions on how you compose attributes together. So you can have an entity with one attribute, user email. You can have another entity which has the user email attribute, a user full name attribute, a user password attribute.
00:22:29
Speaker
So in that way, it kind of takes the best of the schema-less approach, which is to say arbitrary composition of attributes, but it has the best of the kind of the typed approach, which is to say, once we have a value to talk about, it's going to follow some very strict rules.
00:22:45
Speaker
And in that way, you essentially get kind of rectangular databases where if you want every entity to have exactly all of the same attributes, that gives you a rectangular SQL-like experience. But then you can also have a graph-like database where you basically got the Wild West. You know, you can connect anything to anything else. And we are kind of...
00:23:07
Speaker
in the middle. We have large collections of entities which are identically shaped, and then we have large collections of entities which share no shape with anything else in the database. And we use both of them interchangeably, and it's not a concern.
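A rough sketch of what that looks like in practice, using the tempid-style schema of that era and two made-up attributes:

```clojure
;; Attributes are defined once, each with a type and cardinality...
(def schema
  [{:db/id                 (d/tempid :db.part/db)
    :db/ident              :user/email
    :db/valueType          :db.type/string
    :db/cardinality        :db.cardinality/one
    :db/unique             :db.unique/identity
    :db.install/_attribute :db.part/db}
   {:db/id                 (d/tempid :db.part/db)
    :db/ident              :user/full-name
    :db/valueType          :db.type/string
    :db/cardinality        :db.cardinality/one
    :db.install/_attribute :db.part/db}])

@(d/transact conn schema)

;; ...then composed freely: one entity may use only :user/email, another may
;; use both. No table shape is imposed.
@(d/transact conn [{:db/id      (d/tempid :db.part/user)
                    :user/email "a@example.com"}
                   {:db/id          (d/tempid :db.part/user)
                    :user/email     "b@example.com"
                    :user/full-name "B. Example"}])
```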
00:23:25
Speaker
Is it okay if we ask you what the size of your Datomic cluster is? Is it a trade secret, or is it okay? Go ahead. Do you mean in terms of our hardware?
00:23:39
Speaker
Yeah, like how many... I know you're hosted on AWS, right? Amazon Web Services. So how many transactors, obviously one probably, and then how many peers do you have, and how is it deployed? So in the Datomic world, you can only have one active transactor, so it only makes sense to have one standby transactor, and we certainly have both of those. And those are c4.larges, I think: two-core, four-gigs-of-RAM machines.
00:24:08
Speaker
And then we use DynamoDB as our storage. And we had to spend quite a bit of time tweaking the read and write provisioning for that. Because for the most part, when putting things into Datomic, it's lots of small transactions, which uses a very low write throughput. But every now and then, it has to do an indexing job.
00:24:29
Speaker
and indexing jobs are very short periods of big writes, you know. So we had to do a bit of head scratching there, but we got it right. And then in terms of the number of connected peers, we are currently on 12 peers, and I think we have one spare. And, you know, the transactor itself consumes one of those licenses. And so we basically have 10 connected peers doing various things.
00:24:58
Speaker
In the traditional database world, I think there are a couple of things that come up every now and then, like the tooling that you can use to manage the database or query the database. And the other one is that whenever you put in a database, then people will ask, OK, how about full-text search? So there are add-ons or there is probably Lucene or something that you're going to use. So of course, these are two-part questions. So one thing is about the tooling, like what kind of tools that you use. And the second one is, like, do you use any additional indexing things apart from it?
00:25:28
Speaker
So the best possible tool you can use with Datomic is a Clojure REPL, by far. Because it's got the full API, it's got the full benefit of all of the existing source code you've written around your database, which is really where the rubber meets the road. You can't reason about a Datomic database without also reasoning about the problem it's trying to solve.
00:25:51
Speaker
And so you gotta have that stuff on hand as well. I think two, three years ago, when there was a Clojure Cup thing, I actually built some sort of a small web app to explore a Datomic database. That was with the first version of Om and, yeah, I spent 48 hours of fun time.
00:26:09
Speaker
Yes, it's almost like a bit of a rite of passage to build one of those. Yeah, it's great fun. In terms of tooling, so we actually do something interesting. Our database, given that it's been running since I think 2012, I want to say, January 2012 or January 2013,
00:26:28
Speaker
One of those two, I'll make sure and let you know. We've got a lot of data in our database now. And our full database backup when we do a backup and give it a test is a good 14 to 16 gigabytes. And that's, of course, because it's storing all of the past, all 39 odd million transactions. And so you can imagine that's quite unwieldy to work with if you need to, for example, restore a database to your local machine to do some debugging or some analysis and so on.
00:26:56
Speaker
And so we've actually written some pretty fun Clojure code that exports select pieces of our Datomic database, what we call kind of the control data set, and produces a Transit file, or a file that's encoded in Transit. And then we've got code, obviously, that will suck that in and create a new database out of that. That was a hell of a lot of fun to write as well, using Clojure transducers.
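Very roughly, and purely as a sketch rather than Cognician's actual code, that kind of export and import can look like this; the attribute selection and file handling are made up for illustration:

```clojure
(require '[datomic.api :as d]
         '[cognitect.transit :as transit])
(import '[java.io FileOutputStream FileInputStream])

;; Export: pull every entity that has a given attribute and write the maps
;; out as Transit JSON.
(defn export-control-data [db attr out-file]
  (let [eids (map :e (d/datoms db :aevt attr))
        data (mapv #(d/pull db '[*] %) eids)]
    (with-open [out (FileOutputStream. out-file)]
      (transit/write (transit/writer out :json) data))))

;; Import: read the Transit file back, ready to be transacted into a fresh
;; database. (Real code would need to translate the exported :db/id values
;; into tempids before transacting.)
(defn read-control-data [in-file]
  (with-open [in (FileInputStream. in-file)]
    (transit/read (transit/reader in :json))))
```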
00:27:21
Speaker
Yeah, yeah. And that helps our dev team out quite a bit, because then they're working with kind of a 60 or 70 meg file, and they're basically able to use all of our systems with that smaller database. Okay, so the second part I was asking about was the indexing: do you have any additional component that does the full-text search or...
00:27:44
Speaker
So we don't actually use full-text search; we don't need it in our world, because a large part of what Cognician actually does for its customers is to make discovery of learning content easy. And when you're teaching people who know nothing about your content, the very last thing they're actually going to be able to use effectively is a search tool,
00:28:05
Speaker
because they don't know any of your phrases, they don't know any of your language. And so we are actually producing experiences which gradually introduce users to content via means other than search. So we've never actually had cause to switch on the full-text indexing that's built into Datomic. Okay.
00:28:24
Speaker
So we have these peers, and Datomic introduces this new taxonomy, right? There are peers and there are transactors and stuff. And one of the things that people talk about is that peers are basically one per application, so that the read part essentially scales independently, because every app instance or something gets the data and you can query it. So it caches it locally. So the first time people hear it, they're like,
00:28:50
Speaker
Oh fuck no, I mean is it going to load the entire database into the memory? So how does that work in your opinion? Is it really that scalable and did you find any problems there?
00:29:04
Speaker
So essentially, no, we've found no problems with it at all. It actually saved us a hell of a lot of effort as an engineering team. There's that old joke about the two hard problems in computer science being naming things and cache invalidation. Well, let me tell you, it completely solved cache invalidation for us. We don't actually have any kind of view caching code in our infrastructure at all. We've managed to write view rendering code that is fast enough given locally cached source data.
00:29:33
Speaker
In terms of using up your memory, it will certainly use up a lot of memory if the web server that you're using is querying a broad spectrum of your database. But remember...
00:29:48
Speaker
A lot of what's in your database is in the past and, depending on your domain of course, is no longer in the now database. So you'll actually find that what's true now is much smaller than your grand total database, and very often that's the only bit that actually needs to fit into memory. But because Datomic is a pull-based system, it'll only cache what it actually needs,
00:30:11
Speaker
and it uses a least-recently-used cache, which means it will throw away the stuff it's no longer needing. One of the big advantages to buying Datomic Pro is the ability to connect memcached as a second-tier cache. And essentially, for free, in our world, our system is actually served out of our memcached cluster, just because of the nature of the way that Datomic is architected. And DynamoDB is more like a near-line backup for our storage than it is an actual read storage for our system.
00:30:40
Speaker
So we deploy continuously. We're rebooting our web peers all the time. And memcached just basically steps in to warm the caches of those web servers back up. Yeah, so it's a beautiful architecture. It works really well.
00:31:00
Speaker
And the Transactor is essentially like a single-threaded thing, right? It takes the requests to write to the database and then it executes them one by one. That's my understanding. Okay. But did you find any issues there?
00:31:15
Speaker
Well, it's not single-threaded. It's definitely a multi-threaded system. But there is only one thread actually performing writes. You know, there's all sorts of crazy pipelining stuff to prepare transactions before transaction time. There's all sorts of crazy threading happening after transaction time. But it's all going through this very kind of eye-of-the-needle bit in the middle. And in fact, I can't remember if this came from an official source or not, but I'm pretty sure every transaction happens inside a swap! on an atom.
00:31:45
Speaker
Yeah, probably. There's one control atom for our 14 gigabyte database that holds all the roots. And that's what's in that one thread, it's that swap!. So yeah, it is multi-threaded for sure, but there's only one writer at a time. And didn't Stuart Halloway once say that there is only one atom in Datomic? Yeah.
00:32:07
Speaker
That's the one. It's a pretty important one. Especially for us. It's atomically special. Yes. And I think I remember reading somewhere that Stuart Halloway said that Datomic is like isolation level 9,000 or something, because there are no locks, there are no latches, there is nothing of that sort. So I think that's probably one of the interesting things that I read about Datomic.
00:32:30
Speaker
And so the back end: obviously, when Rich introduced Datomic, I think he was saying it came with DynamoDB as the first back end. And then other back-end support was slowly being introduced.
00:32:46
Speaker
Yeah, so when we first grabbed it, we actually used Postgres as a backend for a good year and a half before we switched to our second generation AWS cluster. And that was great. That was totally fine. We weren't really at any kind of scale at that point.
00:33:04
Speaker
Now that we are supporting more concurrent users, DynamoDB is definitely paying its dues. But in terms of other places that you can connect to, there's Riak, there's Couchbase, there's Cassandra. Those seem to be the major ones that are well known. There's generic SQL, so you can actually connect to any SQL database.
00:33:28
Speaker
It would be pretty interesting to see what happens if you try to connect to a... What's that simple command line SQL? The really small one?
00:33:38
Speaker
SQLite. That's the one. It'd be fun to see what happens there. Yeah. And then, of course, the Transactor process has got Java's H2 built into it as well, for when you're just on your local machine. Yeah, that's the dev database, right? Yeah, the dev or the free storage. Dev mode. Yeah, yeah. But do you think there is any difference between these storage engines? I mean, using different storage engines, it shouldn't be that much different, right? Because you have an abstraction on top of it.
00:34:08
Speaker
So you really don't care what is in the backend. Well, let me talk you through how easy it was to transition from Postgres to DynamoDB. Basically, on the day when we had done all of our homework and our preparations and our dry runs, we essentially took our web, our load balancers down, put up our polite status message saying we'll be back.
00:34:32
Speaker
took a database backup using the Datomic-level backup tool, and then restored that very same backup, taken on the old transactor, into the new transactor running on the different storage. Once the restore was finished, we switched over the DNS to the new load balancers and we were back online.
00:34:54
Speaker
Essentially, if you use the Datomic backup and restore tools, it is storage agnostic and you're able to switch from storage to storage. If you get into incredibly high-performance situations, then obviously you are going to start to encounter storage-specific concerns, but we haven't gotten there yet. We've still got to have that interesting time.
00:35:18
Speaker
Yeah, I guess they support multiple back ends because different organizations are more or less comfortable with on-premise, on-cloud, and various setups. So to some extent they want to try and be helpful to those organizations, don't they, to enable people to deploy on operational systems that they're familiar with and can tune and all those kinds of things. And likely already have running.
00:35:46
Speaker
Exactly. Yeah. Well, that's one of the reasons why they're comfortable with them, I guess. Exactly. It's true. And you've been interacting with the Datomic guys a lot. I mean, how did you find the support and everything? Because this is one of the biggest, strangest things for me. Because of course, I'm not completely, like, you know, a Stallman-level open source guy, but you know,
00:36:09
Speaker
I use what is convenient, but I prefer having some sort of open source stuff. And the only support... Well, if you're not paying them, then the only support that you have is through the mailing list, and I've been subscribed to the mailing list. I see Bob and Bobby or something and some other people being very active there on the mailing list. But how was your experience interacting with Cognitect while you were deploying it? How was their support?

Support and Adoption Experience

00:36:35
Speaker
Yeah, so we officially bought our Pro license basically two days before we actually switched everything on for production use. And then up until that point, we were essentially in the Google group with everyone else.
00:36:50
Speaker
But because I had an open conversation with the Cognitect folks about it and what our plans were, they very kindly reviewed how we were going to put things together before we actually officially bought our license. We found that the support was good. And then once we actually paid for a license and we now kind of had official email support, we leaned on it, and we leaned on it a lot. And the support was excellent,
00:37:19
Speaker
mostly because at the time that we were doing this stuff, it was when Datomic had first come out, and I'm sure that that team was hungry for that feedback. You know, we were one of the very first people to pay for it and to put it into production use kind of right away. We went live in January after it became available in August, a couple of months after. So pretty quick, you know, in that kind of world.
00:37:44
Speaker
And we had great responses. Rich himself helped us with a couple of things. We had Stuart Halloway helping us out for the most part in the early days until they hired some folks in. Really, really great support. And I think that's because even though Datomic is closed source, the people who build and maintain it use open source all the time. I mean, they maintain the closure language itself.
00:38:08
Speaker
You know, so they know what the deal is, but, you know, they've got to eat as well. I think Rich kind of summarized it as: I've got a kid in college, you know, it costs money, he needs to make some money somehow. And so they totally get that Datomic is closed source and that that is painful for some people. And so they're aware that, you know, if you do buy in, those people have got to be well supported, and we absolutely are. I've had no complaints with their support at all. Yeah.
00:38:37
Speaker
Of course, it's not about that; obviously, I mean, every business has their own way of doing their own business, you know, that's the whole idea of starting up. And there are some other databases which did take these kinds of ideas, like there is world DB, where there is a very specific division between the transactor idea and then having this one. And I remember there was a paper some time ago, we'll add the link in the show notes, that was comparing what exactly the databases were doing
00:39:07
Speaker
when you're scaling up. So most of the time it was a parallelization of the queries and this kind of stuff and transactions especially because you keep reading in the transaction and writing it again and they need to figure out what is the best way to utilize it. And there have been other efforts to make these kind of things. But I don't think there has been enough push from the community to get these ideas into the other databases.
00:39:36
Speaker
And if you look at, for example, Postgres: Postgres has been a solid database for years and years, and it took this whole MongoDB mayhem in the market to push them to look into, okay, we need to add JSON support. Because look at it: one of the magical things with MongoDB, and of course we can all poke holes in the way it is architected,
00:40:00
Speaker
is that the developer effort to start to work with MongoDB is, I mean, just like half an hour or something. I just install MongoDB and I have everything. Yeah. Brew install or something. With Postgres, I install it and I have a bazillion commands to manage all this stuff. Then there is createdb, and it creates a user, and all sorts of stuff.
00:40:23
Speaker
So that kind of friction in the developer experience, or in the way the data is stored, in the way of the use cases, those are the things that need to push the open source stuff. But yeah, I don't see other databases getting this time concept. Do you think this is very specific to some domains, or is it the concept of how Datomic looks at the data?
00:40:49
Speaker
You're speaking about the strong notion of time. I'll be perfectly honest, I haven't worked in many domains, so I don't know that I could specifically speak about other domains, but I have had conversations like this with people from those domains. Specifically, when I've spoken to anybody who needs to keep track of what happened in the past,
00:41:13
Speaker
Ad tech, financial tech, any kind of basically business that needs to keep track of what the hell happened. I get lots of nodding heads and hands to foreheads like, oh my God, why am I not using this? Why am I not thinking this way? It's very rare that I come across somebody who says, oh no, we don't need that. Or if I had that, I wouldn't use it at all. It doesn't seem like it would help us with anything at all.
00:41:42
Speaker
Talking about the now query, crafting that query such that you're always talking about a consistent view of the database. That's just pure gold as a programmer. Not having to worry about that problem ever again is worth money and that's why we pay it.
00:42:00
Speaker
I hope that answers your question. Yes, yes, of course. There's actually a related database standard to do with time. SQL 2011 included the notion of time into the SQL standard, the ANSI SQL standard. Interesting. And actually, when I was doing some research on this,
00:42:22
Speaker
I noticed that, bizarrely, DB2, the granddaddy of databases, actually was the first one to adopt this notion of temporal time into its databases. So, in theory at least, and to be honest I haven't worked with it, though we have DB2 at work, on mainframes and on Linux and on Windows via DB2, you can access this notion of temporal data management.
00:42:50
Speaker
I don't know how well it compares to Datomic, but the concept is well understood, I think, in the database community, because of course everyone knows that querying what happened yesterday is a nightmare in current databases. So actually they're adopting that in the mainstream, and that might put a bit of pressure on the Datomic guys to bring a bit more to the table.
00:43:18
Speaker
Well, if you think about it, if you think about the grand mission that I think Rich has, and again, I may be putting words in his mouth here, but I think he's trying to solve programming. It's not just about getting a locked-in subscriber base or purchasing base. I think he would be very, very happy if more databases took a strong notion of temporal time.
00:43:41
Speaker
You might have opinions about how they execute on the details. I'm almost certain he would. But I don't think that they would be at all upset by it. I think it's something that we sorely need to see more of, is to deal with records as facts rather than this place that you can erase at will.
00:44:06
Speaker
Yeah, it's interesting, because you know the original wiki, the C2 wiki? If you look, there has been a lot of discussion about what exactly a database is, and they say a database is essentially a store of facts, and then there is always this relational algebra on top of it, and everything is just a statement clarifying that at this point in time, these are the things.
00:44:32
Speaker
I think Rich explained it better when he was introducing Datomic. There is this place and then you keep updating the stuff in the same place.

Datomic's Unique Features and Licensing

00:44:40
Speaker
That was useful back in the days when space was expensive and now space is so cheap that you can just keep adding more space and then keep adding the facts.
00:44:51
Speaker
and delete them whenever you don't need them. Speaking of deletion, I think excision or something, that wasn't part of the initial version of the database. Excision, sorry, yeah. Yes, so I love that word. There's a really good Iain Banks novel called Excision, I think, or Excession. Anyway, so yes, they actually added this excision capability to Datomic to satisfy the Europeans.
00:45:19
Speaker
I can't remember what the name of the law is, but basically in Europe, if you're going to run a system that stores user data, you'd better be able to prove that if that user says, let me out, you've deleted everything you know about him. Of course, Datomic being an immutable database, it says, well, we're not going to let go of anything ever.
00:45:41
Speaker
And that's a problem for Europeans, right? So they had to add excision. But even then, if you look at how they tackled it, I think they tackled it in a really nice way. You can remove data, but you cannot remove the fact that the data was there at some point. You have to be able to reason about that entity somehow, even if you've left all of the details about that entity behind.
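A minimal sketch of what requesting excision looks like, with user-eid standing in for the entity that has to be forgotten:

```clojure
;; Transact a new entity whose :db/excise points at the target. Datomic removes
;; the target's datoms from the indexes, but the fact that an excision happened
;; remains visible in the transaction log.
@(d/transact conn [{:db/id     (d/tempid :db.part/user)
                    :db/excise user-eid}])
```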
00:46:04
Speaker
Okay. I mean, this is the whole right-to-be-forgotten thing that we fight for. So yeah. Anyway, do you have a question or something?
00:46:13
Speaker
Yeah, it was just an additional point there because I don't think it's for Europeans only, hopefully. Of course. But yeah, we're just leading the way in Europe. But yeah, the whole notion of how do you get rid of data is fascinating, especially I think one of the powerful points about this whole historical data stuff from
00:46:36
Speaker
from the Datomic side is the ability to audit things. So is that something which you've benefited from in your work, in terms of being able to show the business that this thing changed at that point in time, and we can see kind of the history of this client or the history of this data set through time? Is that something which you've benefited from?
00:46:58
Speaker
Tremendously. So one of the things that you can do, and it's almost one of the things I'd recommend to anyone, you know, as soon as you're planning to put Datomic into production, is you can annotate transactions as they get written. So what we've done is, whenever a transaction happens in the context of a web session, and that session has a logged-in user, we actually link the transaction to the user in the database, and say this transaction was created by this user.
00:47:28
Speaker
And obviously most of the things that happen in our database happen through our website. And as a result, most of our transactions are so annotated. So when somebody's account gets deleted, and I'm using air quotes here because we can't really delete them in our world, or, you know, some critical configuration change takes a client's implementation offline, it is a matter of minutes to go and find out who did it.
00:47:51
Speaker
And then, you know, it's a very short conversation thereafter to figure out what happened and why. Whereas, you know, in a SQL database, unless we had specifically gone in and added an entire layer of, you know, tables and schema auditing and whatnot on top, we would be hosed. We just wouldn't know. And we've definitely taken advantage of that in the past.
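A minimal sketch of that annotation pattern, with :audit/user and :order/status as made-up attributes and user-eid standing in for the logged-in user:

```clojure
;; The map with the :db.part/tx tempid becomes extra attributes on the
;; transaction entity itself, so every datom in this transaction is traceable
;; back to the user who caused it.
@(d/transact conn
   [[:db/add order-eid :order/status :order.status/cancelled]  ; the real change
    {:db/id      (d/tempid :db.part/tx)
     :audit/user user-eid}])

;; Later: who touched this entity, and when?
(d/q '[:find ?user ?inst
       :in $ ?e
       :where
       [?e _ _ ?tx]
       [?tx :audit/user ?user]
       [?tx :db/txInstant ?inst]]
     (d/history (d/db conn)) order-eid)
```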
00:48:15
Speaker
It's silly things, frivolous things, like users claiming that they've reset their password four times and they can't log in, and then we go and have a look and they've reset it once. We say, well, we only see one reset, please try again. Oh, okay, I'll try it again. Things like that. But also just retrieving data. People using content management systems think, oh, damn it, I deleted that thing. I accidentally cleared it and then hit save. Well, don't worry. We'll just go back and get it for you.
00:48:40
Speaker
So it's definitely helped us tremendously. Yeah, infinite undo. You've got undo. That's awesome, isn't it? I mean, that is a superpower, isn't it? The undo. It's undo in our entire production system. Yeah. I mean, they talk about it. I mean, David Nolan talks about undo on the front end. And actually, that's not such a big deal, really. But what you're talking about now is at the back end where you can properly undo things really properly. That's awesome.
00:49:05
Speaker
It is. I agree. Yeah. So the other thing is, I guess you get kind of, with that kind of journaling and annotations, you get a kind of logging system built into your database as well. That's really useful. Yeah.
00:49:24
Speaker
And in fact, we use Onyx, which I'm not sure how many people would be familiar with. And that's a whole discussion all on its own. But it's essentially an event stream processing system. And so Onyx annotates all of the transactions it causes as well. So if we ever wanted to understand a stat change or some sort of a calculation that went off, it's pretty easy to track down that it was actually one of our systems that caused the problem rather than one of our users. Yeah.
00:49:55
Speaker
Okay. Maybe let's move the topic on a little bit. What we're just coming up to here is the tooling around Datomic. How do you find that, Robert? I know you've talked about the REPL, but what about some other stuff? Like, for instance, I played a little bit a while ago with some of the functions in the database, the stored procedures.
00:50:19
Speaker
And we're kind of told, we've been told over the last 10, 15 years that stored procedures in databases are a bad thing. You know, that they're architecturally horrible. And the reason for that usually is because they're in some arcane language and they've got crap debug support, et cetera, blah, blah, blah. All the kind of, who is that guy, the flaming guy? Oh my God, the guy on the internet who said that stored.
00:50:49
Speaker
I'm pretty sure there's a guy on the internet, yes. Let's dispense with the guy on the internet. You know who I mean. The guy who made a big blog about how bad stored procedures are. Jeff Atwood, Jeff Atwood, that guy. You knew him. Coding Horror. You knew it was Jeff. Coding Horror, that's right. The hair-on-fire guy. He made a thing about stored procedures being bad, et cetera.
00:51:15
Speaker
And I've used those arguments to try and kill stored procedures in many projects. So I was just wondering whether you've actually used stored procedures, because I found even the Datomic stored procedures, although they use Clojure, which is really great, were still lacking a little bit in terms of tooling and didn't quite answer all of those questions about how do we use stored procedures in a great way.
00:51:41
Speaker
So I guess the question we have to ask, and it's one I use all the time as a technical leader in our company, is what problem are we trying to solve?
00:51:49
Speaker
And if we're trying to find a convenient place to put code that we're going to use over and over with our database, then we've already got a great answer for that, and that's with the rest of our application code in the system that we all already understand. And that's Git and GitHub and versioned software. If the problem is solving a transactional consistency issue, for example, making sure that this particular thing is created once and only once,
00:52:19
Speaker
Then the right tool for the job is a function that's installed in the Transactor. The canonical, the hello world example there is the bank transfer. Moving money from account one to account two, you want to make damn sure that both happen at the same time or not at all.
00:52:37
Speaker
And that's really where I see that tool becoming valuable. We've only ever used transaction functions, as they're called, which are the ones that do the transaction, you know, that kick out the transaction if it fails for some reason. We've never used normal database functions, which are the ones that you can just install in your database and call arbitrarily. Because again, you know, we've already solved this problem for all of the rest of our code.
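A rough sketch of that canonical transfer as a transaction function, with :account/balance and :account/transfer as made-up names:

```clojure
;; A transaction function runs inside the transactor, so the balance check and
;; the two updates happen atomically, or not at all.
(def transfer-fn
  (d/function
    '{:lang   :clojure
      :params [db from to amount]
      :code   (let [balance (fn [e] (or (:account/balance (datomic.api/entity db e)) 0))]
                (when (< (balance from) amount)
                  (throw (ex-info "Insufficient funds" {:from from :amount amount})))
                [[:db/add from :account/balance (- (balance from) amount)]
                 [:db/add to   :account/balance (+ (balance to) amount)]])}))

;; Install it under an ident, like any other entity.
@(d/transact conn [{:db/id    (d/tempid :db.part/user)
                    :db/ident :account/transfer
                    :db/fn    transfer-fn}])

;; Call it by ident inside a transaction; it expands into the data it returns.
@(d/transact conn [[:account/transfer from-account-eid to-account-eid 100]])
```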
00:53:01
Speaker
It may help to clarify why that is. The only database we use is Datomic. The only language we use is Clojure. If we talk about a database at all, at any point in any of our systems, it's going to be about Datomic anyway. If we decided to use database functions, and we wanted to do so consistently, we would end up with a lot of code in the database.
00:53:24
Speaker
And that would lead to the problems you were leading to with stored procedures, which is it's hard to reason about that scale of code organized in that fashion, I think anyway. I've never tried. Git is wonderful. GitHub is wonderful. Why throw all of that away? Yeah. Well, you don't get all of the other things, do you? You don't get all of the tooling either, all of the debugging and all those kind of things.
00:53:49
Speaker
Well, because it's just a normal Clojure function, you can just run it in your Clojure REPL and do all the usual things you can do to functions in your REPL. It is running in the same runtime, which is to say Clojure on the Java Virtual Machine. So I would argue that you do have all of the typical tooling you would have. When it actually gets run against your production data, maybe not. But in terms of testing the function, I think you're OK.
00:54:18
Speaker
Okay, so there are different editions in the Datomic thing, right? So I think one of the problems, and I was wondering whether it is really a problem or not, is scaling it up and ending up paying, or maybe ending up having to pay, too much money. So how do the peers scale, and if you keep scaling it up, you know, what kind of editions, or what kind of pricing structure, do you think would make more sense?
00:54:47
Speaker
So this is actually a really fascinating question to ask, because we're actually grappling with these problems at the moment. Basically, one of the real disadvantages to Datomic's pricing model is the fact that you pay per connected peer, or per connected process essentially. And that, if you think about it, is really quite at odds with the Amazon Web Services scaling model, which is to add machines, right?
00:55:12
Speaker
So if you wanted to scale a service automatically with Datomic, you better have a lot of spare peers provisioned and waiting to be used in your scaling group. And if you don't, then you have to start to do other things. So basically what we're looking at right now, and Amazon makes it easy because their pricing makes this easy.
00:55:34
Speaker
you know, two machines cost the same as one machine that's double the size, right? So they've kind of taken that concern off the table. But essentially, we're looking at, and we're in the kind of the performance and load testing space now, looking at what kind of performance we get out of more machines versus bigger machines.
00:55:51
Speaker
And given that we're dealing with Java and we're dealing with threaded processes, I think we're going to be okay with bigger machines. But yeah, it is a concern. If you wanted to, for example, scale to thousands of nodes, good luck doing that with Datomic.
00:56:08
Speaker
You're going to have to buy shares in Cognitect, I think. Exactly. But it has been a fascinating thing for me, because I worked at Oracle, the more dollars, so to speak. Well, I mean, they're pretty awesome. It's an awesome company. But every time I think about databases, I mean, the way their pricing structure works is very, very weird, because there is,
00:56:34
Speaker
especially with Oracle, initially it was based on the processor, and then based on the cores, and then later they came up with something called virtual private database. So what that essentially means is that you need to have a user in the database, a database user, for every application user, right? So if you have 600 people using your, goodness me,
00:56:55
Speaker
web application, then you need to have a user in the database for each of them. And the strangest part is that then your licensing is based on the number of users per core or something. So if we forget all the discussion about open sourcing and this stuff, and you think about it purely from the commercial angle, it is very difficult to come up with a pricing model for a database. I mean, can you say data nowadays? It could be hard.
00:57:19
Speaker
Exactly. I mean, you can shovel everything into the database and then you keep paying extra money. Is it based on number of requests that you're making? And yeah, so it's pretty tricky. But basically, sorry to interrupt you there, Ray. Basically, you don't want to punish users for using your database.
00:57:38
Speaker
you know, by making your database cost more. So if you look at how Amazon has done this for theirs, you're basically paying for read and write throughput, and you kind of get everything else for free, right? And they can control that because it's a database as a service.
00:57:54
Speaker
Whereas with Datomic, you know, they really have to charge you once and get everything they can out of that sale in one go. And, yeah, it's incredibly difficult. I mean, I would love to have been a fly on the wall when they kind of threw the ideas around, like, how do we do it for this thing? It could not have been an easy discussion. Yeah.
00:58:15
Speaker
I guess there's two answers to that, isn't there? One answer is that eventually they run some kind of service and then they do some kind of, like you say, pay-on-transaction-throughput model. Then the other answer is, I guess you could say, well, again, you should pay for what you use, so
00:58:36
Speaker
like Oracle and co. often do, they give you some kind of golden license or silver license or whatever, and that entitles you to use the product, and then you have to have some audits. The dreaded database audits. Actually, let's be honest, Datomic should be pretty good at that, given what we've just said. I think they may have a leg up there, yes.
00:59:01
Speaker
So yeah, I would have thought something like that, where you can say, okay, well, for this burst capability, which is a great point actually, because what you don't want to do is go with this vertical scaling where you have these big machines, because that costs a lot of money on Amazon. You want these small machines which you can just burst out. And then you want these sort of rear-view-mirror-style licensing models where you can say, oh, you know,
00:59:28
Speaker
you pay for what you've used. Yeah, exactly. So if I've used 10 machines on average this month, with a high of 100 but a low of two, well, I'll just pay for it per minute or whatever. And that shouldn't be too hard to come up with, I think. But then again, like you say, I'm not in that room, so it's easy for me to say; they're the ones who have to think through the difficulty of it. So for us, I mean, if that was Datomic's pricing model, that would be a nightmare for us,
00:59:56
Speaker
because it would be absolutely impossible to budget for. We just don't know what we're going to need because our business model is a SaaS-based business. We're charging for users, but we're charging per annum and we're charging in bulk rates. We're charging for thousands of users at a time, so it's really discounted.
01:00:15
Speaker
So we could suddenly have 30, 40, 50% of our user base all come on at the same time, causing us to spin up 50 or 100 servers, and then we're on the hook with Datomic for more than we can afford.
01:00:31
Speaker
Whereas even though it's expensive to pre-buy processes and to have some in the bank for scaling, it is at least a manageable cost, in that we can budget for it and we can plan ahead for it. And once we've paid that cost, we have a band of space to work with.
01:00:52
Speaker
Yeah, but I still think that what you're doing is inimical to what most people want to do with the cloud. Absolutely. You know, because having to have this kind of fixed capacity is quite difficult, I think, for many organizations, especially
01:01:10
Speaker
bigger organizations that want to scale out to tens of thousands, potentially. Well, let's be realistic: in the early days, you know, tens to hundreds of nodes, but they want the openness to be able to burst towards several hundred nodes. You know, I know that the company I'm doing a lot of business with wants to do that, and it makes it difficult to sell Datomic in that organization.
01:01:38
Speaker
It absolutely does. I can totally see that. Yeah. So, you know, I think if Cognitect were to engage the community on this point and talk about their thoughts on it, I would be all ears, because it is an area of pain for us at the moment.
01:01:57
Speaker
So before we continue, or rather, I think we've spent almost one hour already, so let me get this one in. Yeah, the time keeps flying. So one of the things that I was curious about, comparing Datomic to other databases: most databases have drivers or clients in every language possible.

Contributions to the Community

01:02:21
Speaker
I mean, like from JavaScript to C sharp to everything.
01:02:25
Speaker
But Datomic seems to be fairly focused on JVM languages. At least, as far as I know, there is only the Java API, which is the first-class API, obviously, and then we have the Clojure API. So I think it would be much more interesting if we had drivers in different languages, because then people could actually try it out; with this whole hipster JavaScript thing taking off every now and then, they might be interested in picking up the database. So what do you think about it?
01:02:55
Speaker
So one of the key disadvantages to trying to do that is the fact that Datomic's architecture largely leverages the fact that its peer library has a cache.
01:03:08
Speaker
So right now, the peer library, the Java library, which you can use from any Java-capable language, of course, has this cache. So there is one basic escape hatch that you can use, which obviously doesn't come with all of the batteries that you get with the Clojure or the Java API, and that is the REST API.
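To make that contrast concrete, here's a minimal sketch of what the batteries-included peer API looks like from Clojure, using an in-memory database. The :user/name attribute and the data are made up for illustration, and recent peer library versions accept this schema shorthand:

    (require '[datomic.api :as d])

    ;; create and connect to an in-memory database (any storage URI works the same way)
    (def uri "datomic:mem://example")
    (d/create-database uri)
    (def conn (d/connect uri))

    ;; install a tiny schema, then assert one entity
    @(d/transact conn [{:db/ident       :user/name
                        :db/valueType   :db.type/string
                        :db/cardinality :db.cardinality/one}])
    @(d/transact conn [{:user/name "Ada"}])

    ;; query an immutable database value; this is where the peer's local cache pays off
    (d/q '[:find ?n :where [_ :user/name ?n]] (d/db conn))
    ;; => #{["Ada"]}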
01:03:32
Speaker
So just as you would spin up a transactor, you can also spin up a process that runs the REST API, and that gives you basically peer-API-level access to your database. And given that it's a REST API, you can essentially talk to it from any language that can talk JSON, and I think that's just about all of them these days.
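As a rough sketch of that workflow: you start the REST service next to your transactor and then query it over plain HTTP. The port, storage alias, URI, and database name below are placeholders, and the exact flags and endpoint paths should be checked against the official Datomic REST docs:

    # start the REST service, exposing storage alias "dev" on port 8001
    # (the alias and storage URI are assumptions for this sketch)
    bin/rest -p 8001 dev datomic:dev://localhost:4334/

    # run a Datalog query over HTTP from any language or tool that can speak HTTP
    curl -H "Accept: application/edn" \
         --data-urlencode 'q=[:find ?n :where [_ :user/name ?n]]' \
         --data-urlencode 'args=[{:db/alias "dev/example"}]' \
         http://localhost:8001/api/query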
01:03:54
Speaker
Yeah, that's true. But with the REST API, one of the things I noticed when I looked into it is that you need to run an extra peer for it. Yes, there is that disadvantage. Exactly. Back in the day, I don't remember exactly, but I think there was a time when the free license allowed you to have only one peer, for example, so that was painful. And also the Datomic console, the application itself, will also take up one peer, or
01:04:20
Speaker
I'm not sure if that is the case anymore. I believe it does count as a peer. I think basically the transactor licensing code is pretty hardened, and basically anything that's connected is a peer.
01:04:31
Speaker
Oh, okay, yeah. Maybe that could be one piece of feedback for the Cognitect guys: it would be awesome if they could open it up so there are multiple APIs available. But anyway, it has been fantastic to have you on the show, and I think we learned a ridiculous amount about Datomic and how to put it into production. That's great, thanks for having me. Thanks a lot for sharing your experience. Yeah, it's been amazing.
01:04:56
Speaker
And of course, all the details will be on defn.audio. And before we wrap up, I think a quick shout-out to Nikita. So Nikita is also working with you, right? And he's working on DataScript. And
01:05:12
Speaker
Yes. So Nikita joined us in, I think it was March last year, to work on one of our primary apps. And he's been having a whale of a time, because he basically got to join us and use all of the libraries he'd built to build cool apps for us. And also we have 20% time for our engineering team, so he actually gets to maintain those libraries on company time as well. So he's having an absolute blast, and he's producing awesome stuff for us.
01:05:41
Speaker
Yeah, I saw his DataScript work, and I know, because I've been following him on Twitter like everybody else, that every time he enters the Clojure Cup he builds amazing stuff within 24 hours. And he did that using DataScript last time, so it is pretty cool to see. And yeah, I think maybe one day we'll have him on the show and talk more about his Clojure experience from the cold north. Oh, you totally should. I think he'd have a blast doing that.
01:06:09
Speaker
Yeah, of course. And obviously, you're maintaining this Clojure Codex, right? Compiling all the details about the Clojure world. Robert? Yes, so that was really just a way for me to build something in Clojure for fun rather than for work, although the work is fun as well. And to find a way to put stuff out there in open source that I can share, that shows the lessons I've been learning.
01:06:34
Speaker
In fact, I updated the Codex recently to take advantage of DataScript and Rum, which is Nikita's React wrapper. I'll share the link to the source code. It's a nice little toy example of how to actually bring DataScript into use on the front end whilst using data that originated from a Datomic database on the back end.
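For flavour, a minimal hypothetical sketch of that pattern: transaction data that was originally read out of Datomic on the server arrives in the browser, gets loaded into a DataScript connection, and a Rum component queries it. The :user/name attribute and the data are made up for illustration:

    (ns example.ui
      (:require [datascript.core :as d]
                [rum.core :as rum]))

    ;; a client-side DataScript connection; no special schema needed for this sketch
    (defonce conn (d/create-conn {}))

    ;; pretend this tx-data arrived from the backend, originally queried out of Datomic
    (d/transact! conn [{:user/name "Ada"} {:user/name "Rich"}])

    ;; a Rum component that queries the in-browser database
    (rum/defc user-list []
      [:ul
       (for [n (sort (map first (d/q '[:find ?n :where [_ :user/name ?n]] @conn)))]
         [:li {:key n} n])])

    ;; mounting it would look something like:
    ;; (rum/mount (user-list) (js/document.getElementById "app"))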
01:07:00
Speaker
Okay, so that's pretty much it for today, and that's all, folks. You can find Robert practically everywhere on the internet; he's on Twitter and he's very active. I remember talking to him a long time ago on IRC, and he's also very active in the Clojurians Slack community. So if you have any questions about Datomic, you know who to ask. Yeah, I welcome it. Obviously, apart from the Cognitect guys, but yeah.
01:07:28
Speaker
And we'll post the notes on defn.audio, and the MP3 should be available on SoundCloud and iTunes. We've been very happy with the feedback we're getting so far, so please keep it coming: positive, negative, anything. And as you know, this is one of the best, probably the best, vegetarian, Clojure-y and Clojure-oriented podcasts in the world right now, beaming into space.
01:07:53
Speaker
And we'd like to thank Pizzeri for giving us permission to use his music as the intro and outro. So this awesome stuff that you're listening to at the beginning of the show and the end of the show is made by him. And he's Melon Hamburger, if I'm saying it correctly, Ray. Melon Hamburger, okay.
01:08:18
Speaker
Yeah, Melon Hamburger. So find him on SoundCloud; you can find the links in the show notes. So that's it from us, and we'll talk again in a couple of weeks with a new topic in the Clojure world. Thank you. Thanks a lot, Robert. It's been a great pleasure; hopefully we'll speak again soon. Thanks. Bye. Bye. Goodbye.