Defining domain boundaries with Neil Syrett

S1 E2 · How to start API Contract Testing series
240 plays · 3 years ago
Our first guest is Neil Syrett (Senior QA SDET) from ClearBank. Neil brings his experience from ASOS and ClearBank to expand on topics including breaking down monoliths, public APIs, defining domain boundaries, modern testing principles, testing at scale, bi-directional contracts, the difference between contracts and schemas, and much more! Enjoy the podcast. Neil also has some great content on his blog. Also do check out Pactman Consulting for a free course and details on how to contact me for a test strategy consultation.

Hosted on Acast. See acast.com/privacy for more information.

Transcript

Introduction to API Contract Testing

00:00:00
Speaker
Hello and welcome to the How to Start API Contract Testing podcast series with me, Lewis "Pactman" Prescott, where we'll be talking about the challenges of testing in microservices and how to start API contract testing to make microservice tests faster, more stable and more realistic. Really excited about the guests we've got lined up. Can't wait to get tuned in.
00:00:27
Speaker
Welcome to today's podcast. Our first guest is Neil Syrett from ClearBank. Neil used to be my manager at ASOS and is now a good friend of mine. In this episode, we talk about modern testing principles, testing at scale, bi-directional contracts, and much more. Enjoy the podcast.

Neil's Journey in Software Testing

00:00:46
Speaker
Please tell us, Neil, who you are, what you do, and your experience in testing. I'll be whatever you want me to be, Lewis.
00:00:56
Speaker
No, my actual title is Senior QA SDET, and I work at ClearBank currently, but I've been in software and software testing since 2007. So coming up, what, 15 years now, actually. Yeah. Wow. So what kind of experience have you had? How have you worked in microservices before? Yeah. So I guess my first exposure to microservices
00:01:25
Speaker
in earnest was when I joined ASOS back in around 2010, I think. No, that's a lie, 2015. So yeah, it was at a time when they were moving from an on-premise solution, a monolith, a traditional setup, into the cloud, into using Microsoft Azure,
00:01:43
Speaker
and looking at how they can break down their monolith into discrete microservices. And yeah, obviously all the pains and learnings that came with it, getting used to failures in the cloud, understanding how to manage that, how to test for those different scenarios, talking about massive scale, Black Friday weekends were always fun.
00:02:06
Speaker
And more recently, moving on to ClearBank in 2020. So a different domain, but in terms of tech, quite similar. Also using Microsoft Azure. And they've been through a similar journey, actually.

Microservices at ASOS and Clearbank

00:02:18
Speaker
So they actually only started up in around 2015, 2016. They built something to get live, to keep their investors happy, but it was kind of a monolith. And since then, they've broken that down into these microservices.
00:02:35
Speaker
And yeah, at the moment we're trying to have constant discussions about where our domain boundaries lie and disagreements and agreements on how we get to that. Yeah, exactly. I think that's one of the big things around microservices is understanding what you own and what other teams own and then
00:02:56
Speaker
where that kind of responsibility lives. Because in environments that I work in, you don't necessarily have a team that cares about that overarching journey for the user. Marketing care about that, but not necessarily a software team. So I think that's definitely an area of contention when it comes to microservices. Yeah, but I think it is wholly necessary. I think if you're trying to build something, if you're always concerned about the whole user journey, that can
00:03:26
Speaker
weigh you down and you can lose focus. So I think having those clear distinctions, those clear domain boundaries, is really helpful because it helps you focus and deliver on your objectives rather than getting bogged down or sidetracked. Absolutely. And I think there can also be quite a lot of overlap,
00:03:47
Speaker
and where that overlap lives you can duplicate the work; there can be tests which are maintained by different teams which actually cover the same thing. So I think you can find efficiencies there as well. Yeah, and that actually is something from when I joined ClearBank, coming up for two years ago actually: although they'd broken up their application into discrete microservices, they still had a lot of kind of end-to-end
00:04:16
Speaker
kind of regression test packs that would kind of exercise the full user journey and all the different kind of product offerings that they had.
00:04:24
Speaker
Part of what I've been advocating for is to clarify those boundaries and actually break up the tests as well, and make sure that we're testing close to the domain. We're not trying to duplicate our testing effort and do the same kind of exercise as what's been done with the microservices. Obviously, this gives much clearer ownership as well, and better maintainability and all the rest of it.
00:04:49
Speaker
Yeah, exactly. So in your intro, you mentioned ASOS testing at scale on Black Friday. So what are the kind of areas that you're looking out for when you are testing at that scale? Yeah, so I think, as with any kind of performance testing, it's about understanding, well, any kind of testing, it's about understanding requirements. And I think, although
00:05:14
Speaker
performance and load testing is a different, potentially different skill set and different tooling, fundamentally the fundamentals of testing are the same. You need to understand what you're trying to prove, and from there you can go on to devise your test approach. But obviously, if you ask people a question about functional requirements, it often comes quite naturally to them:
00:05:43
Speaker
do you want this button to be green or red? Oh, I think I like green today. But if you're asking how many users are you expecting to be logging into this page at this time of the day, or distributed over this period of time, those things aren't always as easy to come by, particularly if you're speaking to subject matter experts around the business. So what I've always found is that what's really key is actually having the right level of telemetry in place,
00:06:13
Speaker
the right observability tracking in place in your application. Obviously you can ask your stakeholders what they think the load will be or what the load profiles will look like, but often the proof is in the pudding actually. And if you've got something running in production and customers are using it, then having the right telemetry to actually understand
00:06:32
Speaker
what that load profile looks like, that's really key to then building up the right load model for your load and performance tests that are going to mimic what you're seeing in production. Absolutely. I don't know if you were in the thick of it when I was there, but the
00:06:57
Speaker
biggest thing that I remember about my time at ASOS was Black Friday sales, 24/7 support, being on rota, having to get in at midnight and then not leaving until 8am

Challenges of Testing at Scale

00:07:10
Speaker
the next day. Did you get involved with that yourself? Fortunately, I didn't have to come in for Black Friday itself, but I did do it a few times for the
00:07:18
Speaker
old legacy releases, before we broke up into microservices, when we still had to have outages. ASOS are an international brand, but at the time they were very much focused on the UK market, and if they had to choose an hour to have the website off, then doing that in the middle of the night UK time was preferable to doing it in the middle of the day. So yeah, when we had scheduled
00:07:47
Speaker
releases once every month or six weeks, we'd schedule them in the middle of the night and we'd have to come in in the early hours to support that process. Wow, that takes me back for sure, to the time when you were moving stuff from one server to another, before the days of blue-green deployments and everything like that. Just taking a short interlude now to share with you an opportunity to grab a free course I have at pactman.co.uk,
00:08:16
Speaker
where I explain how to implement API contract stubs within your end-to-end tests using Cypress and Pactflow. If you want your end-to-end tests to be faster, more stable and more realistic, then definitely check out the free course at pactman.co.uk.
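Neil's point a moment ago about deriving a load model from production telemetry, rather than from stakeholder guesswork, can be sketched in a few lines. This is a minimal, hypothetical Python illustration (the function name and toy timestamps are invented, not anyone's real tooling): given request timestamps from telemetry, compute the peak and a percentile requests-per-second figure to target in a load test.

```python
from collections import Counter
from math import ceil

def load_model(request_timestamps, percentile=95):
    """Derive a simple load model from production telemetry.

    `request_timestamps` are epoch seconds of observed requests (the kind of
    data request tracing gives you). Returns the peak and a percentile
    requests-per-second figure to aim for in a load test.
    """
    per_second = Counter(int(ts) for ts in request_timestamps)
    rates = sorted(per_second.values())
    index = min(len(rates) - 1, ceil(len(rates) * percentile / 100) - 1)
    return {"peak_rps": rates[-1], f"p{percentile}_rps": rates[index]}

# Toy trace: three requests land in second 0 and one in second 1.
model = load_model([0.1, 0.2, 0.9, 1.5])
assert model["peak_rps"] == 3
```

A real load model would also capture request mix and ramp profiles, but the principle is the same: the numbers come from what production actually saw.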

Monoliths vs. Microservices

00:08:31
Speaker
Now back to the podcast. You talked about the journey that ASOS went on from
00:08:38
Speaker
monolith to breaking down into microservices. So what do you feel like the differences are between your test approach with a monolith and with a microservice? I think it offers a really good opportunity for a tester, or a developer who's interested in testing, to get much closer to the implementation and stop looking at the entire system as a kind of black box, and actually get down deep and dirty in
00:09:08
Speaker
the inside of the program and actually understand how best to test, to adapt your test strategy, to test in the most appropriate way. Rather than looking outside in, you can actually start looking at how it's been implemented and looking at more intelligent, faster ways to test, looking at things like component testing and integration testing. And obviously with that, not just looking at the functional side of things but also, we talked about performance, not just testing the performance of the entire system,
00:09:38
Speaker
but we can test the performance of a single service. And that makes it much faster to find where those bottlenecks are in the system, and much easier to diagnose those issues as well. So I think that's the main difference: rather than always looking from the outside in at the application in more of a traditional testing mindset, you can actually get down in the detail and devise a test approach which actually suits the application.
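The component-testing idea Neil describes, exercising one service in isolation rather than driving the whole system, might look like this minimal Python sketch. Everything here (the stub catalogue, the pricing function, the SKU) is hypothetical, invented purely to illustrate the shape of such a test:

```python
# Component test sketch: exercise one microservice's logic in isolation by
# stubbing its downstream dependency, instead of testing end to end.

class StubCatalogue:
    """Stands in for a real catalogue service that would be called over HTTP."""

    def __init__(self, prices):
        self.prices = prices

    def get_price(self, sku):
        return self.prices[sku]

def price_with_discount(sku, catalogue, discount=0.1):
    """Unit under test: depends on the catalogue only through its interface."""
    base = catalogue.get_price(sku)
    return round(base * (1 - discount), 2)

# No network, no other services: fast, deterministic, easy to diagnose.
stub = StubCatalogue({"sku-1": 20.00})
assert price_with_discount("sku-1", stub) == 18.00
```

Because the dependency is injected, a failure points straight at this service's logic, which is exactly the faster diagnosis Neil mentions.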
00:10:05
Speaker
Yeah, definitely. So at ClearBank, you don't necessarily have the traditional kind of tester, do you? Yeah, that's right. So can you elaborate on that a bit? Yeah, so I guess our role is more in the model of a kind of coach or mentor to the team, and less as an actual individual contributor, if you like. So although I do still kind of
00:10:33
Speaker
write code on my own individually, it is much more about how do I coach the team into improving its testing practices and act as a consultant, so when they have issues or when they have questions they can come to me, or bring me into their meetings, into their refinements or their planning sessions, to consult me on the best way to go about testing something.

Educating Teams on Testing Principles

00:11:00
Speaker
I think Alan Page was the one who came up with the modern testing principles. And I guess I've used that extensively to kind of educate other people around what my role is, because that's kind of how the role was pitched to me when I was hired.
00:11:17
Speaker
I find that's quite a useful tool to actually say: you may have misconceptions about how you've worked with testers in the past, or the kind of skills or the mindset that they came with, but actually what I'm being asked to do in this role is much more advocating for good practices around testing, and facilitating and partnering with engineering teams, not necessarily just offering a testing service or being another member of the team working on tickets on the board.
00:11:45
Speaker
Yeah, so that sounds really interesting. So how do developers kind of react to that? How do developers find not having that kind of safety net, you know, of having a tester to fall back on? Yeah, that's a good question. And it's one of those things, you know, every team, every individual is different. I'd say the lion's share of the engineers I work with have got quite a kind of
00:12:10
Speaker
well-ingrained testing mindset. Obviously they're all doing TDD, or at least doing some kind of unit testing, so they are having to think about the testability of their application. They are having to think about testing. So going from writing unit tests to then thinking about kind of higher-level tests, it's not too much of a leap. I think the biggest challenge I always find is,
00:12:39
Speaker
if me or you, Lewis, are sat in a refinement meeting and someone's kind of saying, okay, we need this new feature and describing it to us. I think kind of naturally being kind of people from a testing background, we're automatically thinking about, okay, questioning those requirements, challenging assumptions, thinking about good questions we can ask.
00:13:00
Speaker
understanding how that feature or that requirement might integrate with other features or other product offerings, how it might be used by the customers, all those kinds of things that are, you know, kind of bread and butter to people who come from a testing mindset. Whereas developers, not everybody, but a lot of developers will naturally jump to, okay, how am I going to implement this, and be less inclined to really question the requirements. So I find that the biggest challenge is often
00:13:28
Speaker
having to actually slow the team down in a planning or refinement session where they naturally want to go straight from, here's a requirement to, okay, how we're going to build it. We need this technology here. We need that data store over there. Whereas actually I find I'm having to slow them down a little bit and kind of say,
00:13:47
Speaker
let's just pause that solutionising discussion for a little bit. Let's really challenge, let's really understand these requirements before we go and make a decision, and that will help us make better technology decisions and better design decisions. So that can often cause some tension. But I think with practice you learn when to push it and when, sometimes, you know, they've heard enough from me today, maybe I need to take a step back
00:14:16
Speaker
and let them run with it. So I think it's being pragmatic and giving the team as much coaching as they want to take, but not pushing it too far that you disillusion them.

Importance of API Contract Testing

00:14:28
Speaker
So the podcast is about contract testing, so I should probably get on to that subject at some point. You were kind of the first person that really encouraged me to get involved with contract testing. I knew what it was, but I had never implemented it before, and then at ASOS I had the opportunity to do that. But I know you're kind of experimenting with what your options are in terms of exploring contracts
00:14:56
Speaker
and building them into your testing. So what are you going through at the moment? Yeah, so I think we're probably not anywhere near as mature as we'd like to be in terms of contract testing, but certainly the primary challenge we have at the moment is actually defining our domain boundaries, because until we're clear on where those boundaries are, it's hard to actually think about where the contracts are and where the testing is required. At the moment,
00:15:24
Speaker
although we do have a distributed system, it is kind of still treated as a single product, and those domain boundaries aren't clear. So me going in and trying to insert contracts and contract tests in the middle of what I think is two separate domains might actually stand in the way, because that might not actually be a clear boundary that we want to enforce. Actually, that might be a bit more of a fluid boundary that might be
00:15:53
Speaker
refactored or changed over time. So that's one thing I've had to take a step back on. But I think in terms of where I see the benefits for ClearBank at the moment with contract testing, it's actually on our public interfaces. ClearBank, our main product, is a public-facing web API. It allows other banks, other FinTechs, other financial service providers
00:16:18
Speaker
to integrate directly with our public APIs, and we offer a bunch of different payment services. So those external APIs have contracts, they have well-defined documented contracts, and obviously we do test them, but we don't have a specific strategy around contract testing, which means that if we're not on our game, we could potentially make a breaking change to that interface.
00:16:45
Speaker
And that would obviously have customer impact. So that's one area where I would like ClearBank to invest a bit more in contract testing, and see how we can refine that strategy and make sure we're not letting our standards slip in terms of our public interfaces.
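The check Neil wants for public interfaces, catching a breaking change to a documented contract before it ships, can be sketched roughly like this. Plain Python with a hypothetical payments contract (the field names are invented); a real setup would verify against the actual OpenAPI definitions, with tooling such as Pact/Pactflow:

```python
def verify_contract(response, contract):
    """Check a provider response against a published contract.

    `contract` maps each required field name to its expected Python type.
    Extra fields are tolerated (additive changes are non-breaking);
    missing fields or changed types are reported as breaking.
    """
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    return problems

# Hypothetical contract for a payments endpoint, for illustration only.
payments_contract = {"paymentId": str, "amount": int, "status": str}

ok = verify_contract(
    {"paymentId": "p-1", "amount": 100, "status": "settled", "extra": True},
    payments_contract,
)
broken = verify_contract({"paymentId": "p-1", "amount": "100"}, payments_contract)
```

Run in CI against every candidate release, a check like this is what stops an accidental breaking change from reaching the external consumers Neil describes.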
00:17:06
Speaker
Yeah, absolutely. I think that's a really good use case. People often think about contract testing from an internal perspective, like what you own and what you have control over with your web app or your API service, but actually defining those contracts and having those tests to say, okay, this is what's going out to the public, and making sure that you conform with those, is a really good use case. I think what Pactflow are about to introduce with bi-directional contracts
00:17:35
Speaker
will obviously allow you as the provider to kind of put up those contracts and then use them in that way. Yeah, I guess that is one challenge, because we don't necessarily have great engagement with all of our users. Yeah. A lot of them we do, but not all of them, particularly some of our smaller customers, so we won't always be able to take a kind of consumer-driven approach. No. But yeah, so the onus will be on us to kind of define the tests
00:18:06
Speaker
we need for each of our contracts. But yeah, like you said, the changes that Pactflow are introducing might make that a little bit easier for us. Yeah. And I think also the thing with the consumer, right, is that you can put that up as your documentation, right? This is how we provide you with the information, this is what the responses look like. And then that's living documentation at the same time, and that may come in useful down the line.
00:18:35
Speaker
Yeah, one thing I often get asked is: why do we need to define this thing twice? Why do we need to define this in Swagger or OpenAPI and in your funny Pact language? How would you go about responding to that, Lewis?
00:18:52
Speaker
Yes, good question. So they serve different purposes and, as you mentioned, the consumer-driven part, right? The Swagger docs are usually generated once you've built the API, so why would you go back and then create the contracts after that? But it comes down to static versus dynamic, I think, is how I would describe it: your API docs are your static form of documentation;
00:19:20
Speaker
they're not going to break if you make a breaking change to that contract. They're just going to generate you either a new document or flag that some attributes have changed; they're not actually going to break your release process. I think that's where contract tests come in. Also, breaking changes can be very subtle. You might be changing the internals of how you respond to something, or changing something from a string to an array, or something like that.
00:19:47
Speaker
And that's where your kind of OpenAPI documents don't really come in, because they're not checking for that information, they're just presenting what information they're given. So yeah, I think those kinds of nuances are how I would pitch it. Yeah, I think it kind of boils down to the independence of the tests.
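The string-to-array subtlety Lewis describes really comes down to where the baseline lives. A toy Python sketch (the field name is hypothetical): regenerated docs describe whatever the code returns today, so they never fail, while a contract pinned independently of the code does flag the change.

```python
def schema_of(response):
    """Auto-generated docs: they describe whatever the code returns today."""
    return {key: type(value).__name__ for key, value in response.items()}

# The contract is agreed with consumers and stored independently of the code.
pinned_contract = {"tags": "str"}

v1 = {"tags": "new"}        # original response shape
v2 = {"tags": ["new"]}      # subtle breaking change: string -> array

regenerated_docs = schema_of(v2)             # docs regenerate without complaint
assert regenerated_docs == {"tags": "list"}
assert schema_of(v1) == pinned_contract      # v1 honoured the contract
assert schema_of(v2) != pinned_contract      # the pinned contract flags the break
```

Validating against `regenerated_docs` would always pass, because the baseline moved with the code; validating against `pinned_contract` is what catches the break, which is exactly the independence point that follows.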
00:20:06
Speaker
Whereas, like you said, the Swagger documentation might be auto-generated, most likely is auto-generated. So actually, if the code changes, then the documentation changes with it. And if you're relying on that to validate, you know, schema validation, then you're potentially going to miss a breaking change because, like you said, your baseline, if you like, is also changing. So yeah, I think that independence is key. And it's actually something I've been caught by in the past, where
00:20:35
Speaker
you've got a kind of integration test calling into an endpoint, but that integration test is part of the application code. It's in the same solution, in the same repository, which is obviously where it should be, but in the same light, it's not very independent, because with tools like IntelliJ and ReSharper and Rider and things like that, it's very easy to do a rename across the entire repo, and suddenly you've missed that you've actually changed your test and your application. You've introduced a breaking change,
00:21:05
Speaker
and the tests are all still passing. So I think having independence of tests, particularly around public interfaces that can't change is really important here. Really good point, Neil. Really appreciate your time. Thanks for coming on the podcast. Well, thanks for having me on today, Lewis. I really enjoyed that. Hopefully we can do it again soon. I hope you enjoyed our conversation about testing in microservices.
00:21:32
Speaker
Thanks to Neil for being my first guest, and for it being Neil's first time appearing on a podcast. Don't forget to like and follow, ready for the upcoming episodes where we'll dive in deeper on how to get started with contract testing. Also check out my blog and online courses at pactman.co.uk. We've got some really exciting guests coming up in the next few episodes, so stay tuned, and thanks for listening.