Why Tolerate End-to-End Failures?
00:00:00
Speaker
The question that needs to be answered by a number of companies right now is: why is it okay for 50 end-to-end tests to fail every night, but it's not okay for one unit test to fail in a run? Why are we setting the bar so low for end-to-end tests and saying it's totally fine if these 25 tests fail? That's okay, we still have confidence in our product. That tells me, no, you do not. If more companies put automation tests into their CI/CD pipeline, you wouldn't see this sort of behavior any longer, because it acts as a hard quality gate against bad code and would provide a higher level of urgency to fix these tests.
Introductions: Aaron, James, and Jonathan
00:00:44
Speaker
Welcome to The Forward Slash, where we lean into the future of IT. I'm Aaron Chesney, with my beautiful co-host, James Carmen. Today we're going to be talking about testing automation, and we have a special guest with us, Jonathan Thompson. Jonathan, why don't you tell us a little bit about yourself?
Jonathan's Diverse Career Path
00:01:03
Speaker
Yeah, so I got my start in software about seven years ago. To really go back, my engineering career started in 2009. I enlisted in the United States Air Force and became an avionics engineer, working on the guidance and navigation controls for C-130H2 aircraft.
00:01:26
Speaker
I got deployed and found these pamphlets on Middle Eastern culture and just fell in love with the idea of learning about anthropology. So I went back to college and started a degree in anthropology, ended up pivoting to history, and became a wing historian. A wing historian and an archaeologist, if you'd believe it. I worked as an archaeologist for three years, and it doesn't pay well. So I quickly realized that
00:02:01
Speaker
the benefits of being a historian specializing in Chinese history in the United States probably don't translate too well into a positive career. So I ended up pivoting to quality assurance management at a medical device company, where I worked for three years.
Stress in QA Management and Coding Bootcamp
00:02:19
Speaker
My wife came home one day and noticed that I was particularly down, because my job was stressful, the hours were long, and I had a difficult CEO at the time. And she said, you know,
00:02:34
Speaker
I've been hearing about this coding bootcamp at work. She works at Northwestern University. She said, they just pushed through their first cohort. Would you mind taking a look at it? Tell me if you think you'd appreciate it, because you love working with computers. So I said, sure, I'll take a look at it. It would require me to go back to school nights and weekends, which, working full time, was difficult, but we made it work. And so I decided, all right, let's do this, and I did this coding bootcamp for six months.
First Steps in Automation Testing
00:03:12
Speaker
I graduated in October of 2016, and I got my first software gig in December of 2016 at Vidori Inc. in Chicago. They're a 50-person marketing firm, and I was working as a software quality assurance specialist. How I got into automation is that I used Selenium WebDriver to crawl websites for a module during this coding bootcamp. I thought, wow, this is really neat. I get to tell the computer what to do, and it just goes ahead and does it. That's outrageous to me. So I wanted to learn more about Selenium and found that it was most popularly used in automation testing.
Exploring Testing Frameworks
00:03:57
Speaker
And I had never heard of automation testing before, so I decided to start looking into it and testing out different types of frameworks. One of my favorites was tape with JavaScript. So I was using the tape testing framework with Selenium WebDriver 3 to test the specific modules that I created in this bootcamp. And as my capstone project, I created accession software for museums and made sure to include an automation suite that ran through how to accession an item, how to archive it, and all the other fun stuff. So Vidori gave me the opportunity to become a quality assurance specialist who worked in automation. And they introduced me to Ruby and Watir. Watir is an automation framework using Ruby, with Selenium under the hood. So we used that for a few years, and then Cypress got released.
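For flavor, a minimal sketch of what a tape-plus-Selenium check like the ones described here might look like; the target page and assertion are illustrative, not the actual bootcamp code:

```typescript
// A hedged sketch: a tape test driving Selenium WebDriver's JavaScript bindings.
// npm install tape selenium-webdriver
import test from 'tape';
import { Builder, By, until } from 'selenium-webdriver';

test('crawled page has the expected heading', async (t) => {
  // Browser choice is an assumption; Builder picks up the local driver binary.
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com'); // illustrative target URL
    await driver.wait(until.elementLocated(By.css('h1')), 5000);
    const heading = await driver.findElement(By.css('h1')).getText();
    t.equal(heading, 'Example Domain', 'h1 text matches');
  } finally {
    await driver.quit(); // always release the browser session
    t.end();
  }
});
```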
00:04:50
Speaker
And I noticed very early on that Cypress sounded outrageous, especially in 2017. This was brand new, revolutionary. So I decided to latch onto it quite a bit and learn the ins and outs of the framework. I've pretty much planted my flag in understanding Cypress and how it works, and I've brought it to numerous companies over the last few years. So I left Vidori after two years and started working at TransLoc.
00:05:23
Speaker
And that was where I built my first automation framework. It was built on Selenium 3, with parts of Watir, Splinter, and Cypress included, and it was ready to be open-sourced before we decided to go back to Cypress. Since then, I've just been working with Cypress and Playwright in my current roles. And right now, I'm a test automation engineer at Callibrity.
Comparing Bootcamps: Coding vs Air Force
00:05:52
Speaker
So, since you've been to boot camp twice, which one was harder: the coding boot camp or the Air Force boot camp? It's the Air Force, you know the answer to that. It's obviously the coding one, right? Coding was harder. Yeah, I've got to tell you, the air conditioning was better in the coding boot camp, and they definitely have more Diet Coke. So that was a big thing.
00:06:27
Speaker
Yeah, Air Force boot camp was interesting. I really enjoyed it. It was different for me because I entered the military at the age of 24, so I was around a bunch of 18-year-olds who didn't really understand what life was bringing to them. It was pretty funny to me: I had to teach a few of them how to do laundry, because they had absolutely no idea how to work a laundry machine, which to me was mind-blowing. I've been doing this since I was fourteen years old, but some people just don't know. Yeah, that's definitely a failed test.
00:07:11
Speaker
Aaron, I just find it uncanny, don't you, that the people we talk to, the background, it's almost cookie-cutter. You know: boy joins Air Force. Boy becomes avionics engineer. Boy becomes archaeologist. Boy becomes historian. Boy becomes QA manager. You know what I mean? It just follows the exact same arc. Same story every time. It is. It really is. If I had a nickel, you know what I mean? It's one of the original plot lines, isn't it?
00:07:45
Speaker
I'm sure that you guys encounter many archaeologists in our career field, for sure. So when you were doing archaeology, did you carry a bullwhip with you, just in case? Yeah, it's a funny story and a bit embarrassing. I met my wife dressed as Indiana Jones.
00:08:08
Speaker
And she was dressed as Indiana Jones, so it was love at first sight, is that what it was? That's fantastic. Kind of a love of self. Yeah.
00:08:23
Speaker
So were you an archaeologist after you got out of the Air Force, or while you were in the Air Force?
Historical Work and Archaeology in the Air Force
00:08:30
Speaker
No, I was an archaeologist during my time in the Air Force. I spent five years as an avionics engineer and then three years as, if you'd believe it, the lowest-ranking and youngest wing historian, as far as I know, in Air Force history. I was a senior airman, which is an E-4, and they usually don't give wing historian jobs to individuals who are not staff sergeants or above.
00:08:55
Speaker
So it was a special little exception for me, which I thought was pretty outrageous. And during this time, I was working as an archaeologist. My specific title was site supervising archaeologist on the Casa Conf historical site in Parma, New York. It was a fun little ride. We were digging up 19th-century antiquities, pretty much an early American settlement from the 19th century, but we got to learn quite a bit about the Casa Conf family and what they consumed. We knew that they had decent jewelry, because we dug up a gold ring emblazoned with JFC, which was
00:09:34
Speaker
outrageous. That was one of our first finds in our first year. And since then, we had dug up oyster shells. I'm not sure if you're familiar with upstate New York, but on the Erie Canal, they would bring seafood from the coast down to the Rochester and Buffalo areas. And these farmers must have been well-to-do, because they were purchasing oysters, so they had some kind of monetary means. That's crazy. I didn't even know that you could be an archaeologist in the Air Force. That blows my mind. You know, a lot of people don't know that the Air Force enlists historians, that the Army enlists archaeologists, and that the Navy has underwater archaeologists. So it's kind of a niche field, but it's a special one, and it's critical to the Air Force infrastructure.
00:10:29
Speaker
Learn something new every day.
00:10:35
Speaker
So I don't know how we're going to tie this into testing. What was the last software artifact that you dug up?
Transition to Playwright for API Testing
00:10:50
Speaker
You know, I've been working with my client for a few months now, and we use Postman for API tests, but you speak of an artifact, and this is one where it's almost a deprecated framework. This client has been pushing for Postman to be removed due to regulatory needs, and I've been pushing for us to use Playwright as an automation solution. If you didn't know, Playwright is not just an end-to-end test runner; it can also be used as an API test runner. So when I look at my days as an archaeologist and my days as an automation engineer, a lot of the time I'm looking at what I would consider to be, or what we could consider to be,
00:11:36
Speaker
deprecated frameworks, right? Older frameworks that are potentially brittle. You dig these up, you understand the inner workings of them and why they were purposeful or rational at the time, and then you take the best parts of them and implement those into your next solution. Case in point: I worked at TransLoc for a bit, and they had a Ruby framework that ran its tests in two hours. Now, if we're going to be talking about running these tests in CI/CD and making sure that you have quality gates, you can't have tests running for two hours, right? No one's going to sit there. A developer is not going to hit the button and then walk away for two hours and wait for the tests to complete.
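As a rough aside on that Playwright-as-API-test-runner point, a minimal sketch using Playwright's built-in request fixture; the endpoint and response shape here are hypothetical, not the client's real API:

```typescript
import { test, expect } from '@playwright/test';

test('health endpoint is up', async ({ request }) => {
  // The `request` fixture issues HTTP calls with no browser involved.
  const response = await request.get('https://api.example.com/health'); // hypothetical URL
  expect(response.ok()).toBeTruthy();

  const body = await response.json();
  expect(body).toHaveProperty('status', 'up'); // assumed field name and value
});
```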
00:12:22
Speaker
So I saw what was going on. The tests weren't being parallelized, and I decided, okay, we can redo this, but with parallelization.
Optimizing Test Runtimes
00:12:32
Speaker
And what I mean by parallelization is really exploiting the total number of cores that your computer or processor has and ensuring that one test is divvied to each core so that they run concurrently. It cuts down on automation time. So we took this two-hour pipeline and cut it down to 10 minutes. And that's a much easier pill to swallow when you're working in high-stress or high-tempo environments, right? You can just push that commit, come back in 10 minutes, and there you go, you've got your tests. Without doing that extra work, what you end up doing is delaying your run of tests until it's convenient, which gives you a larger change set, which raises the possibility of breaking more often.
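The suite in that story was Ruby, but as a minimal sketch of the same one-test-per-core idea in Playwright terms (the worker policy shown is an illustrative choice, not the configuration described on the show):

```typescript
// playwright.config.ts: a hedged sketch of spreading tests across CPU cores.
import { defineConfig } from '@playwright/test';
import os from 'os';

export default defineConfig({
  // Run test files, and the tests inside them, in parallel.
  fullyParallel: true,
  // One worker per core, per the idea described above; Playwright's own
  // default is more conservative, so this is an explicit, illustrative choice.
  workers: os.cpus().length,
});
```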
00:13:21
Speaker
So with the two-hour run, you say, well, we can't do that during working hours, so we're going to run it on a nightly basis instead. And then it's something that has to be checked on the side every morning. It gets overlooked sometimes. Oh, it always runs red. Well, why is it running red? You miss things, and code rot starts to fester because of it. So doing things like making sure the tests run in a short timeframe actually does help with maintaining the right order: every time you build, you test. It goes build, test, deploy, then test again after you've deployed.
00:14:06
Speaker
And I'm glad that you bring up code rot, because you see it so frequently. Granted, I've only been doing this for seven years, but I feel like most clients that I've been a part of have had some level of code rot, where 25% or 50% of the tests are in the red, and there's this constant narrative of: our tests are failing.
Immediate Feedback through Automation Testing
00:14:33
Speaker
We can't trust them. What can we do? Now, I learned from Angie Jones very early on that you should never trust a test that you haven't seen fail, right? So it's good that we've seen it fail. This is positive. The issue is what I consider to be the crux of automation testing: the purpose of automation is providing
00:15:00
Speaker
highly configurable feedback to your developers to showcase that, okay, you worked on this feature, and it is not broken, right? So when you have tests that are failing all the time, what we would consider a brittle test or a flaky test, it doesn't provide that feedback. It doesn't provide that trust to the developer. And that's something that you need to rectify very quickly.
00:15:26
Speaker
Producer, what's going on? Oh, I meant to type these up, and then I was going to have you guys say that. Oh, I thought you wanted to drop in. Okay, so do you want to type them and we'll ask them as we go? Or, now that you've interrupted everything, mister... Yeah, I mean, I might as well, right? I was going to ask what your take is, Jonathan, on the trend lately of teams not having QA resources or QA
Developers Taking Over Testing Responsibilities
00:15:52
Speaker
automation resources, and developers can do all the testing. Yeah, it'd be interesting to get your perspective on that. Yeah. So during my career, I worked as a software engineer for two years, specifically as a back-end engineer. I got to work with a lot of quality tools. I implemented linting using golangci-lint and created some CI/CD gates to ensure quality as a software engineer. But one of the things that I was tasked with
00:16:24
Speaker
doing was an initiative called Devs Own Test, where quality was pulled away from the team and the developers themselves would own testing. Now, I think that it was both successful and unsuccessful, because your developers, while they are brilliant individuals, may not have the correct mindset, if you will, for testing. Having been on both sides of the fence, you go from a builder to a breaker and a breaker to a builder, and it takes a different level of context for both of these jobs.
00:17:07
Speaker
It was difficult at the start to really get buy-in from my fellow developers on the team, right? Because I was tasked with this initiative. My manager at the time really pushed for it, really wanted it, because our QA was trying to make their way into software engineering. So the writing was on the wall: we're going to lose QA, so developers need to test. I think that it was unsuccessful because we, as developers, never wrote automation. We wrote unit and integration tests, the bottom layers of the automation pyramid, but we never got to the point where we were writing end-to-end tests, right? So I think that was a specific failure. Where it was successful
00:18:00
Speaker
was the fact that it helped reduce the typical animosity, if you will, between quality assurance and software engineering. In shops where development works on a feature and then pitches it over to QA, there tends to be a little bit of tension between the two parties. Or a lot. Or a lot, yes. I'm trying to sugarcoat it right now. But yes, there can be tension between the two parties, and it can turn into a relationship that has some level of animosity. So I do feel that the initiative was successful in toning that down a bit, because we saw developers take ownership of the testing pipeline and realize what it is to test a product rather than just build it.
00:18:56
Speaker
Now, I'm biased because I am quality assurance, but I do not think that shops out there should go without a QA engineer attached.
Integrating QA Roles with Developers
00:19:14
Speaker
So there's a couple of things I want to dissect there. One, just for the listeners that may not be aware: the testing pyramid is a few different layers, where the base of your pyramid is your unit tests, the next layer up is more your integration-type tests, and then the tip of your pyramid is your full end-to-end, black-box testing of your system, but all automated. So it's an automated testing pyramid. I just wanted to explain that a little bit, because you mentioned the testing pyramid in that last bit.
00:19:56
Speaker
And then the other thing is that not only do I feel like there should be more of a blurring of the lines between QA and development, I usually insist on it. Because if you're working on a small team that doesn't have that QA resource, that responsibility falls to you as a developer. And if you are the QA person responsible for automation on a team, you need to be managing your tests like developers manage their code. So I do think that there's this crossover and blend. I had this conversation with a QA engineer that was on my team once: you need to be as good a coder as my developers. And the reason is that if you write unmaintainable, unreadable, and sloppy tests, they're no better than if I was writing code the same way. And it has to be maintainable, because when a test breaks
00:20:57
Speaker
and you're not here to fix it, somebody else has to go in and look at your test, understand what's going on, and be able to fix it. Because tests break, just like code breaks; you just have a lot more visibility into it, especially when you're automating. And with automation, we talked about parallelizing tests in order to make them run faster. In order to do that successfully, you have to have what I like to call a net-zero test, which means your entry point for that test has to come all the way back around to where it started from. So your start and finish line are at the same point on the track.
00:21:39
Speaker
I call those hermetically sealed tests, and what I mean by hermetically sealed is that a test's outcome or artifacts cannot affect other tests in the run, right? This is accomplished by proper setup, teardown, and artifact creation. I personally like to use libraries like Faker, Factory Boy, and other such means to ensure that my test data does not create what we call collisions or contamination. When I create a test artifact, say I'm working in a transit application and I create a rider, I need to make sure that this rider is 100% unique and never going to contaminate another test. So how do I do that?
00:22:21
Speaker
I set it up prior to the test. I make sure that it's unique with either a Faker value or a Zulu timestamp string, something along those lines. Then afterward, during teardown, I delete it or archive it so that it's gone and cannot affect any other tests. Yeah, and the other part of that, too, is you also need to make sure that your tests do not rely on each other or on the order in which they run. They have to be able to run independently of the entire test suite. Yes.
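A minimal sketch of that setup/teardown pattern in Playwright with @faker-js/faker; the rider endpoints and fields are hypothetical stand-ins for a transit API, and a configured baseURL is assumed:

```typescript
import { test, expect } from '@playwright/test';
import { faker } from '@faker-js/faker';

let riderId: string;

test.beforeEach(async ({ request }) => {
  // Setup: create a unique artifact. A Faker name plus a Zulu (ISO 8601 UTC)
  // timestamp keeps it from colliding with artifacts made by parallel tests.
  const created = await request.post('/riders', {
    data: {
      name: `${faker.person.firstName()}-${new Date().toISOString()}`,
      email: faker.internet.email(),
    },
  });
  riderId = (await created.json()).id; // assumed response field
});

test('a rider can be archived', async ({ request }) => {
  const response = await request.post(`/riders/${riderId}/archive`); // hypothetical route
  expect(response.ok()).toBeTruthy();
});

test.afterEach(async ({ request }) => {
  // Teardown: remove the artifact so it cannot contaminate another test.
  await request.delete(`/riders/${riderId}`);
});
```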
Is End-to-End Testing Declining?
00:22:54
Speaker
And that actually brings up... I have a conflicting opinion here with a lot of other testers out there, but I personally believe, and again, this is my own personal opinion, that the days of true end-to-end testing are over.
00:23:10
Speaker
I think the days of true end-to-end testing are over. And what I mean by end-to-end testing is running from point A to point C and testing point B as the middleman, right? It doesn't happen as frequently any longer. And you see more frameworks, Selenium 4, Playwright, TestCafe, Cypress, that all have means of mocking, stubbing, or spoofing data and spoofing endpoints to ensure that you have what I would consider a hermetically sealed test. So the idea of an end-to-end test where you are testing against prod endpoints, making sure that the entire pipeline works constructively and consecutively, it doesn't happen as frequently anymore. At least in my experience, it doesn't. Instead, what we're doing is more feature testing with constrained mocks and spoofs. And I think part of the reason you see that is because the data layer
00:24:10
Speaker
integrations of modern software have basically become so boilerplate that you don't have developers in there writing their own SQL as much anymore. There's this binding to the software where it's almost taken for granted that, yeah, my data layer works, I know that. Right. And that's kind of the way it ends up running. I actually really liked this question on the ideal split in testing responsibilities across the testing pyramid: should quality assurance engineers only be responsible for the top part? I say no. I think that in an ideal world... and I consider myself less a QA engineer and more an SDET now, a software development engineer in test.
00:25:08
Speaker
When I consider an SDET and what they bring to the team, I've had positions where I've asked my developers: show me your unit tests, show me your integration tests, put me on the PR so that I can see what you're doing and verify that we are testing the right things. Now, there are some out there that may consider that overbearing, but again, I've been on both sides of the coin, right? I've written unit and integration tests. I've written end-to-end tests. If I want to make sure that we are writing high-fidelity tests and high-fidelity code, I think it just makes sense that quality is involved in the unit and integration tests. There have been plenty of times, as a developer, because I had the testing eye, if you will, that I would look at a PR and say, well, wait a second. What about this?
00:25:59
Speaker
What about this edge case? Is that something that we can program into a unit test? Is that something that we can do in an integration test? We catch it, put it in, boop, there we go. And I feel like that occurs because you have a specialist that is trained to test. Yeah. And that goes back to that cross-training, right? When you have a QA engineer on your team that's involved at that level, you've got your developers getting input from their QA engineer, and the QA engineer is in their code, seeing how they do stuff, and will naturally glean: oh, I see how they structure this, this is kind of cool. And they're able to pick up things. Those conversations happen
00:26:56
Speaker
through that PR process of getting feedback, and that loop kind of closes to where your developers understand more of the QA mindset, so they make fewer and fewer of those errors. And QA understands the developer's mental process of structuring and mocking and things like that, to improve their own test creation and structuring of a test suite. I think it also bridges the divide between quality and development, because, and there's no better way for me to phrase this without it coming off as pejorative, there is a stigma that quality assurance engineers can come off as helpless at times,
00:27:46
Speaker
right? Developers need to do things for them. We need to set deployments. We need to refactor code. We need to implement data IDs for end-to-end tests, because the QA engineer can't do it, or won't do it, or doesn't know how to do it, right? So working with the developers to truly understand the code, maybe doing some white-box testing instead of black-box all the time, has in my experience allowed me to get a better understanding of the development pipeline and a better understanding of the feature and product. So instead of bothering the developer, you know, hey, we need data IDs on this feature, can you go ahead and put them in? No.
00:28:24
Speaker
I'll just go ahead and do it, right? I know where this feature lives. I know where this component is. I'm going to put this in programmatically, deploy it, make sure that it works. There we go, I'm done. Now, to me, that is an expression of QA agency, developer agency, right? I have the ability to do this. It's just that we don't have a lot of shops right now that are really trying to engage in that bridge between QA and development. They're trying to keep them statically separated. And a lot of times they'll do that through the build process, to come back around on that: in their CI/CD pipeline, they won't include running the automation suites. They'll say, oh, that's separate, so we run that separately. And I think that's a big problem, because, for one, you want your QA accountable for the stuff that they're creating, to make sure that it's always running in green.
00:29:25
Speaker
Just like you would the developer's code. And you want to keep the developers on their toes, because if they fail in automation, they need to go back and find out why, and correct that before they merge into their main branch and have it filter its way to production and become a very costly thing that has to be fixed down the line. Right. The question that needs to be answered by a number of companies right now is: why is it okay for 50 end-to-end tests to fail
00:29:57
Speaker
every night, but it's not okay for one unit test to fail in a run? Why is that acceptable? Why are we setting the bar so low for end-to-end tests and saying, it's totally fine if these 25 tests fail? That's okay, we still have confidence in our product. That tells me, no, you do not. Your tests are failing, right? We need to fix that. And sure, brittle tests, flaky tests, something or other, there's an excuse for why it's occurring. Well, I can tell you right now, you can shore it up with mocks, you can shore it up with spoofs, you can shore it up with stubs, or just better tests, right? If you're using what I call dumb waits, waiting on specific times, 10 seconds...
00:30:45
Speaker
Wait on an endpoint instead. Wait on a route to fire. Intercept a route. You can do it with Selenium 4 now; you've been able to do it for three years. That, to me, is the biggest burning question in quality: why is it acceptable that these end-to-end tests can fail routinely? And I've got to be honest with you, I think that if more companies put end-to-end tests into their CI/CD pipeline, you wouldn't see this sort of behavior any longer, because it acts as a hard quality gate against what is considered bad code, right? And bad code is failing tests.
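In Playwright terms, the difference looks something like this minimal sketch; the selectors and the endpoint are illustrative assumptions:

```typescript
import { test, expect } from '@playwright/test';

test('results render once the search route fires', async ({ page }) => {
  await page.goto('/search'); // assumes a configured baseURL

  // Dumb wait (avoid): hope ten seconds is always enough.
  // await page.waitForTimeout(10_000);

  // Better: wait on the route itself, then assert.
  const searchResponse = page.waitForResponse(
    (res) => res.url().includes('/api/search') && res.ok() // hypothetical endpoint
  );
  await page.getByRole('button', { name: 'Search' }).click();
  await searchResponse;

  await expect(page.getByTestId('results')).toBeVisible(); // hypothetical test id
});
```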
00:31:23
Speaker
So if we saw more automation in CI/CD, it would prevent more bad pushes from getting through, more bad commits to main, right? And it would provide a higher level of urgency to fix these tests, because I've worked in environments where we've had a test failing for a month, and that's entirely too long. I've been in shops, too, where it's: oh, this test always fails, so we're just going to ignore it
00:31:54
Speaker
so that we can get out the door. One of my favorite experiences, and this was two years into my career, I was working with a senior engineer, and I asked him, how are we going to fix this test? And he comments it out. I said, okay. That's the technology equivalent of when the check engine light comes on, you get a piece of electrical tape and you put it over top of it. Yeah, no problem. Not a problem anymore. So one of the things that we struggle with in software engineering in general is: how much is enough when it comes to testing? How do you come up with that determination of what level is enough?
00:32:39
Speaker
Right, right. And it's difficult with end-to-end testing, because we don't have this idea of code coverage, this nebulous idea of code coverage that you see with unit and integration tests. You can run a tool with your unit tests and make sure that, okay, this function has been tested to whatever your threshold is. At one of my clients, it was 80%: 80% of this function has been tested properly, there we go. As far as I'm concerned, you get the happy path through the door, and then start working on very common edge cases. So if we were to look at a user registration form, and we have an email input,
00:33:35
Speaker
and someone tries to put in a string that is not an email, and it rejects, right? Should we have an automation test for that? I think so. Yeah, I think that makes some sense, because that is a real-life scenario where something could go wrong. But are we going to have an automation test for, what if this form returns a 400 Bad Request or something? I think that could be done at the unit or integration layer. So we get our happy path done, we get some edge cases, and I think that's typically going to be good enough. Where there is fuzziness
00:34:14
Speaker
is when you've found a fault or a bug and you want to address that through automation. I personally err toward addressing bugs or faults at the unit and integration layer rather than the end-to-end layer. Yeah, and I totally agree with that. One of the criteria I use: if it's something a stakeholder would care about, so if you're showing this to a product owner or higher and it would be something that you would demo to them because they care about it, then it should probably be in your automation test suite. The other thing I've done for this, for larger applications, is I've used something that I call a user flow map. It's basically taking an application and saying, okay, the user could go
00:35:06
Speaker
this way, and then they have these choices, and then if they go this way, they can make these choices. You get this branching that almost looks like a reverse tournament bracket. And then you can route those paths through the system and say, okay, yes, I have a test for that, I have a test for that. And as you color in the map, you know that you've got all of your happy paths marked out in the system, so you know where your test suite needs to be. And sometimes that can be a very large and daunting task when you have a large application, like a certain math project that James and I had the pleasure of working on, right? There's a lot of different options and a lot of different paths to traverse.
00:36:01
Speaker
Yeah, and typically the automation tools are great at being able to do that. Selenium is a perfect example of it. It can go through, it can click and make those decisions, and you can verify some of the things that are output. Sometimes it struggles with getting certain outputs, because they're behind JavaScript or something like that, or they're using a canvas. You can't verify things on a canvas easily, that type of thing. But for the most part, you can say, okay, well, we did get a canvas, so let's hope they've got the testing in there to cover that. And there may be tools now that would be able to do that kind of thing. So that's what I've used to ensure that we did get a good amount of testing
00:36:57
Speaker
within an application: that kind of mapping. You may have seen some of my user flow maps, James, from that project. Absolutely. So one of the things, the way I think about testing, is that it's risk mitigation, ultimately. You identify, here are some areas where we could have some risk of a fault, some issues, and you've got to use the right test for the right type of risk that you're trying to mitigate. For me, business logic and stuff like that should all be done down in the unit tests, to make sure that we are actually executing the correct logic. I don't want to use end-to-end tests for that. It seems like a pretty heavyweight way to make sure my code is executing the right logical steps. But you talked about end-to-end testing being kind of dead, and to an extent, I agree.
00:37:51
Speaker
We may not agree on how we're killing it, but I definitely agree. I don't like end-to-end tests. I want to minimize the use of them. But the one thing they are good at is making sure that those interconnections between the steps along the way, from one end to the other end, that all of those binary interconnections between different systems are working properly. So how do you address that in the mock situation? When you mock things, you're just making an assumption about, well, what is this interconnection? How does it actually work? What's the contract, so to speak, right? How do you mitigate the risk that I made a crappy assumption when I made my mock?
00:38:30
Speaker
Yeah, so Cypress came out with its component test runner a few years ago, and I feel like that really put into perspective that end-to-end testing is cumbersome, and instead we should be focusing on feature or component tests with our automation. So how do we test this component in situ, if you will, to use archaeological terms? How do we test it in place? What I would argue is that you use mocks and stubs to perform that, and then rely on the integration and unit layers to ensure that everything is hooked up behind the scenes. Now, that's not to say that we shouldn't have one all-encompassing happy-path test.
00:39:24
Speaker
In the example of user registration, I think that it would be worthwhile to have one test that runs through the entire pipeline, and then other tests along the way that would test things like null entry for specific values, like last name and first name, trying to click the Submit button 10 different times, something along those lines, or trying to submit without accepting the terms of service. What we could also potentially do is component tests: if we know that our user registration model is going to send off to an endpoint once submit is kicked off, and we know that it will return, say, an error page if the submission fails, we could mock that API to return a 400 Bad Request or a 401 Unauthorized and ensure that the error
00:40:18
Speaker
condition comes up, instead of having to wait for, say, an outage or something along those lines to be able to test that end-to-end. Does that make sense? Yeah. Maybe enlighten us, because you used the term component tests. What is a component in your mind? To me, a component is similar to, say, a page or a specific development feature, right? The way that we can look at it is page object modeling. You have your page, and you have the objects that live within the page. So in this sense, the component would be the user registration page and whatever is entailed in it: the first name, the last name, the email, the submission button, those sorts of things. How does the component interact with other pages,
00:41:10
Speaker
what sort of endpoints are associated with it, and how can we best test this while also providing a high level of fidelity and value to the company? And you mentioned mocking out API services from the back end. Yes, I agree with that; that's a great way to test the front-end functionality. The problem, I guess, that's not being addressed is: you're mocking out a response. Let's say it's a JSON structure, a status code, and all of those things, right? How do you know that you got the shape of that JSON response right? How do you know that the API actually does return 400, or whatever, in that case?
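A minimal sketch of the component-level stub Jonathan describes, assuming Playwright's route interception; the endpoint, selectors, and error copy are all hypothetical:

```typescript
import { test, expect } from '@playwright/test';

test('registration surfaces the 400 error state', async ({ page }) => {
  // Fulfill the submit call with a canned failure, rather than waiting
  // for a real outage to exercise the error path.
  await page.route('**/api/register', (route) =>
    route.fulfill({
      status: 400,
      contentType: 'application/json',
      body: JSON.stringify({ error: 'Bad Request' }),
    })
  );

  await page.goto('/register'); // assumes a configured baseURL
  await page.getByLabel('Email').fill('rider@example.com');
  await page.getByRole('button', { name: 'Submit' }).click();

  // The UI should show its error condition without any live backend failing.
  await expect(page.getByText('Something went wrong')).toBeVisible();
});
```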
00:41:52
Speaker
Yeah, I feel like that could come down to your integration tests or schema checks, right? I'll be honest with you: I've never worked for a client that didn't have some level of API testing using Postman or Insomnia, just to double-check things, and it's very common that you'll have pre-request and post-request scripts to double-check schema and ensure that things are operating properly at the JSON level. Plus, Postman tests are fairly cheap, right? I mean, let's be honest, you run Newman, it kicks off, it's done in about 10 seconds, and you have high-fidelity, high-value tests provided to your company. I'm not sure if I would say that we could do that, like...
00:42:40
Speaker
We can do that, right? Playwright can do it. Cypress can do it. Selenium 4 can now do it. Whether it's worthwhile or not from an end-to-end perspective is a different story. Because if we're, again, going to look at the testing pyramid, end-to-end is at the top of the pyramid, and those should be high-value but expensive tests, because they take a long period of time to run. Whereas, say, writing up a simple Postman test, using either Postman's built-in test runner or Newman, will take five minutes, something around there, right? Hit the endpoint, double-check the schema, there we go. Yeah. And I kind of want to add into that, too, that you have this idea of contracts, because in modern architectures you don't have the monolith system that is the all-encompassing back end with a front end based on it, where you need to do all this end-to-end testing.
00:43:37
Speaker
You have these seams between layers that have contracts in between them. So you have maybe a service that's reliant on a downstream service, and there's a contract between those two that creates a testing seam. So you're testing that seam from both ends: when I make this call, I'm expecting to get this, and if I get that, I'm able to process it. And you do that as a mock. And then you open it up and go, okay, both of these are deployed, I'm going to run this test and just allow it to hit that other service. Then you get that A plus B equals C, you know, B plus C equals... hang on, how does that go? Basically the Venn diagram where they overlap. Right. Oh, that's what it is.
00:44:36
Speaker
If A plus C equals B and C plus B equals A, then, you know, A plus C equals B, or whatever, something like that. I think it's unfortunate we don't have video attached to this, because watching Aaron's brain just fry trying to figure this out is hilarious. While he was talking about it, I'm like, oh yeah, because mathematically speaking... I'd have to write it out; it's an identity. I do want to point out that I'm not advocating for using mocks to replace integration tests or unit tests. That's not what I want to see. I don't want to see a company listen to this podcast and say, well,
00:45:24
Speaker
Jonathan said end-to-end tests are dead, so we're just going to mock everything and throw our hands up and say that's it. I feel that this type of testing works when you have solid unit and integration tests in place, when your developers and quality assurance have built up the infrastructure to create integration tests and make sure that those pipelines are verified. Because at that point, you're creating a duality, if you will, right? Why are you testing the same thing again in your end-to-end test when it's already been tested at the integration layer?
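To tie that back to James's shape question, one hedged sketch of keeping the stub and the provider honest is a single schema, validated once at the integration layer and reused by the mock. Zod is an illustrative choice here; the endpoint and fields are hypothetical:

```typescript
import { test, expect } from '@playwright/test';
import { z } from 'zod';

// The "contract": the error shape both sides agree on (fields hypothetical).
const RegistrationError = z.object({
  error: z.string(),
  code: z.number(),
});

// The canned body the component-test stub uses; parse() proves it fits
// the contract, so the mock cannot silently drift from the agreed shape.
export const stubbedError = RegistrationError.parse({
  error: 'Bad Request',
  code: 400,
});

test('provider honors the registration error contract', async ({ request }) => {
  // Hit the real (non-prod) endpoint once, at the integration layer.
  const response = await request.post('/api/register', { data: {} }); // hypothetical route
  expect(response.status()).toBe(400);
  // If the provider drifts from the contract, parse() throws and this test fails.
  RegistrationError.parse(await response.json());
});
```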
00:46:04
Speaker
That, to me, is cause for: we can mock this out, because it's already crossed off. Do you consider yourself to be a developer or a test automation engineer? That's a great question. I consider myself to be a developer. The title that I personally like to use is Software Development Engineer in Test. My biggest quandary with being in quality assurance
00:46:36
Speaker
was that I came out of this bootcamp, I had a certification in full-stack web development, I'd built 20 or 30 separate apps. I'm more than a quality assurance engineer. And I think that I struggled with that for the first five years of my career: I'm more than just a tester, and yet it was difficult to be seen as more than just a tester. So when I got to a client who looked at my code and said, you write good code, become a developer, and they allowed me to do so, they gave me that opportunity, I jumped on it.
00:47:11
Speaker
And ever since then... you know, I can tell you why I came back to QA: I love it. I love the frameworks. I love the tools. I love the rush of finding a bug or a fault and not coming to the developer and saying, your stuff is broken, but instead saying, we have a problem, let's fix it, let's tackle this, right? That's a dopamine hit. That is a big hit. And I tell you what, I've built and I've broken software. I think when you find a good bug and you come up with a solid solution in a collaborative environment, there's nothing that hits just right like that.
00:47:54
Speaker
So at the end of the day, I consider myself a software developer who specializes in automation testing. That, to me, is my title. That's what I like to consider myself, because, like I've said, I've done both. I've enjoyed them both, but testing is my niche. Testing is my natural fit. So we have this thing that we do that we call the lightning round. And these are very important questions. We get to the real heart of what it means to be human and the real big problems in society.
00:48:32
Speaker
Very, very deep topics. And there are absolutely wrong answers, so just forewarning you. Good to know. Yeah, yeah. So I just wanted to set the tone: we don't want to deal with levity here. This is very serious now. When we get into this lightning round, it's very serious. So I'd like to start off, just to make sure we level-set you on what this looks like. First question, fill in the blank: Taylor Swift is... powerful.
00:49:04
Speaker
She is powerful. You know, I've got to love her. And we don't allow any elaboration unless it's from us. I don't make up the rules. I mean, I just did, but... Go ahead. On a scale of one to ten, how good are you with a wall? Terrible. A one. One on Wooflepuff. Are you writing these down, Aaron? We will be tallying up the score. Have you ever worn socks with sandals? I have not. Have you ever tasted soap? Yes. Yeah, that was a bad day. All right. Say "good day, mate" in an Australian accent.
00:49:55
Speaker
Good day, mate. Not bad, not bad. All right, he's our second best at that. What's your favorite clothing brand? Favorite clothing brand? I'd have to say, gosh, Old Navy is up there right now. How long can you hold your breath?
00:50:16
Speaker
Probably two minutes. Two minutes, that's above average, honestly, based on our empirical evidence on this show. Give me a word that starts with the letter Q. Quincy. No, sorry. Do we allow proper nouns? Quality was what we were looking for there. Can it be quixotic? Let's go with quixotic.
00:50:44
Speaker
Most embarrassing store you might be seen shopping at? If they had a store that peddled clothes specifically for dogs, you'd find me in it. I would be buying something, a sweater for my 90-pound goldendoodle, to see if it would fit him. Which leads into our final question: big dogs or small dogs? Big dogs, a hundred percent of the time. Yes. Yeah, I agree. That is the correct answer. So I think we've pretty much covered all of what it is to be human with this round of questioning. Do you think, Aaron, have we pretty much covered everything? Yes, I think so. I think we are ready to wrap up the day. Thank you to our special guest, Jonathan Thompson, and my beautiful co-host, James Carmen, and all of our staff that put our episodes together. Stay tuned for our next episode, where we'll be talking about something IT-related, on The Forward Slash.