SE Radio 595: Llewelyn Falco on Approval Testing
Llewelyn Falco, creator of ApprovalTests, talks with SE Radio host Sam Taggart about testing code in general and the various types of testing that developers perform. Llewelyn elaborates on how approval tests can help test code at a higher level than traditional unit tests. They also discuss using approval tests to help get legacy code under test. This episode is sponsored by Data Annotation.
Show Notes
SE Radio Episodes
Other Resources
Transcript
Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Sam Taggart 00:00:56 This is Sam Taggart for Software Engineering Radio. I’m here today with Llewelyn Falco. Llewelyn is an agile technical coach and internationally renowned speaker. He’s the creator of ApprovalTests, co-author of the Mob Programming Guidebook, and co-founder of teachingkidsprogramming.org. Llewelyn is here to talk to us about approval testing today. We have talked about testing on many previous episodes, such as 516 with Brian Okken on pytest and 431 with Ken Youens-Clark on using unit testing for teaching. And if you dig way back in the archives, there’s Kent Beck talking about the history of unit testing in episode 167. So welcome, Llewelyn. I’d like to start by just discussing the general testing landscape, how developers are testing their code today, and how approval testing fits into that whole landscape.
Llewelyn Falco 00:01:44 Well, so it’s a large landscape, right? And I tend to straddle two very extreme sides of it. So as you mentioned, I’m the creator of ApprovalTests, and so we do a lot of work in the open source world, usually on ApprovalTests, although I work on some other projects as well. And in that world, testing works really well, right? We’re doing test-first development. I have a standard Python mob that meets every Sunday and we usually do about two hours of work. And in those two hours we usually release a feature, and I mean finish and release, right? So every two hours we are pushing a new version of the software out to PyPI. Likewise, I have a guy I meet up with, Lars, for the Java approvals, and we pair and we can usually do a feature in two hours as well, right?
Llewelyn Falco 00:02:32 And then again, that gets released to Maven immediately. So the tests are great, the code is easy to work with, the whole DevOps pipeline is in place, and these things support each other, right? I wouldn’t feel safe to release so quickly if I didn’t have my tests there really protecting me and telling me, hey, it’s okay to do this. That’s not the only thing, right? There’s a whole other section of DevOps that does that. But also, you know, in that world we have Dependabot, and if somebody updates a dependency, we detect it immediately and automatically, and then we get the pull request. And because we have good tests, our tests will run, and if our tests pass, it will automatically merge. So when the Log4j vulnerability came out, we didn’t even notice, right? They released the patch, our system detected it, upgraded, and released without us knowing.
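The update-and-auto-merge pipeline Llewelyn describes is commonly assembled from a Dependabot configuration plus a small workflow that enables auto-merge once the required test checks pass. The following is an illustrative sketch under assumed file names, not the actual ApprovalTests repository configuration:

```yaml
# .github/dependabot.yml -- ask Dependabot to check for dependency updates daily
version: 2
updates:
  - package-ecosystem: "maven"
    directory: "/"
    schedule:
      interval: "daily"

# --- separate file: .github/workflows/auto-merge.yml ---
# Enable auto-merge on Dependabot pull requests; GitHub only completes the
# merge once the branch's required status checks (the test suite) pass.
name: dependabot-auto-merge
on: pull_request
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    if: github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - run: gh pr merge --auto --merge "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

With branch protection requiring the test workflow, a failing test suite blocks the merge, which is exactly the safety net that made the Log4j upgrade hands-off.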
Llewelyn Falco 00:03:26 But the other side of that is the clients that I work with, right? So I’m a technical coach, and what that means is companies bring me in and we sit with their programmers and we program together and we learn to program better. But the thing is, the companies that bring me in are never the companies that are doing really, really well, right? They’re always companies that are struggling. It’s really unfortunate. If you look at the world of sports, the athletes that get the most coaching are the people who are at the top of their field, right? Roger Federer is amazing and he has like 12 coaches, right? It’s a whole ecosystem taking the people who are the best and making them even better. But very often that doesn’t happen in software. If you’re doing okay or you’re doing well, we say, okay, we’ll leave you alone. And it’s when they’re struggling that we say, okay, now we’ll send in help.
Sam Taggart 00:04:17 Yeah, I had a very interesting conversation with a friend of mine. I was complaining about a specific framework and how all the projects I got that were written in that framework were horribly written, and he made the comment, “Well, if they were well written, they wouldn’t have called you.” So I thought that was kind of funny.
Llewelyn Falco 00:04:31 Exactly right. And so there, I’m seeing the opposite side. And on that side, almost universally, everybody has tests. They might not have tests in a specific project, but they definitely have tests. If you were to zoom back out, say the company has a hundred projects, probably 50 or 70 of them have tests of some sort. A client I was at earlier this year was using SonarQube, which does a lot of code metrics and will gate the check-ins, and they would not allow you to check in new code if it didn’t have the coverage. But a lot of their code was not designed in a way that the tests were really helpful for the engineers. And so we wrote some code and we split it up and we tested it and we knew that it worked, and we used a thing called executable command tests, which are really powerful tests, but they don’t really increase your code coverage, because the idea is they are acceptance-level tests, right? The nice thing about acceptance-level tests is they’re system-wide and they give you a lot of assurance that the thing works, but they’re very hard to set up and conduct and keep consistent.
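The approval-testing idea at the heart of the episode can be illustrated with a minimal sketch. This is not the ApprovalTests library itself; the `verify` helper and the file-naming scheme here are invented for illustration. The core mechanism is: compare a test’s textual output against a previously human-approved snapshot file, and on any mismatch write a “received” file so a person can inspect and approve the new output.

```python
# Minimal sketch of the approval-testing mechanism (illustrative only,
# not the ApprovalTests library's API).
from pathlib import Path

def verify(name: str, received: str, snapshot_dir: Path = Path(".")) -> bool:
    """Return True if `received` matches the approved snapshot.

    On the first run, or on any mismatch, the received text is written to
    <name>.received.txt so a human can review it and, if it looks right,
    promote it to <name>.approved.txt.
    """
    approved = snapshot_dir / f"{name}.approved.txt"
    if approved.exists() and approved.read_text() == received:
        return True
    # Mismatch (or no approved file yet): record the output for review.
    (snapshot_dir / f"{name}.received.txt").write_text(received)
    return False
```

Because the approved file captures the whole output at once, a single test can cover a large slice of behavior, which is why Llewelyn describes approval tests as working at a higher level than assertion-by-assertion unit tests.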
Sam Taggart 00:05:51 I was just gonna interject and ask about code coverage because I wanna make sure that our audience understands exactly what we’re talking about. So when you say code coverage, what do you mean?
[...]