Defining our QA process

As I mentioned before, we’ve been going through some structural changes recently. Add to the equation a really tight deadline, during which we decided to drop our processes and survive however we could, and you get a fair amount of madness. And one of the craziest areas is quality and testing.

That’s why I spent half of today in a meeting room discussing what we can do, what we can try and why it matters so much. And, guys, I have to admit that I enjoyed the experience. This is one of the first times I’ve had to propose and implement a process on my own and… such a shame I can’t share all the details and outputs from that meeting. But I can tell you what worked and what I learnt.

I always fancy starting with why we need the change; it highlights the issues, makes clear that they don’t only impact the people involved, and frames the solution as something we have to build together. I started by detailing how each team is dealing with quality right now (without a process, every team has adopted QA in its own way), summarising the findings in bullet points as potential solutions, and then asking a really simple question: “Right now, if you had to develop a new feature or refactor a piece of code, would you be confident enough to push it to production right away?”

Obviously, we don’t want to push things blindly, no matter how complete and awesome the developer promises the solution is. But if the person writing the code doesn’t even have the confidence to take that step… something is missing. So we started discussing why that was happening and what needed to change to get there. As you can guess, the first solution was obvious.

Automation. We all know how useful it is, especially for trusting a refactor and preventing regressions. Tellingly, the team least confident about their code changes was the only one with little (actually, no) automation. In my opinion, there were two main reasons: lack of expertise, because without the knowledge it’s easy to find some practices daunting and to miss their value; and a harder automation problem, because UIs are always more tedious to test programmatically, and without expertise you don’t know how to build testable products.

We discussed how to incorporate automation into our process, how QA (just me) and Product can support the task, and some technical details about the challenge. As some teams are using BDD, I proposed the Three Amigos technique to define the tests, and I’ll provide them with examples, guidance and support during the implementation; although they were really vocal about being the ones to try different frameworks and solutions. Fair enough: it’s going to be your child, I understand you want to choose it. Automation, checked. This will prevent some of the regressions, build back some trust and make them realise how untestable their current solution is, keeping that in mind for future builds and projects.
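To make that concrete, here’s the kind of scenario a Three Amigos session (developer, tester and product) might produce. The feature, codes and amounts are invented for illustration; the point is the Given/When/Then shape that Gherkin gives the conversation:

```gherkin
Feature: Discount codes at checkout
  As a shopper, I want to apply a discount code
  so that I pay the reduced price.

  Scenario: Applying a valid discount code
    Given a cart with a total of 50.00
    When I apply the discount code "WELCOME10"
    Then the cart total is 45.00

  Scenario: Applying an expired discount code
    Given a cart with a total of 50.00
    When I apply the discount code "SUMMER17"
    Then I see the error "This code has expired"
    And the cart total is still 50.00
```

Product argues over the wording, the developer over what’s implementable, the tester over the edge cases; the file that survives the argument is both the specification and the test.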

But, as I’ve already told you, automation is not the one and only solution; there are circumstances and problems where automation is simply inefficient and overkill. You won’t write every possible test in code, as it would be impractical. That’s why testing manually, particularly new features, and getting feedback as soon as possible will help us keep up with Agile speed (remember what I said about Quality Assistance).

That’s why I proposed QA Demos, where the developers show the new feature to Product or QA, and we try to break it in front of them for a short period of time. This way we get the obvious feedback really fast, and developers learn how we test. If we do it enough times, there will come a moment when they can do it autonomously, their releases will contain fewer bugs, and they’ll build a better product because they have testing in mind. It’s a huge win, although there are some challenges (the biggest one I want to cover in a future post: Developers Archetypes).

This solution was less popular. They were convinced that asynchronous solutions are more efficient: deploy something, test it whenever we can, and things move faster (apparently, the industry doesn’t share that view). Luckily, we agreed to try it out on a couple of stories per sprint and see how it goes.

I finished the meeting with the QA manifesto, wrapping up the actions, discussing some of the details and feeling that I’d taken a huge step in my career. I did it on my own, and now I’ll try my best to achieve a process where we can trust our changes, and hopefully one where the Quality Assurance role is no longer needed because everyone is ensuring the quality of our product across the whole journey. So, guys, I encourage you: if you don’t agree with a process, if you spot its gaps, try to change it. People are keen to improve the things they don’t like. They’re more supportive than you think. Well… sometimes.

May the force be with you,


BDD: not by cucumber alone does man live

Don’t get me wrong, I know how useful BDD is: it cleans up test reports, unifies different frameworks and scales well, since it allows non-technical people to extend the feature files. But it also has plenty of pain points: it’s a powerful tool for bridging the gap between business and code, yet many complex scenarios are very hard to handle with it.

That’s why, after some time using BDD as the default option, I came to understand its value, and now I only use it for simple, repetitive workflows, where it lets anyone easily understand the current coverage of our “pre-commit” verifications. It also brings a common language for developers, testers and business people to specify the product’s behaviour, as the Three Amigos approach explains, making it an awesome tool for clearing up misconceptions.

But I’ve found some difficulties implementing it. Since its foundation it relies heavily on TDD practices, so introducing it to teams not used to TDD techniques carries an extra challenge. I haven’t found proper documentation for the test implementation side, as most articles online only state how empowering it is for business people. It also requires a rigorous delimitation of what each scenario should contain, breaking them into really small, modular ones, which goes against our usual habits.
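To illustrate the granularity point, here’s a hypothetical before/after (all names invented): one monolithic scenario that verifies everything at once, split into small scenarios that each have one reason to fail:

```gherkin
# Before: one scenario trying to cover the whole journey
Scenario: User signs up, logs in, upgrades and is billed
  Given a new visitor
  When they sign up, confirm their email, log in,
    upgrade their plan and reach the end of the month
  Then everything worked

# After: small, modular scenarios with a single assertion each
Scenario: Signing up sends a confirmation email
  Given a new visitor
  When they sign up with a valid email address
  Then a confirmation email is sent

Scenario: Upgrading the plan changes the next invoice
  Given a logged-in user on the "basic" plan
  When they upgrade to the "pro" plan
  Then the next invoice is for the "pro" price
```

The second form is more verbose, but when a step fails you know exactly which behaviour broke, and the small steps are the ones you can actually reuse.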

Most of the implementation problems happened when we tried to reuse our BDD framework (made with Python and Behave), which fitted one of our small REST API test suites flawlessly, in a much more complex project. We started missing context management, sharing variables between steps, dealing with multiple users… So we tried to evolve our framework into the perfect machine that would handle it all. And it became a beast. To keep things as modular as possible, we started dealing with “worlds” per test suite (one world for the billing system, a different one for user management). So it became an ugly beast.
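To give you an idea of what I mean by “worlds”, here is a minimal, self-contained sketch of the design. This is plain Python, not our actual framework (all names are invented): a registry hands each suite its own isolated state object, which steps read and write instead of passing variables to each other.

```python
class World:
    """Holds the state that one test suite shares between its steps."""

    def __init__(self, name):
        self.name = name
        self._state = {}

    def set(self, key, value):
        self._state[key] = value

    def get(self, key):
        return self._state[key]


class Worlds:
    """Registry of per-suite worlds, so billing state never leaks
    into the user-management suite (and vice versa)."""

    def __init__(self):
        self._worlds = {}

    def __call__(self, name):
        # Lazily create one isolated world per suite name.
        if name not in self._worlds:
            self._worlds[name] = World(name)
        return self._worlds[name]


worlds = Worlds()

# A "given" step in the billing suite stores a user id...
worlds("billing").set("user_id", 42)

# ...and a later "then" step in the same suite reads it back,
# while the user-management world stays empty and isolated.
assert worlds("billing").get("user_id") == 42
assert worlds("billing") is not worlds("user_management")
```

In Behave itself we hung this kind of state off the `context` object the framework passes to every step; the sketch just shows why the bookkeeping multiplied as suites did.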

So, at that point, coinciding with my “I don’t trust anything I know anymore” moment, I decided to start asking “is BDD really what fits us?” instead of just “how can I make BDD fit us?”. Is it delivering more value than a traditional automated test? Are we using it correctly? What are the alternatives?

That’s how we realised that in some projects it wasn’t the way to go. For instance, when we deal with complex user scenarios and we’re not going to scale the tests by adding new repetitive steps, it doesn’t make any sense to spend most of the time adapting the BDD framework. In those cases, we kept the structure for consistent reporting, and we wrote the tests the old-fashioned way.

Did you find any limitations in BDD as well? Is it fitting your needs seamlessly? In a future post, I want to talk about some examples that caused us problems. In the meantime,

May the force be with you,

P.S.: If you want to dive deeper into the BDD world, you may enjoy these resources: