Three steps to improve the quality of your development process in a startup

I’ve been working in the IT industry as a QA engineer since the start of my career. Sometimes my position was just hands-on testing in a waterfall-like process; other times I worked closely with the developers to write all the automation needed before a feature was considered done. Every company takes a different approach to increasing the quality of its product, and I’ve also seen companies try the same thing with completely different results. But one thing has always been constant: when a company reaches a large enough user base, or the codebase gets big enough, the conversation about ensuring quality becomes more and more important. Speed and productivity matter less if you alienate your users with buggy releases, and your engineering team becomes afraid of areas of the code where changes _always carry side effects_. That is the moment when the way of _just getting things done_ that has worked until now starts getting in the way of a healthy product. That’s the point when the organization starts thinking about increasing the QA headcount.

Currently, I’m facing this situation again, with the difference that I feel far more prepared. Of course, every company has a different way of working, but starting to be a veteran in the industry allows me to recognize common tropes and analyze the situation with frameworks that worked for me in previous cases. For some of these steps I highly recommend finding a person who will bring expertise in automation or testing principles to your team, but many of them can be performed by anyone, especially if you already have a Quality Crusader in your ranks. And that’s what I’m going to help you with right now, giving you an example of the steps you can take to start improving the quality of your product and processes by yourself.

Before starting, I would like to share the first step I take when joining a new startup, particularly if there’s no defined Quality process in place. I always start by finding the Quality Crusaders already working with you: those bright knights who, regardless of their job title, push for a quality-first way of working and care about following best practices. I even ask about them during my interviews when considering a new job. Having allies who will share their view on how to improve the current situation is invaluable. So find them, and involve them in these first steps!

Step one. Describe the development process.

Ok, so we are all making software here. And the process seems pretty straightforward, doesn’t it? Some people sit in a corner. They write some code. They push that code somewhere. And our users enjoy it. Pretty simple. So let’s add a little bit more flavor to it, shall we?

I like to start by naming the different states that a task (bug fix, feature, etc.) goes through before our users enjoy it to pieces. Of course, this is more of a guideline than a cooking recipe, but I’ll describe an example as broad as possible so, hopefully, you can get inspired. In this case, a task can come from a bug report, a new feature request, or some work that we want to do internally (tech debt, etc.). At some point, an engineer will pick it up and start working on it. It will land in the codebase. It will be deployed to some testing environment. And finally, it’ll be in our users’ hands. So, without caring about actions and transitions, the states would be:

Coding – Code in the repository – Testing environment – Production

Now, let’s start the fun part. If we know that our code will go through these states, how can we improve quality by adding actions or extra transitions?

Step two. How can we inject that quality?

This will drastically change depending on your team’s previous experience, as well as what they are currently doing. Involve different people in this step, as they’ll offer valuable insight from previous jobs about what worked and what didn’t. Below is an example of what I’ve proposed on previous occasions. I usually include a short description of the different steps.

(Diagram: Copy of Quality Process.)
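Since I can’t embed the original diagram here, below is a minimal sketch (in Python, purely illustrative) of the kind of mapping it represents: each state from step one paired with example quality actions. The actions listed are assumptions of mine for the sake of the example, not a prescription.

```python
# Illustrative only: the stages come from step one; the actions listed for each
# stage are examples of the kind of quality checkpoints you could propose.
QUALITY_PROCESS = {
    "Coding": [
        "Design Document reviewed by people with different expertise",
        "Unit tests written alongside the change",
    ],
    "Code in the repository": [
        "Peer code review",
        "Automated test suite runs on every push (CI)",
    ],
    "Testing environment": [
        "Integration / end-to-end suites",
        "Demo Testing with a peer or a product person",
    ],
    "Production": [
        "Smoke verification after release",
        "Monitoring and bug-report triage feeding back into new tasks",
    ],
}

if __name__ == "__main__":
    for stage, actions in QUALITY_PROCESS.items():
        print(stage)
        for action in actions:
            print(f"  - {action}")
```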

For instance, involving people with different expertise in a Design Document before starting to work on a feature has worked with great success for me. That document can take any shape, but having a short discussion about what problem we’re trying to solve, where it interacts with other modules, and what threshold makes the implementation good enough helps most engineers A LOT while working on it. Just having “what might go wrong” in the back of your head while you’re coding a solution can work wonders.

Another key element, apart from the description, is figuring out the owners of those actions. As I said, many don’t require any new technical expertise in your organization, like the Design Document, or Demo Testing, where the developer runs through the solution with a peer or a product person to spot errors and understand how others verify the feature. Others really benefit from a specific skillset. That’s why, for instance, I took ownership of the automation that provides a smooth and reliable way to assess a solution. I’d work on building and improving the frameworks (unit, integration, end-to-end, performance testing…), as well as working with the team so we can all embrace quality.
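To make “building and improving the frameworks” a bit more tangible, here is a minimal sketch of the lowest layer, a plain unit test; pytest is assumed as the runner, and `price_with_discount` is a made-up function, not part of any real codebase.

```python
# A made-up function and its unit tests, just to illustrate the lowest layer
# of the automation pyramid (unit tests). pytest is assumed as the test runner.
import pytest


def price_with_discount(price: float, discount_pct: float) -> float:
    """Apply a percentage discount, rejecting nonsensical discount values."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(price * (1 - discount_pct / 100), 2)


def test_applies_discount():
    assert price_with_discount(100.0, 25) == 75.0


def test_rejects_invalid_discount():
    with pytest.raises(ValueError):
        price_with_discount(100.0, 150)
```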

I also recommend that you don’t try to tackle all solutions at the same time. Set priorities: where can you get the most bang for your buck, which skillsets do you already have, and so on. All of these solutions will go through different cycles, and spending enough time and energy to set a solid foundation will help with the resistance that every organization faces when introducing change. Focus on a constant momentum of change and improvement, rather than on the speed of that change. In my experience, the workplaces where I could keep a steady pace ended up with far more impactful change in the end.

Step three. Measuring the impact, reviewing and improving.

This is, in my experience, the hardest part. It’s really difficult to measure the impact of shifting to a quality-focused development process. Are there more bugs because the engineers are working on a harder problem, or did we get better at finding those bugs? As a quality engineer, part of my job is also finding ways to measure the impact of any change, so we can iterate on it and find a better approach. We all share a common goal: developing a product that brings more value to our users, and doing it in an environment we enjoy working in.

Spend some time profiling the process so you can get data on it. Sometimes it’s as easy as holding periodic meetings where people discuss how it did or didn’t work for them; or you can find measurement points to make data-driven decisions. Be mindful of how hard this task is. If people feel they will be judged by these metrics, they will be reluctant to embrace them and iterate on them. This is not about pointing fingers, but about devising a way to improve the process for everyone.

Since most of the tasks I work on have more of an internal impact than an external one, I like to measure my success by how our engineers perceive the codebase. Of course, I hope that changes in frameworks and automation will impact the end user experience; but I like to focus on our team first. Examples of measurements I use are: periodic surveys of the engineering team about how much they like working in our codebase, or how safe our automation makes them feel when doing a refactor or a change; pairing with different developers so I can see how they work and how they interact with the frameworks; workshops on how to approach different bugs or problems; and so on.

Another thing to be mindful of is that, at this stage in a startup, Quality is quite often a buzzword used as a scapegoat. Most teams had to focus on quick and “dirty” deliveries in their first steps as an organization. When the conversation about Quality starts arising, it’s usually because that speed has started to produce difficult releases, and “the lack of a QA in your team” is frequently used as an excuse. Having people with specific expertise always brings value, but it’s everyone’s responsibility to follow the best practices they know of and to voice their complaints about the current process. It can also create a toxic situation where, after hiring someone with that expertise, they’re expected to just fix everything.

Final notes

To wrap it up, I want to stress that this is not a magical tool that will increase your productivity and ensure the quality of your product by itself. This is a progressive change of mindset. It’s about changing how we work to spend more time on “how can we ensure and verify the solution”, and less on “how can we get this done as soon as possible”. I personally believe (and I’m backed by plenty of evidence) that investing in quality also brings speed in the long run, as it requires less of the VERY COSTLY redoing and fixing that can also undermine the spirit of our engineering team.

If you’re starting to ask these questions in your organization: congratulations, you’re going through a change that your users will thank you for, and one that will produce less churn on your engineering team. But I’m also really curious about your approach. What is the situation that brought you to start talking about Quality? What are the current solutions your team is working with? Which points are you struggling with the most?

Subconscious testing

I’ve finished reading Subliminal recently, and it confirmed an idea I’ve always had: gut feeling makes a huge difference in testing. I’ve already talked about great testers being born instead of trained. Experience, study and failure teach you new skills and better efficacy, but most of the decisions I take during testing sessions are gut-driven (and experience-driven as well; always remember previous shit!).

The best (or worst, depending on your side of the game) bug discoveries I’ve made came from trying something off-script, purely based on my instincts. Building that load test focused on the weird interaction between components because several parts use the same database, or just letting my inner annoying user take control of the keyboard and mouse. Something tells me that the calendar widget may easily fail, and that the payment app is not going to handle high load correctly.

This makes some conversations with developers tense, because they want a reason why we can’t go live yet, and waiting for me to build a test that checks something we have no evidence yet might fail is not a valid reason. With time, they end up trusting my gut as much as I do; but I find it wise to hold back while you’re still building that trust, as it’s not wise to have your new team against you during the first weeks. Once they see its value, they stop thinking it’s a waste of time.

It also makes Quality Assistance methodologies trickier, because teaching reasoning that is not fully conscious requires deeper knowledge of the field. I like analysing my actions after the fact in order to understand the reasons behind them and have an easier time explaining them.

But, if testing is about guts, how can we train it? For me, analysing my reasoning helps me identify what I did correctly and where to focus my efforts; and observing my colleagues and asking for their reasons shows me new ways and approaches. It’s hard to learn from others’ experiences, but it’s still better than nothing. That’s why I prefer reading about testing cases and stories rather than studying theory. But there’s still a long road ahead so, give me a hint: how do you do it?

May the force be with you

Gino

Cool automation tools

A few months ago I started gathering all the testing articles and examples that I came across. I have a really awful memory, and when discussing different approaches with my QA Lead I could only remember some of the details, which led to awkward conversations; so starting to build my own “Testing Bible” sounded like an amazing idea. And so I did.

It’s actually a Google Doc where you can find chapters about specific testing methodologies (Rapid Software Testing, Quality Assistance), automation tricks, arguments to convince stakeholders and developers, a bullet-point list of my achievements and some cool automation examples. Today I want to talk about the latter!

I really love hearing about tools or techniques that organisations have developed to meet their really weird and niche testing needs. The Google Test Automation Conference is an amazing place to find some of them, as well as companies with tech blogs (although in some cases they think testing is not an appealing enough subject to talk about). So let’s talk about some of the ones I fell in love with!

Netflix Simian Army

This is bringing fault injection (https://en.wikipedia.org/wiki/Fault_injection) to the next level. In order to build a resilient system, Netflix realised that they needed to be ready for errors. The best way they found to prove it was injecting errors into the pieces that are out of their control, so they understand what’s going to happen when those errors genuinely occur. The talk is centred on the army of simians they have causing AWS errors in production.

Things like killing AWS instances or faking the downtime of an entire Amazon region are the duties of this monkey gang. They periodically trigger them in production to see the progress and be ready for when shit happens. Because we know that shit always happens. Part of the source can be found here.
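This is not Netflix’s actual implementation, just a toy sketch of the basic mechanism in Python with boto3: randomly terminating one instance from an opted-in group. The tag name and region are placeholders I made up.

```python
# Toy chaos-monkey sketch (NOT Netflix's code): pick one EC2 instance from a
# tagged group at random and terminate it, so resilience to instance loss gets
# exercised regularly. The tag name and region are placeholders.
import random
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def candidate_instances(tag_value: str) -> list[str]:
    """Return the IDs of running instances carrying the opt-in tag."""
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos-opt-in", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    return [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]

def unleash_monkey(tag_value: str = "true") -> None:
    instances = candidate_instances(tag_value)
    if not instances:
        print("No opted-in instances found; the monkey goes back to sleep.")
        return
    victim = random.choice(instances)
    print(f"Terminating {victim}")
    ec2.terminate_instances(InstanceIds=[victim])

if __name__ == "__main__":
    unleash_monkey()
```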

Facebook’s test killer

As I’ve already mentioned, Facebook is known for not having Testing roles. They rely heavily on automation, although we know that flakiness and inconsistency are among the biggest pains of automating tests. That’s why they’ve come up with a system where tests have to gain trust before the system considers them necessary, as well as a way to identify, heal and fix tests that cause more noise than benefit.

The system automatically marks failed tests as “unknown value“, passes them through various states and identifies who is responsible. It helps with reviewing the code, and the system is the one disabling someone else’s test when it starts looking flaky, so it’s not a “me against you” argument.
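Facebook hasn’t published the exact mechanism as far as I know, so the following is only my guess at the core idea, sketched in Python: a test must keep passing to stay trusted, and gets quarantined automatically when its recent history is too noisy.

```python
# A guess at the core idea, not Facebook's code: a test must keep passing to
# stay trusted, and is quarantined automatically when its recent history is
# too noisy, so disabling a flaky test is a system decision, not a personal one.
from collections import deque

TRUSTED, UNKNOWN, QUARANTINED = "trusted", "unknown", "quarantined"

class TestRecord:
    def __init__(self, name: str, window: int = 20, flaky_threshold: float = 0.2):
        self.name = name
        self.state = UNKNOWN            # new tests start as "unknown value"
        self.results = deque(maxlen=window)
        self.flaky_threshold = flaky_threshold

    def record(self, passed: bool) -> str:
        self.results.append(passed)
        failure_rate = self.results.count(False) / len(self.results)
        if failure_rate > self.flaky_threshold:
            self.state = QUARANTINED    # too noisy: disable and notify the owner
        elif len(self.results) == self.results.maxlen and failure_rate == 0:
            self.state = TRUSTED        # a long green streak earns trust
        else:
            self.state = UNKNOWN        # not enough signal yet
        return self.state
```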

Generate tests from logs

Creating a complete suite of unit tests can be a daunting task, particularly when the product scales up. With this solution, they rely on extensive logging and machine learning techniques to generate stateless and contextual assertions, building the skeleton of the functional tests.

You’ll still need some human intervention, but this is undoubtedly an amazing starting point, taking over the most tedious part of the process. This is proper automation: something that assists us in achieving our goals!
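The real system leans on machine learning; the sketch below is deliberately much simpler, just to show the shape of the idea: turn logged call/result pairs into test skeletons for a human to review. The log format and function names are invented.

```python
# Deliberately simplified (no ML): turn logged call/result pairs into test
# skeletons that a human can review and complete. The log format is invented.
import json

LOG_LINES = [
    '{"function": "convert_currency", "args": [100, "EUR", "USD"], "result": 108.3}',
    '{"function": "convert_currency", "args": [0, "EUR", "USD"], "result": 0.0}',
]

def generate_test_skeletons(log_lines):
    for i, line in enumerate(log_lines):
        entry = json.loads(line)
        args = ", ".join(repr(a) for a in entry["args"])
        yield (
            f"def test_{entry['function']}_case_{i}():\n"
            f"    # TODO: review this generated assertion before trusting it\n"
            f"    assert {entry['function']}({args}) == {entry['result']!r}\n"
        )

if __name__ == "__main__":
    print("\n".join(generate_test_skeletons(LOG_LINES)))
```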

Spotify’s RapidCheck

Testing the whole range of values in a unit test is impossible; that’s why Spotify has used the RapidCheck idea to help them build better coverage of that range. It randomly tests a broader set of values on top of a skeleton of important cases. Every tool that helps us gain confidence in the solution faster is welcome!
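RapidCheck itself is a C++ library; to keep every example here in one language, this is the same property-based idea using Python’s Hypothesis. The `normalize_volume` function is made up for the example.

```python
# Property-based testing in the RapidCheck spirit, using Python's Hypothesis.
# The function under test is invented for the example.
from hypothesis import given, strategies as st

def normalize_volume(value: int) -> int:
    """Clamp a requested volume into the 0-100 range."""
    return max(0, min(100, value))

@given(st.integers())
def test_volume_always_within_range(value):
    # Hypothesis generates many integers (including nasty edge cases) for us.
    assert 0 <= normalize_volume(value) <= 100

@given(st.integers(min_value=0, max_value=100))
def test_valid_volume_is_unchanged(value):
    assert normalize_volume(value) == value
```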

Do you know about another interesting testing tool? I’ve left some of them unmentioned, but only because I love having pending subjects to talk about in future posts!

May the force be with you,
Gino

It’s always worse elsewhere

Yesterday I shared a couple of beers with some old friends who also work in the software industry. They work at a small consultancy agency building websites on top of the same core, cutting most of the production costs since most projects use fairly similar implementations. Everything sounded right until they got surprised when I asked about their testing framework. “Well, before sending them to the client, we navigate through the site checking that nothing breaks”. Fair enough, I don’t expect everyone to use Selenium. “The problem is when a client finds a bug. Most of them are part of the core, so that bug is probably in every one of our projects, and we’re afraid of refactoring anything”. If the core is not a volatile piece of code, it’s probably a good investment to build some functional test verification on top of your unit tests. “Unit what?”. Boom. And then I remembered.

I was also part of a consultancy firm (it was my first job in the industry). I remember the “we can’t afford to write the unit tests, our problem is really complex and we don’t have time, we’ll just verify it at the end”. I bought it. I truly believed that we were good enough that unit tests were only a plus, and the projects were difficult and fast-paced… NO WAY could we afford to waste time on tests! I remember some senior colleagues explaining that to me. Why wouldn’t I follow their example? They had way more experience than me, and they developed way faster. I wanted to be like them.

And I remember the struggles. I remember that every single bug fix carried weird and unexpected regressions. How the clients complained every week, and the constant fight about who was responsible for the fix (was it a bug, or a failure in the requirements?). The hellish deployments, the tedious manual verification of just what was strictly needed, the numerous problems with our version control tool as we always realised too late that the release didn’t contain a given change.

I also remember the change. When I left the company and learnt something new in the next one. And then the next one. On my first day, one of my new colleagues was disturbed because I wasn’t writing unit tests. He didn’t even ask for a reason, but I tried to convince him that they just slowed my momentum. I remember his face. He just taught me how to make it easy, and handed me a copy of Clean Code. I had to pair with him and… oh God, that was beautiful. He showed me what TDD feels like. And I started realising that, actually, the team didn’t spend that much time dealing with the horrors I was used to. It was so simple and beautiful.

That’s why I spent some time yesterday trying to show them the importance of these practices. Explaining why most of the companies they want to work for ask for TDD practices or functional tests. Giving them examples of how these have been important in my projects. Giving them resources, guidelines and the chance to ask any question. But, obviously, I’m just a QA, so what the heck would I know about coding.

Yeah, dear readers, it’s always worse elsewhere. That’s no excuse not to try to be the very best you, including the best professional you can be; but sometimes you need to appreciate what you’ve learnt and why you’re doing it. It is also important to identify who could use some help, and offer it. For me, life is about sharing your knowledge, resources and smiles with the people you choose.

I’ll talk in further posts about some of the anti-patterns I’ve seen (and done) during my career, and some lessons I’ve learnt fighting them. But, for now,

May the force be with you,

Gino

Why being a tester spoiled my life

I truly believe that great testers are born, not made, just as in most other professions. Testing requires strong critical thinking, as well as perseverance and lateral thinking. There are some technical and product skills that undoubtedly help in your daily work, but in my opinion, those skills don’t make a great tester on their own.

I’m not a great tester, and that’s what is driving me to understand, learn and practise how to become one. But I am a critical thinker. I love breaking down situations, in the calmest possible way, and identifying what is happening, what the potential outcomes are and how it can go wrong. I’ve been doing it my entire life, and that’s probably why I enjoy being a testing professional. But I can’t stop being a critical thinker in my personal life.

There are times, MANY times believe me, when thinking about what can go wrong will spoil an experience. There are times when understanding why you ended up in some situation makes it way less enjoyable. And, if you’re sharing it with someone, people will hate you when you try to anticipate the issues and prepare a plan B in advance. Some just want to do things, regardless of the result. A simple example is my peculiar relationship with food.

My mum is an endocrinologist, and I grew up learning the calories in food, what its main nutrients are and what is missing from any meal. And that has spoiled my life. I obviously eat junk food, and I enjoy eating colossal portions, but I can clearly point out how it will affect me and what I’m doing wrong. So when I see someone eating a really big burger with doughnuts as buns, I can only think “SERIOUSLY??” instead of enjoying it.

But peeps, if you’re cursed like me, remember that you can use it and become part of the testing community. Your colleagues will praise your ability to anticipate fires, and they’ll learn it after throwing away your recommendations a few times; you’ll help people understand why things didn’t work, and you’ll show them that some details really matter.

This is why I’m still a tester. I spent part of my career pretending to be a developer, dreaming of being a designer and even wondering what being a producer would be like. But I’m cursed, I was born to be a tester, and breaking things is SO much fun.

May the force be with you,

Gino

The no-QA way

We’re still finding our sweet spot regarding Quality, and there are always some developers who think no “formal tester” is needed in our process right now. This approach gains even more followers when you have an ex-Facebooker on your team.

Don’t get me wrong, I never take the bait and jump in yelling “You NEED me, this will be a MESS without me!”, mainly because I think this approach is as valid as any other if you focus on the right things and meet some conditions. And I really enjoy discussing testing, especially when it allows me to understand my peers’ point of view.

So we started a conversation, but it didn’t lead anywhere because we were just pointing out the pointless parts of our current process, instead of really talking about how to make it work. Afterwards, I scheduled some time on my calendar to sit in one of the meeting rooms and use the whiteboard to clear up my ideas. Would it be possible? What are we missing to get there? What would it offer us? How would it impact our team and process? What are the other solutions?

Then I felt way more ready to have that conversation again, but I decided it would improve with a broader audience, and with the questions shared in advance so everyone would be better prepared. And, this time, I really enjoyed the chat. We still complained about what is currently failing, but we managed to come up with tentative solutions, as well as some answers to the previous questions.

Let’s start with motivation. What would it offer us? The answer was almost unanimous: less friction in the process. With developers being more autonomous and no annoying QA layer in the middle, coders can focus on the code. It also carries some challenges, but we’ll talk about them later on. The goal is having a simpler and smoother process; also, you don’t have to spend on QA itself. A win/win situation!

But, obviously, there are some requirements to get there. So, what are we missing? First and most important, by far, is a culture of owning quality. If there is no “quality gatekeeper”, everyone should be responsible for the quality of their deliverables, as well as for ensuring that everyone else is working on it. We’ve been moving towards that (as have most of the Agile testing approaches), but we’re not there yet, and making this huge step is really needed.

This is especially tricky when you think about testing the “big picture”. It’s quite straightforward that when you deliver your solution, it should contain unit and functional automated tests verifying its behaviour. Hopefully, you have a culture of integration and component testing as well. But what about ensuring that the application works as a whole? Who will be responsible for that? Who will create, maintain and run those tests? What about the strange and complex cases that are almost impossible to automate? Who should explore them, and when?

One of the huge differences between Facebook’s case and ours is the amount of dogfooding they do. We don’t have beta users, we don’t do canary releases, we don’t even use our tool that much internally. That’s one of the biggest challenges we’d need to solve before taking this approach. It can be done, with things like getting used to exploratory testing the application after development is done, but no developer is used to doing that. One of the reasons why Quality Assistance keeps a testing expert available for the team is to help in this transition.

Then… how would it impact our team and process? Well, spreading out the responsibility for quality needs some process changes, like defining steps or techniques to review solutions in pairs, not only reviewing the code and the tests. We need to build some understanding of the application we’re building, and we need to play with it if we want to verify the changes. We’ll need more involvement with the product in order to find dependencies and more testable solutions from the beginning, although this will improve as the team gains better knowledge of it.

So, would it be possible? Well, one of the Agile principles I TRULY believe in and share is “fail fast, fail often”. As long as we timebox the attempt, set some retrospective meetings and are aware of the potential impacts, I think it’s stupid to answer the question in theory instead of trying it out for a while and analysing and understanding the outcome afterwards. For sure, the first version won’t be the good one, but it will teach us some tricks for the next one.

And what are the other solutions? I’m still keen on giving this one a go, as it will help us identify what we’re missing from the current approach and what we love about the change. But it’s always good to keep other solutions in mind. We can always go back to the old model of a QA wall we throw everything over, have someone help us embrace quality and build tools to aid us (Quality Assistance), invest heavily in automation, or whatever pops into your mind that might work.

There are thousands of different approaches to finding Quality at Agile speed, and none of them works flawlessly, nor in every organisation. And I know one thing for sure: spending time and resources on finding the sweet spot for your process is always a good investment. Period.

Do you have experiences with a non-QA environment? I’d love to hear them!

May the force be with you,
Gino

Tester’s commitments v 2.0

I’ve already talked about James Bach’s tester’s commitments, but I have problems making that promise to my colleagues, as I don’t share some parts of it, so I want to create my own version. I want to include Quality Assistance values, as well as some of my own experiences, and keep the list as short, simple and relevant as possible. These are my commitments:

  1. I provide a service. You are an important client of that service. I am not satisfied unless you are satisfied.
  2. I am not the gatekeeper of quality. I don’t “own” quality. Shipping a good product is a goal shared by all of us.
  3. I will assist you in the design of the product to ensure its testability.
  4. I will support you in any task to deliver a better quality product.
  5. I will provide guidance in your testing efforts, sharing my knowledge with you and helping with any tool or technique needed.
  6. I will learn the product quickly, and make use of that knowledge to test more cleverly.
  7. I will help you identify the important things to test first, and try to find important problems first.
  8. I will strive to test in the interests of everyone whose opinions matter, including you, so that you can make better decisions about the product.
  9. I will write clear, concise, thoughtful, and respectful problem reports. (I may make suggestions about design, but I will never presume to be the designer.)
  10. I invite your special requests, such as if you need me to spot check something for you, help you document something, or run a special kind of test.
  11. I will not carelessly waste your time. Or if I do, I will learn from that mistake.

I know that the list will evolve with time and experience, but these are the commitments I make today for you.

May the force be with you,

Gino

Defining our QA process

As I mentioned before, we’ve been going through some structural changes recently. If we add to the equation a really tight deadline, during which we decided to drop the process and survive however we could, we now have some madness around. And one of the craziest areas is quality and testing.

That’s why I spent half of today in a meeting room discussing what we can do, what we can try and why it’s so important. And, guys, I have to admit that I enjoyed the experience. This is one of the first times I’ve had to propose and implement a process on my own and… such a shame I can’t share with you all the details and outputs from that meeting. But I can tell you what worked and what I learnt.

I always fancy starting with why we need the change; this helps to highlight the issues, showing that they don’t only impact the people involved and that we have to work on the solution together. I started by detailing how every team is dealing with quality right now (without a process, every team has adopted QA in its own way), summarising it in bullet points as potential solutions, and asking a really simple question afterwards: “right now, if you had to develop a new feature or refactor a piece of code, would you be confident enough to push it to production right away?”.

Obviously, we don’t want to push things blindly, even if the developer promises us how complete and awesome the solution is. But if the person doing it doesn’t even have the confidence to take that step… something is missing. So we started discussing why that was happening, and what needs to change to get there. As you can guess, the first solution was obvious.

Automation. We know how useful it is, especially for helping us trust a refactor and preventing regressions. As you might expect, the team least confident about their code changes was the only one doing less (i.e., no) automation. In my opinion, the main reasons were: lack of expertise, because without knowledge it’s easier to find some practices daunting and not realise their value; and a more complex automation problem, because UIs are always more tedious to test programmatically, and without expertise you don’t know how to build testable products.

We discussed how to incorporate automation into our process, how QA (just me) and Product can support the task, and some technical details about the challenge. As some teams are using BDD, I proposed the Three Amigos technique to define the tests, and I’ll provide them with examples, guidance and support during their implementation; although they were really vocal about being the ones to try different frameworks and solutions. Fair enough: it’s going to be your child, I understand you want to choose it. Automation, checked. This will prevent some of the regressions, build back some trust and make them realise how untestable their current solution is, keeping that in mind for future builds and projects.
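As a made-up illustration of where a Three Amigos session can land, here is one way to encode the agreed scenarios as a parametrised test; a team using a BDD framework would express the same table as Gherkin scenarios instead. The login rules are invented for the example.

```python
# One way to encode scenarios agreed in a Three Amigos session as a
# parametrised test. The login rules below are invented for the example;
# a BDD framework would express the same table as Gherkin scenarios.
import pytest

def can_log_in(username: str, password: str, attempts_left: int) -> bool:
    """Toy rule set standing in for the real feature under discussion."""
    if attempts_left <= 0:
        return False
    return username == "gino" and password == "correct-horse"

@pytest.mark.parametrize(
    "username, password, attempts_left, expected",
    [
        ("gino", "correct-horse", 3, True),    # happy path (dev's scenario)
        ("gino", "wrong", 3, False),           # wrong password (QA's scenario)
        ("gino", "correct-horse", 0, False),   # locked account (product's scenario)
    ],
)
def test_login_scenarios(username, password, attempts_left, expected):
    assert can_log_in(username, password, attempts_left) is expected
```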

But, as I’ve already told you, automation is not the one and only solution, and there are circumstances and problems where automation is simply inefficient and overkill. You won’t write every possible test in code, as it would be impractical. That’s why testing manually, particularly new features, and getting feedback as soon as possible will help us meet Agile speed. (Remember what I said about Quality Assistance.)

That’s why I proposed QA Demos, where the developers show the new feature to Product or QA, and we try to break it in front of them for a short period of time. This way we can get the obvious feedback really fast, and developers can learn how we test. If we do it enough times, there will come a moment when they’ll be able to do it autonomously, knowing that their releases will contain fewer bugs, and building a better product because they have testing in mind. It’s a huge win, although there are some challenges (the biggest one I want to talk about in a future post: Developer Archetypes).

This solution was less popular. They were sure that asynchronous solutions are more efficient, so deploying something and testing it whenever we can makes things faster (apparently, the industry doesn’t share that view). Luckily, we decided to try it out on a couple of stories per sprint and check how it goes.

I finished the meeting with the QA manifesto, wrapping up the actions, discussing some of the details and feeling that I’d made a huge step in my career. I’ve done it on my own, and now I’ll try my best to achieve a process where we can trust our changes, and hopefully where the Quality Assurance role is not needed because everyone is ensuring the Quality of our product across the whole journey. So, guys, I encourage you: if you don’t agree with the process, if you spot missing pieces, try to change it. People are keen to improve what they don’t like. They’re more supportive than you think. Well… sometimes.

May the force be with you,
Gino

Quality Assistance: Lessons learned

This morning, during my 10-idea routine, I pushed myself to list the techniques and activities that I’ve tried in order to change the Quality mindset around the company. You already know that we’ve been modelling some of our process following the Quality Assistance approach. I know that the outcome of these activities will drastically vary for each team, but hopefully knowing about some experiences will help you try something similar. And, as Scott Berkun explained amazingly here, it’s more important to learn from the failures than to just share the final idea. And certainly writing this will help me remember some details. I don’t trust my memory that much…

I’ll try to give you as much background as possible, although reading about how the organisation is working now and how I’m approaching it may help. Now… let’s focus on some things I’ve tried!

Three amigos. This is a pretty standard activity, although we needed some twists, mainly because of resource issues (just me as QA and one product owner). We started by picking some complex tasks per sprint and trying a standard Three Amigos, although at some point we decided that the engineers should do the exercise of listing the scenarios before implementing a solution, leaving us to double-check and enhance the list with some “edgy but important” ones.

QA demo. We do this after finishing the development of some sprint stories: I (or Product) spend 10 minutes at the developer’s machine showing how users are most likely to run through the feature, as well as trying to break it. There’s no better way to understand how to test than watching someone test (and trying to communicate the mental process). Some developers, after this, embraced testing activities as they saw me enjoying it, although others remain reluctant.

Defining metrics to understand the current status. Things like the number of errors in each version, timings, grouping types of errors, clearing out non-useful data or even finding ways to generate it (like replicating production load on testing environments). We’re all engineers here, and no matter how much you try: data wins discussions. That’s a fact. I don’t have any data to back it, but you have to believe me.
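Purely as an illustration of that kind of metric, here is a tiny script that groups an exported bug list by version and error type to see where the noise concentrates; the CSV column names are assumptions, not our real tracker’s export format.

```python
# Illustration only: group an exported bug list by version and error type.
# The column names are assumptions, not a real tracker's export format.
import csv
from collections import Counter

def error_breakdown(csv_path: str) -> Counter:
    counts = Counter()
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            counts[(row["version"], row["error_type"])] += 1
    return counts

if __name__ == "__main__":
    for (version, error_type), total in error_breakdown("bugs.csv").most_common():
        print(f"{version:>10} {error_type:<20} {total}")
```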

Testing hour. I don’t have much time to run many experiments, but one of the activities I love doing is setting up voluntary exploratory testing sessions. I’ve done it mainly for big new releases, and I try to guide people to focus on a different aspect each time, although it gets wild most of the time. Unfortunately, we don’t do enough dogfooding, so this activity helps a lot with spreading knowledge about the tool and getting feedback. In my experience, teams that are usually siloed and less involved with the product are the ones enjoying it the most, and giving you the freshest feedback (R&D, Ops, etc.).

Getting involved with the community. We’re quite small, and I love allocating some time to help out the customer support guys by giving non-technical answers in the “I think I found a bug” section of our forum. It gives me loads of insight into what users care about, and some valuable examples to back up my feedback. And they actually find some really weird bugs!

Proactively spying on production. While gathering metrics, we’ve come up with some filters to catch weird errors and behaviours, and we can use our tool to understand how a user got there. We’ve found edge-case bugs because of it. We also use Inspectlet to understand how users react to new features, helping us understand further use cases and find strange behaviours!

Participating in technical design meetings. I’m used to joining product meetings to help define some new features or make the product more resilient, but being involved in the technical discussion gives me more insight into the implementation and the challenges, and allows me to raise issues to ensure a more testable product (backdoors to easily reach some states, ways to inject failures, etc.).

Big picture testing. That every developer should care about testing their own slice is a fact, particularly in an organisation with only one QA. But testing integrations and “end to end” flows is usually something tedious that no one wants to deal with, yet having a smoke verification before and after every release brings A LOT of value. That’s why I’ve taken on the responsibility of building a “simple scenarios” big-picture testing suite for both backend and frontend, which also serves me as valuable documentation. It’s the green light everyone needs before rolling out a change.
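As a hedged sketch of what the backend half of such a “simple scenarios” suite could look like: a few smoke checks against key endpoints using requests. The base URL and paths are placeholders.

```python
# A minimal backend smoke check: hit a few key endpoints and fail loudly if
# any of them misbehaves. The base URL and paths are placeholders.
import requests

BASE_URL = "https://staging.example.com"

SIMPLE_SCENARIOS = [
    ("health check", "/health", 200),
    ("anonymous landing page", "/", 200),
    ("unknown resource returns 404", "/definitely-not-a-page", 404),
]

def run_smoke_suite() -> None:
    for name, path, expected_status in SIMPLE_SCENARIOS:
        response = requests.get(f"{BASE_URL}{path}", timeout=10)
        assert response.status_code == expected_status, (
            f"{name}: expected {expected_status}, got {response.status_code}"
        )
        print(f"OK  {name}")

if __name__ == "__main__":
    run_smoke_suite()
```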

Reviewing code (particularly tests). I wish I had more time to review more bits of code, but I usually just check some of the new stuff I haven’t understood yet, and the code for the functional tests, especially the first batch of tests for a new feature. My main concerns are readability, scalability and detecting flakiness: avoiding integrations when possible (using static versions of pages instead of the live version), arbitrary waits, retry policies that would mask problems, modularising for reusability…
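One concrete example of the feedback I leave in those reviews: replacing an arbitrary sleep with a bounded polling wait, so the test fails for real reasons rather than timing luck. `fetch_status` and `order_id` are stand-ins for whatever the test actually checks.

```python
# The kind of change I suggest in test reviews: replace an arbitrary sleep
# with a bounded polling wait. `fetch_status` and `order_id` are stand-ins
# for whatever the test is actually checking.
import time

def wait_until(condition, timeout: float = 10.0, interval: float = 0.2) -> bool:
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Flaky version (what I flag in review):
#     time.sleep(5)
#     assert fetch_status(order_id) == "SHIPPED"
#
# Sturdier version:
#     assert wait_until(lambda: fetch_status(order_id) == "SHIPPED")
```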

Pairing whenever possible. Ok, after all of this, I know I don’t have much time left. But, when I do, my first pick is pairing as much as possible. It teaches me new technologies and ways to develop, I gain some understanding of the solution, I can give early feedback and it bonds me with my colleagues. I’ll write a post about the benefits (and cons) I see in pairing soon!

And these, guys, are some of the activities I’m using to try to infect the team with the love of Quality. Some of them may end up being worthless, and many will evolve and change, but right now I’m learning something new every time I do them. That sounds like reason enough for me to keep them!

May the force be with you,

Gino

Quality Assistance: Why we tried it

As I mentioned before, Quality Assistance is a testing methodology that focuses on empowering and supporting developers in testing tasks so the team can deliver at speed. In this post, I’m going to talk about the reasons why we tried this approach with our team, as well as some challenges we faced.

I joined the startup right after our QA lead did. It was interesting timing: all the decisions regarding Quality were still pending. I was lucky that she counted on me for the change, and one of the approaches we wanted to try was Quality Assistance.

Why? Well, I think the main three reasons were: resource limitations, as we were only two people to deal with everything, so traditional methodologies based on manual sign-off, or on us automating every check, were off the table; we sincerely believe this is the step forward the industry needs, especially in Agile environments where speed is a key factor; and personally, I’m always willing to try something that would allow me to spend more time doing the activities I love about my role.

It wasn’t the only option we considered, but it was the one we wanted to try the most. As with any similar change, we had a list of challenges to face in order to succeed, and some of them needed to be addressed before starting to work on it.

As always, the first step was convincing the stakeholders. Luckily for me, most of this work was done by my manager, and they were really keen to try the change because they knew the previous approach wasn’t working, and they agreed that Quality ownership should be shared within the team. But, to really convince stakeholders, we needed to start gathering metrics to show the results and to start using facts as arguments instead of opinions or beliefs. It makes most conversations easier. We struggled with this, and I believe that if we had focused more on metrics, embracing Quality Assistance would have been smoother. Although I don’t have facts on that…

Convincing the developers at that stage was easy, but when we started to work together on what the expectations were and how we needed to change our way of working… well, I can assure you it will take me a whole post to talk about some of the clashes we had during the process. The lack of documentation made it more difficult. So far, it seems that only Atlassian is trying it, so we didn’t have much documentation or many examples to show them, and most of the available information was aimed at QA practitioners rather than stakeholders. It is also an approach that requires a high dose of customisation, as it depends on each team. We had to focus on a team-by-team solution, which made it harder to use one team’s solution to convince the others.

Personally, I also had the challenge of improving my exploratory testing techniques. Previously in my career, I had just focused on automated checks, tooling and performance testing, and this is a crucial part of Quality Assistance. It was an enlightening experience, learning a lot in such a short period of time, and it helped me with the next challenge: figuring out how to teach testing. To empower developers with the ability to test, you need to learn how to express all the tacit knowledge you’ve been building for years. It completely changes your perspective, because every time I performed a testing task I was actively thinking about what was happening inside my head and trying to translate it into words. Challenging for sure, but fun nevertheless.

These were the main walls we had to face before trying Quality Assistance in our organisation. In the following posts, I’ll try to focus on the different steps we tried and their impact.

May the force be with you,
Gino

P.S. Feel free to use the Quality Assistance category to visit the related posts!