Three steps to improve the quality of your development process in a startup

I’ve been working in the IT industry as a QA engineer since the start of my career. Sometimes my position was just hands-on testing in a waterfall-like process; other times I worked closely with the developers to write all the automation needed before a feature was considered done. Every company takes a different approach to increasing the quality of its product, and I’ve also seen companies try the same thing with completely different results. But one thing has always been constant: when a company reaches a large enough user base or the codebase gets big enough, the conversation about ensuring quality becomes more and more important. Speed and productivity matter less if you alienate your users with buggy releases, and your engineering team becomes afraid of areas of the code where changes _always carry side effects_. That is the moment when the way of _just getting things done_ that has worked so far starts getting in the way of a healthy product. That’s the point when the organization thinks about increasing the QA headcount.

Currently, I’m facing this situation again, with the difference that I feel far more prepared. Of course, every company has a different way of working, but becoming a veteran in the industry allows me to recognize common tropes and analyze the situation with frameworks that have worked for me in previous cases. For some of these steps, I highly recommend finding a person who will bring expertise in automation or testing principles to your team, but many of them can be performed by anyone, especially if you already have a Quality Crusader in your ranks. And that’s what I’m going to help you with right now, by giving you an example of the steps you can take to start improving the quality of your product and processes by yourself.

Before starting, I would like to share the first step I take when joining a new startup, particularly if there’s no defined Quality process in place. I always start by finding the Quality Crusaders already working there: those bright knights who, regardless of their job title, push for a quality-first way of working and care about following best practices. I even ask about them during my interviews when considering a new job. Having allies who will share their view on how to improve the current situation is invaluable. So find them, and involve them in these first steps!

Step one. Describe the development process.

Ok, so we are all making software here. And the process seems pretty straightforward, doesn’t it? Some people sit in a corner. They write some code. They push that code somewhere. And our users enjoy it. Pretty simple. So let’s add a little bit more flavor to it, shall we?

I like to start by naming the different states that a task (bug fix, feature, etc.) goes through before our users enjoy it to pieces. Of course, this is more of a guideline than a cooking recipe, but I’ll describe an example as broad as possible so, hopefully, you can get inspired. In this case, a task can come from a bug report, a new feature request, or some work that we want to do internally (tech debt, etc.). At some point, an engineer will pick it up and start working on it. It will land in the codebase. It will be deployed to some testing environment. And finally, it’ll be in our users’ hands. So, without caring about actions and transitions, the states would be:

Coding – Code on the repository – Testing environment – Production
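If it helps to make this concrete, the flow above can be modelled as a tiny state machine. This is just an illustrative sketch; the state names are mine, not from any real tool:

```python
# Hypothetical sketch: the lifecycle states a task moves through,
# in the order they happen, with only forward transitions allowed.
STATES = ["coding", "repository", "testing_environment", "production"]

def next_state(current: str) -> str:
    """Return the next state in the pipeline; production is terminal."""
    index = STATES.index(current)
    return STATES[min(index + 1, len(STATES) - 1)]
```

Once you have the states written down like this, every quality action you add later is just a check attached to one of these transitions.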

Now the fun part starts. If we know that our code will go through these states, how can we improve quality by adding actions or extra transitions?

Step two. How can we inject that quality?

This will change drastically depending on your team’s previous experience, as well as what they are currently doing. Involve different people in this step, as they’ll offer valuable insight from their previous jobs about what worked and what didn’t. Here is an example of what I’ve proposed on previous occasions. I usually include a short description of the different steps.

_[Diagram: an example quality process]_

For instance, involving people with different expertise in a Design Document before starting work on a feature has worked very well for me. That document can take any shape, but having a short discussion about what problem we’re trying to solve, where it interacts with other modules, and what threshold would make the implementation good enough helps most engineers A LOT while working on it. Just having “what might go wrong” in the back of your head while you’re coding a solution can work wonders.

Another key element, apart from the description, is figuring out the owners of those actions. As I said, many don’t require any new technical expertise in your organization, like the Design Document, or Demo Testing, where the developer runs through the solution with a peer or a product person to spot errors and understand how others would verify that feature. Others really benefit from a specific skillset. That’s why, for instance, I took ownership of the automation that will provide a smooth and reliable way to assess a solution. I’d work on building and improving the frameworks (unit, integration, end-to-end, performance testing…), as well as working with the team so we can all embrace quality.
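As a taste of what the lowest layer of those frameworks looks like, here is a minimal unit-test sketch in Python. The `is_valid_username` helper is a made-up example, not code from any real product:

```python
import unittest

# Hypothetical function under test: a helper that validates usernames.
def is_valid_username(name: str) -> bool:
    return name.isalnum() and 3 <= len(name) <= 20

class UsernameTests(unittest.TestCase):
    def test_accepts_simple_name(self):
        self.assertTrue(is_valid_username("gino42"))

    def test_rejects_too_short_name(self):
        self.assertFalse(is_valid_username("ab"))

    def test_rejects_special_characters(self):
        self.assertFalse(is_valid_username("gino!"))
```

The point is not the helper itself, but the habit: every behaviour the code promises gets a small, named check that anyone on the team can run.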

I also recommend that you don’t try to tackle all the solutions at the same time. Set priorities: where can you get more bang for your buck, which skillsets do you already have, etc. All of these solutions will go through different cycles, and spending enough time and energy setting a solid foundation will help with the resistance that every organization faces when introducing change. Focus on keeping a constant momentum of change and improvement, rather than on the speed of that change. Personally, the workplaces where I could focus on a steady pace ended up with far more impactful change in the end.

Step three. Measuring the impact, reviewing and improving.

This is, in my experience, the hardest part. It’s really difficult to measure the impact of shifting to a quality-focused development process. Are you seeing more bugs because the engineers are working on a harder problem? Or did we get better at finding those bugs? As a quality engineer, part of my job is also finding ways to measure the impact of any change, so we can iterate on it and find a better approach. We all share a common goal: developing a product that brings more value to our users, and doing it in an environment that we enjoy working in.

Spend some time profiling the process so you can get data on it. Sometimes it’s as easy as holding periodic meetings where people discuss how it worked or didn’t work for them; or you can find measurement points to make data-driven decisions. Be mindful of how hard this task is. If people feel that they will be judged by these metrics, they will be reluctant to iterate on and embrace them. This is not about pointing fingers, but about devising a way to improve the process for everyone.

Since most tasks I work on have more of an internal impact than an external one, I like to measure my success by how our engineers perceive the codebase. Of course, I hope that changes in frameworks and automation will impact the end-user experience; but I like to focus on our team first. Examples of measurements that I use are: periodic surveys of the engineering team on how much they like working in our codebase, or how safe our automation makes them feel when doing a refactor or a change; pairing with different developers so I can see how they work and how they interact with the frameworks; workshops on how to approach different bugs or problems; etc.

Another thing to be mindful of is that, at this stage in a startup, Quality is quite often a buzzword used as a scapegoat. Most teams had to focus on quick and “dirty” deliveries in their first steps as an organization. When the conversation about Quality starts arising, it’s usually because that speed has started to produce difficult releases, and “the lack of a QA in your team” is frequently used as an excuse. Having people with different expertise always brings value, but it’s everyone’s responsibility to follow the best practices they know and to voice their complaints about the current process. This situation can also turn toxic when, after hiring someone with that expertise, they’re expected to just fix everything.

Final notes

To wrap it up, I want to stress that this is not a magical tool that will increase your productivity and ensure the quality of your product by itself. This is a progressive change of mindset. It’s about changing how we work to spend more time on “how can we ensure and verify the solution”, and less on “how can we get this done as soon as possible”. I personally believe (and I’m backed by plenty of evidence) that investing more effort in quality also brings speed in the long run, as it requires less of the VERY COSTLY redoing and fixing that can also undermine the spirit of our engineering team.

If you’re starting to ask these questions in your organization: congratulations, you’re going through a change that your users will thank you for, and that will produce less churn in your engineering team. But I’m also really curious about your approach. What situation brought you to start talking about Quality? What solutions is your team currently working with? Which points are you struggling with the most?


The Book of Five Rings: Why

I’m enjoying The Book of Five Rings (Miyamoto Musashi) way more than I thought I would. It’s not only teaching me about martial arts and war: the lessons are easily applicable to professional and personal situations. Even if I hope to never apply this sword-mastery knowledge in real life (although with my passion for role-playing games, I know it’s going to come in handy!), I find myself revisiting some experiences in my head as if they were duels, running through all the points Musashi explains in this amazing book.

What hooked me into continuing to read (on top of my inner nerd feeling like a ninja!) was how surprisingly relatable the lessons and statements were to my career. And we’re talking about a manuscript on surviving duels to the death that is almost four centuries old.

For instance, Musashi talks about how a bad rhythm can kill you. Moving too fast in a fight is as bad as being slow, as you might make a mistake. That’s why he always keeps a steady pace: fighting, walking and living. It allows you to study your opponent and plan a strategy.

He also says that repeating a technique that previously failed will kill you. You might be tempted to think that the failure was due to poor execution, but the chance of another failure is really high. If you try it again and manage to survive, your adversary will be ready the third time. And you’ll die.

He also states the importance of your environment. Every one of its details might be an advantage, and you have to avoid letting them become an adversity. You should also have expertise with a large variety of weapons, instead of mastering just a few of them like other duelling schools do. Adaptability is required if you plan to keep duelling and survive. People will bring new weapons and tools, and they’ll focus on countering the common techniques.

As he says: no technique is invalid if it makes you survive. I want to end the talk about the book with some of its quotes:

“Do nothing that is of no use”

“It is difficult to understand the universe if you only study one planet”

“You can only fight the way you practice”

“You must understand that there is more than one path to the top of the mountain”

― Miyamoto Musashi, The Book of Five Rings

May the force be with you,

Gino

Subconscious testing

I’ve recently finished reading Subliminal, and it confirmed an idea I’ve always had: gut feeling makes a huge difference in testing. I’ve already talked about great testers being born instead of trained. Experience, study and failure teach you new skills and better efficacy, but most of the decisions I take during testing sessions are gut-driven (and experience-driven as well: always remember previous shit!).

The best (or worst, depending on your side of the game) bug discoveries I’ve made came from trying something off-script, purely based on my instincts. Building that load test focused on the weird interaction between components because several parts use the same database, or just letting my inner annoying user take control of the keyboard and mouse. Something tells me that the calendar widget may easily fail, and that the payment app is not going to handle high load correctly.

This makes some conversations with developers tense, because they want a reason why we can’t go live yet, and waiting for me to build a test that will check something we have no evidence yet might fail is not a valid reason. With time, they end up trusting my gut as much as I do; but I find it wise to hold back while you’re still building that trust, as it’s not wise to have your new team against you during the first weeks. Once they see its value, they stop thinking it is a waste of time.

It also makes Quality Assistance methodologies trickier, because teaching reasoning that is not fully conscious requires deeper knowledge of the field. I like analysing my actions after taking them, so I can understand the reasons behind them and have an easier time explaining them.

But if testing is about gut feeling, how can we train it? For me, analysing my own reasoning helps me identify what I did right and where to focus my efforts; and observing my colleagues and asking for their reasons makes me discover new ways and approaches. It’s hard to learn from others’ experiences, but it’s still better than nothing. That’s why I prefer reading about testing cases and stories rather than studying theory. But there’s still a long road ahead so, give me a hint: how do you do it?

May the force be with you

Gino

Cool automation tools

A few months ago I started gathering all the testing articles and examples I came across. I have a really awful memory, and when discussing different approaches with my QA Lead I could only remember some of the details, which led to awkward conversations; so starting to build my own “Testing Bible” sounded like an amazing idea. And so I did.

It’s actually a Google Doc where you can find chapters about specific testing methodologies (Rapid Software Testing, Quality Assistance), automation tricks, arguments to convince stakeholders and developers, a bullet-point list of my achievements and some cool automation examples. Today I want to talk about the latter!

I really love hearing about tools or techniques that organisations have developed to meet their really weird and niche testing needs. The Google Test Automation Conference is an amazing place to find some of them, as are companies with tech blogs (although in some cases they think testing is not an appealing enough subject to talk about). So let’s talk about some of the ones I fell in love with!

Netflix Simian Army

This takes fault injection (https://en.wikipedia.org/wiki/Fault_injection) to the next level. In order to build a resilient system, Netflix realised that they needed to be ready for errors. The best way they found to prove it was injecting errors into the pieces that are out of their control, so they understand what’s going to happen when those errors genuinely occur. The talk is centred on the army of simians they have causing AWS errors in production.

Things like killing AWS instances or faking an entire AWS region outage are the duties of this monkey gang. They trigger them periodically in production to see the progress and be ready for when shit happens. Because we know that shit always happens. Part of the source can be found here.
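To illustrate the idea (this is my own toy sketch, not Netflix’s actual Simian Army code), the core of a chaos monkey is surprisingly small: pick a random instance from a pool and terminate it, so the system is forced to prove it survives failures:

```python
import random

def pick_victim(instances, rng=random):
    """Choose one instance to kill; return None if the pool is empty."""
    if not instances:
        return None
    return rng.choice(list(instances))

def kill_random_instance(instances, terminate):
    """Pick a random victim and hand it to the terminate callback."""
    victim = pick_victim(instances)
    if victim is not None:
        terminate(victim)  # in real life: an API call to the cloud provider
    return victim
```

All the hard work, of course, lives on the other side: the system has to detect the loss and recover without anyone being paged.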

Facebook’s test killer

As I’ve already mentioned, Facebook is known for not having testing roles. They rely heavily on automation, although we know that flakiness and inconsistency are among the biggest pains of automating tests. That’s why they’ve come up with a system where tests have to gain trust before the system considers them necessary, as well as a way to identify, heal and fix tests that cause more noise than benefit.

The system automatically marks failed tests as “unknown value”, passing them through various states and identifying the party responsible for the failure. It helps with reviewing the code, and the system is the one disabling someone else’s test when it starts looking flaky, so it’s not a “me against you” argument.
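A toy version of that trust idea (my own illustration, not Facebook’s real system) could look like this: a test that fails several times in a row gets quarantined instead of blocking everyone’s builds:

```python
class TestRecord:
    """Tracks one test's recent results and quarantines it when it turns flaky."""

    def __init__(self, name, flaky_threshold=3):
        self.name = name
        self.flaky_threshold = flaky_threshold
        self.consecutive_failures = 0
        self.quarantined = False

    def report(self, passed: bool):
        if passed:
            # A pass resets the streak: the test has regained some trust.
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.flaky_threshold:
                # Stop blocking builds; alert the owner instead.
                self.quarantined = True
```

The key design choice is that the machine, not a colleague, pulls the flaky test out of the critical path, which is exactly what defuses the “me against you” dynamic.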

Generate tests from logs

Creating a complete suite of unit tests can be a daunting task, particularly as the product scales up. With this solution, they rely on extensive logging and machine-learning techniques to generate stateless and contextual assertions, building the skeleton of the functional tests.

You’ll still need some human intervention, but this is undoubtedly an amazing starting point, automating away the most tedious part of the process. This is proper automation: something that assists us in achieving our goals!

Spotify’s RapidCheck

Testing the whole range of values in a unit test is impossible; that’s why Spotify has used the RapidCheck idea to help them build better coverage of that range. It randomly tests a broader set of values, building a skeleton of important cases. Every tool that helps us gain confidence in the solution faster is welcome!
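The property-based idea behind RapidCheck can be sketched in a few lines of plain Python (a home-grown toy, not the actual RapidCheck library): instead of hand-picking inputs, generate many random ones and assert a property that must hold for all of them:

```python
import random

def clamp(value, low, high):
    """Constrain value to the closed interval [low, high]."""
    return max(low, min(value, high))

def check_clamp_property(trials=1000, seed=42):
    """Throw random inputs at clamp() and verify its defining property."""
    rng = random.Random(seed)
    for _ in range(trials):
        low = rng.randint(-100, 100)
        high = rng.randint(low, low + 200)  # guarantees low <= high
        value = rng.randint(-500, 500)
        result = clamp(value, low, high)
        # The property: the result always stays inside [low, high].
        assert low <= result <= high, (value, low, high, result)
    return trials
```

A fixed seed keeps the run reproducible; real property-based frameworks add input shrinking on top, so a failure is reported with the smallest input that triggers it.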

Do you know about another interesting testing tool? I’ve left some of them unmentioned, but only because I love having pending subjects to talk about in future posts!

May the force be with you,
Gino

It’s always worse elsewhere

Yesterday I shared a couple of beers with some old friends who also work in the software industry. They work at a small consultancy building websites on the same core, cutting most of the production costs since most projects use fairly similar implementations. Everything sounded fine until they got surprised when I asked about their testing framework. “Well, before sending them to the client, we navigate through the site checking that nothing breaks.” Fair enough, I don’t expect everyone to use Selenium. “The problem is when a client finds a bug. Most of them are part of the core, so the bug is probably in every one of our projects, and we’re afraid of refactoring anything.” If the core is not a volatile piece of code, it’ll probably be a good investment to build some functional test verification on top of your unit tests. “Unit what?”. Boom. And then I remembered.

I was also part of a consultancy once (it was my first job in the industry). I remember the “we can’t afford to write the unit tests, our problem is really complex and we don’t have time, we’ll just verify it at the end”. I bought it. I truly believed that we were good enough that unit tests were only a plus, and the projects were difficult and fast-paced… NO WAY could we afford to waste time on tests! I remember some senior colleagues explaining that to me. Why wouldn’t I follow their example? They had far more experience than me, and they developed way faster. I wanted to be like them.

And I remember the struggles. I remember how every single bug fix carried weird and unexpected regressions. How the clients complained every week, and the constant fight about who was responsible for the fix (was it a bug, or a failure in the requirements?). The hellish deployments, the tedious manual verification of just what was needed, the numerous problems with our version control tool as we always realised too late that the release didn’t contain that change.

I also remember the change. When I left the company and learnt something new at the next one. And then the next one. On my first day, one of my new colleagues was disturbed because I wasn’t writing unit tests. He didn’t even ask for a reason, but I tried to convince him that they just slowed my momentum. I remember his face. He simply taught me how to make it easy, and handed me a copy of Clean Code. I had to pair with him and… oh God, that was beautiful. He showed me what TDD feels like. And I started realising that, actually, the team didn’t spend that much time dealing with the horrors I was used to. It was so simple and beautiful.
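For anyone curious what that TDD rhythm looks like in miniature, here is a hypothetical Python example: the tests were written first (and failed), and the `slugify` function was then implemented just far enough to make them pass. None of this comes from that actual pairing session, of course:

```python
import unittest

# Step 2: the implementation, written only after the tests below existed,
# and only as far as needed to turn them green.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 1: the tests, written first, describing the behaviour we want.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_with_dashes(self):
        self.assertEqual(slugify("Clean Code Rules"), "clean-code-rules")

    def test_collapses_extra_spaces(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")
```

The beauty he showed me is in that ordering: the test exists before the code, so you always know exactly when you are done.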

That’s why I spent some time yesterday trying to show them the importance of these practices. Explaining to them why most of the companies they want to work for ask for TDD practices or writing functional tests. Showing them examples of how these practices have been important in my projects. Giving them resources, guidelines and the chance to ask any question. But, obviously, I’m just a QA, so what the heck would I know about coding?

Yeah, dear readers, it’s always worse elsewhere. That’s no excuse not to try to be the very best you, including the best professional you can be; but sometimes appreciating what you’ve learnt and why you’re doing it is needed. It is also important to identify who could use some help, and offer it. For me, life is how you feel sharing your knowledge, resources and smiles with the people you choose.

I’ll talk in further posts about some of the anti-patterns I’ve seen (and do) during my career, and some lessons I’ve learnt fighting them. But, for now,

May the force be with you,

Gino

Why being a tester spoiled my life

I truly believe that a great tester is born, not made, as in most other professions. Testing requires strong critical thinking, as well as perseverance and lateral thinking. There are some technical and product details that undoubtedly help in your daily work, but in my opinion, those skills don’t make a great tester on their own.

I’m not a great tester, and that’s what drives me to understand, learn and practice how you become one. But I am a critical thinker. I love breaking down situations too, in the calmest possible way, and identifying what is happening, what the potential outcomes are and how it can go wrong. I’ve been doing it my entire life, and that’s probably why I enjoy being a testing professional. But I can’t stop being a critical thinker in my personal life.

There are times, MANY times believe me, when thinking about what can go wrong will spoil and ruin an experience. There are times when understanding why you ended up in some situation makes it far less enjoyable. And, if you’re sharing it with someone, people will hate you when you try to anticipate the issues and prepare a plan B in advance. Some just want to do things, regardless of the result. A simple example is my peculiar relationship with food.

My mum is an endocrinologist, and I grew up learning the calories in food, their main nutrients and what is missing from any meal. And that has spoiled my life. I obviously eat junk food, and I enjoy eating colossal portions, but I can clearly point out how it will affect me, and what I’m doing wrong. So when I see someone eating a really big burger with doughnuts as buns, I can only think “SERIOUSLY??”, instead of enjoying it.

But peeps, if you’re cursed like me, remember that you can use it and become part of the testing community. Your colleagues will praise your ability to anticipate fires, and they’ll learn it after having thrown away your recommendations; you’ll help people understand why things didn’t work, and you’ll show them that some details really matter.

This is why I’m still a tester. I spent part of my career pretending to be a developer, dreaming of being a designer and even wondering what producing would be like. But I’m cursed, I was born to be a tester, and breaking things is SO much fun.

May the force be with you,

Gino

The no-QA way

We’re still finding our sweet spot regarding Quality, and there are always some developers who think no “formal tester” is needed in our process right now. This approach gains even more followers when you have an ex-Facebooker on your team.

Don’t get me wrong, I never take the bait and jump in yelling “You NEED me, this will be a MESS without me!”, mainly because I think this approach is as valid as any other if you focus on the right things and meet some conditions. And I really enjoy discussing testing, especially when it allows me to understand my peers’ point of view.

So we started a conversation, but it didn’t lead anywhere because we were just pointing out the broken parts of our current process, instead of really talking about how to make it work. Afterwards, I scheduled some time on my calendar to sit in one of the meeting rooms and use the whiteboard to clear my ideas. Would it be possible? What are we missing to get there? What would it offer us? How would it impact our team and process? What are the other solutions?

Then I felt far more ready to have that conversation again, but I decided it would improve with a broader audience, and by sharing the questions in advance so everyone would be better prepared. And, this time, I really enjoyed the chat. We still complained about what is currently failing, but we managed to come up with tentative solutions, as well as some answers to the previous questions.

Let’s start with motivation. What would it offer us? The answer was almost unanimous: less friction in the process. As developers will be more autonomous and there’s no QA layer to get in the way, coders can focus on the code. It also carries some challenges, but we’ll talk about those later on. The goal is a simpler and smoother process, and, on top of that, you don’t have to spend on QA itself. A win/win situation!

But, obviously, there are some requirements to get there. So, what are we missing? First and most important, by far, is a culture of owning quality. If there’s no “quality gatekeeper”, everyone should be responsible for the quality of their deliverables, as well as for ensuring that everyone else is working on theirs. We’ve been moving towards that (as have most of the Agile testing approaches), but we’re not there yet, and making this huge step is really needed.

This is especially tricky when you think about testing the “big picture”. It’s quite straightforward that, when you deliver your solution, it should contain unit and functional automated tests verifying its behaviour. Hopefully, you have a culture around integration and component testing as well. But what about ensuring that the application works as a whole? Who will be responsible for that? Who will create, maintain and run those tests? What about the strange and complex cases that are almost impossible to automate? When, and by whom, should they be explored?

One of the huge differences between Facebook’s case and ours is the amount of dogfooding they do. We don’t have beta users, we don’t do canary releases, we don’t even use our tool that much internally. That’s one of the biggest challenges we’d need to solve before taking this approach. It can be done, with things like getting used to exploratory testing the application once development is done, but no developer is used to doing that. One of the reasons why Quality Assistance keeps a testing expert available to the team is to help in this transition.

Then… how would it impact our team and process? Well, spreading out the responsibility for quality needs some process changes, like defining steps or techniques to review the solutions in pairs, not only reviewing the code and the tests. We need a better understanding of the application we’re building, and we need to play with it if we want to verify the changes. We’ll need more involvement with the product in order to find dependencies and more testable solutions from the beginning, although this will improve as the team gains better knowledge of it.

So, would it be possible? Well, one of the Agile principles I TRULY believe in and share is “fail fast, fail often”. As long as we timebox the attempt, set some retrospective meetings and are aware of the potential impacts, I think it’s pointless to answer the question in theory instead of trying it out for a while, then analysing and understanding the outcome. For sure, the first version won’t be the good one, but it will teach us some tricks for the next one.

And what are the other solutions? I’m still keen on giving it a go, so it helps us identify what we’d miss from the current approach and what we love about the change. But it’s always good to keep other solutions in mind. We can always go back to the old model of a QA wall where we throw everything, have someone help us embrace quality and build tools to aid us (Quality Assistance), invest heavily in automation, or whatever pops into your mind that might work.

There are thousands of different approaches to achieving Quality at Agile speed, and none of them works flawlessly. Nor does any of them work in every organisation. But I know one thing for sure: spending time and resources on finding a sweet spot for your process is always a good investment. Period.

Do you have experiences with a non-QA environment? I’d love to hear them!

May the force be with you,
Gino