Quality Assistance: Lessons learned

This morning, during my 10-ideas routine, I pushed myself to list the techniques and activities I’ve tried in order to change the Quality mindset around the company. You already know that we’ve been modelling some of our processes following the Quality Assistance approach. I know that the outcome of these activities will vary drastically for each team, but hopefully knowing about some real experiences will help you try something similar. And, as Scott Berkun explained brilliantly here, it’s more important to learn from the failures than to just share the final idea. And writing this will certainly help me remember some details. I don’t trust my memory that much…

I’ll try to give you as much background as possible, although reading about how the organisation is working now and how I’m facing it may also help. Now, let’s focus on some of the things I’ve tried!

Three amigos. This is a pretty standard activity, although we needed to twist it a bit, mainly because of resource constraints (just me as QA and one product owner). We started by picking a few complex tasks per sprint and running a standard Three Amigos, although at some point we decided that the engineers should do the exercise of listing the scenarios before implementing a solution, leveraging our work to double-check them and add some “edgy but important” ones.

QA demo. We do it after finishing the development of some sprint stories: I (or Product) spend 10 minutes at the developer’s machine showing how users are most likely to run through the feature, as well as trying to break it. There’s no better way to understand how to test than watching someone test (while they try to communicate their mental process out loud). Some developers embraced testing activities after this, because they saw me enjoying it, although others remain reluctant.

Defining metrics to understand the current status. Things like the number of errors in each version, timings, grouping errors by type, clearing out non-useful data, or even finding ways to generate it (like replicating production load on testing environments). We’re all engineers here and, no matter how much you try, data wins discussions. That’s a fact. I don’t have any data to back it up, but you have to believe me.
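To make the “grouping types of errors” bit concrete, here’s a minimal sketch of the kind of aggregation I mean, assuming the error logs can be loaded as dicts with version and error_type fields (both field names are invented for this example):

```python
from collections import Counter

def errors_per_version(log_entries):
    """Count errors grouped by (version, error type).

    `log_entries` is assumed to be an iterable of dicts such as
    {"version": "2.3.1", "error_type": "TimeoutError"}; the field names
    are invented for this sketch.
    """
    counts = Counter()
    for entry in log_entries:
        counts[(entry["version"], entry["error_type"])] += 1
    return counts

if __name__ == "__main__":
    sample = [
        {"version": "2.3.1", "error_type": "TimeoutError"},
        {"version": "2.3.1", "error_type": "TimeoutError"},
        {"version": "2.3.2", "error_type": "ValidationError"},
    ]
    for (version, error_type), count in errors_per_version(sample).most_common():
        print(f"{version}  {error_type}: {count}")
```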

Testing hour. I can’t pretend I have much time to run experiments, but one of the activities I love doing is setting up voluntary exploratory testing sessions. I’ve done it mainly for big new releases, and I try to guide participants to focus on a different aspect each time, although it gets wild most of the time. Unfortunately, we don’t do enough dogfooding, so this activity helps a lot in spreading knowledge about the tool and getting feedback. In my experience, the teams that are usually siloed and less involved with the products are the ones who enjoy it most and give you the freshest feedback (R&D, Ops, etc.).

Getting involved with the community. We’re quite small, and I love allocating some time to help the customer support folks by giving non-technical answers in the “I think I found a bug” section of our forum. It gives me loads of insight into what users care about, and some valuable examples to back up my feedback. And they actually find some really weird bugs!

Proactively spying on production. While gathering metrics, we’ve come up with some filters to surface weird errors and behaviours, and we can use our own tool to understand how a user can get there. We’ve found edgy bugs thanks to it. We also use Inspectlet to understand how users react to new features, which helps us understand further use cases and find strange behaviours!

Participating in technical design meetings. I’m used to joining product meetings to help define new features or make the product more resilient, but being involved in the technical discussion gives me more insight into the implementation and its challenges, and allows me to raise issues early to ensure a more testable product (backdoors to easily reach certain states, ways to inject failures, etc.).
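To illustrate what I mean by “backdoors to easily reach some states”, here’s a minimal sketch of a test-only endpoint, written with Flask just as an example; the routes, the environment flag and the state store are all hypothetical, and something like this would only ever be enabled outside production:

```python
import os

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for whatever state the app really keeps.
TEST_STATE = {}

# Only register the backdoors when an explicit test-only flag is set, never in production.
if os.environ.get("ENABLE_TEST_BACKDOORS") == "1":

    @app.route("/__test__/state", methods=["POST"])
    def set_state():
        """Jump straight to a hard-to-reach state instead of clicking through the UI."""
        TEST_STATE.update(request.get_json(force=True))
        return jsonify(TEST_STATE), 200

    @app.route("/__test__/fail-next", methods=["POST"])
    def inject_failure():
        """Mark the next dependency call as one that should fail, to exercise error handling."""
        TEST_STATE["fail_next_dependency_call"] = True
        return "", 204
```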

Big picture testing. That every developer should care about testing their own slice is a given, particularly in an organisation with only one QA. But testing integrations and “end to end” flows is usually something tedious that no one wants to deal with, even though having a smoke verification before and after every release brings A LOT of value. That’s why I’ve taken on the responsibility of building a “simple scenarios” big picture testing suite for both backend and frontend, which also serves me as valuable documentation. It’s the green light everyone needs before rolling out a change.
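As a flavour of what “simple scenarios” means here, this is roughly the shape of one of those smoke checks, sketched with pytest and requests against a hypothetical staging URL and invented endpoints:

```python
import os

import pytest
import requests

# Hypothetical base URL; in practice it would point at the environment being released.
BASE_URL = os.environ.get("SMOKE_BASE_URL", "https://staging.example.com")


@pytest.mark.parametrize("path", ["/health", "/login", "/api/v1/projects"])
def test_critical_endpoint_responds(path):
    """The 'green light' check: every critical endpoint answers before and after a release."""
    response = requests.get(BASE_URL + path, timeout=10)
    assert response.status_code == 200


def test_homepage_renders():
    """A very shallow end-to-end check that the frontend actually served a page."""
    response = requests.get(BASE_URL + "/", timeout=10)
    assert response.status_code == 200
    assert "<title>" in response.text
```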

Reviewing code (particularly tests). I wish I had more time to review more bits of code, but I usually just check some of the new stuff that I haven’t understood yet, plus the code for the functional tests, especially the first batch of tests for a new feature. My main concerns are readability, scalability and detecting flakiness: avoiding integrations when possible (using static versions of pages vs the live ones), random waits, retry policies that would mask problems, modularising for reusability…
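As an example of the flakiness I push back on in reviews, here’s a sketch (using Selenium’s Python bindings, with an invented page and locator) of the fixed sleep I’d flag versus the explicit wait I’d ask for:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("https://staging.example.com/dashboard")  # invented page for the example

# The pattern I flag in reviews: an arbitrary pause that is either too short (flaky)
# or too long (slow), e.g. `time.sleep(5)`.

# The pattern I ask for instead: wait explicitly for the condition the test depends on.
report_link = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "weekly-report"))  # invented locator
)
report_link.click()

driver.quit()
```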

Pairing whenever possible. OK, after all of this, I know I don’t have much time left. But, when I do, my first choice is to pair as much as possible. It teaches me new technologies and ways of developing, I gain some understanding of the solution, I can give early feedback and it bonds me with my colleagues. I’ll write a post about the benefits (and drawbacks) I see in pairing soon!

And these, guys, are some of the activities I’m using to try to infect the team with a love of Quality. Some of them may turn out to be worthless, and many will evolve and change, but right now I’m learning something new every time I do them. That sounds like reason enough to keep them, for me!

May the force be with you,

Gino

Lessons learnt from Clean Code

Before I start talking about how Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin (Uncle Bob) affected me, I want to start by recommending this book. It’s well known in the software development industry and, even if you don’t see eye to eye with Uncle Bob, it has really good reflections on how to structure your code. I was lucky that, during my career, I joined a company where reading this book was part of your first-year commitments, and we used it a lot while reviewing code as a common ground for arguments. But now, let’s focus on what I learnt reading this book!

Being able to solve complex problems is not what defines a professional software developer. It’s absolutely a good skill and, at some point, you’re going to need analytical thinking and problem-solving abilities, but a professional developer also focuses on the readability of the code, as well as on building scalable, maintainable and simple solutions.

This book made me realise that wording, naming and modularising the code are more important than I thought at the start of my career. I started to spend some time finding the best variable, method and class names, for example.
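A toy before/after, just to show the kind of renaming I mean (the domain is made up):

```python
# Before: the reader has to reverse-engineer the intent from the maths.
def calc(d, r):
    return d * (1 - r)


# After: the names carry the intent, and no comment is needed.
def discounted_price(price, discount_rate):
    return price * (1 - discount_rate)
```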

Learn how to make better use of hierarchy. I’ve been working with object-oriented languages for the majority of my professional life, and learning how to master this powerful tool makes a huge difference.
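A small sketch of what “using hierarchy better” can mean in practice: pushing behaviour into subclasses instead of branching on a type check (the notification domain is invented for the example):

```python
from abc import ABC, abstractmethod


class Notification(ABC):
    @abstractmethod
    def send(self, message: str) -> None:
        ...


class EmailNotification(Notification):
    def send(self, message: str) -> None:
        print(f"emailing: {message}")


class SmsNotification(Notification):
    def send(self, message: str) -> None:
        print(f"texting: {message}")


def notify_all(notifications: list[Notification], message: str) -> None:
    # No isinstance() branching here: each subclass knows how to send itself.
    for notification in notifications:
        notification.send(message)


notify_all([EmailNotification(), SmsNotification()], "release is out")
```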

Refactoring to achieve readable code usually takes longer than coding the solution itself. It’s a good practice to focus first on getting the job done, and then refactor iteratively until the code is readable enough.

Always leave the campground cleaner than you found it.

Having a guideline during code reviews makes them less harsh. When a clash happens, you can use the book to explain why a change should be made, instead of falling back on an “I know better than you” argument.

Professional programmers care about testing. Mr. Martin argues that TDD is the only way for a professional programmer, because no code is complete without enough tests verifying that it works.

I should not abuse comments. During my degree, teachers told me I had to comment as much as possible to achieve readable code. Then you discover that no one updates the comments when refactoring, so they end up as misleading pieces of information. Instead, if you focus on readable code per se and forbid yourself from commenting, you’ll build an understandable solution that doesn’t require them. And, for documentation, nothing explains better than an extensive test suite. And if you don’t update the tests… they’ll break!
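A tiny example of what I mean by tests as documentation; the helper is invented, but the point is that, unlike a comment, these break loudly when the behaviour changes:

```python
def normalise_username(raw: str) -> str:
    """Invented helper, only here to illustrate the point."""
    return raw.strip().lower()


# These tests document the behaviour better than a comment could,
# and they fail (instead of silently lying) when the behaviour changes.
def test_surrounding_whitespace_is_removed():
    assert normalise_username("  Gino ") == "gino"


def test_case_is_ignored():
    assert normalise_username("GINO") == "gino"
```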

Programming literature can teach me more than I thought. I assumed that everything you need to learn about coding is online, and that books are outdated and useless by definition. How wrong I was. After reading this, I picked up books about testing, programming and design patterns. And, without any doubt, my code got better for it!

This was the first time I heard about principles like Don’t Repeat Yourself, Keep It Simple Stupid and You Aren’t Gonna Need It. And I realised how little I was following them!

Those are some of the lessons I learnt reading this enlightening book. In a future post, I’d love to talk about what I learnt from The Clean Coder: A Code of Conduct for Professional Programmers (Robert C. Martin), part of the same series but focused on the relationships between software professionals.

May the force be with you,

Gino

How The Game improved my relationship with my colleagues

“Any action or experience contains a lesson” is one of the mottos that rules my life. I truly believe it. It’s probably due to spending too much time thinking about everything, but I can extract a lesson from any of my experiences, as well as from most of the games I’ve played, books I’ve read or movies I’ve enjoyed. That doesn’t mean every lesson has the same value, as it varies dramatically depending on my context at that moment. But that’s why I always try to meet new people (it’s really easy to learn something from their stories), consume a variety of media and read from a diverse pool of books. One example is described here.

And that’s why I ended up reading The Game. The book talks about a pickup artist society and how much the writer’s life changed after learning how to seduce women (that’s a really vague summary). I love learning about human behaviour, particularly when it touches the subliminal realm, so this book gave me loads of insight into how people react to certain patterns, as well as putting into words some thoughts I already had (which helps a lot; knowing something and being able to express it are different levels of understanding a subject).

But we’re here to discuss what The Game taught me that is applicable to my life, not to talk about the cheesy pickup techniques, which are probably outdated and culturally dependent. I really think it helped me step up my workplace relationships. And no, I’m not saying I started flirting with my colleagues.

Your working environment is ruled by human relationships; that’s a fact. Even if you work in a highly technical field where people try to encapsulate themselves to get into “the zone”, it’s teams that achieve amazing projects, not bright individuals. So, since the way you talk with your colleagues, set expectations with your manager, handle problems and react to interruptions is going to drastically change the morale and energy of your team, every single lesson that teaches me how to understand and optimise these interactions makes me a better professional.

By optimising interactions I don’t mean manipulating or lying to someone so things go the way you want, because that collides with my “do not be evil” motto. Optimising, for me, means being able to identify those “naughty tricks” some professionals use to reach their goals, learning when to express admiration, sensing when to take the lead and knowing how to show them your value. The Game gives you examples of situations where saying one thing completely changed the outcome, either ruining the game or delivering success. And, being honest, if you learn how to get over yourself and keep trying in the game, you’ll be way more prepared for any awkward situation. It also focuses on reading your environment and finding the best technique for each context.

To be more specific, I’m going to share some quotes that have been helpful in my life. Not all of them are from Neil Strauss but, for me, they relate to what I learnt from the book.

What you look like doesn’t matter. But how you present yourself does.

You miss 100 percent of the shots you never take

The cat string theory is the most accurate theory of all time: As humans, we don’t appreciate things that just fall into our laps. We find more value when we have to bust through personal boundaries, overcome obstacles, and do things we originally thought we could not.

Be a closer: Most people are not closers and never finish what they start. I’m definitely guilty of this on occasion: I get overly excited, commit to everything, and often never finish projects I start.

Some of you may even sabotage your own progress because you’re afraid you won’t find what you seek. I don’t know about you but I’d rather find adventure in the quest than finding comfort in sitting idle.

Everything you do matters: In the end it all counts, it’s cumulative, LIVE!

May the force be with you,

Gino

Automation is not always the best way

After years of being a Software Developer in Test who primarily focuses on automated tests, being the one defending why they are worth the investment, it feels really strange to now be the one convincing stakeholders that, right now, automated tests aren’t the best way we have to raise quality in our situation. That’s why I want to talk about how I came to this conclusion, how I dealt with stakeholders and what the outcome was.

Let’s start with some background. We’ve recently been forced to restructure the company, and ending up as the only QA is one of the results. That’s one of the reasons I had to start spending more time learning about methodologies, testing techniques and managing expectations. It’s not the first time I’ve dealt with these things, but I’d always had someone to run my advice past before committing to decisions. So it became my responsibility to understand when to apply other techniques (outside my expertise with automated tools to support testing), as well as good ways to implement them.

Another deciding factor was the tight deadline we’d committed to, which forced us to deliver a huge amount of work and set up a situation where raising quality concerns was seen as an annoyance. Some processes that were being rolled out stopped because of this change, and some teams decided they didn’t have any time to deal with automated tests, although others focused on them from the beginning, doing TDD while pairing because it helped them work with a half-defined technical design. After several attempts to convince the reluctant teams, like explaining how automated tests would help us prevent regressions while iterating, handing them some working frameworks in their stack so the transition would be easier (I don’t have expertise in JavaScript, so it probably wasn’t the cleanest code) or giving them written scenarios to build on, I decided that maybe it wasn’t the moment to tackle this problem. Also, I got a message from the managers that we should keep noise to a minimum so developers wouldn’t lose their momentum. I raised my concerns with them (nowadays I feel free to speak up when I think things aren’t working), and started weighing the different approaches we could take.

I spent the first sprints gathering possible processes and sharing them with our product manager. I also spent a fair amount of time working with the designers, learning from the user testing and reviewing the tests and scenarios from the teams that had decided to pair (realising how much they already cared about quality allowed me to narrow my focus). When I started seeing and playing with the deliverables, it was obvious that we needed to tackle the quality of the product somehow.

I started reporting bugs and odd behaviours in a spreadsheet shared with our product owner, so the noise reaching the developers was much lower (he could discuss priorities with me). I ran short exploratory testing sessions per feature and iterated through them. Afterwards, I paired with the designers and product owners to run more exploratory sessions, which allowed me to centralise the reports and reduce duplicated errors. I wanted to do it with the developers too, so the feedback loop would be shorter (this is a common practice and, in my opinion, one of the highest-value testing activities), but I wasn’t able to. As the release date approached, I invited and encouraged everyone in the company to test the current state of the product to identify things like usability issues, user experience feedback, etc. I asked them to use low-noise channels to give me the feedback, so I could again act as a hub gathering and cleaning it.

I won’t go through all the challenges, failures and problems we ran into. I wanted to write this because of the lesson I learnt: no matter how much expertise you have in one field, there are situations when you need to jump out of your comfort zone because another approach is clearly better, even if your implementation ends up poor due to lack of experience. Personally, I feel lucky because I’ve changed industries and methodologies throughout my short career, which makes me feel I’m not too biased yet. But I feel particularly proud of myself because I didn’t try to force a solution out of my automation knowledge. Have you been in a similar situation?

May the force be with you,
Gino

Cool testing roles

Fortunately, the testing industry is not only made up of Automation, Manual QA and QA Lead roles. Models are shifting (particularly due to Agile methodologies) and new approaches require new (and, in my opinion, more interesting) roles. Here I’ll talk about some of these roles, both real examples and some theoretical ones.

  • Test jumper. Described by James Bach, it is one solution for testing in Agile methodologies. Its duties are supporting and supervising multiple Agile teams, ensuring that nothing blocks proper testing at speed. It’s all about helping developers and testers have whatever tools they need before they need them, helping to create testable products and raising testing concerns as soon as possible.
  • Software Engineer, Tools & Infrastructure. Google’s evolution of the Software Engineer in Test, this role focuses on tooling up the Software Engineers to make their life easier. It’s a technical position and some of the tasks are described in this Reddit post, but it has been one of the recent trends to replace testers with software engineers who focus on making testing easier.
  • Quality Assistance. As I described in this post, it’s the approach Atlassian took to deliver quality at Agile speed. The efforts focus on empowering developers and training them in testing tasks, so the majority of those tasks can be done really early in the development process. Other companies, like Spotify, are creating similar roles.
  • Automation toolsmith. Similar to Google’s approach and theorised by the Rapid Software Testing team, this role focuses on developing tools to unlock testers’ potential. They should work closely with the testers, understanding what their needs are going to be in the future, and with the developers, creating more testable products and learning the system in order to build those tools.

I am sure there are more cool testing roles out there, and I’ll try to find them. I love seeing how our community is shifting, and learning how companies are trying to adapt testing to today’s needs.

May the force be with you,

Gino

BDD: not by cucumber alone does man live

Don’t get me wrong, I know how useful BDD is for cleaning up test reports, grouping different frameworks and giving you the ability to scale, as it allows non-technical people to expand the feature files. But it also has plenty of pain points: it’s a powerful tool for bridging the gap, but many complex scenarios are nearly impossible to handle with it.

That’s why, after some time using BDD as the default option, I came to understand its value, and now I only use it for the simple and repetitive workflows, allowing anyone to easily understand the current coverage of the “pre-commit” verifications. It also brings a common language for developers, testers and business people to specify the product’s behaviour, as explained by the Three Amigos approach, making it an awesome tool for clarifying misconceptions.

But I’ve found some difficulties implementing it. Since its foundation, it has relied heavily on TDD practices, so implementing it in teams not used to TDD techniques carries an extra challenge. I haven’t found proper documentation for the test implementation side, as most of the articles online only state how empowering it is for business people. And it requires a rigorous delimitation of what every scenario should contain, breaking them into really small modular ones, which goes against our usual instincts.

Most of the implementation problems happened when we tried to reuse our BDD framework (made with Python and Behave), which fitted one of our small REST API testing suites flawlessly, in a more complex project. We started to miss context management, sharing variables between steps, dealing with multiple users… So we tried to evolve our framework into the perfect machine that would handle it all. And it became a beast. We started dealing with “worlds” per test suite (a world for the billing system, a different one for user management) to keep things as modular as possible. So it became an ugly beast.
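To give you an idea of the shape I’m talking about, here’s a minimal Python + Behave sketch (the billing scenario and step names are invented); the per-scenario context object is where the shared state lives, and it’s exactly the thing that grew into our per-suite “worlds”:

```python
# features/steps/billing_steps.py -- invented example.
# The matching feature file (features/billing.feature) would contain something like:
#
#   Feature: Billing
#     Scenario: Charging an active user
#       Given a user with 2 active subscriptions
#       When the monthly invoice is generated
#       Then the invoice contains 2 line items

from behave import given, then, when


@given("a user with {count:d} active subscriptions")
def step_create_user(context, count):
    # `context` is Behave's per-scenario state bag, shared between steps.
    context.user = {"subscriptions": list(range(count))}


@when("the monthly invoice is generated")
def step_generate_invoice(context):
    context.invoice = [f"subscription {i}" for i in context.user["subscriptions"]]


@then("the invoice contains {count:d} line items")
def step_check_invoice(context, count):
    assert len(context.invoice) == count
```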

So, at that point, and coinciding with my “I don’t trust anything I know anymore” moment, I decided to start asking “is BDD really what fits us?” instead of just “how can I make BDD fit us?”. Is it delivering more value than a traditional automated test? Are we using it correctly? What are the alternatives?

That’s how we realised that in some projects it wasn’t the way to go. For instance, when we deal with complex user scenarios and we’re not going to scale the test by adding new repetitive steps, it didn’t make any sense to spend most of the time adapting the BDD framework. In those cases, we kept the structure for consistent reporting, and we wrote the tests the old-fashioned way.

Did you find any limitations on BDD as well? Is it fitting your needs seamlessly? In a future post, I want to talk about some examples that caused us problems. In the meantime,

May the force be with you,
Gino

Retrospective: Hacking my time scheduling skills

One of the pieces of feedback I’ve repeatedly got from my managers is my difficulty saying no, as well as how easy it is for me to focus on new challenges, sometimes setting aside key pieces of our roadmap. I want to believe I’ve made some progress on the subject, although there’s always room for improvement. I’m here to talk about one of the techniques that has been really successful in my case.

I’ve tried several techniques over the past few years. I’m in love with Pomodoro timers, but I find it really hard to implement them in a team that’s not used to them. Also, I think they shine in heavily intellectual activities (such as programming or designing), but most of my daily activities involve meeting, pairing and discussing. I’ve kept the concept of taking a few minutes to wind down after long periods of focused work. For example, I still use something similar when I’m coding or running exploratory testing sessions (see how Rapid Software Testing applies chartering in Exploratory Testing).

The moment when everything changed was when my QA lead, among other colleagues, was made redundant and I lost her help prioritising and tracking our roadmap. So I needed to take my time scheduling skills even more seriously. I started applying the quadrant technique to my daily tasks, identifying their categories and their time consumption. As I’ve explained before, we were following the Quality Assistance approach, so the four categories I identified were:

  • Automation: All time spent writing testing frameworks, tools, and scripts. Also, all the effort maintaining our current builds, upgrading, etc.
  • Exploratory: Time to run through the new features, understanding our product and assessing the Quality. It also involves reporting and verifying.
  • Teaching: Working with my colleagues ensuring that everyone’s on the same page regarding quality. Meetings, redesigning something to make it more testable, reviewing their automated tests, pairing, etc.
  • Learning: As most of my daily practices are new to me, and we’re always looking for improvements, I spend a fair amount of time documenting myself, learning new methodologies and building prototypes.

As you may assume, it wouldn’t make any sense to have an even distribution of my time, but this helped me understand where I was spending most of it and align it with our goals. Different periods need different approaches, and having the data helps me realise when I need to change.

As I’ve already said, one of my top priorities right now is spending more time on the tasks I enjoy the most (or the ones where I learn the most). These categories make that simple, allowing you to invest more time in other quadrants if you get stuck on activities you hate.

Also, I have to admit that reading this post about a similar technique has helped me retune it to get more value out of it. This is an iterative tool that is helping me a lot in understanding how I spend my working hours. I highly recommend you step up your time management techniques: it makes my days way more productive, it helps with our goals and my days are more fun. Knowing what you love about your work allows you to invest more time in it.

May the force be with you,
Gino