Quality Assistance: Lessons learned

This morning, during my 10-ideas routine, I pushed myself to list the techniques and activities I’ve tried in order to change the Quality mindset around the company. You already know that we’ve been modelling some of our processes following the Quality Assistance approach. I know that the outcome of these activities will vary drastically for each team, but hopefully knowing about some experiences will help you try something similar. And, as Scott Berkun explained amazingly here, it’s more important to learn from the failures than to just share the final idea. And certainly writing this will help me remember some details. I don’t trust my memory that much…

I’ll try to give you as much background as possible, although reading about how the organisation is working now and how I’m facing it may help. Now… let’s focus on some things I’ve tried!

Three amigos. This is a pretty standard activity, although we needed a twist, mainly because of resource constraints (just me as QA and one product owner). We started by picking some complex tasks per sprint and trying a standard Three Amigos session, although at some point we decided that the engineers should do the exercise of listing the scenarios before implementing a solution, leveraging our work to double-check them and enhance the list with some “edgy but important” ones.

QA demo. We do it after finishing the development of some sprint stories: I (or Product) show in 10 minutes, at the developers’ machines, how users are most likely to run through the feature, as well as trying to break it. There’s no better way to understand how to test than watching someone test (and trying to communicate the mental process). Some developers embraced testing activities after this, as they saw me enjoying it, although others remain reluctant.

Defining metrics to understand the current status. Things like the number of errors in each version, timings, grouping types of errors, clearing out non-useful data or even finding ways to generate it (like replicating production load on testing environments). We’re all engineers here, and no matter how much you try: data wins discussions. That’s a fact. I don’t have any data to back it, but you have to believe me.
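For instance, a first cut at grouping errors can be as simple as counting occurrences per version and type. A minimal sketch (the log entries below are entirely made up):

```python
from collections import Counter

# Hypothetical error log entries: (version, error_type)
errors = [
    ("2.1.0", "Timeout"), ("2.1.0", "NullPointer"),
    ("2.1.0", "Timeout"), ("2.2.0", "Timeout"),
]

def errors_per_version(entries):
    """Count how many errors of each type each version produced."""
    counts = {}
    for version, error_type in entries:
        counts.setdefault(version, Counter())[error_type] += 1
    return counts

report = errors_per_version(errors)
# report["2.1.0"]["Timeout"] == 2
```

Even a tally this crude is enough to start a data-backed conversation about where a release regressed.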

Testing hour. I don’t have much time to run many experiments, but one of the activities I love is setting up voluntary exploratory testing sessions. I’ve done it mainly for big new releases, and I try to guide the participants to focus on a different aspect each time, although it gets wild most of the time. Unfortunately, we don’t do enough dogfooding, so this activity helps a lot in spreading knowledge about the tool and getting feedback. In my experience, teams that are usually siloed and less involved with the products are the ones enjoying it the most, and giving you the freshest feedback (R&D, Ops, etc.).

Getting involved with the community. We’re quite small, and I love allocating some time to helping out the customer support folks by giving non-technical answers in the “I think I found a bug” section of our forum. It gives me loads of insight into what users care about, and some valuable examples to back my feedback. And they actually find some really weird bugs!

Proactively spying on production. Gathering metrics, we’ve come up with some filters to surface weird errors and behaviours, and we can use our tool to understand how a user can get there. We’ve found edge-case bugs because of it. We also use Inspectlet to understand how users react to new features, helping us understand further use cases and find strange behaviours!

Participating in technical design meetings. I’m used to joining product meetings to help define new features or make the product more resilient, but being involved in the technical discussion gives me more insight into the implementation and its challenges, and allows me to raise issues to ensure a more testable product (backdoors to easily reach some states, ways to inject failures, etc.).

Big picture testing. That every developer should care about testing their own slice is a given, particularly in an organisation with only one QA. But testing integrations and “end to end” flows is usually something tedious that no one wants to deal with, yet having a smoke verification before and after every release brings A LOT of value. That’s why I’ve taken on the responsibility of building a “simple scenarios” big-picture testing suite for both backend and frontend, which also serves me as valuable documentation. It’s the green light everyone needs before rolling out a change.
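The shape of such a suite can be sketched roughly like this (the check names are hypothetical; real checks would drive HTTP requests against the backend or click through the frontend):

```python
# A minimal sketch of a "simple scenarios" smoke suite runner.
# The individual checks are stubbed out here; in practice each one
# would exercise a basic user journey before and after a release.

def check_homepage_loads():
    # e.g. issue an HTTP GET and verify a 200 status; stubbed here
    return True

def check_login_flow():
    # e.g. drive the UI through a basic sign-in; stubbed here
    return True

SMOKE_CHECKS = [check_homepage_loads, check_login_flow]

def run_smoke_suite(checks):
    """Run every check and report failures; all green means safe to roll out."""
    failures = [check.__name__ for check in checks if not check()]
    return {"passed": len(checks) - len(failures), "failures": failures}

result = run_smoke_suite(SMOKE_CHECKS)
```

Keeping each scenario deliberately simple is what makes the suite double as documentation: the function names read as a list of the product’s core journeys.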

Reviewing code (particularly tests). I wish I had time to review more bits of code, but I usually just check some of the new stuff that I don’t understand yet, and the code for the functional tests, especially the first batch of tests for a new feature. My main concerns are readability, scalability and detecting flakiness: avoiding integrations when possible (using static versions of pages vs live ones), removing random waits and retry policies that would mask problems, modularizing for reusability…
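One concrete pattern I push for in those reviews is replacing fixed or random sleeps with a polling wait. A sketch of the idea:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll a condition instead of sleeping a fixed amount.

    Unlike a blind time.sleep(3), this returns as soon as the
    condition holds and fails loudly after the timeout, instead of
    silently masking slowness or flakiness."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Usage: wait for a (hypothetical) item to appear in a collection
items = ["ready"]
wait_until(lambda: "ready" in items)  # returns immediately
```

A test built this way is both faster when the system is quick and more honest when it is slow, which is exactly the opposite of what random waits and blanket retries give you.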

Pairing whenever possible. OK, after all of this, I know I don’t have much time left. But, when I do, my first pick is pairing as much as possible. It teaches me new technologies and ways to develop, I gain some understanding of the solution, I can give early feedback and it bonds me with my colleagues. I’ll write a post about the benefits (and cons) I see in pairing soon!

And these, folks, are some of the activities I’m trying in order to infect the team with the love of Quality. Some of them may turn out to be worthless, and many will evolve and change, but right now I’m learning something new every time I do them. That sounds like reason enough to keep them, for me!

May the force be with you,

Gino

Lessons learnt from Clean Code

Before talking about how Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin (Uncle Bob) affected me, I want to start by recommending this book. It’s well known in the software development industry and, even if you don’t see eye to eye with Uncle Bob, it has really good reflections on how to structure your code. I was lucky that during my career I joined a company where reading this book was part of your first-year commitments, and we used it a lot while code reviewing as common ground for arguments. But now, let’s focus on what I learnt reading this book!

Being able to solve complex problems is not what defines a professional software developer. It’s absolutely a good skill and, at some point, you’re going to need analytical thinking and problem-solving abilities, but a professional developer also focuses on the readability of the code, as well as on building scalable, maintainable and simple solutions.

This book made me realise that wording, naming and modularizing the code are more important than I thought when I started my career. I started to spend some time finding the best variable, method and class names, for example.

Learn how to better use hierarchy. I’ve been working with object-oriented languages for the majority of my professional life, and learning how to master this powerful tool makes a huge difference.

Refactoring to achieve readable code usually takes longer than coding the solution itself. It’s a good practice to just focus on getting the job done first, and then refactor iteratively until the code is readable enough.

Always leave the campground cleaner than you found it.

Having a guideline during code reviews makes them less harsh. When a clash happens, you can use the book to specify why a change should be made, instead of just using your “I know better than you” argument.

Professional programmers care about testing. Mr. Martin thinks that TDD is the only way for a professional programmer, because no code is complete without enough tests verifying that it works.

I should not abuse commenting. During my degree, teachers told me that I had to comment as much as possible to achieve readable code. Then, you discover that no one updates the comments when refactoring, so they end up as misleading pieces of information. Instead, if you focus on readable code per se, and you forbid yourself from commenting, you’ll build an understandable solution that doesn’t require comments. And, for documentation, nothing explains better than an extensive testing suite. And if you don’t update the tests… they’ll break!
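A classic illustration of this idea (in the spirit of a well-known example from the book; the data layout here is made up) is replacing a commented condition with a well-named function:

```python
# Comment-heavy version: the comment does the explaining.
def process(e):
    # check if the employee is eligible for full benefits
    if e["flags"] & 4 and e["age"] > 65:
        return True
    return False

# The same logic, readable without any comment.
HOURLY_FLAG = 4
RETIREMENT_AGE = 65

def is_eligible_for_full_benefits(employee):
    return bool(employee["flags"] & HOURLY_FLAG) and employee["age"] > RETIREMENT_AGE

employee = {"flags": 4, "age": 70}
```

The second version can’t drift out of sync with a comment, because the names themselves carry the intent.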

Programming literature can teach me more than I thought. I assumed that everything you have to learn about coding is online, and that books are outdated and useless by definition. How wrong I was. After reading this, I picked up books about Testing, Programming and Design Patterns. And, without any doubt, my code got better after doing so!

This was the first time I heard about principles like Don’t repeat yourself, Keep it simple, stupid, and You aren’t gonna need it. And I realised how little I was following them!

Those are some of the lessons I learnt reading this enlightening book. In a future post, I’d love to talk about what I learnt from The Clean Coder: A Code of Conduct for Professional Programmers (Robert C. Martin), part of the same series but focused on the relationships between software professionals.

May the force be with you,

Gino

How The Game improved the relationship with my colleagues

“Any action or experience contains a lesson” is one of the mottos that rules my life. I truly believe it. It’s probably due to spending too much time thinking about everything, but I can extract a lesson from any of my experiences, as well as from most of the games I’ve played, books I’ve read or movies I’ve enjoyed. That doesn’t mean that every lesson has the same value, as it varies dramatically depending on my context at that moment. But that’s why I always try to meet new people (it’s really easy to learn something from their stories), consume a variety of media and read from a diverse book pool. One such example is described here.

And that’s how I ended up reading The Game. The book talks about a pickup artist society and how much the writer’s life changed after learning how to seduce women (that’s a really vague summary). I love learning about human behaviour, particularly when it focuses on the subliminal realm, so this book gave me loads of insight into how people react to certain patterns, as well as putting into words some thoughts I already had (which helps a lot; knowing something and being able to express it are different levels of understanding a subject).

But we’re here to discuss what The Game taught me that was applicable to my life, not to talk about the cheesy pickup techniques, which are probably outdated and culturally dependent. I really think it helped me step up my workplace relationships. And no, I’m not saying I started flirting with my colleagues.

Your working environment is ruled by human relationships; that’s a fact. Even if you work in a highly technical field where people try to encapsulate themselves to get into “the zone”, teams are the ones achieving amazing projects, not bright individuals. So, if the way you talk with your colleagues, set expectations with your manager, handle problems and react to interruptions is going to drastically change the morale and energy of your team, then every single lesson that teaches me how to understand and optimise these interactions makes me a better professional.

By optimising interactions I don’t mean pursuing and lying to someone so things go the way you want, because that collides with my “do not be evil” motto. Optimising, for me, is being able to identify those “naughty tricks” some professionals use to reach their goals, learning when to express admiration, when to seize leadership and how to show them your value. The Game gives you examples of situations where saying something completely changed the outcome, either ruining the game or delivering success. And, being honest, if you know how to overcome yourself and keep trying in the game, you’ll be way more prepared for any awkward situation. It also focuses on reading your environment and finding the best technique for each context.

To be more specific, I’m going to share some quotes that have been helpful in my life. Not all of them are from Neil Strauss but, for me, they relate to what I learnt from the book.

What you look like doesn’t matter. But how you present yourself does.

You miss 100 percent of the shots you never take

The cat string theory is the most accurate theory of all time: As humans, we don’t appreciate things that just fall into our laps. We find more value when we have to bust through personal boundaries, overcome obstacles, and do things we originally thought we could not.

Be a closer: Most people are not closers and never finish what they start. I’m definitely guilty of this on occasion: I get overly excited, commit to everything, and often never finish the projects I start.

Some of you may even sabotage your own progress because you’re afraid you won’t find what you seek. I don’t know about you, but I’d rather find adventure in the quest than comfort in sitting idle.

Everything you do matters: In the end it all counts, it’s cumulative, LIVE!

May the force be with you,

Gino

Automation is not always the best way

After years of being a Software Developer in Test who primarily focuses on automated tests — the one defending why automated tests are worth the investment — it feels really strange to be the one convincing stakeholders that, right now, automated tests aren’t the best solution we have to raise quality in the given situation. That’s why I want to talk with you about how I came to this conclusion, how I dealt with stakeholders and what the outcome was.

Let’s start with some background. We’ve recently been forced to restructure the company, and my ending up as the only QA is one of the results. That’s one of the reasons I had to start spending more time learning about methodologies, testing techniques and managing expectations. It’s not the first time I’ve dealt with these things, but I’d always had someone to run my advice by before committing to decisions. Now it was my responsibility to understand when to apply other techniques (outside my expertise with automated tools to support testing), as well as good ways to implement them.

Another deciding factor was the tight deadline we’d committed to, as it forced us to deliver a huge amount of work, setting up a situation where raising quality concerns is seen as an annoyance. Some processes that were rolling out stopped because of this change, and some of the teams decided that they didn’t have any time to deal with automated tests; others focused on them from the beginning, doing TDD while pairing, as it helped them work with a half-defined technical design. After several attempts to convince the reluctant teams — explaining how automated tests would help us prevent regressions while iterating, handing them some working frameworks so the transition would be easier (I don’t have expertise in JavaScript, so it probably wasn’t the cleanest code) and giving them written scenarios to build on — I decided that maybe it wasn’t the moment to tackle this problem. Also, I got a message from the managers that we should keep noise to a minimum, so developers wouldn’t lose their momentum. I raised my concerns with them (by now I feel free to say something when I think things aren’t working), and started weighing the different approaches we could take.

I spent the first sprints gathering possible processes and sharing them with our product manager. I also spent a fair amount of time working with the designers, learning from the user testing and reviewing the tests and scenarios from the teams which decided to pair (realising how much they already cared about quality allowed me to narrow my focus). When I started seeing and playing with the deliverables, it was obvious that we needed to somehow tackle the quality of the product.

I started reporting bugs and odd behaviours in a spreadsheet shared with our product owner, so the noise reaching the developers was way lower (as he could discuss priorities with me). I did short exploratory testing sessions per feature and iterated through them. Afterwards, I paired with the designers and product owners to run more exploratory sessions, allowing me to centralise the reports and reduce duplicates. I wanted to do it with the developers so the feedback loop would be shorter (this is a common practice and, in my opinion, one of the highest-value testing activities), but I wasn’t able to. As the release date approached, I invited and encouraged everyone in the company to test the current state of the product to identify things like usability issues, user experience problems, etc. I asked them to use low-noise channels to give me the feedback, so I could again act as a hub gathering and cleaning it.

I won’t talk about all the challenges, failures and problems we ran into. I wanted to write this because of the lesson I learnt: no matter how much expertise you have in one field, there are situations when you need to jump out of your comfort zone because something else is clearly the better solution, even if you make a poor implementation due to lack of experience. Personally, I feel lucky because I’ve been changing industries and methodologies throughout my short career, which makes me feel that I’m not biased yet. But I feel particularly proud of myself because I didn’t try to force a solution out of my automation knowledge. Have you been in a similar situation?

May the force be with you,
Gino

Modernise your testing roles

Fortunately, the testing industry is not only formed by Automation, Manual QA and QA Leads. Models are shifting (particularly due to Agile methodologies) and new approaches require new (and, in my opinion, more interesting) roles. Here I’ll talk about some of these roles, both real examples and theoretical ones.

  • Test jumper. Described by James Bach, this is one solution for testing in Agile methodologies. The role’s duties are supporting and supervising multiple Agile teams, ensuring that nothing blocks proper testing at speed. It’s all about aiding developers and testers to have whatever tool they need before they need it, helping to create testable products and raising testing concerns as soon as possible.
  • Software Engineer, Tools & Infrastructure. Google’s evolution of the Software Engineer in Test, this role focuses on tooling up Software Engineers to make their lives easier. It’s a technical position and some of the tasks are described in this Reddit post, but it has been one of the recent trends to replace testers with software engineers who focus on making testing easier.
  • Quality Assistance. As I described in this post, it’s the approach Atlassian took to deliver quality at Agile speed. The efforts are focused on empowering developers and training them in testing tasks, so the majority of those tasks can be done really early in the development process. Other companies, like Spotify, are creating similar roles.
  • Automation toolsmith. Similar to Google’s approach and theorised by the Rapid Software Testing team, this role focuses on developing tools to unlock testers’ potential. They should work closely with the testers, understanding what their needs are going to be, and with the developers, creating more testable products and learning the system in order to build those tools.

I’m sure there are more cool testing roles out there, and I’ll try to find them. I love seeing how our community is shifting, and learning how companies are trying to adapt testing to today’s needs.

Some honorable mentions are:

 

May the force be with you,

Gino

BDD: not by cucumber alone does man live

Don’t get me wrong, I know how useful BDD is at cleaning up test reports, grouping different frameworks and giving you the ability to scale, as it allows non-technical people to expand the feature files. But it has loads of pain points: it’s a powerful tool for bridging the gap, but most complex scenarios are impossible to handle with it.

That’s why, after some time using BDD as the default option, I understood its value, and now I only use it for simple and repetitive workflows, allowing anyone to easily understand the current coverage of the “pre-commit” verifications. It also brings a common language for developers, testers and business people to specify the product’s behaviour, as explained by the Three Amigos approach, making it an awesome tool for clarifying misconceptions.
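As an illustration, a simple, repetitive workflow is exactly where a feature file shines — anyone can read it and see what is covered (the scenario below is made up for the example):

```gherkin
Feature: User sign-in
  Scenario: Registered user signs in successfully
    Given a registered user "alice" with password "secret"
    When she signs in with password "secret"
    Then she sees her dashboard
```

The moment a scenario needs shared state across many steps, several users or branching flows, this readable surface starts hiding a lot of plumbing underneath — which is where our problems began.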

But I’ve found some difficulties implementing it. Since its foundation it has relied heavily on TDD practices, so implementing it in teams not used to TDD techniques carries an extra challenge. I haven’t found proper documentation for the test implementation, as most articles online only state how empowering it is for business people. It requires a rigorous delimitation of what every scenario should contain, breaking them into really small, modular ones, which goes against our common habits.

Most of the implementation problems happened when we tried to reuse our BDD framework (made with Python and Behave), which fitted one of our small REST API testing suites flawlessly, in a more complex project. We started missing context management, sharing variables between steps, dealing with multiple users… So we tried to evolve our framework into the perfect machine that would handle it all. And it became a beast. We started to deal with “worlds” per test suite (a world for the billing system, another one for user management), as we wanted to keep things as modular as possible. So it became an ugly beast.

So, at that point, coinciding with my “I don’t trust anything I know anymore” moment, I decided to start asking “is BDD really what fits us?” instead of just “how can I make BDD fit us?”. Is it delivering more value than a traditional automated test? Are we using it correctly? What are the alternatives?

That’s how we realised that in some projects it wasn’t the way to go. For instance, when we deal with complex user scenarios and we’re not going to scale the test by adding new repetitive steps, it didn’t make any sense to spend most of the time adapting the BDD framework. In those cases, we kept the structure for consistent reporting, and we dealt with the test in the old-fashioned way.

Did you find any limitations on BDD as well? Is it fitting your needs seamlessly? In a future post, I want to talk about some examples that caused us problems. In the meantime,

May the force be with you,
Gino

P.S: If you want to dive deeper into the BDD world, you may enjoy these resources:

Retrospective: Hacking my time scheduling skills

One of the pieces of feedback I’ve repeatedly got from my managers is my difficulty saying no, as well as how easy it is for me to focus on new challenges, sometimes setting aside key pieces of our roadmap. I want to believe that I’ve made some progress on the subject, although there’s always room for improvement. I’m here to talk about one of the techniques that has been really successful in my case.

I’ve tried several techniques over the past years. I’m in love with Pomodoro timers, but I find it really hard to implement them in a team that’s not used to them. Also, I think they shine in heavily intellectual activities (such as programming or designing), but most of my daily activities involve meeting, pairing and discussing. I’ve kept the concept of taking a few minutes to chill out after long periods of focused work. For example, I still use something similar when I’m coding or performing exploratory testing sessions (see how Rapid Software Testing applies chartering in Exploratory Testing).

The moment when everything changed was when my QA lead, among other colleagues, was made redundant and I lost her help prioritising and tracking our roadmap. So I needed to take my time-scheduling skills even more seriously. I started applying the quadrant technique to my daily tasks, identifying their categories and their time consumption. As I’ve explained before, we were following the Quality Assistance approach, so the four categories I identified were:

  • Automation: All time spent writing testing frameworks, tools, and scripts. Also, all the effort maintaining our current builds, upgrading, etc.
  • Exploratory: Time to run through the new features, understanding our product and assessing the Quality. It also involves reporting and verifying.
  • Teaching: Working with my colleagues ensuring that everyone’s on the same page regarding quality. Meetings, redesigning something to make it more testable, reviewing their automated tests, pairing, etc.
  • Learning: As most of my daily practices are new to me, and we’re always looking for improvements, I spend a fair amount of time documenting myself, learning new methodologies and building prototypes.
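Tracking this doesn’t need fancy tooling; a sketch of the weekly tally (the hours below are made-up numbers) could be:

```python
from collections import defaultdict

# Hypothetical log of (category, hours) entries for one week
time_log = [
    ("Automation", 6), ("Exploratory", 4), ("Teaching", 5),
    ("Automation", 3), ("Learning", 2),
]

def hours_per_category(log):
    """Sum the hours logged against each quadrant."""
    totals = defaultdict(int)
    for category, hours in log:
        totals[category] += hours
    return dict(totals)

weekly = hours_per_category(time_log)
# e.g. {"Automation": 9, "Exploratory": 4, "Teaching": 5, "Learning": 2}
```

A spreadsheet does the same job; the point is simply having the per-quadrant totals to compare against your goals each week.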

As you may assume, it wouldn’t make any sense to have an even distribution of my time, but it helped me understand where I spent most of it, and align it with our goals. Different periods need different approaches, and having the data helps me realise when I need to change.

As I’ve already said, one of my top priorities right now is spending more time doing the tasks I enjoy the most (or the ones where I learn the most). These categories make it simple, allowing you to invest more time in other quadrants if you get stuck on activities that you hate.

Also, I have to admit that reading this post about a similar technique has helped me retune mine to get more value out of it. This is an iterative tool which is helping me a lot in understanding the time spent during my working hours. I highly recommend you step up your time-management techniques: it makes my days way more productive, it helps with our goals and my days are more fun. Knowing what you love about your work allows you to invest more time in it.

May the force be with you,
Gino

Quality Assistance: Why we tried it

As I mentioned before, Quality Assistance is a testing methodology that focuses on empowering and supporting developers in testing tasks so the team can deliver at speed. In this post, I’m going to talk about the reasons why we tried this approach with our team, as well as some challenges we faced.

I joined the startup right after our QA lead did. It was interesting timing: all the decisions regarding quality were still pending. I was lucky that she counted on me for the change, and one of the approaches we wanted to try was Quality Assistance.

Why? Well, I think the three main reasons were: resource limitation — we were only two to deal with everything, so traditional methodologies based on manual sign-off or on us automating every check were off the table; we sincerely believed this is the step forward the industry needs, especially in Agile environments where speed is a key factor; and, personally, I’m always willing to try something that would allow me to spend more time doing the activities I love about my role.

It wasn’t the only option we considered, but it was the one we wanted to try the most. As with any similar change, we had a list of challenges to face in order to succeed, and some of them needed to be addressed before working on it.

As always, the first step was convincing the stakeholders. Luckily for me, most of this work was done by my manager, and they were really keen to try the change because they knew the previous approach wasn’t working and agreed that quality ownership should be shared within the team. But, if we wanted to really convince stakeholders, we needed to start gathering metrics to show the results and start using facts as arguments instead of opinions or beliefs. It makes most conversations easier. We struggled with this, and I believe that if we had focused more on metrics, embracing Quality Assistance would have been smoother. Although I don’t have facts on that…

Convincing the developers at that stage was easy, but when we started to work together on what the expectations were and how we needed to change our way of working… Well, I can assure you that it’ll take me a whole post to talk about some of the clashes we had during the process. The lack of documentation made it more difficult. So far, it seems that only Atlassian is trying it, so we didn’t have much documentation or many examples to show them, and most of the information was focused on QA practitioners rather than stakeholders. It is also an approach that requires high doses of customisation, as it depends on each team. We had to focus on a team-by-team solution, making it difficult for us to use one solution to convince other teams.

Personally, I also had the challenge of improving my exploratory testing techniques. Previously in my career, I had just focused on automated checks, tooling and performance testing. And this is a crucial part of Quality Assistance. It was an enlightening experience, learning a lot in such a small period of time, and it helped me with the next challenge: thinking about how to teach testing. To empower developers with the ability to test, you need to learn how to express all the tacit knowledge you’ve been building for years. It completely changes the perspective, because every time I performed a testing task I was actively thinking about what was happening inside my head and trying to translate it into words. Challenging for sure, but fun nevertheless.

These were the main walls we had to face before trying Quality Assistance in our organisation. In following posts, I’ll try to focus on the different steps we tried and their impact.

May the force be with you,
Gino

P.S. Feel free to use the Quality Assistance category to visit the related posts!

Why a startup

I consider myself a lucky guy for numerous reasons, one of them being the luxury of having worked in energetic companies where most people want to change things and the stakeholders give you enough freedom to try your experiments. That’s how I ended up joining a startup, which is the ultimate representation of this responsibility and freedom. Don’t get me wrong, I like to say that startups give you as many perks as difficulties, but at this stage of my life and career it’s the situation that best suits my experimentation craving.

To be more precise, I have to point out that this startup is between series A and B, the moment when you have to refine or redo your model to verify its viability. You have enough money to keep going, and the first investors reviewing you. That constrains you from acting as boldly as before, but you still need to innovate and find solutions for problems that no one in the organisation has ever solved.

That said, today I want to talk about the key feature that seduces me about startups, the one that keeps me addicted. Because, let’s be honest, some of these perks can be found nowadays in some amazing companies, where you won’t have most of the downsides. But there’s something I haven’t found anywhere else, and I’m quite sure it’s not in the list you’re thinking of right now.

That doesn’t mean that I don’t love the blank canvas that a startup represents for most processes and practices, allowing you to experiment and refine them while learning in the process, even if it carries loads of responsibility and commitment. Nor does it mean that I don’t enjoy the challenge of being the only one of my kind, constantly pushing me out of my comfort zone to try new solutions, even if I miss having a peer’s feedback to learn better solutions by combining different approaches.

But what fulfils me in this experience is the family feeling that the situation produces. Most of us are in the same boat, facing equivalent challenges and committing to the same insane degree. We know each other, spend as much time together as possible and bond way more strongly than in many corporations that spend thousands on team-building activities. A sad face will always set off someone’s alarm, and they will try to cheer you up. Any laugh is contagious, and any clash obvious to everyone. That’s why friction can easily destroy organisations, and culture can achieve miracles that only startups deliver.

This has become more obvious to me now that we’ve moved to our own office. Instead of an incubator with some amenities (and some problems), we have our own space to shape and customise, where we share activities like cleaning, organising or dealing with lunch. Now, after a meeting, I’ll be washing some mugs, and every single time I’ll hear someone asking if I need a hand. Now a colleague will immediately jump up when I’m carrying lunch, and it’s unusual to see one of us coming back from a walk without bringing something for someone. It’s a workspace managed like a home, turning your colleagues into family members.

Obviously, this is not everyone’s cup of tea, and this situation makes it even easier to spot the people who don’t want to bond with the family. There are no hard feelings; I understand that people don’t need to be like me, and every situation and context is completely unique. But for me, this is the key benefit of working in a startup. This is what makes the difference. And that’s why, when it dies (and, like everything, a lack of commitment or investment will kill it), the organisation stops sounding appealing to me, and the other perks probably won’t even things out.

And with this, guys, I want to thank you for being part of my family. It doesn’t matter if we shared a lunch, a month or a life; you’ve contributed to shaping me and you have undoubtedly taught me something. So thank you for making me who I am now, because I really rock.

May the force be with you,
Gino

Rapid Software Testing: Heuristics

As I mentioned before, Rapid Software Testing is a testing methodology defined by James Bach, Michael Bolton, and Paul Holland. This methodology focuses on the skill set and experience of the tester, alongside some rules and techniques, to achieve a cost-efficient quality assessment of a product.

This post will only focus on the heuristics proposed by the methodology as a guideline for Exploratory Testing sessions, although I’ve been using them as a general source of inspiration for testing. You can find the original document among the other RST appendices. Let’s start the show!

I agree with the RST approach that testing is a context-driven activity, so you shouldn’t expect a cookbook explaining step by step what you have to test. These represent “general techniques” which are generic enough to be relevant for a wide spectrum of contexts. Personally, these examples have helped me define specific heuristics for a given product. These are the groups of test heuristics, with some examples:

  • Functional testing. Determine the things the product can do, or this feature can offer. See that each function does what it’s supposed to do, and not what it isn’t.
    • C’mon, you’re used to this testing! You don’t need help thinking about how to test functionality, do you? :p
  • Claims testing. Challenge every claim you find about the product (whether explicit or implicit). Test the veracity and clarity of the claims.
    • Read the specification: is the product aligned with it?
    • Is the manual relevant to the current version of the product?
    • Are the descriptions, onboarding tips and help statements inside the product aligned with it?
  • Domain testing. Segment the data the product can process into tiers (invalid values, boundary values, best representatives, etc.). Consider combinations of data.
    • Is that numerical operation running the same logic with positive, negative or colossally long numbers?
    • Is the product recognizing the language that a given string is using?
    • Does the product process geo points from different countries differently?
  • User testing. Learn about user roles and categories, determining the differences in their needs and permissions. Get real user data and involve users (from different categories) in the test (or learn how to simulate one).
    • Is the “techie loner” persona part of our user base?
    • Are we testing the admin side of the product?
  • Stress testing. Look for parts of the product vulnerable to being overloaded, and generate challenging data to test their limits.
    • Extend the usage over a very long period of time with a steady load, or maybe try with peaks.
    • Test the limits on the input and output of our product.
    • Check that concurrency is handled correctly with more data than expected.
  • Risk testing. Imagine a problem, then look for it. Focus on the kinds of problems that matter most. How would you detect them? Your experience and knowledge of the product and the team will shine here!
    • Distributed applications struggle with concurrency. Exploit it!
    • Transactions have to deal with consistency. Exploit it!
    • This team has been failing to deliver cross-browser solutions, focus on it!
  • Flow testing. Perform multiple connected activities; don’t reset the system between user paths, and vary timing and sequencing.
    • Run overlapped actions in a parallel way.
    • Verify that the product transitions correctly between states, even on the non-happy sequences.
  • Scenario testing. Tell a compelling story. Design tests that involve meaningful and complex interactions with the product.
    • Reflect a scenario simulating someone who matters doing something that matters.
    • Think about the different applications of the product and how the scenarios would differ.
  • Automatic checking. Check a million different facts, hopefully getting coverage in the process and using oracles. Tools that empower the tester are always welcome!
    • Automatic change detectors. Catch those regressions!
    • Automatic data generators. Help the tester with the easily automatable parts of the job.
    • Check permutations and combinations easily.
    • Tools that support the tester.
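The domain testing heuristic above translates naturally into a small automatic check: segment the input space into tiers and verify a best representative of each. Here’s a minimal sketch in Python (the `validate_age` function and its 0–130 range are hypothetical, invented just for this illustration, not part of RST):

```python
def validate_age(age):
    """Accept ages in the inclusive range 0..130, reject everything else."""
    return isinstance(age, int) and not isinstance(age, bool) and 0 <= age <= 130

# Tiers of the domain: boundary values and best representatives per tier.
valid_values = [0, 1, 65, 129, 130]            # includes both boundaries
invalid_values = [-1, 131, 10**9, 2.5, "42"]   # just outside, colossal, wrong type

for value in valid_values:
    assert validate_age(value), f"expected {value!r} to be accepted"

for value in invalid_values:
    assert not validate_age(value), f"expected {value!r} to be rejected"

print("all domain checks passed")
```

The point is not the particular function but the shape of the check: boundaries (0 and 130), values just outside them, a colossally long number, and data of the wrong type, each tier represented by one or two values instead of an exhaustive sweep.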

In my experience, this is an amazing starting point when you face a new product. These heuristics rapidly evolve into context-customised ones during your first exploratory testing session, serving as a baseline for the next ones.

I have to admit that the main reason to write this post is for it to serve as documentation for me, as well as helping me internalise it. But I hope you find it useful too!

May the force be with you,
Gino

P.S. Feel free to use the Rapid Software Testing category to visit the related posts!