When I started to write this blog post, I thought of doing a transcript of my talk at the “WeAreDevelopers World Congress 2019.” Half a day later, and five pages into the article, I thought to myself: “No one will ever read such a giant wall of text!” So, I decided to split it into a series of three blog posts.
Software quality without testing? Is it possible!?
Before we dive in, let me say that the title of this article series is blatant clickbait. As usual, Betteridge's law of headlines applies: any headline that ends in a question mark can be answered with the word “no.” Right from the start, I want to apologize for any confusion, as there is no such thing as software quality without testing. It would be like talking about implementing software without translating the requirements into an executable form (code or otherwise), which is usually not possible.
From a general perspective, when we talk about quality, we usually mean testing: either low-level tests like unit and integration tests, or high-level tests like (user) acceptance tests (primarily UI-based, but other interfaces are possible too). But quality should be (and is) more than just testing your software. Quality should be an organization-wide, unified mindset; each member should have a “quality first” attitude, even if their perspectives and opinions on the topic differ.
The truth about quality
A developer’s work is done only after their software is tested properly. We understand that TDD is not always possible. However, planning to write unit tests later mostly equals never actually writing them. Writing a unit test that properly covers your code with all its complexity a couple of days or even weeks later does not align with the priorities of most teams, and it also seems to go against human nature. Implement unit tests while your train of thought is still fresh, so you can cover most of the complexity of your code.
Estimate your stories and tasks - including the implementation of unit tests. Don't ask, just do. We often hear that stakeholders don't want to pay for unit tests. The truth is, stakeholders often don't know what that means. In our experience, a stakeholder will not be mad at you if you deliver reliable and maintainable software.
Have you ever heard the statement that a certain part of the code is not testable? Start thinking about redesigning that part. Software that is not testable can be more hassle than it is worth (not to mention the usually astronomical cost of maintaining it or, worse, working around its limitations).
The reason you should write meaningful unit tests is obvious if you think about it with quality in mind. Unit tests help you maintain, extend, and improve your software, and refactoring will no longer be such a huge pain. With the help of a good unit test suite, you will easily see where your changes break something and where the software behaves in an unexpected way. Your unit test suite should cover at least 60-80% of your code. However, these numbers are only relevant if your unit tests exercise the software in a meaningful way that actually covers and reduces the risks.
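To make “meaningful” concrete, here is a small sketch. The `apply_discount` helper and its tests are hypothetical, not from the article; the point is that the suite pins down the boundaries and the error path, not just the happy path:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical helper: reduce price by a percentage, rejecting invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_boundaries(self):
        # Meaningful tests cover the edges of the allowed range,
        # not just the comfortable middle.
        self.assertEqual(apply_discount(100.0, 0), 100.0)
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_invalid_percent_is_rejected(self):
        # The error path is part of the contract, so it gets a test too.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 101)
```

Run with `python -m unittest` (or any test runner); the boundary and error-path tests are exactly the cases that get skipped when tests are written weeks after the code.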
So, what is “quality”? Sales and delivery often use it as a buzzword, like we deliver high-quality software—but what exactly does it mean?
The definition of quality and software quality by ISTQB is as follows:
- Quality: The degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations.
- Software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.
Like any other conceptual work, we need something to base our theories on, so that we can build on a solid foundation. Take a tree, for example. You can throw around some seeds hoping you will get a tree someday, which, in a city like Vienna, is quite unlikely. It is more likely that the next pigeon will be snacking away happily on the seeds you so thoughtfully provided for it!
However, if you plant the seeds in good soil, and water and fertilize them, you will get a tree, just as you expected. It is not instantaneous but requires care and effort. The same goes for quality. Throwing around some test automation, or parts of a CI/CD pipeline, will not lead to much except frustration. Things might work individually, but they won't play well together, and you will end up with an engineer spending hours maintaining and "improving" a shaky construct that was never built on a solid foundation. Most likely, the result turns out to be worthless and expensive. Like the tree, quality needs the right structure and order to grow.
Our first building block is obvious: the team, and how it works and interacts. A team still forming and storming will not produce good overall quality. To build a team, you need trust, team players, a shared vision, and attainable goals. Don't set your goals too low; it will bore the team. Don't set unattainable goals either; they will demotivate the team, and performance will drop from the lack of any sense of achievement.
Objective quality
Objective quality covers everything we can measure and express in metrics and KPIs:
- These measurements let us define quality gates in a CI/CD pipeline
- Static code analyzers and their KPIs help us identify architectural and code smells, including but not limited to:
- Code duplications
- Circular references
- Coding guideline violations
- We gain a better understanding of how the software performs during:
- Performance tests
- Load tests
- Mean time to recovery/repair (MTTR)
- They also provide KPIs that can certify that our software is well tested:
- Code coverage (unit tests)
- Path coverage (unit, integration test)
- Number of defects/bugs found
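The quality-gate idea from the list above can be sketched as a tiny script that fails a pipeline stage when coverage drops. This is a sketch, not a prescribed tool: it assumes a Cobertura-style coverage XML report (whose root element carries a `line-rate` attribute), and the 80% threshold is an assumed team policy, not a rule from this article:

```python
import xml.etree.ElementTree as ET

THRESHOLD = 0.80  # assumed team policy; pick what fits your risk profile


def line_coverage(report_path: str) -> float:
    # Cobertura-style reports store the overall line-coverage ratio
    # in the "line-rate" attribute of the root <coverage> element.
    root = ET.parse(report_path).getroot()
    return float(root.get("line-rate"))


def gate(report_path: str) -> int:
    """Return the exit code for the CI stage: 0 passes the gate, 1 fails it."""
    rate = line_coverage(report_path)
    print(f"line coverage: {rate:.0%}")
    return 0 if rate >= THRESHOLD else 1

# In a pipeline step this would run as:
#   raise SystemExit(gate("coverage.xml"))
# so a non-zero exit code fails the stage and blocks the deployment.
```

Most CI systems and analyzers (SonarQube, GitLab, Jenkins plugins) offer such gates out of the box; the script just shows that a gate is nothing more than a metric compared against a policy.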
These metrics might look quite appealing when displayed on a monitoring system, but they can also give us:
- a false sense of security
- an incorrect assessment of the actual state of the software, if:
- the tests considered for code coverage are not meaningful
- the tests are only testing the happy path
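Here is what that false sense of security can look like in practice. The `safe_divide` helper is hypothetical; the single happy-path test below reaches 100% line coverage of it, so the coverage KPI looks perfect while a production crash remains undetected:

```python
import unittest


def safe_divide(a: float, b: float) -> float:
    # Latent defect: b == 0 is never handled.
    return a / b


class HappyPathOnlyTest(unittest.TestCase):
    def test_division(self):
        # This one test executes every line of safe_divide, so the
        # coverage metric reports 100% -- yet the ZeroDivisionError
        # waiting for b == 0 is never exercised.
        self.assertEqual(safe_divide(10, 2), 5)
```

A meaningful suite would add a test that pins down the behavior for `b == 0` (raise a defined error, return a sentinel, whatever the requirement says); the coverage number alone cannot tell the two suites apart.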
So yes, objective quality is important. But its meaning and value must be critically scrutinized to make the most of it.
Subjective quality
Human beings are emotionally driven. We tend to “feel” that something is good or bad. Obviously, this has a lot to do with demographics, age, origin, childhood, and social situation. Just think of "iOS vs. Android," an almost religiously polarizing discussion. Take a step back, and you'll see that both operating systems perform the same tasks, they even run the same applications, and both are of high quality.
The same goes for roles in a project. A stakeholder has a different perspective on what is important and, therefore, defines “good quality” differently than an end user or a developer. All this needs to be considered when we ask them what they think (or rather, feel) is “high quality.” It has a huge bearing on the perceived value of the system we are looking at. Since no sane human being is above these emotions, subjective quality is something we must deal with as well as possible, every day. (By the way, Android rocks! 😀)
Quality is a mindset. Sure, there is a technical implementation of a certain quality, through all the tests a QA engineer may provide and all the metrics from which we can capture and generate informative KPIs. At an overall level, however, we need to shape our thinking towards quality right from the beginning of every single project. What flight attendants always tell us also applies to a quality-oriented mindset: “Put on your own mask before helping others.” As part of a team that implements and delivers software, it is your task and responsibility to form a “quality first” mindset and to help your team members adopt the same approach.
In Part 2, we will cover the responsibilities of each team member: where should quality start for each member, and where should it end (or should it end at all)? We will introduce the quality hopper, a visual aid showing how quality can be structured, and we'll also cover requirements, user stories, and acceptance criteria.