Nowadays, every software company has a strategy to test its code before releasing it to the customer. This can be manual testing done by someone who is not very technical, by a co-developer, or even by the developer who wrote the code in the first place. Manual testing comes in many forms, ranging from exploratory testing – just clicking here and there a bit – to strictly following use cases drawn up by a test manager. This form of manual testing has been around for quite some time and is still evolving.
More and more people are adding automated tests on top of this. These tests have all kinds of interesting names: unit tests, acceptance tests, functional tests, stress tests, integration tests ... Behind each of these names lies some interesting theory, but sometimes also more than one meaning.
With all these ideas, principles and theories, it is easy to concoct a way to guarantee quality to our customers. The simplest – maybe also the least smart – way is to just do everything. But only very rarely does software require such an overkill of testing. A smarter approach is to be selective in how you combine all these techniques, ideas and tools.
How We Test
Our team setup
Before talking about how we test, let me explain how our teams are set up. In every project, there is a Product Owner, a technical lead and a multidisciplinary team. The Product Owner combines the responsibilities of a Functional Analyst and a Project Manager from a more classical setting. The Product Owner is the bridge between the customer and the rest of the team and ensures that the customer gets the product he wants. The multidisciplinary team can consist of backend developers, frontend developers, web designers ... In short: the technical experts who know all the bits and bytes. It is this team that is responsible for delivering value-adding software to the customer.
Only about Testing during Execution Phase
This will only be about testing the implementation (the execution phase), not about getting the requirements right (the expectation phase). Making sure you get the expectations right is crucial, of course. Any software developer can tell you that the earlier you find a problem AND fix it, the easier it is. So, correcting mistakes in the expectation phase – before you have even built anything – is definitely the best approach. But that's not our focus today.
Acceptance tests with Cucumber Web Bridge
So, we know what our customer wants and we get started ...
Our Product Owner writes up user stories that describe the new or changed functionality. These user stories contain more than just a description of what the customer wants: they also contain (explicitly or sometimes implicitly) a "definition of done". This means that we know when something is done from a user's point of view. This definition of done can later be translated into an acceptance test.
We use Cucumber Web Bridge to do this, as we believe that the gap between business and IT should be as narrow as possible. Very succinctly: Cucumber Web Bridge makes it possible to write functional or acceptance tests in a high-level language that reads almost like plain English. This means that almost anyone can write these tests. Such a test is written together with the production code and runs later on to verify that the code actually does what it promised to do.
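To give an idea of what such a readable acceptance test looks like, here is a sketch in Cucumber's standard Given/When/Then (Gherkin) syntax; the feature, scenario and step wording are invented for this example and would depend on the steps your project defines:

```gherkin
Feature: Customer login
  As a registered customer
  I want to log in to my account
  So that I can see my order history

  Scenario: Successful login with valid credentials
    Given a registered customer with username "alice"
    When she logs in with the correct password
    Then she sees her personal dashboard
```

Because each step is a plain-language sentence mapped to an implementation behind the scenes, a Product Owner can read – and often write – scenarios like this without touching the underlying code.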
During one project or over a sequence of projects, a set of acceptance tests accumulates. These tests can be used as regression tests: we can run them all together to see if the user is still getting value for his money and everything he ever asked for is still working. After some time, the challenge becomes deciding when to write new tests, when to change existing tests and maybe even when to delete some.
Unit Tests in xUnit
Next to writing functional tests in Cucumber Web Bridge, there are also unit tests. Every language has its own unit testing framework: Java has JUnit, .Net has NUnit ... These unit tests are more technical in nature and can only be written by developers. They test the integrity of the code: whether, at the code level, it is doing what is expected. Just like acceptance tests, unit tests accumulate over time and can be used as regression tests. So if you build something now and you didn't break any unit tests, you know that you didn't break the logic behind the existing code.
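As a minimal sketch of what such a unit test looks like – shown here in Python's built-in unittest framework for brevity; the same shape applies to JUnit or NUnit – with a hypothetical `apply_discount` function invented for this example:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # 10% off 200.00 should be 180.00
        self.assertEqual(apply_discount(200.00, 10), 180.00)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        # Percentages outside 0..100 are a programming error
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)
```

A test runner picks up every method starting with `test_`, so each new check-in simply adds methods like these to the growing regression suite.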
Even with all the acceptance tests and the unit tests, there is some manual testing done by the Product Owner, after the new functionality is implemented but before it is delivered. Now... Why would you want to do that? Remember, our Product Owners are the bridge between the customer and the rest of the team. So they have the best idea of how users will go through the application when they receive it. They also have a pretty good idea of what the user wants to achieve with the application. They might come up with some functional insights, or try out a functionality in an unconventional way. If necessary, they will also translate this into new acceptance tests.
Combination and Frequency
How do we combine these three kinds of tests? And how frequently do we run them? The manual testing is only done for new or changed functionality. The technical and functional tests are automated, which means they can be run very frequently: they merely consume processing power and some hard disk space. We run our unit tests after each check-in of new code. The functional tests are run at least once every day. We are moving towards running the functional tests even more frequently, but that will take us some time. This frequency gives us the certainty that we can deliver new features very quickly, and increasing it would let us respond even faster to new demands.
A little side-note on having people dedicated to testing
Assigning the responsibility for testing can be a difficult matter. Having people assigned to a specific task makes ownership a lot more clear. For us, this is not about the responsibility for testing, but a responsibility for quality. Assigning the responsibility for testing to a dedicated Tester bears the risk that everyone else becomes explicitly not responsible for the quality of the product.
We believe that delivering quality is a shared responsibility, and that is why testing is distributed across the different members of a team. The technical people test the quality of their code with unit tests. The Product Owner is the gatekeeper who validates, by means of manual testing, before delivering to the customer. But the responsibility for automated functional tests is explicitly shared between the technical and functional people within a team, and Cucumber Web Bridge allows non-technical people to write functional tests.
There are many different strategies to test software before releasing it to the world, and an efficient team combines a few of them into a clear and consistent test approach. We strongly believe that quality should be a shared responsibility of both the technical and the functional people in a software development team. That is why we apply three different ways of testing in every project we build: unit tests, automated functional tests and manual validation. Everyone in the team has a clear role in at least one of these testing methodologies. Functional and technical members of a team additionally share the responsibility for automated functional testing, and both write new functional tests using Cucumber Web Bridge. This approach has proven to involve the entire team in the quality of the product delivered to the customer.