Some years back I attended a talk on testing practices where
the presenter asked the audience how much test automation they had achieved in
their projects. I was the only one who answered almost 100%. He advised me to
keep this a secret from my management if I did not want my team to be downsized :). Good
advice, but it got me thinking about whether test automation actually makes testers redundant.
Having spent a good part of my career in testing, I have
seen the discipline mature over the years.
My opinion is that automation does not take away testers' jobs; on the
contrary, it makes the job more interesting and effective.
In my mind I classify automation progression in an
organization into 4 stages as shown below.
In stage 1, testing is a completely manual activity. Tester
responsibilities are writing test case documents based on the requirements and
functional specification, executing tests manually following the documents, and
creating bug reports for any deviations. As the product grows in size and
functionality, more test cases are added and more people are needed to
execute them manually. Sounds like a good recipe for growing your test team?
Well, not exactly. This model will make neither your test team nor your management
happy. Here are a few reasons why:
- Manual test execution is
error prone: Human beings are prone to mistakes, especially when they
are doing a job that is repetitive and boring. Invalid bugs may be
raised or, even worse, valid bugs may be missed due to errors in the
execution.
- Impossible to test
everything manually: Testing is all about simulating conditions that are as
close as possible to real production behavior. For example, consider testing a website for ticket
booking. Some of the typical use cases will be hundreds of users
accessing the website at the same time, or two or more users trying to book the
same flight. How do you test these manually? You cannot possibly line
up hundreds of testers and ask them to access the website at the same
instant.
- Manual testing cannot
scale: With each release of the product new features are added and the number
of test cases keeps growing. It is not possible to add testers
at the same pace, and as a result the time taken for test execution grows
out of control.
- Unhappy testers: A good
tester is a creative person who loves to explore new features and find new
ways to break existing and new functionality. In stage 1 they get
trapped in endless cycles of repetitive manual regression runs. This
makes good testers unhappy, and they will not stay in the job for long.
- Unhappy management: With a
large test suite, regression testing takes days or weeks, and it
becomes almost impossible to deliver the product on time. This makes
management unhappy with the test team.
These problems will force all organizations to move to stage
2 sooner or later.
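The concurrent-booking scenario mentioned above is a good example of a test that is impractical manually but easy to automate. Here is a minimal sketch: `SeatStore` is a hypothetical in-memory stand-in for a real booking backend, used only to show how two simultaneous booking attempts can be simulated with threads.

```python
import threading

class SeatStore:
    """Hypothetical stand-in for a booking backend holding one seat."""

    def __init__(self):
        self.lock = threading.Lock()
        self.booked_by = None

    def book(self, user):
        # Atomically grant the seat to the first caller only.
        with self.lock:
            if self.booked_by is None:
                self.booked_by = user
                return True
            return False

def test_double_booking():
    store = SeatStore()
    results = {}

    def attempt(user):
        results[user] = store.book(user)

    # Two "users" race to book the same seat at the same instant.
    threads = [threading.Thread(target=attempt, args=(u,)) for u in ("alice", "bob")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Exactly one of the two simultaneous attempts must succeed.
    assert sum(results.values()) == 1

test_double_booking()
print("double-booking test passed")
```

Scaled up with more threads (or a load-testing tool), the same idea covers the hundreds-of-concurrent-users case as well.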
In stage 2, automation is introduced once the product has stabilized
and functionality is not changing frequently. The first step in automation
is the identification of an appropriate automation tool. There are several
tools available in the market and an appropriate one has to be chosen based on
the needs of the project. Very often none of them meet all the requirements and
you may have to build an in-house automation tool. For most of the projects that
I have worked on we have developed our own tools.
In the initial phase the cost of automation will be higher
than that of manual test execution. The organization will have built up a huge backlog
of test cases over time, and these cannot be automated overnight.
Automation itself will be slow until the testers gain sufficient experience with
the tool and framework. Moreover, until a good percentage of tests are automated,
manual test execution has to continue in parallel with the automation activity.
The real benefits start showing only in the long run when a good percentage of
tests are automated. It is important that the management support the testing
organization in this ramp up phase.
As the experience with tools and techniques increases
automation starts getting pushed into earlier stages of the development process.
Today we have reached a state where it is possible to develop automated tests
in parallel with the product so that testing can start as soon as the product
is ready. In most projects I have worked on in the recent years we have
achieved close to 100% test automation.
In stage 3, the organization starts thinking about how to
bring in more automation. The tests are automated, but the test runs still have
to be triggered manually for each build. Many days can pass between test
execution cycles, and a lot of new code is added in that time. As a result,
each test run uncovers a batch of new problems. This is a very inefficient way of
finding and fixing problems.
The solution comes in the form of continuous
integration testing (CIT) tools. These can trigger test runs automatically
at a configurable frequency. There are many CIT tools in the market. We
are using Hudson,
which is a very popular
open source tool.
With the availability of CIT tools, there will be a
temptation to run all the tests for every check-in. Remember that this also
comes at a cost: you need hardware to run tests, and running everything
everywhere all the time can be overkill. In our team we follow a tiered
approach. A small suite that finishes
execution in less than an hour runs for every check-in, a larger suite that
finishes in eight hours runs every night, and the complete suite runs every
week. The weekly run can keep the machines busy 24 hours a day, all 7 days of the week. The outcome is
that bugs are discovered as soon as they are introduced, which makes it easy to
isolate the root cause and fix them. As a result we reduced not only the testing
time, but also the time taken to find and fix bugs.
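A tiered setup like this boils down to a small mapping from trigger to test suites, which the CIT tool consults for each job. The sketch below is illustrative only; the trigger and suite names are hypothetical, not taken from any particular tool's configuration.

```python
# Illustrative tier table: trigger -> test suites a CIT job should run.
# Suite names are hypothetical placeholders for real test collections.
TIERS = {
    "checkin": ["smoke"],                          # finishes in under an hour
    "nightly": ["smoke", "regression"],            # roughly eight hours
    "weekly":  ["smoke", "regression", "stress"],  # the complete suite
}

def suites_for(trigger):
    """Return the list of suites to run for a given trigger."""
    if trigger not in TIERS:
        raise ValueError("unknown trigger: " + trigger)
    return TIERS[trigger]

print(suites_for("checkin"))  # ['smoke']
print(suites_for("weekly"))   # ['smoke', 'regression', 'stress']
```

The design choice is that each tier is a superset of the faster one, so a bug that slips past the check-in suite is still caught by the nightly or weekly run.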
Now all tests are automated and they run without any
manual intervention. Is it time to fire the testers? No, we still need someone
to analyze the test results. This takes us to the next level, stage 4:
automated test failure analysis. A signature is identified for each failure and
is associated with the corresponding bug in the bug database. Scripts are put
in place to compare each new failure with the known signatures and tag it with the
matching bug. Now we have reached the state where no manual intervention is
needed for test execution or failure analysis.
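The core of such a triage script is a table of known failure signatures mapped to bug IDs. A minimal sketch, assuming regex signatures over the failure log; the patterns and bug IDs here are made up for illustration:

```python
import re

# Hypothetical signature table: regex over the failure log -> known bug ID.
SIGNATURES = [
    (re.compile(r"ConnectionError: .*timed out"), "BUG-1042"),
    (re.compile(r"AssertionError: expected 200, got 500"), "BUG-1107"),
]

def triage(failure_log):
    """Return the known bug ID matching this failure, or None for a new failure."""
    for pattern, bug_id in SIGNATURES:
        if pattern.search(failure_log):
            return bug_id
    return None

print(triage("ConnectionError: request to /book timed out"))  # BUG-1042
print(triage("IndexError: list index out of range"))          # None
```

Failures that return `None` are exactly the ones that need a tester's attention; everything else is tagged to an existing bug automatically.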
What do the testers do if everything is automated? They do what they should really do. Spend
their time on creative tasks like designing test cases for new features,
exploring new ways of testing the product, improving test coverage etc. The
repetitive task of manual test execution is best left to the machines :).
To conclude, automation does not make testers redundant;
rather, it frees up their time for more interesting and useful tasks.
Good management will recognize the value provided by the testing team and will
never consider reducing resources just because tests are automated. Testers are
happy in a completely automated regime because their time is spent on test
automation and not on repetitive manual test execution. I would say it is a
win-win situation for testers and management.