Introduction to automated software testing
Test automation, also referred to as automated testing, is a software development practice that uses automated tooling to perform testing and quality assurance tasks.
Test automation replaces many of the manual steps in the testing life cycle. Most often, developers write tests and create testing flows that analyze code changes and automatically report passing and failing tests. At the highest-performing organizations, engineering teams run tests throughout the entire development flow, from local checks to CI/CD pipelines to production environments.
Test automation provides faster feedback to developers so they can find and fix problems earlier in the development process. It also reduces the manual workload for development teams and increases test coverage, saving development time and resources.
As a result, development teams that adopt more rigorous automated testing practices can benefit from both higher quality software and faster time-to-market.
Manual vs automated testing
Unlike automated testing, manual testing requires people to write, run, and analyze tests. It requires developers or other members of an engineering organization, such as QA engineers, to manually verify that code changes meet product specifications, avoid introducing new security vulnerabilities, and meet performance requirements. They run their code and analyze its flows and outputs to see exactly how it behaves under various conditions.
Compared to automated testing, manual testing can be more expensive because it requires significant human intervention. Worse, manual testing can lead to costly problems with code or product quality by introducing the potential for human error into the testing life cycle. As testing requirements ramp up, quality assurance and testing teams can become overwhelmed with highly repetitive, low-priority tests.
Manual testing can also be time-consuming and resource-intensive. It introduces delays into the development pipeline, forcing developers to wait during long testing cycles. As a result, developers receive slower feedback, bogging down the flow of work from code to production.
In worst-case scenarios, such time constraints lead teams to skip testing requirements altogether, further decreasing product quality over the long term.
Even so, manual testing can be a powerful complement to automated testing. Manual testing provides engineering teams with an additional layer of quality assurance when implemented for the right types of tests. Teams should ultimately strive to strike the right balance between automated and manual testing.
For example, manual testing works well for usability testing, which requires the tester to think like the user and evaluate an app based on the user experience. Usability testing helps teams uncover features that pass automated checks but appear or feel wrong to actual users.
Manual testing is often suitable for exploratory and ad-hoc testing, or testing that requires deep domain expertise. For example, HackerOne crowdsources security testing using its community of hackers and developers, providing companies with valuable feedback.
Benefits of automated testing
Modern software development is faster and more complex than ever before. To stay competitive and adapt to industry changes, teams need to release high-quality software more frequently.
Moreover, engineering teams build products using more complex stacks and with more demanding requirements. Developers must integrate third-party tools, manage APIs, and juggle gigabytes, or even terabytes, of user data. Consumers now expect apps to be available across all of their devices: web, mobile, and desktop.
Test automation helps engineering teams thrive in today’s world of complex development stacks and ambitious product goals. It helps them improve daily work and increase development velocity, while also improving code quality.
Faster feedback
Test automation provides developers with faster feedback by enabling them to check their code against test suites early and often. Faster feedback helps developers detect issues earlier when they are cheaper and easier to fix.
When testing is highly automated and lightweight, it also enables developers to avoid context switching and perform fewer handoffs. With self-service testing, they spend less time waiting for test results and minimize friction between development and QA.
Over time, consistent and timely feedback improves organizational learning, educating development teams about testing practices, user feedback, security issues, and more.
Improved product quality
Test automation can improve product quality by running more tests in less time. Teams can achieve higher test coverage by incrementally expanding the number and scope of tests each development cycle. They can also unlock new testing capabilities with powerful test automation tooling. For example, teams can create tests to simulate user behavior that is difficult to manually recreate, such as thousands of simultaneous users or abnormally high data usage.
Test automation also enables manual testing to focus on high-value and high-risk issues. Together, automated and manual testing can increase test coverage by focusing on different parts of the product and integrating at different points in the development pipeline.
Importantly, test automation helps developers follow their changes from creation to release. Seeing their changes move through the value stream helps align team incentives across the development pipeline and reduces handoffs—which often create friction, delays, and frustration.
Improved cost efficiency
Automated testing can improve the cost efficiency of the testing life cycle by removing some of the requirements for manual testing, which can be costlier and more difficult to scale. When developers write automated tests, they save other developers’ time, both now and in the future. Tests can be reused cheaply and frequently, further reducing the cost of testing over the long-term.
Test automation can also free up developer resources. Developers spend less time performing their own manual tests or waiting on test results, freeing up their time to work on more valuable projects.
Reduced risk of human error
Complex manual tests open the testing process up to human error. Automated tests can help teams scale more complex tests with rigor and predictability. They also produce highly reusable and standardized test suites, helping teams avoid human errors during the testing process.
Decreased time-to-market
With an overall reduction in time required for the testing cycle, teams can release new features and product updates more often. Moreover, by identifying more code issues earlier, teams can spend less time restoring service, patching issues in production, or debugging new changes.
What are the different types of test automation?
In his book Succeeding with Agile, Mike Cohn puts forward the idea of the test pyramid, a hierarchy of test types and their requirements.
The test pyramid simplifies testing to three levels: unit tests, service tests, and UI tests. Although many people in the software development industry consider the test pyramid overly simplistic, it's a useful heuristic for understanding the tradeoffs between speed and integration.
According to the pyramid, development teams should write lots of small, fast unit tests, and progressively fewer of the slower, more complex tests that invoke functionality across an application.
Importantly, the testing cycle should involve tests of varying granularity. Great testing will have both fast, isolated tests and slow, integrated tests.
Unit testing
Unit testing is the most basic form of testing and often occurs first in the testing process. It is helpful for understanding how individual blocks of code work.
Unit tests isolate and test the logic of single components of a larger system. They should not rely on external systems, like APIs and databases. As a result, they emphasize granular, low-level logic over interactions between components. Most programming languages provide natural units to test, such as functions, subroutines, and methods.
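For illustration, here is a minimal pytest-style unit test in Python, where the hypothetical `calculate_discount` function is the unit under test:

```python
import pytest

# Hypothetical unit under test: one isolated piece of business logic.
def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests exercise only this function -- no databases, APIs, or files.
def test_applies_percentage_discount():
    assert calculate_discount(100.0, 20) == 80.0

def test_rejects_invalid_percentage():
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)
```

Running `pytest` against this file executes both tests in milliseconds, which is what makes unit tests the fast, plentiful base of the test pyramid.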
Integration testing
Integration testing ensures the smooth integration of individual components. Integration testing typically occurs after unit testing, once developers validate the logic of each individual component.
During integration testing, teams test components and modules as a group to confirm they communicate and interact as intended. It is more complex than unit testing, requiring multiple parts of a system to function in unison.
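A sketch of the idea in Python (pytest style), with hypothetical `UserStore` and `SignupService` components: the test wires the real pieces together, backed by an in-memory SQLite database, rather than mocking the boundary between them.

```python
import sqlite3

# Two hypothetical components: a storage layer and a service built on it.
class UserStore:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

class SignupService:
    def __init__(self, store):
        self.store = store

    def register(self, name):
        if not name:
            raise ValueError("name required")
        self.store.add(name)

# The integration test checks that the components communicate correctly,
# not just that each one works in isolation.
def test_registration_is_persisted():
    store = UserStore(sqlite3.connect(":memory:"))
    service = SignupService(store)
    service.register("ada")
    assert store.count() == 1
```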
Functional testing
Similar to integration testing, functional testing focuses on the behavior of multiple components interacting within a system. Functional testing, however, emphasizes business and product requirements to determine passing or failing tests. The application being tested must produce the correct results according to product specifications.
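For instance, a functional test derives its pass/fail criteria directly from a product requirement. A small Python sketch, assuming a hypothetical spec and `shipping_fee` function:

```python
# Hypothetical product spec: "Orders of $50 or more ship free;
# smaller orders are charged a flat $5 shipping fee."
def shipping_fee(order_total: float) -> float:
    return 0.0 if order_total >= 50 else 5.0

# Functional tests assert the behavior the specification promises,
# not the internal implementation details.
def test_orders_of_fifty_dollars_or_more_ship_free():
    assert shipping_fee(50.00) == 0.0

def test_smaller_orders_pay_flat_fee():
    assert shipping_fee(49.99) == 5.0
```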
Regression testing
Regression testing ensures that new changes do not break the application’s existing functionality, preventing new code from altering the behavior of previously implemented and tested features.
In agile development, when developers are frequently integrating new code, regression testing can introduce significant overhead. As a result, regression testing often happens later in the testing life cycle, after unit testing and integration testing.
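One common pattern is to pin every fixed bug with a test so it cannot silently return. A Python sketch, with a hypothetical `slugify` function and a previously fixed crash:

```python
import re

# Hypothetical history: slugify() once crashed on punctuation-only titles.
def slugify(title: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"  # the fix for the old defect

def test_existing_behavior_still_holds():
    assert slugify("Hello, World!") == "hello-world"

def test_punctuation_only_titles_no_longer_crash():
    # Regression test pinning the earlier bug fix in place.
    assert slugify("!!!") == "untitled"
```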
Smoke testing
Smoke testing, also known as confidence testing or sanity testing, protects an application’s most important functionality by rejecting changes that cause obvious failures as quickly as possible.
Smoke tests execute quickly and provide fast feedback to developers, helping them avoid unnecessary testing. They are cost effective and save teams both time and effort. Smoke tests can be functional tests or unit tests. For example, smoke testing can check whether the application runs, the interface loads properly, or queries execute.
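A smoke suite can be as simple as a few HTTP checks against a freshly deployed build. Here is a Python sketch using the third-party requests library; the staging URL and /health endpoint are hypothetical:

```python
import requests  # third-party HTTP client (pip install requests)

BASE_URL = "https://staging.example.com"  # hypothetical deployment URL

# Smoke tests make a handful of cheap checks -- does the app respond,
# does the home page render -- and fail fast on obviously broken builds.
def test_application_is_up():
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

def test_home_page_renders():
    response = requests.get(BASE_URL, timeout=5)
    assert response.status_code == 200
    assert "<title>" in response.text
```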
Black-box testing and white-box testing
Black-box testing focuses only on inputs and outputs. It does not analyze how internal code works.
White-box testing tests the internal structure of a system by writing tests for specific paths through the code. It requires in-depth knowledge of the source code.
Both black-box testing and white-box testing can be applied at the unit, integration, and system levels of the testing life cycle. Depending on the complexity of testing, black-box and white-box tests can be automated or performed manually.
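The contrast is easiest to see side by side. In this Python sketch, the black-box test is derived purely from the function’s specification, while the white-box tests deliberately exercise each internal branch:

```python
def absolute_value(x: int) -> int:
    # Internal structure: two branches.
    if x < 0:
        return -x
    return x

# Black-box: derived from the spec ("returns the magnitude of x"),
# with no knowledge of the implementation.
def test_black_box_magnitude():
    assert absolute_value(7) == 7
    assert absolute_value(-7) == 7

# White-box: written with the source in hand, covering each branch
# of the `if` statement plus the x == 0 boundary.
def test_white_box_negative_branch():
    assert absolute_value(-1) == 1

def test_white_box_boundary_and_positive_branch():
    assert absolute_value(0) == 0
```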
Code analysis
Code analysis tools analyze how code is written to identify potential security flaws, performance issues, or unnecessary complexity. They can also check for style and form.
Code analysis is almost always automated. Code analysis often happens early in the testing cycle, with developers checking their code in their local environments or automatically after pushing changes to version control.
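To make the idea concrete, here is a toy static analyzer in Python, in the spirit of tools like flake8 or pylint: it inspects how code is written, without ever running it, and flags functions missing docstrings.

```python
import ast

SOURCE = '''
def documented():
    """This function has a docstring."""

def undocumented():
    pass
'''

def missing_docstrings(source: str) -> list[str]:
    # Parse the source into a syntax tree and inspect it statically --
    # the code under analysis is never executed.
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

print(missing_docstrings(SOURCE))  # prints ['undocumented']
```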
Other testing
Depending on the product requirements and scale of an organization, teams can introduce additional forms of testing into the development life cycle.
End-to-end (E2E) testing, typically a form of black-box testing, checks that an application works throughout the user journey, from start to finish. User-acceptance testing (UAT) includes tests performed by the end user or the client to ensure the product meets their requirements.
What tests should you automate?
Introducing test automation into the software development life cycle should be incremental, guided by a long-term strategy. To maintain their progress, teams often introduce test coverage expectations for all new code changes.
If test coverage is below expectations, it’s important to provide teams with sufficient time to redress weak areas in the codebase. Lack of test automation is a form of technical debt that will slow down the flow of work if teams do not dedicate time to improving daily work and paying down testing debt.
There’s no simple calculation for deciding whether tests should be automated or manual. Teams should consider automating a test if doing so will:
- Avoid repetition. Write tests for tasks that will likely be run frequently, both at the individual and organizational level.
- Reduce test time. Search for tests that will complete faster with automated testing than with manual testing.
- Minimize wait time. Wait time can delay important work and lead to costly context switching. Automated tests should decrease the time developers spend waiting for test results.
- Reduce handoffs. Handoffs create friction, remove context, and reduce organizational learning. Worse, handoffs beget more handoffs. Eliminate handoffs where possible and shorten the longest series of handoffs.
- Improve visibility. Test automation can improve visibility into the codebase by expanding coverage where manual testing falls short.
- Avoid human error. Prioritize tests for tasks prone to human errors and in areas where results need to be highly accurate and predictable.
- Avoid boredom. Highly repetitive and unpleasant tasks lead to boredom and dissatisfaction. It’s important to make work enjoyable by offloading mundane tasks to automated tools.
As teams scale their automated testing, they should consider other factors that may influence which tests they prioritize:
- Test maintenance. Rewriting and maintaining tests requires developer resources. Consider test automation first for stable, predictable, and low-risk parts of the codebase.
- Tools. Some testing will be easier to automate with existing tools. Teams should consider first tackling quick wins, such as basic unit tests or code analysis.
- Organizational requirements. Different engineering teams experience different testing constraints, depending on industry, product, and scale.
Measuring test automation
To measure test automation and its impact, teams should track a few key metrics:
- Code coverage provides insight into how much source code was executed during testing. Code coverage can be measured in several ways, including statement coverage, condition coverage, and line coverage.
- Count of tests executed is a simple metric for the number of tests run for each build. It is better for analyzing long-term trends, helping teams visualize changes to their deployment pipelines.
- Count of tests passed and failed, or the percent of total tests passing or failing, helps teams understand their code coverage and pipeline maturity. The goal, however, should not be all passing tests; failing tests catch bugs before they reach production. Take note of long-term trends, such as an increase in failed tests that could signal mounting technical debt, scaling issues, or codebase complexity.
- Test duration helps teams uncover tests that could be better optimized to reduce wait time. If test duration is short, it may indicate insufficient testing. Context matters and teams should balance both speed and coverage.
- Defects found makes it easy for teams to visualize the impact of more comprehensive testing.
It’s important to compare testing metrics against the number of defects found in production, and the severity of those defects. More comprehensive testing may lead to more failed tests during the testing cycle, but can ultimately improve product quality. On the other hand, too few tests can artificially inflate the percentage of passing tests, but can lead to serious issues slipping into production.
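As a sketch of how code coverage is gathered in practice, here is the coverage.py library (Python) driven programmatically; the `myapp` module and its entry point are hypothetical, and most teams instead invoke coverage from the command line or through a plugin such as pytest-cov.

```python
import coverage  # pip install coverage

cov = coverage.Coverage()
cov.start()

import myapp          # hypothetical package under test
myapp.run_examples()  # hypothetical entry point exercised by the tests

cov.stop()
cov.save()
cov.report()  # prints executed vs. total statements per file
```

Statement coverage is then simply executed statements divided by total statements: if the report shows 170 of 200 statements executed, statement coverage is 170 / 200 = 85%.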
Recommended test automation tools
Test automation tools are programs that help development teams test software with minimal human interaction.
Engineering teams should consider whether they want to use open source or commercial testing tools. Open source tools, like Selenium and Capybara, can typically be integrated into the current testing cycle for free (or just the cost of infrastructure). Commercial testing tools are often built on top of open source projects, so it’s possible to start with an open source product and upgrade to a paid solution with added features. When deciding between open source and commercial tools, it’s important to consider time saved, improvements to product quality, and expanded capabilities, such as cloud-based environments for scale or automatic CI/CD integrations.
Teams should also consider codeless, code-based, and hybrid tools. Although many testing tools require coding skills, certain types of testing, such as UI testing or user acceptance testing, can be completed with no-code tools. Hybrid tools help engineers expand their tests and analyze test results by combining code-based and visual tools.
Lastly, engineering teams should consider their primary environments and platforms, such as desktop, mobile, web, or even APIs. Moreover, teams may wish to expand testing to their production environment using chaos engineering, feature flags, and performance/load testing. Production testing increases team and system resiliency to production issues.
Some of the most popular testing tools include:
- Selenium is a popular open-source framework widely used for testing web applications (a minimal WebDriver sketch follows this list). Selenium includes Selenium IDE, a tool to write and record tests; Selenium WebDriver, a collection of language-specific bindings to drive a browser; and Selenium Grid, a service that enables distributing and running tests on multiple machines and environments.
- Cucumber is a testing tool focused on behavior-driven development, which allows expected software behavior to be written in a logical, customer-friendly language.
- Appium is an open source test automation framework for use with native, hybrid, and mobile web applications. It drives iOS, Android, and Windows apps using the WebDriver protocol and integrates with popular CI/CD tools.
- Katalon is an end-to-end testing solution providing web, mobile, desktop, and API testing.
- LambdaTest is a cloud-based cross-browser testing tool built on Selenium and Cypress. It integrates with CI/CD tools like Jenkins, Travis CI, and CircleCI.
- SmartBear and Postman provide API testing solutions for teams that rely heavily on API-driven development.
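For reference, here is the minimal Selenium WebDriver sketch mentioned above (Python bindings, Selenium 4+). The target URL and assertion use the public example.com placeholder page:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive a real browser through one step of a user journey.
driver = webdriver.Chrome()  # requires a local Chrome installation
try:
    driver.get("https://www.example.com")
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example Domain" in heading.text
finally:
    driver.quit()  # always release the browser session
```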