Automated testing is an integral part of the continuous delivery pipeline. Despite its benefits, in reality most organizations still use outdated manual testing processes. In one survey by XBOsoft, an automated testing consultancy, only 28 percent of the respondents automate more than 50 percent of their test cases. In another survey by WorkSoft, an SAP partner, 60 percent of survey respondents (mostly SAP customers) noted that their testing was almost fully manual.
This is largely because teams want to avoid the perceived pain of setting up an automated testing process. But the effort is definitely worth it. By identifying the right tool, framework, and technical approach, organizations can adopt automation far more seamlessly.
This article considers automated testing, its value as part of the broader continuous integration and continuous delivery (CI/CD) pipeline, its benefits over manual testing, and best practices toward adoption.
The CI/CD pipeline
In their groundbreaking book, "Continuous Delivery," Jez Humble and David Farley state, “The most important problem that we face as software professionals is this: If somebody thinks of a good idea, how do we deliver it to users as quickly as possible?”
Because the traditional waterfall technique of software development is inadequate to meet this goal, organizations are adopting modern agile development techniques like continuous integration and continuous delivery as a means to produce better-quality software faster. The following are succinct definitions for both CI and CD:
- Continuous integration: "A development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early." -- ThoughtWorks
- Continuous delivery: "The practice of releasing every good build to users." -- Jez Humble
Humble and Farley describe a typical deployment pipeline as “an automated implementation of your application’s build, deploy, test, and release process.” The authors go on to recommend that, in an ideal world, as much of this pipeline as possible should be automated.
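To make the build, deploy, test, and release sequence concrete, here is a minimal sketch of such a pipeline in GitLab-CI-style YAML. The stage names and `make` targets are placeholders of my own, not from Humble and Farley's book:

```yaml
# Hypothetical CI pipeline sketch: stages mirror the
# build -> test -> deploy progression of a deployment pipeline.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - make build          # compile and package the application

unit_tests:
  stage: test
  script:
    - make test-unit      # fast, low-level commit tests run first

e2e_tests:
  stage: test
  script:
    - make test-e2e       # slower end-to-end/GUI tests

deploy_staging:
  stage: deploy
  script:
    - make deploy ENV=staging   # every good build is a release candidate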
Automated vs. manual testing
Despite the value of automation for CI/CD, the truth is that most organizations still test manually. Manual testing involves a human tester using a computer or mobile device trying various usage and input combinations, comparing the results to the expected behavior, and recording their observations.
In contrast, automated testing uses tools to execute prescripted tests on a Web or mobile app and records the test data in detailed reports that human testers can review after the tests run.
For finding certain kinds of defects, manual testing by a real user is more effective than automated testing. But manual testing is not as consistent, repeatable, scalable, or reusable as automated testing. Further, scripted tests allow for more systematic improvement of effectiveness over time.
Elfriede Dustin, author of the landmark book "Implementing Automated Software Testing," cites a survey finding in her book that reveals “lack of time” as the No. 1 reason (37 percent) why developers say they have not implemented automated testing and instead continue to use a manual process. Considering that a key benefit of automated testing is quicker time to market, this reasoning offers quite the paradox.
While it is necessary to spend adequate time testing an app to maintain quality, the goal of a testing project should be to continually reduce the time spent on manual testing and to automate as many of the tests as possible. This is why automated testing makes better sense in the long run.
The right approach to automated testing
As can be expected with any change, test and QA will naturally endure a “hump of pain” (a phrase first attributed to Brian Marick) from the shift in workflows and in the overall team mindset. In order to make the transition as easy and painless as possible, it’s important to use the right technical approach. This involves knowing which tests to automate and which to continue manually.
In her book, Dustin states she has found that 40 to 60 percent of tests for most projects can and should be automated. The automated testing pyramid developed by Mike Cohn offers a guideline for automating more low-level unit tests than high-level end-to-end tests running through a GUI.
Unit and component testing
Unit tests are a way to test individual units of the source code to determine whether they are fit for use. They are a series of low-level tests that pinpoint exactly where to search for bugs very early on in the cycle. The vast majority of commit tests should be composed of unit tests.
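As a minimal sketch of the idea, a unit test isolates one function and checks it against known inputs. The `slugify` function and its cases below are hypothetical, written with Python's standard `unittest` module:

```python
import unittest

def slugify(title):
    """Turn an article title into a URL slug (hypothetical unit under test)."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Automated Testing Basics"),
                         "automated-testing-basics")

    def test_single_word_is_lowercased(self):
        self.assertEqual(slugify("CI"), "ci")

# Run the suite programmatically; in a CI pipeline this would be
# invoked by the test runner on every commit.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each test targets one small unit, a failure points directly at the code that broke, which is what makes unit tests so effective as commit tests.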
API or Web services testing
API testing operates at the business logic layer of the software. Instead of using standard user inputs (keyboard) and outputs, you use software to send calls to the API, get output, and note the system’s response. One challenge with API testing is setting up the test environment: the database and server must be configured per the application’s requirements. Once the environment is ready, API functions are called and their responses checked to verify that each API works as expected.
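A minimal sketch of this, using only the Python standard library: a tiny in-process HTTP service stands in for the application's API, and the test sends a call and checks the response. The service, the `/health` endpoint, and the response shape are all hypothetical:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeAPIHandler(BaseHTTPRequestHandler):
    """Hypothetical stand-in for the application's API layer."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging during the test

def call_api(base_url, endpoint):
    """Send a call to the API and return (status code, parsed JSON body)."""
    with urlopen(base_url + endpoint) as resp:
        return resp.status, json.loads(resp.read())

# Set up the test environment: start the service on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), FakeAPIHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Exercise the API and verify the system's response.
status, payload = call_api(base, "/health")
server.shutdown()
```

Note that no GUI is involved at any point: the test speaks to the business logic layer directly, which is why API tests are both faster and more stable than end-to-end tests.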
GUI testing
The goal of GUI testing is to ensure that the front-end interface of the application works as expected across all browsers and platforms. In the early days, testers did this manually, rendering the app on every possible combination of browser and platform. When it became impractical to test every combination of browser version and operating system on a separate machine, virtual machines came to the rescue, enabling teams to test multiple combinations simultaneously on a single system.
Regression testing
Regression testing seeks to uncover new bugs, or regressions, after modifications such as enhancements and configuration changes have been made to the application. Before a new version of the application is released, the old test cases are run against the new version to make sure all the old capabilities still work.
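The pattern can be sketched as keeping the previous release's test cases and rerunning them against the modified code. The `parse_price` function and its change are hypothetical; the point is that the old cases run unmodified alongside cases for the new capability:

```python
def parse_price(text):
    """Hypothetical function, extended in this release to accept a '$' prefix."""
    text = text.strip()
    if text.startswith("$"):      # new capability added in this release
        text = text[1:]
    return float(text)

# Regression suite: the (input, expected) cases from the previous release...
OLD_CASES = [("19.99", 19.99), (" 5 ", 5.0), ("0.5", 0.5)]
# ...plus new cases covering the change itself.
NEW_CASES = [("$19.99", 19.99)]

for text, expected in OLD_CASES + NEW_CASES:
    assert parse_price(text) == expected, f"regression in parse_price({text!r})"
```

Because the old cases are scripted rather than re-executed by hand, rerunning the full suite before every release costs nothing, which is exactly where automation pays off.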
Functional testing
Functional testing starts with a list of specifications or a design document. Based on the specifications, various functionalities of the app are tested by feeding them input and examining the output against expected output. Functional testing doesn’t consider the internal structure of the app. It is strictly focused on the functionality of the app from an outcomes point of view.
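A small sketch of the black-box approach, with a hypothetical spec for a discount function. The cases are derived from the specification alone, never from reading the implementation:

```python
# Hypothetical spec:
#   - orders of $100 or more get 10% off
#   - orders under $100 get no discount
#   - negative totals are rejected
def apply_discount(total):
    if total < 0:
        raise ValueError("total must be non-negative")
    return round(total * 0.9, 2) if total >= 100 else total

# Functional cases: feed inputs, compare output against the spec's
# expected output, treating the implementation as a black box.
assert apply_discount(100) == 90.0      # boundary: discount applies at $100
assert apply_discount(99.99) == 99.99   # just under the boundary: no discount
try:
    apply_discount(-1)
    assert False, "spec says negative totals must be rejected"
except ValueError:
    pass
```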
Mobile app and browser testing
While much of what’s been discussed so far applies to both Web and mobile app testing, automating tests for mobile apps is more complex than doing the same for Web apps. When automating mobile tests, it is imperative to use a combination of emulators and real devices to optimize costs and get your apps to market faster.
Choose your testing tools
Considering the varying needs of a test and QA team, it’s most likely that you’ll use a combination of tools to support your entire software testing lifecycle (STLC). That said, it helps to first focus on one platform that meets most of your automated testing needs, then find tools that perform specific tasks not covered by the primary tool.
When selecting the primary testing tool, the important decision is choosing among in-house, open source, and commercial tools. In-house tools give you the most control, but they require expertise to build and are often harder to maintain than either open source or commercial tools. They also turn out to be expensive once you factor in all the internal resources required for ongoing maintenance.
Open source tools have proven capable of handling the most advanced automation projects, and unlike commercial tools, they don’t lock you into one vendor, which is why more and more organizations prefer them. While they are easier to set up than building an in-house tool from scratch, they can become cumbersome to maintain: they require experts who specialize in these tools, and with no formal support available, a project can be delayed when complex issues arise.
Considering the importance of open source tools in automated testing, let’s look at two of the most popular open source automated testing frameworks, and consider how they may fit into your testing workflow.
Selenium: Automated testing for Web apps
Selenium is the most widely used open source automated testing tool. It gives you the freedom to test your Web app across browsers automatically, through test scripts. It is a technology that allows programmers to send commands to Web browsers to make them perform tasks as though they were being used by humans. In this sense, it is like a robot for Web browsing.
Jason Huggins, creator of Selenium, has said, “Selenium’s success came from its ability to test Firefox and IE on Windows or Mac or Linux and be able to drive it from Ruby or Python.” When thinking of automated testing for Web apps, a Selenium-based solution is the ideal choice.
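A minimal sketch of what "driving the browser" looks like in Selenium's Python bindings. This assumes `pip install selenium`, a local ChromeDriver, and network access; the URL and expected title are placeholders, and the script is guarded so it degrades gracefully when those assumptions don't hold:

```python
# Hedged Selenium sketch: requires the `selenium` package plus a
# matching ChromeDriver on PATH, so both are guarded.
try:
    from selenium import webdriver
    HAVE_SELENIUM = True
except ImportError:
    HAVE_SELENIUM = False

def page_title_contains(url, expected):
    """Open `url` in headless Chrome and check its <title> text."""
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")   # headless mode (Chrome 109+)
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)                      # the "robot" loads the page
        return expected in driver.title
    finally:
        driver.quit()                        # always release the browser

if HAVE_SELENIUM:
    try:
        print(page_title_contains("https://example.com", "Example Domain"))
    except Exception as exc:                 # no ChromeDriver or no network
        print(f"skipped: {exc}")
```

The same script can be pointed at Chrome, Firefox, or Edge by swapping the driver, which is what makes cross-browser test automation practical.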
Appium: Automated testing for mobile apps
Appium is the leading open source automated testing framework for use with native, hybrid, and mobile Web apps. It runs iOS and Android apps via WebDriver, the same API that controls the behavior of browsers with Selenium.
Investing in WebDriver means you are betting on a single, free, and open standard for testing. It also keeps you from being locked into proprietary tools and technologies.
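For illustration, an Appium session is configured through "desired capabilities" sent over the WebDriver protocol. A sketch for a hypothetical Android app might look like the following, where the device name and app path are placeholders:

```json
{
  "platformName": "Android",
  "appium:automationName": "UiAutomator2",
  "appium:deviceName": "emulator-5554",
  "appium:app": "/path/to/app-under-test.apk"
}
```

Because the session speaks WebDriver, the test code itself looks much like a Selenium script, just targeting a mobile app instead of a browser.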
In terms of commercial tools, plenty are available, and they require careful examination before you adopt one. Typically they are easier to set up and maintain, as the vendor takes on the load of keeping the tool up to date. They can be expensive compared to open source tools, but if saving time is a priority, they can be worth the extra cost.
Commercial tools also come with support, which gives organizations new to automated testing a fair level of confidence. Yet it’s important to be wary of proprietary tools that could result in vendor lock-in and make it difficult for you to adapt your testing methodologies later as your organization evolves.
Keys to better testing
What does the ideal automated testing solution look like? It is cloud-based, eliminating the need to maintain a test grid internally while allowing apps to be tested, and data accessed, securely behind firewalls. It uses open source standard frameworks (like Selenium and Appium) to avoid vendor lock-in. And it starts a pristine new VM for every browser instance to make sure tests aren’t polluted by data from previous activity.
Many commercial solutions today run their VMs on public cloud infrastructure, which can result in false positives associated with browsers that don’t close completely. Solutions that capture test screenshots and videos make debugging a lot easier and are preferable to those that don’t offer these features.
Other desirable features include the ability to manage massive numbers of parallel tests, support for tests written in any programming language, and integration with your CI server. Finally, the ideal tool is easy to configure, has a large existing user base, and is backed by prompt customer support.
A tool meeting these requirements is ideal for automated testing, resulting in a smoother transition from manual testing, a more effective CI/CD workflow, and eventually, higher-quality software that reaches the market much faster.
An integral part of a successful CI/CD pipeline is automated (versus manual) testing. The execution of prescripted tests on a Web or mobile app saves considerable time, plus having test data accessible in detailed reports is valuable to development teams who can use this information to quickly identify issues. In short, automated testing is key to achieving a true CI/CD deployment.
The right technical approach involves knowing which tests to automate and which to continue manually. Using the right approach is key to mitigating the “hump of pain” inherent in any change to a team’s workflow and mindset.
Despite any temporary pain points, organizations that invest in automated testing efforts are poised to get the most out of their CI/CD workflows. The result is shipping higher-quality software faster, a necessary component of competitive advantage in an increasingly competitive business environment.
Lubos Parobek leads strategy and development of Sauce Labs’ Web and mobile application testing platform as VP of product. His previous experience includes leadership positions at organizations including Dell KACE, Sybase iAnywhere, AvantGo, and 3Com. Parobek holds an MBA from the University of California, Berkeley.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to firstname.lastname@example.org.