Learn how to choose a new test automation tool and the top considerations you need to keep in mind as you develop a proof-of-concept.
When people ask me which tool they should use for their automation, I typically explain my view of the automation ecosystem. As I discuss in my Bad Spin blog post, this ecosystem is made up of strategy, audience, and environment, but as Alton Brown says, that's a different show; I have a one-hour talk about the automation ecosystem and choosing a tool, and you can contact me if you are interested in hearing it. But I digress.
As part of the aforementioned talk, I recommend doing one or more proofs of concept or prototypes using the tools that you’ve decided are possible candidates. Yes, there is a subtle, or not so subtle, difference between a prototype and proof of concept, but for our purposes in this writing, we’ll call them the same thing. With that assumption in mind, here are some considerations that are usually appropriate for most automation prototypes; these thoughts have served me well over the years.
Prototype Against Your Application or Product
Creating automation prototypes against "test" websites such as The Internet by Dave Haeffner or Restful Booker by Mark Winteringham can be a good way to exercise an automation tool across multiple application constructs; I'm a big fan of these sites and I use them from time to time. Nothing, however, compares to creating your prototype against your own applications. You know where the "icky bits" of your app are and where the 3rd party components are used… or you can find out by asking the developers. There is no substitute for prototyping against your own app(s).
When doing this prototype, don’t shy away from the “hard to automate” portions of your app. These portions are very important because, depending on the frequency with which they are used, they might rule in or rule out specific tools.
Use a Free or a Trial License
As attractive as it may be to focus on "free", i.e., open source software, you should not automatically discount vendor-sold software. If you find that a vendor-sold product might be a viable candidate for your automation tool, consider creating a prototype with it; if you don't, you won't know whether it is, in fact, an appropriate tool for you. It might even be the most appropriate tool for you. To be clear, I'm not saying you should buy a license just to do a prototype: most vendors offer free-with-limited-features versions or temporary trial licenses, and trial durations of 7 to 30 days are common.
Run Against Your App’s APIs
Does the tool or framework you’re using for your prototype support testing web services? If yes, awesome! Make sure you prototype against your application’s API in addition to any GUI you provide. Note that most API-capable tools can handle your basic APIs, so make sure to automate against “more challenging” APIs.
Also, does the tool work with your squirrelly authentication and authorization scheme? Does it work with your 3rd party authentication provider? Is there some non-standard payload that your APIs deliver? If so, make sure you check the automation tool against that; to insert some concreteness, I’m living through a difficult authentication paradigm at the time of this writing.
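If the candidate tool is code-based, even a small probe is revealing. Here is a minimal sketch of what that probe might look like using Python with pytest and requests; the endpoint paths, token flow, and payload fields are placeholders standing in for your own API and auth provider, not a real service.

```python
import os

import pytest
import requests

BASE_URL = os.environ.get("API_BASE_URL", "https://api.your-app.example.test")


@pytest.fixture(scope="session")
def auth_headers():
    # Hypothetical token endpoint; swap in your real authentication handshake here.
    response = requests.post(
        f"{BASE_URL}/auth/token",
        json={
            "username": os.environ.get("API_USER", "proto_user"),
            "password": os.environ.get("API_PASS", "not-a-real-secret"),
        },
        timeout=10,
    )
    response.raise_for_status()
    return {"Authorization": f"Bearer {response.json()['access_token']}"}


def test_create_order_returns_201(auth_headers):
    # Deliberately pick an endpoint beyond a simple GET to probe the "more challenging" APIs.
    payload = {"sku": "ABC-123", "quantity": 2}
    response = requests.post(
        f"{BASE_URL}/orders", json=payload, headers=auth_headers, timeout=10
    )
    assert response.status_code == 201
    assert "orderId" in response.json()
```

If your auth scheme involves OAuth flows, mutual TLS, or a 3rd party identity provider, substitute that handshake into the fixture; whether the tool can express it cleanly is exactly the data point you're after.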
Exercise Concurrency or Parallelization
Not all automation tools support concurrent (i.e., parallel) execution. Even the ones that do may have limitations with respect to your specific context. Try running test scripts in parallel to ensure you are getting the behavior you expect in addition to the performance you desire. Of note, are the logs and reports you get when running in parallel less helpful than those you get when running sequentially?
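For code-based tools, this can be as simple as a tiny probe. Below is a minimal pytest sketch that assumes the pytest-xdist plugin is installed; the fixture and parametrization exist only to give the workers something to do and to expose collisions on shared state.

```python
import pytest


@pytest.fixture
def unique_user(worker_id):
    # pytest-xdist injects the worker_id fixture ("gw0", "gw1", ...); use it so
    # parallel workers don't collide on shared state such as test accounts or data.
    return f"proto_user_{worker_id}"


@pytest.mark.parametrize("case", range(8))
def test_isolated_per_worker(unique_user, case):
    # Stand-in for a real UI or API check; the point is to watch how logs and
    # reports interleave when these cases run concurrently.
    assert unique_user.startswith("proto_user_")
```

Run it sequentially, then with something like `pytest -n 4`, and compare the console output, logs, and any generated report; that comparison answers the "less helpful in parallel?" question above.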
3rd Party Partners
Which 3rd party service providers does the tool support? More specifically, which managed browser grids and device farms does it support? Is this capability open-ended or does the tool only support specific 3rd parties? Be sure to automate against as many 3rd parties as is feasible to make a responsible decision.
Note that if a tool only supports specific 3rd party infrastructure, that is not necessarily an issue. If, however, you do choose that tool, you must be willing to work with the supported 3rd parties or avoid 3rd parties altogether by, e.g., managing your own Selenium grid, device farm, etc.
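As one concrete probe, if Selenium is part of your candidate stack, a remote-driver sketch like the following (Selenium 4 for Python; the grid URL, app URL, and capability values are placeholders) lets you point the same script at a vendor grid, a device farm's WebDriver endpoint, or your own hub just by changing the executor URL.

```python
import os

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Vendor-specific capabilities (e.g., "sauce:options" or "bstack:options") would be set here.
options.set_capability("browserName", "chrome")

driver = webdriver.Remote(
    command_executor=os.environ.get("GRID_URL", "http://localhost:4444/wd/hub"),
    options=options,
)
try:
    driver.get("https://your-app.example.test/login")
    print(driver.title)
finally:
    driver.quit()
```

If the tool you're evaluating hides this plumbing behind its own configuration, the question becomes which executor URLs and capability sets that configuration will accept.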
Simulate a Major Change
One of the challenges in any automation endeavor, regardless of tool choice, is keeping maintenance effort to a minimum, so it’s important to understand a tool’s capability to handle a refactor or pervasive change. During your prototyping activities, try to simulate having to change values in, say, 500 or more test scripts. This simulation may not be easy to set up, but the information you’ll gain about your future maintainability with this tool will be invaluable.
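If you can't stage 500 real scripts, you can at least probe the mechanism that would make such a change survivable. Here's a hypothetical Python/Selenium sketch (the module, class, and fixture names are mine, not from any particular framework) of centralizing locators so a pervasive UI change becomes one edit instead of hundreds.

```python
# locators.py: the one place a selector change has to happen
from selenium.webdriver.common.by import By


class LoginLocators:
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")


# test_login_smoke.py: one of the many scripts importing the shared locators
def test_login_smoke(driver):  # assumes a `driver` fixture supplied elsewhere
    driver.get("https://your-app.example.test/login")
    driver.find_element(*LoginLocators.USERNAME).send_keys("proto_user")
    driver.find_element(*LoginLocators.PASSWORD).send_keys("not-a-real-secret")
    driver.find_element(*LoginLocators.SUBMIT).click()
```

For record-and-playback or low-code tools, ask the equivalent question: is there a shared object repository or variable store, and how painful is it to update the hundreds of scripts that reference it?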
Look at the Result, Log, and Report Files
Though we understand that test automation development is, in fact, software development, there is an important difference from general application development. For example, the result of buying a product on a website is not an email with an order number; the result is that the buyer receives the product they ordered. In contrast, the result of a test automation script is not just a pass or a fail, a yes or a no, a red or a green. The most valuable "products" of a test automation script are its log, report, and result files: these are where we determine not only the pass/fail status but also what did and didn't happen during a script run. When prototyping with a tool, examining these generated artifacts is essential to a responsible evaluation of the tool itself.
Some considerations when performing this part of an evaluation include:
- Are the logged steps sufficient for you to understand what did and did not occur during the script’s execution?
- If the script failed, is the failure reason sufficiently descriptive for you to debug the issue or report it to another team member?
- Can you add additional log messages or other execution artifacts to the test run to make it easier to debug? (See the sketch just after this list.)
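On that last point, here's a minimal, hypothetical sketch (pytest plus Selenium; the URL, locator, and logger name are placeholders) of enriching a script's artifacts with log lines and a screenshot captured on failure.

```python
import logging

from selenium.webdriver.common.by import By

logger = logging.getLogger("proto.checkout")


def test_checkout_reports_useful_detail(driver, tmp_path):  # assumes a `driver` fixture
    logger.info("Opening checkout page")
    driver.get("https://your-app.example.test/checkout")
    logger.info("Placing order")
    driver.find_element(By.ID, "place-order").click()
    try:
        assert "Thank you" in driver.page_source
    except AssertionError:
        # Capture extra evidence so the report explains why it failed, not just that it did.
        screenshot = tmp_path / "checkout_failure.png"
        driver.save_screenshot(str(screenshot))
        logger.error("Confirmation text missing; screenshot saved to %s", screenshot)
        raise
```

Run it with pytest's `--log-cli-level=INFO` (or wire the logging into whatever reporting the candidate tool produces) and then judge whether the resulting artifacts would actually help a teammate debug the failure.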
Most assuredly, the considerations above are a subset of what you want to exercise during a prototyping activity. Every team, organization, application, company, etc. has different needs and requirements. In fact, some of the above may not apply to your specific context.
There is one other thing of which to be mindful. When we create code for a prototype or proof of concept, we are creating it to prove that a concept or an implementation is feasible and is a good candidate for our needs. The code we create during this process should be developed as quickly and economically as is responsible. This means taking shortcuts, "making" things work, and driving to an "it works" or "it doesn't work" answer as soon as is reasonable. Further, it means we need to be prepared to throw away the code we created during these endeavors.
“Wait! No! We just spent weeks creating this and it’s working! We can’t just throw it away!”
Yes, you can, and you should; in some cases, you must. Because this code was created by taking shortcuts, "making" things work, and driving to an "it works" or "it doesn't work" conclusion, it is typically not in a supportable, future-thinking state. In many cases, it will be more economical to rewrite the code than to maintain it over its life. For code that is sufficiently close to an appropriate state of supportability, a refactor of the existing code may be more appropriate than a complete rewrite, but that decision is situationally dependent.
Like this? Catch me at an upcoming event!