In this post, I’ll share how we identified where our automation strategy lacked much-needed coverage, and how we then found a simple yet powerful technology and service to complement our existing functional automation framework.
This new strategy saved us time and resources, ultimately ensuring QA’s success in detecting regression failures from localization updates. It shortened execution time while enhancing overall coverage and reliability, and it increased all our stakeholders’ confidence in our releases.
Even if you or your QA team doesn’t cover localization testing, I think this post will still help you consider alternate test automation strategies built around visual testing.
Background
Years ago, I worked as a QA manager for an IT healthcare provider, spending a considerable amount of time on test automation design and strategy. This Fortune 100 company served millions of under-served and under-privileged people across 10 states, helping those in need obtain healthcare services.
Ironically, this is where I got a crash course in the importance of localization. With that in mind, I’d like to give you my definition of localization testing:
Ensuring that all targeted and/or served parties, regardless of language, nationality, or culture, are equally and fairly respected and presented with the same information and content.
As a member of QA, this meant that our production deployments had to be precise and accurate in their language translations, just like our functional features and workflows, because they would reach millions of members. Our responsibility extended not only to our program managers and business stakeholders, but also to parents seeking health coverage benefits and requirements for their children’s welfare and health.
This wasn’t just a social responsibility; it was mandated by federal and state law. It was also my first experience with the complexities and real challenges of localization testing.
Challenges
Our QA automation strategy successfully leveraged commercial products and open-source projects for our API, mid-tier, and UI/functional tests. So why were we still getting defects, often regression failures, reported after deployments into the QA environment, and occasionally in production? It turns out that a significant percentage were regression bugs, including easy one-offs. And if you’re wondering what these defects were, you may be surprised to learn that they were “localization” issues.
For example, a UI change in a CSS/LESS file, or even an image file, used for one set of localizations would incorrectly impact other localizations. Imagine you’re logged into the system with your preferred language, such as 日本語, and the ‘Help’ link is displayed incorrectly in 한국어 or Русский.
Another key reason was the lack of emphasis on testing the actual “rendered” page/content. Selenium is great for parsing the generated HTML/DOM, manipulating and injecting executable JavaScript, and obtaining attributes and their values to implement automated tests. However, Selenium lacks the ability to visually verify what the user actually “sees”, including what is “seen” across different resolutions and browsers!
Here, having a reliable and scalable automation framework leveraging Selenium wasn’t enough. After all, validating statically generated text, compiled from one localization to another, would cover the generated “text”, but that was just a small piece of the overall process. The sketch below illustrates the gap.
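To make the gap concrete, here is a minimal sketch; the URL, locale-switching query parameter, and link text are hypothetical. A DOM-level assertion like this passes as long as the translated string exists in the markup, no matter how badly it renders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Hypothetical page and locale-switching query parameter.
    driver.get("https://example.com/?lang=ja")

    # Selenium parses the DOM: this passes whenever the translated
    # string is present in the markup...
    help_link = driver.find_element(By.LINK_TEXT, "ヘルプ")
    assert help_link.is_displayed()

    # ...but it cannot tell whether the link is clipped, wrapped onto a
    # second line, rendered in the wrong font, or overlapped by another
    # element -- exactly the class of localization bugs described above.
finally:
    driver.quit()
```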
Here are just some of the issues that localization testing has to catch:
- Impact to existing functionality
- Incorrect translations from compiling localizations
- Impact to other localizations
- Layout:
  - Across multiple device resolutions
  - Across multiple devices (e.g. major browsers/iPhone 6 Plus)
- Text/word wrap
- Missing text
- Incorrect styling:
  - Text font
  - Text style
  - Colors
- Incorrect or missing images
Limited Automation Strategy
On the surface, localization testing looks simple and straightforward. However, it is often a very manual and arduous task, and one unlikely to have an automation strategy, since it requires expertise in understanding “language translations” and even cultural overtones.
The technical challenge often becomes “How is the UI impacted?” more so than “Did the right localization get translated?”.
The challenge for business and QA management was that each supported locale had to be subjected to thorough QA verification, including review by a localization expert (often one specific to that locale).
This meant lengthy days or weeks of manual validation. The dependency on multiple localization experts to help with testing was also a tax on project resources.
Our Problem
I was excited, proud, and then overwhelmed after joining Concur, an SAP company that provides the number one corporate travel and expense software in the world, when I found that they support nearly 30 languages. On top of that, the highly technical development and QA teams have the expertise to roll out updates quickly and efficiently!
My responsibilities, and my passion, included working with the core QA/test automation team to find a viable solution for reliable, efficient, and fast localization testing. In other words: finding an automation strategy for localization.
By the way, did I mention that our strategy also had to cover the multiple browsers and various resolutions used by our millions of users around the world? How could this be done? I only speak three and a half languages!
Localization testers know that those tests are never limited to just ‘text’ verifications.
Here are just a few issues that could result from localization changes (a formatting sketch follows the list):
- Invalid currency symbols and names
- Invalid or inappropriate date and time formatting
- Missing or incorrect character sets
- Incorrect salutations based on locale (especially for Southeast Asian locales)
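Locale-sensitive formatting alone is easy to get wrong. Here is a minimal sketch of what “correct per locale” means for dates and currency, assuming the third-party Babel library (`pip install babel`); the printed values are illustrative:

```python
from datetime import date

from babel.dates import format_date
from babel.numbers import format_currency

release_day = date(2015, 4, 1)

# The same date renders differently per locale:
us = format_date(release_day, locale="en_US")  # e.g. 'Apr 1, 2015'
de = format_date(release_day, locale="de_DE")  # e.g. '01.04.2015'
assert us != de  # identical data, locale-specific presentation

# Currency symbols, digit grouping, and decimal rules also vary:
print(format_currency(1234.56, "USD", locale="en_US"))  # e.g. $1,234.56
print(format_currency(1234.56, "EUR", locale="de_DE"))  # e.g. 1.234,56 €
print(format_currency(1234.56, "JPY", locale="ja_JP"))  # JPY uses no decimals
```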
Typical UI regression tests for localization updates were composed of test suite upon test suite covering sets of assertions to ensure that features met requirements. However, due to the limitations of Selenium, those assertions had no bearing on what was actually rendered (visually) on the target device.
In the past, applying visual testing was often frustrating, complex, and unreliable unless a lot of code ‘duct tape’ was constantly applied. As a result, visual testing was not an option.
Well, never say “never”.
Solution
During the spring of 2015, I presented on designing an application-model-based test framework at a QA conference in San Diego, CA. I also attended workshops and other sessions. One of the more popular sessions, which created a buzz among the attendees, was from Applitools.
I attended the Applitools session and quickly realized that visual testing with Applitools Eyes had come a long way since the days of ImageMagick and the limitations of Sikuli. The session had me reassess the possibility of automated visual testing … adapted for localization testing.
Now, with minimal effort, our Selenium-based automation framework, integrated with Applitools Eyes, is used to automate our UI localization tests across all our supported browsers at varying resolutions! Here’s the high-level process we use for executing those tests (a condensed code sketch follows the list):
- Create a baseline of the target localizations prior to the deployment of the release candidate
- Re-execute the automated Eyes tests soon after the new deployment
- Analyze the Eyes results (in the Applitools test results dashboard and in custom-generated reports):
  - By QA engineers and product owners
  - By localization experts
- Generate visual reports of any changes between the release candidate and the baseline
- Groom and update the baseline as needed
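The first two steps collapse into one script, since Eyes records a baseline on the first run and compares every later run against it. Here is a minimal sketch using the Applitools Eyes SDK for Selenium/Python (`pip install eyes-selenium`); the app name, test names, URL, and locale list are hypothetical:

```python
import os

from selenium import webdriver
from applitools.selenium import Eyes, Target

LOCALES = ["en", "ja", "ko", "ru"]         # locales under test
VIEWPORT = {"width": 1280, "height": 800}  # one of several resolutions

for locale in LOCALES:
    driver = webdriver.Chrome()
    eyes = Eyes()
    eyes.api_key = os.environ["APPLITOOLS_API_KEY"]
    try:
        # The first run records the baseline; each run after a new
        # deployment is compared against that stored baseline.
        eyes.open(driver, "Travel App", f"Help page [{locale}]", VIEWPORT)
        driver.get(f"https://qa.example.com/help?lang={locale}")
        eyes.check(f"Help page ({locale})", Target.window().fully())
        eyes.close()  # raises if visual differences were found
    finally:
        eyes.abort_if_not_closed()
        driver.quit()
```

Re-running the same script against additional browsers and window sizes covers the resolution matrix without any new test logic.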
Benefits
The simplicity and power of Eyes empowered our localization experts to continue their manual tests, with their focus shifted to complex areas, while automated visual testing delivered improved general-purpose coverage and test validation/execution. Freeing up QA resources, while automating tests across different resolutions and browsers, added a new layer of quality that manual testing could never deliver.
Practical Use Cases
Automated UI testing, especially regression/smoke testing that covers changes to the actual rendered content, using a reliable, simple, and powerful tool, is now a reality. Furthermore, applying this power to localization testing is a natural fit.
Extending the scope of your existing Selenium-based framework with visual automation of your localization tests, across all the major browsers and resolutions that represent the majority of your users, is paramount to a successful QA test plan. Equally important, managing the generated results (e.g. images) and seeing the evolution of localization changes from release to release provides metrics and archival support, effectively improving collaboration among all the stakeholders. As an example, our automated tests post all targeted test results from Applitools to their respective Slack channels, so all stakeholders immediately see those reports and visual results in real time (a sketch of this hand-off follows).
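Here is a minimal sketch of that Slack hand-off, using a Slack incoming webhook and the requests library. The webhook URL is a placeholder, and the function and field names are illustrative of what our wrapper pulls from the Applitools test results, not our actual code:

```python
import requests

# Placeholder incoming-webhook URL for the team's channel.
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_visual_result(test_name: str, status: str, results_url: str) -> None:
    """Post a one-line summary with a link to the Applitools dashboard."""
    icon = ":white_check_mark:" if status == "Passed" else ":x:"
    text = f"{icon} {test_name}: {status} | {results_url}"
    requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)

# Example call after a visual test finishes:
post_visual_result("Help page [ja]", "Passed",
                   "https://eyes.applitools.com/app/batches/...")
```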
Best Practices
Over several months of running and managing our automated localization tests, we’ve assembled best practices that help us with test-result reliability and scalability. Here are some of them (a combined code sketch follows the list):
- Manage access and updates to your QA environment and applicable test-ware (e.g. user accounts)
- Create a visual baseline, with Applitools Eyes, as late as possible, even moments prior to the new release (in your QA/test environment), in order to minimize the additional “grooming” needed as the baseline changes over time
- Use reliable Selenium locators that aren’t affected by localizations:
  - Avoid references to generated text (e.g. //*[text()=”Hello”])
- If your web site supports bookmarks, navigate to target pages via URLs rather than navigational links
- Prefer full-page screenshot validation over region validation where possible, to avoid region locators that must be maintained as the application evolves
- Empower the localization engineers to mark up the baseline images
- Empower the localization engineers to execute the visual tests at a time of their own choosing (scheduled, manually kicked off, or a combination thereof)
- Integrate Eyes with your existing test automation framework; chances are that Applitools already supports the technology that powers your UI automation
- Define a naming convention to quickly identify and categorize your Applitools Eyes test results (e.g. appName, title, build)
- Leverage “batches” to manage tests that cover the same, or similar, requirements (user stories)
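A few of these practices (locale-independent locators, URL navigation, full-page checks, naming, and batches) combine naturally in code. This is a sketch under assumed names; the data-test attribute, URL, and BatchInfo usage follow the classic Eyes SDK conventions rather than our actual framework:

```python
import os

from selenium import webdriver
from selenium.webdriver.common.by import By
from applitools.selenium import BatchInfo, Eyes, Target

# Batch by user story so related checks group together in the dashboard;
# encode app, page, locale, and build in the names for quick triage.
batch = BatchInfo("US-1234: Help page localization")

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = os.environ["APPLITOOLS_API_KEY"]
eyes.batch = batch

try:
    eyes.open(driver, "Travel App", "Help page [ko] build-512",
              {"width": 1280, "height": 800})

    # Navigate by URL instead of clicking through localized menus.
    driver.get("https://qa.example.com/help?lang=ko")

    # Locale-independent locator: an id or data attribute survives
    # translation, while //*[text()='Help'] breaks in every other locale.
    driver.find_element(By.CSS_SELECTOR, "[data-test='help-link']").click()

    # Full-page check: no hand-maintained region coordinates.
    eyes.check("Help page", Target.window().fully())
    eyes.close()
finally:
    eyes.abort_if_not_closed()
    driver.quit()
```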
In Conclusion…
In the past, automated visual testing was flaky and unreliable, especially for industrial-strength test automation. Over the past year, however, visual testing has become a reality for our QA engineering teams here at Concur/SAP. Our QA teams have been successfully leveraging Applitools Eyes to complement our functional tests, with little impact on our existing Selenium-based frameworks. The addition of an automated test strategy targeted at localization testing has reduced our overall manual-testing workload and improved the quality of our releases.
If your product supports localization, or global users and markets, and would benefit from a sound and proven automation strategy for localization, you should strongly consider visual testing: it’s simple, fast, and powerful.
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
About Peter Kim – Sr. Software Design Engineer, Concur Technologies (an SAP company)
Peter Kim is an experienced senior software design engineer with a strong background in designing new test automation strategies that leverage metadata/dynamic programming designs.
Peter has led start-ups and Fortune 10 companies to successful testing strategies throughout SDLC/Agile processes. At Concur, as a member of the automation strategy best practices team, Peter and his automation engineering team have dramatically increased the scope and coverage of localization testing with visual testing, with a new twist on design à la “Lewis and Clark” and the Boy/Girl Scouts.