As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust set of features to create fast, reliable, and maintainable tests.
In a recent webinar, Playwright Ambassador and TAU instructor Renata Andrade shared several use cases and best practices for using the framework. Here are some of the most valuable takeaways for test automation engineers:
Use Playwright’s built-in locators for resilient tests.
Playwright recommends using attributes like “text”, “aria-label”, “alt”, and “placeholder” to find elements. These locators are less prone to breakage, leading to more robust tests.
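For illustration, here is a minimal sketch of what those attribute-based locators look like in a test. The page URL, field labels, and button text are hypothetical, and the getBy* helpers assume a reasonably recent Playwright version:

```ts
import { test, expect } from '@playwright/test';

test('signs in using resilient built-in locators', async ({ page }) => {
  await page.goto('https://example.com/login'); // hypothetical page

  // Locate by placeholder, accessible label, role/name, text, and alt text
  await page.getByPlaceholder('Email address').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret');
  await page.getByRole('button', { name: 'Sign in' }).click();

  await expect(page.getByText('Welcome back')).toBeVisible();
  await expect(page.getByAltText('Company logo')).toBeVisible();
});
```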
Speed up test creation with the code generator.
The Playwright code generator can automatically generate test code for you. This is useful when you’re first creating tests to quickly get started. You can then tweak and build on the generated code.
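For example, you can launch the code generator from the command line against any URL (the URL below is only a placeholder); it opens a browser, records your interactions, and emits test code you can copy into your suite:

```bash
npx playwright codegen https://example.com
```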
Debug tests and view runs with UI mode and the trace viewer.
Playwright’s UI mode and VS Code extension provide visibility into your test runs. You can step through tests, pick locators, view failures, and optimize your tests. The trace viewer gives you a detailed trace of all steps in a test run, which is invaluable for troubleshooting.
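As a reference, recent Playwright versions expose these tools from the command line; the trace file path below is a placeholder for whatever your own run produces:

```bash
# Open UI mode to step through tests, pick locators, and inspect failures
npx playwright test --ui

# Record a trace on the first retry of a failing test, then open it
npx playwright test --trace on-first-retry
npx playwright show-trace test-results/my-test/trace.zip
```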
Add visual testing with Applitools Eyes.
For complete validation, combine Playwright with Applitools for visual and UI testing. Applitools Eyes catches unintended changes in UI that can be missed by traditional test automation.
Handle dynamic elements with the right locators.
Use a combination of attributes like “text”, “aria-label”, “alt”, “placeholder”, CSS, and XPath to locate dynamic elements that frequently change. This enables you to test dynamic web pages.
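Here is a hedged sketch of mixing those strategies in one test; the page and the selectors are hypothetical and only illustrate the idea of falling back from resilient attributes to CSS and XPath:

```ts
import { test, expect } from '@playwright/test';

test('locates dynamic elements with mixed strategies', async ({ page }) => {
  await page.goto('https://example.com/shop'); // hypothetical page

  // Accessible name stays stable even if the element moves or is restyled
  await page.getByRole('button', { name: 'Add to cart' }).click();

  // CSS for an element whose text is dynamic but whose class is stable
  await expect(page.locator('.product-card .price')).toBeVisible();

  // XPath as a last resort when no stable attribute or class exists
  await expect(page.locator('xpath=//table[@id="orders"]//tr[last()]')).toBeVisible();
});
```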
Set cookies to test personalization.
You can set cookies in Playwright to handle scenarios like A/B testing where the web page or flow differs based on cookies. This is important for testing personalization on websites.
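A minimal sketch of seeding a cookie before the page loads, assuming a hypothetical A/B-test cookie name, value, and domain:

```ts
import { test, expect } from '@playwright/test';

test('shows the variant controlled by a cookie', async ({ context, page }) => {
  // Hypothetical experiment cookie that switches the page into variant "B"
  await context.addCookies([
    { name: 'ab_variant', value: 'B', domain: 'example.com', path: '/' },
  ]);

  await page.goto('https://example.com');
  await expect(page.getByText('New checkout experience')).toBeVisible();
});
```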
Playwright provides a robust set of features to build, run, debug, and maintain end-to-end web tests. By leveraging the use cases and best practices shared in the webinar, you can power up your test automation and build a successful testing strategy using Playwright. Watch the full recording and see the session materials.
Learn why cross-browser testing is so important and an approach you can take to make cross-browser testing with Selenium much faster.
Cross-browser testing is a form of functional testing in which an application is tested on multiple browsers (Chrome, Firefox, Edge, Safari, IE, etc.) to validate that functionality performs as expected.
In other words, it is designed to answer the question: Does your app work the way it’s supposed to on every browser your customers use?
While modern browsers generally conform to key web standards today, important problems remain. Differences in interpretations of web standards, varying support for new CSS or other design features, and rendering discrepancies between the different browsers can all yield a user experience that is different from one browser to the next.
A modern application needs to perform as expected across all major browsers. Not only is this a baseline user expectation these days, but it is critical to delivering a positive user experience and a successful app.
At the same time, the number of screen combinations (between screen sizes, devices and versions) is rising quickly. In recent years the number of screens required to test has exploded, rising to an industry average of 81,480 screens and reaching 681,296 for the top 30% of companies.
Ensuring complete coverage of each screen on every browser is a common challenge. Effective and fast cross-browser testing can help alleviate the bottleneck from all these screens that require testing.
Traditional approaches to cross-browser testing in Selenium have existed for a while, and while they still work, they have not scaled well to handle the challenge of complex modern applications. They can be time-consuming to build, slow to execute and challenging to maintain in the face of apps that change frequently.
Applitools Developer Advocate and Test Automation University Director Andrew Knight (AKA Pandy Knight) recently conducted a hands-on workshop where he explored the history of cross-browser testing, its evolution over time and the pros and cons of different approaches.
Andrew then explores a modern cross-browser testing solution with Selenium and Applitools. He walks you through a live demo (which you can replicate yourself by following his shared GitHub repo) and explains the benefits and how to get started. He also covers how you can accelerate test automation with integration into CI/CD to achieve Continuous Testing.
Check out the workshop below, and follow along with the GitHub repo here.
At Applitools we are dedicated to making software testing faster and easier so that testers can be more effective and apps can be visually perfect. That’s why we created our industry-leading Visual AI and built the Applitools Ultrafast Grid, a key component of the Applitools Test Cloud that enables ultrafast cross-browser testing. If you’re looking to do cross-browser testing better but don’t use Selenium, be sure to check out these links too for more info on how we can help:
Learn about common localization bugs, the traditional challenges involved in finding them, and solutions that can make localization testing far easier.
Localization is the process of customizing a software application that was originally designed for a domestic market so that it can be released in a specific foreign market.
Localization testing usually involves substantial changes to the application’s UI, including the translation of all texts to the target language, replacement of icons and images, and many other culture-, language-, and country-specific adjustments that affect the presentation of data (e.g., date and time formats, alphabetical sorting order, etc.). Due to the lack of in-house language expertise, localization usually involves in-house personnel as well as outside contractors and localization service providers.
Before a software application is localized for the first time, it must undergo a process of Internationalization.
Internationalization often involves an extensive development and re-engineering effort whose goal is to allow the application to operate in localized environments and to correctly process and display localized data. In addition, locale-specific resources such as texts, images, and documentation files are isolated from the application code and placed in external resource files, so they can be easily replaced without requiring further development efforts.
Once an application is internationalized, the engineering effort required to localize it to a new language or culture is drastically reduced. However, the same is not true for UI localization testing.
Every time an application is localized to a new language, the application changes, or the resources of a supported localization change, the localized UI must be thoroughly tested for localization and internationalization (LI) bugs.
LI bugs that can be detected by testers who are not language experts include:
Other common LI bugs which can only be detected with the help of a language expert include:
An unfortunate characteristic of LI bugs is that they require a lot of effort to find. To uncover such bugs, a tester (assisted by a language expert) must carefully inspect each and every window, dialog, tooltip, menu item, and any other UI state of the application. Since most of these bugs are sensitive to the size and layout of the application, tests must be repeated on a variety of execution environments (e.g., different operating systems, web browsers, devices, etc.) and screen resolutions. Furthermore, if the application window is resizable, tests should also be repeated for various window sizes.
There are several other factors that contribute to the complexity of UI Localization testing:
Due to these factors, maintaining multiple localized application versions and adding new ones incurs a huge overhead on quality assurance teams.
Fortunately, there is a modern solution that can make localization testing significantly easier – Automated Visual Testing.
Visual test automation tools can be applied to UI localization testing to eliminate unnecessary manual involvement of testers and language experts, and drastically shorten test cycles.
To understand this, let’s first understand what visual testing is, and then how to apply visual testing to localization testing.
Visual testing is the process of validating the visual aspects of an application’s User Interface (UI).
In addition to validating that the UI displays the correct content or data, visual testing focuses on validating the layout and appearance of each visual element of the UI and of the UI as a whole. Layout correctness means that each visual element of the UI is properly positioned on the screen, is of the right shape and size, and doesn’t overlap or hide other visual elements. Appearance correctness means that the visual elements are of the correct font, color, or image.
Visual Test Automation tools can automate most of the activities involved in visual testing. They can easily detect many common UI localization bugs such as text overlap or overflow, layout corruptions, oversized windows and dialogs, etc. All a tester needs to do is to drive the Application Under Test (AUT) through its various UI states and submit UI screenshots to the tool for visual validation.
For simple websites, this can be as easy as directing a web browser to a set of URLs. For more complex applications, some buttons or links should be clicked, or some forms should be filled in order to reach certain screens. Driving the AUT through its different UI states can be easily automated using a variety of open-source and commercial tools (e.g., Selenium, Cypress, etc.). If the tool is properly configured to rely on internal UI object identifiers, the same automation script/program can be used to drive the AUT in all of its localized versions.
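As a rough sketch of that idea, the loop below drives a browser to hypothetical locale-specific URLs and submits each page for visual validation. It assumes the Applitools Eyes SDK for Playwright (the same kind of Eyes API used later in this guide); the app name, test names, and URLs are placeholders.

```ts
import { chromium } from 'playwright';
import { Eyes, Target } from '@applitools/eyes-playwright';

const locales = ['en-US', 'de-DE', 'ja-JP']; // hypothetical locales under test

(async () => {
  // Assumes APPLITOOLS_API_KEY is set in the environment
  const browser = await chromium.launch();
  const page = await browser.newPage();

  for (const locale of locales) {
    const eyes = new Eyes();
    await eyes.open(page, 'My Localized App', `Home page - ${locale}`);
    await page.goto(`https://example.com/${locale}/`); // hypothetical locale URL
    await eyes.check(`Home page (${locale})`, Target.window().fully());
    await eyes.close(false);
  }

  await browser.close();
})();
```

Because the script relies only on internal element identifiers and locale-independent navigation, the same loop can cover every localized version of the site.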
So, how can we use this to simplify UI localization testing?
Application localization is notoriously difficult and complex. Manually testing for UI localization bugs, during and between localization projects, is extremely time consuming, error-prone, and requires the involvement of external language experts.
Visual test automation tools are a modern breed of test automation tools that can effectively eliminate unnecessary manual involvement, drastically shorten the duration of localization projects, and increase the quality of localized applications.
Applitools has pioneered the use of Visual AI to deliver the best visual testing in the industry. You can learn more about how Applitools can help you with localization testing, or to get started with Applitools today, request a demo or sign up for a free Applitools account.
Editor’s Note: Parts of this post were originally published in two parts in 2017/2018, and have since been updated for accuracy and completeness.
In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options and how you can get started with cross-browser testing today.
Cross Browser Testing is a testing method for validating that the application under test works as expected on different browsers, at varying viewport sizes, and devices. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.
When I began in QA I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and with different browser types. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.
Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop.
This should be possible while ensuring:
There are various aspects to consider while implementing your cross-browser testing strategy.
Different devices and browsers: Chrome, Safari, Firefox, Edge.
Thankfully IE is not in the list anymore (for most)!
You should first figure out the important combinations of devices, browsers, and viewport sizes your user base is accessing your application from.
PS: Each team member should have access to the analytics data of the product to understand patterns of usage. This data, which includes OS and browser details (type, version, viewport sizes), is essential to plan and test proactively, instead of later reacting to situations (= defects).
This will tell you the different browser types, browser versions, devices, viewport sizes you need to consider in your testing and test automation strategy.
There are various ways you can perform cross-browser testing. Let’s understand them.
We usually have multiple browsers on our laptops and desktops. While there are other ways to get started, it is probably simplest to start implementing your cross-browser tests here. You also need a local setup to enable debugging and maintaining/updating the tests.
If mobile-web is part of the strategy, then you also need to have the relevant setup available on local machines to enable that.
While this may seem the easiest, it can get out of control very quickly.
Examples:
The choices can actually vary based on the requirements of the project and on a case by case basis.
As alternatives, we have the liberty to either create an in-house testing solution or go for a platform, license, or third-party tool to support our device farm needs.
You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices.
This infrastructure can potentially be used in the following ways:
You can also opt to run the tests against browsers/devices in a cloud-based solution. You can select different device/browser options offered by various providers in the market that give you wide coverage per your requirements, without having to build, maintain, or manage that infrastructure yourself. This can also be used to run tests triggered from local machines, or from CI.
It is important to understand the evolution of browsers in recent years.
We need to factor this change into our cross-browser testing strategy.
In addition, AI-based cross-browser testing solutions are becoming quite popular, which use machine learning to help scale your automation execution and get deep insights into the results – from a functional, performance and user-experience perspective.
To get hands-on experience with this, I signed up for a free Applitools account, which uses a powerful Visual AI, and implemented a few tests using this tutorial as a reference.
Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.
Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.
Reference: https://applitools.com/tutorials/overview/how-it-works.html
Now you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.
What if I told you that, just by adding the different device combinations, you could leverage the same single script to get functional and visual test results across all of the specified combinations, covering the cross-browser testing aspect as well?
Seems too far-fetched?
It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!
Adding the lines of code below will do the magic. You can also change the configuration as per your requirements.
(The example below is from the Selenium Java SDK. Similar configuration can be supplied for the other SDKs.)
// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);
// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
// Set the configuration object to eyes
eyes.setConfiguration(config);
Now when you run the test again, say against the Chrome browser on your laptop, the Applitools dashboard will show results for all the browser and device combinations provided above.
You may be wondering, the test ran just once on the Chrome browser. How did the results from all other browsers and devices come up? And so fast?
This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:
During each eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid, which renders it against every browser and device combination in your configuration.
What I like about this AI-based solution is that it delivers all of this extra coverage without requiring any additional test code.
Here is the screenshot of the Applitools dashboard after I ran my sample tests:
The Ultrafast Grid and Applitools Visual AI can be integrated with many popular free and open-source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.
As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.
Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.
Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.
| | Local Setup | In-House Setup | Cloud Solution | AI-Based Solution (Applitools) |
|---|---|---|---|---|
| Infrastructure | Pros: fast feedback on the local machine. Cons: needs to be repeated for each machine where the tests need to execute; all configurations cannot be set up locally. | Pros: no inbound/outbound connectivity required. Cons: needs considerable effort to set up, maintain, and update the infrastructure on a continued basis. | Pros: no effort required to build/maintain/update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network; latency issues may be seen as requests go to cloud-based browsers/devices. | Pros: no effort required to set up. |
| Setup and Maintenance | To be taken care of by each team member from time to time, including OS/browser version updates | To be taken care of by the internal team from time to time, including OS/browser version updates | To be taken care of by the service provider | To be taken care of by the service provider |
| Speed of Feedback | Slowest, as all dependencies must be taken care of and the test needs to be repeated for each browser/device combination | Depends on concurrent usage due to multiple test runs | Depends on network latency; network issues may cause intermittent failures; depends on the reliability and connectivity of the service provider | Fast and seamless scaling |
| Security | Best, as it is in-house, using internal firewalls, VPNs, network, and data storage | Best, as it is in-house, using internal firewalls, VPNs, network, and data storage | High risk: needs inbound network access from the service provider to the internal test environments; browsers/devices will have access to the data generated by running the test, so cleanup is essential; no control over who has access to the cloud service provider's infrastructure or whether they access your internal resources | Low risk: there is no inbound connection to your internal infrastructure; tests run on the internal network, so no data is stored on Applitools servers (other than screenshots used for comparison with the baseline) |
Depending on your project strategy, scope, manual or automation requirements and of course, the hardware or infrastructure combinations, you should make a choice that not only suits the requirements but gives you the best returns and results.
Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code and got amazingly high test coverage with very high accuracy. I recommend that everyone try this and experience it for themselves!
Want to get started with Applitools today? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.
Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.
In this guide, you’ll learn what visual regression testing is and why visual regression tests are important. We’ll go through a use case with an example and talk about how to get started and choose the best tool for your needs.
Visual regression testing is a method of validating that changes made to an application do not negatively affect the visual appearance of the application’s user interface (UI). By verifying that the layout and visual elements align with expectations, the goal of visual regression testing is to ensure the user experience is visually perfect.
Visual regression testing is a kind of regression testing. In regression testing, an application is tested to ensure that a new change to the code doesn’t break existing functionality. Visual regression testing specifically focuses on verifying the appearance and the usability of the UI after a code change.
In other words, visual regression testing (also called just visual testing or UI testing) is focused on validating the appearance of all the visual elements a user interacts with or sees. These visual validations include the location, brightness, contrast and color of buttons, menus, components, text and much more.
Visual regression tests are important to prevent costly visual bugs from escaping into production. Failure to visually validate can severely compromise the user experience and in many cases lead directly to lost sales. This is because traditional functional testing works by simply validating data input and output. This method of testing catches many bugs, but it can’t discover visual bugs. Without visual testing these bugs are prone to slipping through even on an otherwise well tested application.
As an example, here is a screenshot of a visual bug in production on the Southwest Airlines website:
This page would pass a typical suite of functional tests because all of the elements are present on the page and have loaded successfully. However, the visual bug is obvious. Not only that, but because the Terms and Conditions are inadvertently overlapping the button, the user literally cannot check out and complete their purchase. Visual regression testing would catch this kind of bug easily before it slipped into production.
Visual testing can also enhance functional testing practices and make them more efficient. Because visual tests can “see” the elements on a page they do not have to rely on individual coded assertions using unique selectors to validate each element. In a traditional functional testing suite, these assertions are often time-consuming to create and maintain as the application changes. Visual testing greatly simplifies that process.
At its core, visual regression testing works by capturing screenshots of the UI before a change is made and comparing it to a screenshot taken after. Differences are then highlighted for a test engineer to review. In practice, there are several different visual regression testing techniques available.
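To make the core idea concrete, here is a naive do-it-yourself sketch of that baseline/checkpoint comparison using the open-source pixelmatch and pngjs libraries. The file names are placeholders, both screenshots are assumed to have the same dimensions, and this illustrates only the simple pixel-diff approach, not how an AI-based tool works internally:

```ts
import * as fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

// Load the "before" (baseline) and "after" (checkpoint) screenshots
const baseline = PNG.sync.read(fs.readFileSync('baseline.png'));
const checkpoint = PNG.sync.read(fs.readFileSync('checkpoint.png'));

const { width, height } = baseline;
const diff = new PNG({ width, height });

// Count differing pixels and write a highlighted diff image for review
const changedPixels = pixelmatch(
  baseline.data, checkpoint.data, diff.data, width, height, { threshold: 0.1 }
);

fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${changedPixels} pixels differ`);
```

Purpose-built visual testing tools automate the capture, comparison, and review workflow on top of this basic idea.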
Getting started with automated visual regression testing takes only a few steps. Let’s walk through the typical visual regression testing process and then consider a brief example.
Let’s review a quick example of the four steps above with a basic use case, such as a login screen.
Choosing the best tool for your visual regression tests will depend on your needs, as there are many options available. Here are some questions you should be asking as you consider a new tool:
Automated visual testing tools can be paid or open source. Visual testing tools are typically paired with an automated testing tool to automatically handle interactions and take screenshots. Some popular open source automated testing tools compatible with visual testing include Selenium for web testing and Appium for mobile testing.
Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.
The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.
You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.
Happy Testing!
The visual aspect of a website or an app is the first thing that end users will encounter when using the application. For businesses to deliver the best possible user experience, having appealing and responsive websites is an absolute necessity.
More than ever, customers expect apps and sites to be intuitive, fast, and visually flawless. The number of screens across applications, websites, and devices is growing rapidly, and the cost of testing is rising with it. Managing visual quality effectively is now becoming a MUST.
Visual testing is the automated process of comparing the visible output of an app or website against an expected baseline image.
In its most basic form, visual testing, sometimes referred to as Visual UI testing, Visual diff testing or Snapshot testing, compares differences in a website page or device screen by looking at pixel variations. In other words, testing a web or native mobile application by looking at the fully rendered pages and screens as they appear before customers.
While visual testing has been a popular solution for validating UIs, there have been many flaws in the traditional methods of getting it done. In the past, there have been two traditional methods of visual testing: DOM Diffs and Pixel Diffs. These methods have led to an enormous amount of false positives and lack of confidence from the teams that have adopted them.
Applitools Eyes, the only visual testing solution to use Visual AI, solves all the shortcomings of traditional visual testing, vastly improving test creation, execution, and maintenance.
Pixel diffing refers to pixel-by-pixel comparisons, in which the testing framework will flag literally any difference it sees between two images, regardless of whether the difference is visible to the human eye or not.
While such comparisons provide an entry point into visual testing, they tend to be flaky and can lead to a lot of false positives, which is time-consuming.
When working with the web, you must take into consideration that things tend to render slightly differently between page loads and browser updates. If the browser renders the page off by 1 pixel due to a rendering change, your text cursor is showing, or an image renders differently, your release may be blocked due to these false positives.
Here are some examples of what this approach cannot handle:
Pixel-based comparisons exhibit the following deficiencies:
Take for instance these two examples:
In the DOM diffing approach, the tool captures the DOM of the page and compares it with the DOM captured from a previous version of the page.
Comparing DOM snapshots does not mean the output in the browser is visually identical. Your browser renders the page from the HTML, CSS and JavaScript, which comprises the DOM. Identical DOM structures can have different visual outputs and different DOM outputs can render identically.
Some differences that a DOM diff misses:
DOM comparators exhibit three clear deficiencies:
In short, DOM diffing ensures that the page structure remains the same from page to page. DOM comparisons on their own are insufficient for ensuring visual integrity.
A combination of Pixel and DOM diffs can mitigate some of these limitations (e.g., identify DOM differences that render identically) but is still susceptible to many false-positive results.
Modern approaches have incorporated artificial intelligence, known as Visual AI, to view as a human eye would and avoid false positives.
Visual AI is a form of computer vision invented by Applitools in 2013 to help quality engineers test and monitor today’s modern apps at the speed of CI/CD. It is a combination of hundreds of AI and ML algorithms that help identify when things that actually matter go wrong in your UI. Visual AI inspects every page, screen, viewport, and browser combination for both web and native mobile apps and reports back any regression it sees. Visual AI looks at applications the same way the human eye and brain do, but without tiring or making mistakes. It helps teams greatly reduce false positives that arise from small, imperceptible differences in regressions, which has been the biggest challenge for teams adopting visual testing.
Visual AI overcomes the problems of pixel and DOM for visual validations, and has 99.9999% accuracy to be used in production functional testing. Visual AI captures the screen image, breaks it into visual elements using AI, compares the visual elements with an older screen image broken into visual elements (using AI), and identifies visible differences.
Each given page renders as a visual image composed of visual elements. Visual AI treats elements as they appear:
QA Engineers can’t reasonably test the hundreds of UI elements on every page of a given app, so they are usually forced to test a subset of these elements, leading to a lot of production bugs due to lack of coverage.
With Visual AI, you take a screenshot and validate the entire page. This limits the tester’s reliance on DOM locators, labels, and messages. Additionally, you can test all elements rather than having to pick and choose.
Visual AI identifies the layout at multiple levels – using thousands of data points between location and spacing. Within the layout, Visual AI identifies elements algorithmically. For any checkpoint image compared against a baseline, Visual AI identifies all the layout structures and all the visual elements and can test at different levels. Visual AI can swap between validating the snapshot from exact preciseness to focusing differences in the layout, as well as differences within the content contained within the layout.
Visual AI can intelligently test interfaces that have dynamic content like ads, news feeds, and more with the fidelity of the human eye. No more false positives due to a banner that constantly rotates or the newest sale pop-up your team is running.
Visual AI also understands the context of the browser and viewport for your UI so that it can accurately test across them at scale. Visual testing tools using traditional methods will get tripped up by small inconsistencies in browsers and your UI’s elements. Visual AI understands them and can validate across hundreds of different browser combinations in minutes.
One of the unique and cool features of Applitools is the power of the automated maintenance capabilities that prevent the need to approve or reject the same change across different screens/devices. This significantly reduces the overhead involved with managing baselines from different browsers and device configurations.
When it comes to reviewing your test results, this is a major step towards saving teams’ and testers’ time, as it helps apply the same change across a large number of tests and will identify this same change for future tests as well. Reducing the amount of time required to accomplish these tasks translates to reducing the cost of the project.
ECommerce websites and applications are some of the best candidates for visual testing, as buyers are incredibly sensitive to poor UI/UX. But previously, eCommerce sites had too many moving parts to be practically tested by visual testing tools that use DOM Diffs or Pixel Diffs. Items that are constantly changing and going in and out of stock, sales happening all the time, and the growth of personalization in digital commerce made these sites impractical to validate with those traditional approaches. Too many things get flagged on each change!
Using Visual AI, tests can omit entire sections of the UI from tripping up tests, validate only layouts, or dynamically assert changing data.
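As a hedged illustration of what that can look like in test code (using the Applitools Eyes SDK for Playwright as one example), a single check can ignore a rotating banner and compare a personalized region by layout only. The selectors are hypothetical, and the exact fluent method names and accepted selector forms can vary between Applitools SDKs and versions, so treat this as a sketch rather than a definitive API reference:

```ts
import { test } from '@playwright/test';
import { Eyes, Target } from '@applitools/eyes-playwright';

test('home page is visually correct despite dynamic content', async ({ page }) => {
  const eyes = new Eyes();
  await eyes.open(page, 'My App', 'Home page');
  await page.goto('https://example.com'); // hypothetical page

  await eyes.check(
    'Home page',
    Target.window()
      .fully()
      .ignoreRegions('#promo-banner')     // rotating ad/banner: excluded entirely
      .layoutRegions('#recommendations')  // personalized feed: match layout only
  );

  await eyes.close(false);
});
```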
Dashboards can be incredibly difficult to test via traditional methods due to the large amount of customized data that can change in real-time.
Visual AI can help not only visually test around these dynamic regions of heavy data, but it can actually replace many of the repeated and customized assertions used on dashboards with a single line of code.
Let’s take the example of a simple bank dashboard below.
It has hundreds of different assertions, like the Name, Total Balance, Recent Transactions, Amount Due, and more. With visual AI, you can assign profiles to full-page screenshots meaning that the entire UI of “Jack Gomez’s” bank dashboard can be tested via a single assertion.
Design Systems are a common way to have design and development collaborate on building frontends in a fast, consistent manner. Design Systems output components, which are reusable pieces of UI, like a date-picker or a form entry, that can be mixed and matched together to build application screens and interfaces.
Visual AI can test these components across hundreds of different browsers and mobile devices in just seconds, making sure that they are visibly correct on any size screen.
PDFs are still a staple of many business and legal transactions between businesses of all sizes. Many PDFs get generated automatically and need to be manually tested for accuracy and correctness. Visual AI can scan through hundreds of pages of PDFs in just seconds making sure that they are pixel-perfect.
DOM-based tools don’t make visual evaluations. DOM-based tools identify DOM differences. These differences may or may not have visual implications. DOM-based tools result in false positives – differences that don’t matter but require human judgment to render a decision that the difference is unimportant. They also result in false negatives, which means they will pass something that is visually different.
Pixel-based tools don’t make evaluations, either. Pixel-based tools highlight pixel differences. They are liable to report false positives due to pixel differences on a page. In some cases, all the pixels shift because an element near the top of the page is enlarged. Pixel technology cannot distinguish the elements as elements; in other words, it cannot see the forest for the trees.
Automated Visual Testing powered by Visual AI can successfully meet the challenges of Digital Transformation and CI/CD by driving higher testing coverage while at the same time helping teams increase their release velocity and improve visual quality.
Be mindful when selecting the right tool for your team and/or project, and always take into consideration:
We’re approaching the end of May, which means we’re just a handful of weeks from the midpoint of 2022 already. If you’re like me, you’re wondering where the year has gone. Maybe it has to do with life in the northeastern US where I live, where we’ve really just had our first week of warm weather. Didn’t winter just end?
As always, the year is flying by, and it can be hard to keep up with all the great videos or events you might have wanted to watch or attend. To help you out, we’ve rounded up some of our most popular test automation videos of 2022 so far. These are all top-notch workshops or webinars with test automation experts sharing their knowledge and their stories – you’ll definitely want to check them out.
Cross-browser testing is a well-known challenge to test automation practitioners. Luckily, Andy Knight, AKA the Automation Panda, is here to walk you through a modern approach to getting it done. Whether you use Cypress, Playwright, or are testing Storybook components, we have something for you.
For more, see this blog post: How to Run Cross Browser Tests with Cypress on All Browsers (plus bonus post specifically covering the live Q&A from this workshop).
For more, see this blog post: Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser.
For more, see this blog post: Testing Storybook Components in Any Browser – Without Writing Any New Tests!
GitHub and Chrome DevTools are both incredibly popular with the developer and testing communities – odds are if you’re reading this you use one or both on a regular basis. We recently spoke with developer advocates Rizel Scarlett of GitHub and Jecelyn Yeen of Google as they explained how you can leverage these extremely popular tools to become a better tester and improve your own testing experience. Click through for more info about each video and get watching.
For more, see this blog post: Using GitHub Copilot to Automate Tests.
For more, see this blog post: Creating Your First Test With Google Chrome DevTools Recorder.
When it comes to implementing and innovating around test automation, you’re never alone, even though it doesn’t always feel that way. Countless others are struggling with the same challenges that you are and coming up with solutions. Sometimes all it takes is hearing how someone else solved a similar problem to spark an idea or gain a better understanding of how to solve your own.
Nina Westenbrink, Software Engineer at a leading European telecom, talks about how the time to visually test the company’s design system was reduced and the process simplified, offering helpful tips and tricks along the way. Nina also speaks about her career as a woman in testing and how to empower women and overcome biases in software engineering.
Govind Ramachandran, Head of Testing and Quality Assurance for Asia Technology Services at Manulife Asia, discusses challenges around UI/UX testing for enterprise-wide digital programs. Check out his blueprint for continuous testing of the customer experience using Figma and Applitools.
This is just a taste of our favorite videos that we’ve shared with the community from 2022. What were yours? You can check out our full video library here, and let us know your own favorites @Applitools.
Parallel testing is a powerful tool you can use to speed up your Applitools tests, but ensuring test batches are grouped together and not split is a common issue. Here’s how to avoid it.
Visual testing with Applitools Eyes is an awesome way to supercharge your automated tests with visual checkpoints that catch more problems than traditional assertions. However, just like with any other kind of UI testing, test execution can be slow. The best way to shorten the total start-to-finish time for any automated test is parallelization. Applitools Ultrafast Grid performs ultrafast visual checkpoints concurrently in the cloud, but the functional tests that initially capture those snapshots can also be optimized with parallel execution. Frameworks like JUnit, SpecFlow, pytest, and Mocha all support parallel testing.
If you parallelize your automated test suite in addition to your visual snapshot analysis, then you might need to inject a custom batch ID to group all test results together. What? What’s a batch, and why does it need a special ID? I hit this problem recently while automating visual tests with Playwright. Let me show you the problem with batches for parallel tests, and then I’ll show you the right way to handle it.
If you haven’t already heard, Playwright is a relatively new web testing framework from Microsoft. I love it because it solves many of the problems with browser automation, like setup, waiting, and network control. Playwright also has implementations in JavaScript/TypeScript, Python, Java, and C#.
Typically, I program Playwright in Python, but today, I tried TypeScript. I wrote a small automated test suite to test the AppliFashion demo web app. You can find my code on GitHub here: https://github.com/AutomationPanda/applitools-holiday-hackathon-2020.
The file tests/hooks.ts contains the Applitools setup:
import { test } from '@playwright/test';
import { Eyes, VisualGridRunner, Configuration, BatchInfo, BrowserType, DeviceName } from '@applitools/eyes-playwright';
export let Runner: VisualGridRunner;
export let Batch: BatchInfo;
export let Config: Configuration;
test.beforeAll(async () => {
  Runner = new VisualGridRunner({ testConcurrency: 5 });
  Batch = new BatchInfo({name: 'AppliFashion Tests'});
  Config = new Configuration();
  Config.setBatch(Batch);
  Config.addBrowser(1200, 800, BrowserType.CHROME);
  Config.addBrowser(1200, 800, BrowserType.FIREFOX);
  Config.addBrowser(1200, 800, BrowserType.EDGE_CHROMIUM);
  Config.addBrowser(1200, 800, BrowserType.SAFARI);
  Config.addDeviceEmulation(DeviceName.iPhone_X);
});
Before all tests start, it sets up a batch named “AppliFashion Tests” to run the tests against five different browser configurations in the Ultrafast Grid. This is a one-time setup.
Among other pieces, this file also contains a function to build the Applitools Eyes object using Runner and Config:
export function buildEyes() {
  return new Eyes(Runner, Config);
}
The file tests/applifashion.spec.ts contains three tests, each with visual checks:
import { test } from '@playwright/test';
import { Eyes, Target } from '@applitools/eyes-playwright';
import { buildEyes, getAppliFashionUrl } from './hooks';
test.describe('AppliFashion', () => {
  let eyes: Eyes;
  let url: string;

  test.beforeAll(async () => {
    url = getAppliFashionUrl();
  });

  test.beforeEach(async ({ page }) => {
    eyes = buildEyes();
    await page.setViewportSize({width: 1600, height: 1200});
    await page.goto(url);
  });

  test('should load the main page', async ({ page }) => {
    await eyes.open(page, 'AppliFashion', '1. Main Page');
    await eyes.check('Main page', Target.window().fully());
    await eyes.close(false);
  });

  test('should filter by color', async ({ page }) => {
    await eyes.open(page, 'AppliFashion', '2. Filtering');
    await page.locator('id=SPAN__checkmark__107').click();
    await page.locator('id=filterBtn').click();
    await eyes.checkRegionBy('#product_grid', 'Filter by color');
    await eyes.close(false);
  });

  test('should show product details', async ({ page }) => {
    await eyes.open(page, 'AppliFashion', '3. Product Page');
    await page.locator('text="Appli Air x Night"').click();
    await page.locator('id=shoe_img').waitFor();
    await eyes.check('Product details', Target.window().fully());
    await eyes.close(false);
  });

  test.afterEach(async () => {
    await eyes.abort();
  });
});
By default, Playwright would run these three tests using one “worker,” meaning they would be run serially. We can run them in parallel by adding the following setting to playwright.config.ts:
import type { PlaywrightTestConfig } from '@playwright/test';
import { devices } from '@playwright/test';

const config: PlaywrightTestConfig = {
  //...
  fullyParallel: true,
  //...
};

export default config;
Now, Playwright will use one worker per processor or core on the machine running tests (unless you explicitly set the number of workers otherwise).
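If you want to control the level of parallelism explicitly, Playwright also lets you set the worker count in the config or override it on the command line. For example, a sketch of that setting might look like this:

```ts
// playwright.config.ts (excerpt)
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  fullyParallel: true,
  workers: 4, // or override at run time with: npx playwright test --workers=4
};

export default config;
```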
We can run these tests using the command npm test. (Available scripts can be found under package.json.) On my machine, they ran (and passed) with three workers. When we look at the visual checkpoints in the Applitools dashboard, we’d expect to see all results in one batch. However, we see this instead:
What in the world? There are three batches, one for each worker! All the results are there, but split batches will make it difficult to find all results, especially for large test suites. Imagine if this project had 300 or 3000 tests instead of only 3.
The docs on how Playwright Test handles parallel testing make it clear why the batch is split into three parts:
Note that parallel tests are executed in separate worker processes and cannot share any state or global variables.
Each test executes all relevant hooks just for itself, including beforeAll and afterAll.
So, each worker process essentially has its own “copy” of the automation objects. The BatchInfo object is not shared between these tests, which causes there to be three separate batches.
Unfortunately, batch splits are a common problem for parallel testing. I hit this problem with Playwright, but I’m sure it happens with other test frameworks, too.
Thankfully, there’s an easy way to fix this problem: share a unique batch ID between all concurrent tests. Every batch has an ID. According to the docs, there are three ways to set this ID:
1. Setting it explicitly when constructing the BatchInfo object.
2. Setting the APPLITOOLS_BATCH_ID environment variable.
3. Letting Applitools generate a unique ID automatically when neither is provided.
object with its own automatically generated ID. That’s why my test results were split into three batches.
Option 1 is the easiest solution. We could hardcode a batch ID like this:
Batch = new BatchInfo({name: 'AppliFashion Tests', id: 'applifashion'});
However, hardcoding IDs is not a good solution. This ID would be used for every batch this test suite ever runs. Applitools has features to automatically close batches, but if separate batches run too close together, then they could collide on this common ID and be reported as one batch. Ideally, each batch should have a unique ID. Unfortunately, we cannot generate a unique ID within Playwright code because objects cannot be shared across workers.
Therefore, option 2 is the best solution. We could set the APPLITOOLS_BATCH_ID environment variable to a unique ID before each test run. For example, on macOS or Linux, we could use the uuidgen command to generate UUIDs like this:
APPLITOOLS_BATCH_ID=$(uuidgen) npm test
The ID doesn’t need to be a UUID. It could be any string, like a timestamp. However, UUIDs are recommended because the chances of generating duplicate IDs are near-zero. Timestamps are more likely to have collisions. (If you’re on Windows, then you’ll need to come up with a different command for generating unique IDs than the one shown above.)
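One possible approach on Windows, sketched here with PowerShell and not taken from the original article, is to generate a GUID and set the variable before the run:

```powershell
$env:APPLITOOLS_BATCH_ID = [guid]::NewGuid().ToString()
npm test
```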
Now, when I run my test with this injected batch ID, all visual test results fall under one big batch:
That’s the way it should be! Much better.
I always recommend setting a concise, informative batch name for your visual tests. Setting a batch ID, however, is something you should do only when necessary – such as when tests run concurrently. If you run your tests in parallel and you see split batches, give the APPLITOOLS_BATCH_ID environment variable a try!
Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code.
Let’s face it: modern web apps are complex. If a team wants to provide a seamless user experience on a deadline, they need to squeeze the most out of the development resources they have. Component libraries help tremendously. Developers can build individual components for small things like buttons and big things like headers to be used anywhere in the frontend with a consistent look and feel.
Storybook is one of the most popular tools for building web components. It works with all the popular frameworks, like React, Angular, and Vue. With Storybook, you can view tweaks to components as you develop their “stories.” It’s awesome! However, manually inspecting components only works small-scale when you, as the developer, are actively working on any given component. How can a team test their Storybook components at scale? And how does that fit into a broader web app testing strategy?
What if I told you that you could automatically do cross-browser testing for Storybook components without needing to define any new tests or write any new automation code? And what if I told you that it could fit seamlessly into your existing development workflow? You can do this with the power of Applitools and your favorite CI tool! Let’s see how.
Historically, web app testing strategies divide functional testing into three main levels: unit tests, integration tests, and end-to-end tests.
These three levels make up the classic Testing Pyramid. Each level of testing mitigates a unique type of risk. Unit tests pinpoint problems in code, integration tests catch problems where entities meet, and end-to-end tests exercise behaviors like a user.
The rise of frontend component libraries raises an interesting question: Where do components belong among these levels? Components are essentially units of the UI. In that sense, they should be tested individually as “UI units” to catch problems before they become widespread across multiple app views. One buggy component could unexpectedly break several pages. However, to test them properly, they should be rendered in a browser as if they were “live.” They might even call APIs indirectly. Thus, arguably, component testing should be sandwiched between traditional integration and end-to-end testing.
Wait, another level of testing? Nobody has time for that! It’s hard enough to test adequate coverage at the three other levels, let alone automate those tests. Believe me, I understand the frustration. Unfortunately, component libraries bring new risks that ought to be mitigated.
Thankfully, Applitools provides a way to visually test all the components in a Storybook library with the Applitools Eyes SDK for Storybook. All you need to do is install the @applitools/eyes-storybook package into your web app project, configure a few settings, and run a short command to launch the tests. Applitools Eyes will turn each story for each component into a visual test case. On the first run, it will capture a visual snapshot for each story as a “baseline” image. Then, subsequent runs will capture “checkpoint” snapshots and use Visual AI to detect any changes. You don’t need to write any new test code – tests become a side effect of creating new components and stories!
In this sense, visual component testing with Applitools is like autonomous testing. Test generation and execution is completely automated, and humans review the results. Since testing can be done autonomously, component testing is easy to add to an existing testing strategy. It mitigates lots of risk for low effort. Since it covers components very well, it can also reduce the number of tests at other layers. Remember, the goal of a testing strategy is not to cover all the things but rather to use available resources to mitigate as much risk as possible. Covering a whole component library with an autonomous test run frees up folks to focus on other areas.
Let’s walk through how to set up visual component tests for a Storybook library. You can follow the steps below to add visual component tests to any web app that has a Storybook library. Give it a try on one of your own apps, or use my example React app that I’ll use as an example below. You’ll also need Node.js installed as a prerequisite.
To get started, you’ll need an Applitools account to run visual tests. If you don’t already have an Applitools account, you can register for free using your email or GitHub account. That will let you run visual tests with basic features.
Once you get your account, store your API key as an environment variable. On macOS or Linux, use this command:
export APPLITOOLS_API_KEY=<your-api-key>
On Windows:
set APPLITOOLS_API_KEY=<your-api-key>
Next, you need to add the eyes-storybook package to your project. To install this package into a new project, run:
npm install --save-dev @applitools/eyes-storybook
Finally, you’ll need to add a little configuration for the visual tests. Add a file named applitools.config.js to the project’s root directory, and add the following contents:
module.exports = {
  concurrency: 1,
  batchName: "Visually Testing Storybook Components"
}
The concurrency setting defines how many visual snapshot comparisons the Applitools Ultrafast Test Cloud will perform in parallel. (With a free account, you are limited to 1.) The batchName setting defines a name for the batch of tests that will appear in the Applitools dashboard. You can learn about these settings and more under Advanced Configuration in the docs.
That’s it! Now, we’re ready to run some tests. Launch them with this command:
npx eyes-storybook
Note: If your components use static assets like image files, then you will need to append the -s option with the path to the directory for static files. In my example React app, this would be -s public.
The command line will print progress as it tests each story. Once testing is done, you can see all the results in the Applitools dashboard:
Run the tests a second time for checkpoint comparisons:
If you change any of your components, then tests should identify the changes and report them as “Unresolved.” You can then visually compare differences side-by-side in the Applitools dashboard. Applitools Eyes will highlight the differences for you. Below is the result when I changed a button’s color in my React app:
You can give the changes a thumbs-up if they are “right” or a thumbs-down if they are due to a regression. Applitools makes it easy to pinpoint changes. It also provides auto-maintenance features to minimize the number of times you need to accept or reject changes.
When Applitools performs visual testing, it captures snapshots from tests running on your local machine, but it does everything else in the Ultrafast Test Cloud. It rerenders those snapshots – which contain everything on the page – against different browser configurations and uses Visual AI to detect any changes relative to baselines.
If no browsers are specified for Storybook components, Applitools will run visual component tests against Google Chrome running on Linux. However, you can explicitly tell Applitools to run your tests against any browser or mobile device.
You might not think you need to do cross-browser testing for components at first. They’re just small “UI units,” right? Well, however big or small, different browsers render components differently. For example, a button may have rectangular edges instead of round ones. Bigger components are more susceptible to cross-browser inconsistencies. Think about a navbar with responsive rendering based on viewport size. Cross-browser testing is just as applicable for components as it is for full pages.
Configuring cross-browser testing for Storybook components is easy. All you need to do is add a list of browser configs to your applitools.config.js
file like this:
module.exports = {
concurrency: 1,
batchName: "Visually Testing Storybook Components",
browser: [
// Desktop
{width: 800, height: 600, name: 'chrome'},
{width: 700, height: 500, name: 'firefox'},
{width: 1600, height: 1200, name: 'ie11'},
{width: 1024, height: 768, name: 'edgechromium'},
{width: 800, height: 600, name: 'safari'},
// Mobile
{deviceName: 'iPhone X', screenOrientation: 'portrait'},
{deviceName: 'Pixel 2', screenOrientation: 'portrait'},
{deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
{deviceName: 'Nexus 10', screenOrientation: 'portrait'},
{deviceName: 'iPad Pro', screenOrientation: 'landscape'},
]
}
This declaration includes ten unique browser configurations: five desktop browsers with different viewport sizes, and five mobile devices with both portrait and landscape orientations. Every story will run against every specified browser. If you run the test suite again, there will be ten times as many results!
As shown above, my batch included 90 unique test instances. Even though that’s a high number of tests, Applitools Ultrafast Test Cloud ran them in only 32 seconds! That really is ultrafast for UI tests.
Applitools Eyes makes it easy to run visual component tests, but to become truly autonomous, these tests should be triggered automatically as part of regular development workflows. Any time someone makes a change to these components, tests should run, and the team should receive feedback.
We can configure Continuous Integration (CI) tools like Jenkins, CircleCI, and others for this purpose. Personally, I like to use GitHub Actions because they work right within your GitHub repository. Here’s a GitHub Action I created to run visual component tests against my example app every time a change is pushed or a pull request is opened for the main
branch:
name: Run Visual Component Tests
on:
push:
pull_request:
branches:
- main
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Node.js
uses: actions/setup-node@v2
- name: Install dependencies
run: npm install
- name: Run visual component tests
run: npx eyes-storybook -s public
env:
APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}
The only extra configuration needed was to add my Applitools API key as a repository secret.
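As an aside, one way to add that secret from the terminal is the GitHub CLI, assuming gh is installed and authenticated for the repository; the command prompts for the secret value:
gh secret set APPLITOOLS_API_KEY
Adding the secret through the repository settings in the GitHub web UI works just as well.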
Components are just one layer of complex modern web apps. A robust testing strategy should include adequate testing at all levels. Thankfully, visual testing with Applitools can take care of the component layer with minimal effort. Unit tests can cover how the code works, such as a component’s play
method. Integration tests can cover API requests, and end-to-end tests can cover user-centric behaviors. Tests at all these levels together provide great protection for your app. Don’t neglect any one of them!
The post Testing Storybook Components in Any Browser – Without Writing Any New Tests! appeared first on Automated Visual Testing | Applitools.
]]>The post How to Visually Test a Remix App with Applitools and Cypress appeared first on Automated Visual Testing | Applitools.
]]>Is Remix too new to be visually tested? Let’s find out with Applitools and Cypress.
In this blog post, we answer a single question: how to best visually test a Remix-based app?
We walk through Remix and build a demo app to best showcase the framework. Then, we take a deep dive into visual testing with Applitools and Cypress. We close on scaling our test coverage with the Ultrafast Test Cloud to perform cross-browser validation of the app.
So let our exciting journey begin in pursuit of learning how to visually test the Remix-based app.
Web development is an ever-growing space with almost as many ways to build web apps as there are stars in the sky. And it ultimately translates into just how many different User Interface (UI) frameworks and libraries there are. One such library is React, which most people in the web app space have heard about, or even used to build a website or two.
For those unfamiliar with React, it’s a declarative, component-based library that developers can use to build web apps across different platforms. While React is a great way to develop robust and responsive UIs, there are still many moving pieces behind the scenes. Things like data loading, routing, and more complex work like Server-Side Rendering are what a new framework called Remix can handle for React apps.
Remix is a full-stack web framework that optimizes data loading and routing, making pages load faster and improving the overall User Experience (UX). The days are long past when our customers would wait minutes for a website to reload as they moved from one page to another or waited for an update to their feed. Features like Server-Side Rendering, effective routing, and efficient data loading have become a must for giving our users the experience they want and need. The Remix framework is an excellent open-source solution for delivering these features to our audience and improving their UX.
Our end-users shouldn’t care what framework we used to build a website. What matters to them is that the app works and lets them achieve their goals as fast as possible. In the same way, testing principles stay the same no matter which framework was used to create an app, so UI testing shouldn’t be fundamentally impacted by it. The basics of how we test stay constant, although some implementation details might change. For example, in the case of an Angular app, we might need to adjust how we wait for the site to fully load by using a specialized test framework like Protractor.
Most tests follow a straightforward pattern of Arrange, Act, and Assert. Whether you are writing a unit test, an integration test, or an end-to-end test, everything follows this cycle of setting up the data, running through a set of actions and validating the end state.
When writing these end-to-end tests, we need to put ourselves in the shoes of our users. What matters most in this type of testing is replicating a set of core use-cases that our end-users go through. It could be logging into an app, writing a new post, or navigating to a new page. That’s why UI test automation frameworks like Applitools and Cypress are fantastic for testing – they are largely agnostic of the platform they are testing. With these tools in hand, we can quickly check Remix-based apps the same way we would test any other web application.
The main goal of testing is to confirm the behavior our users actually see and go through. That is why simply loading UI elements and validating inner text or styling is not enough. Our customers are not interested in HTML or CSS. What they care about is what they can see and interact with on our site, not the code behind it. Element-level assertions alone cannot provide robust coverage of the complex UIs that modern web apps have. We can close this gap with visual testing.
Visual testing allows us to see our app from our customers’ point of view. And that’s where the Applitools Eyes SDK comes in! This visual testing tool can enhance the existing end-to-end test coverage to ensure our app is pixel-perfect.
Put simply, Applitools allows developers to effectively compare visual elements across various screens to find visible defects. Applitools records our UI elements in its platform and then monitors for any visual regressions that our customers might encounter. More specifically, this testing framework exposes the visible differences between baseline snapshots and future snapshots.
Applitools has integrations with numerous testing platforms like Cypress, WebdriverIO, Selenium, and many others. For this article, we will showcase Applitools with Cypress to add visual test coverage to our Remix app.
We can’t talk about a framework like Remix without seeing it in practice. That’s why we put together a demo app to best showcase Remix and later test it with Applitools and Cypress.
We based this app on the Remix Developer Blog app that highlights the core functionalities of Remix: data loading, actions, redirects, and more. We shared this demo app and all the tests we cover in this article in this repository so that our readers can follow along.
Before diving into writing tests, we must ensure that our Remix demo application is running.
To start, we need to clone a project from this repository:
git clone https://github.com/dmitryvinn/remix-demo-app-applitools
Then, we navigate into the project’s root directory and install all dependencies:
cd remix-demo-app-applitools
npm install
After we install the necessary dependencies, our app is ready to start:
npm run dev
After we launch the app, it should be available at http://localhost:3000/
, unless the port is already taken. With our Remix demo app fully functional, we can transition into testing Remix with Applitools and Cypress.
There is this great quote from a famous American economist, Richard Thaler: “If you want people to do something, make it easy.” That’s what Applitools and Cypress did by making testing easy for developers, so people don’t see it as a chore anymore.
To run our visual test automation using Applitools, we first need to set up Cypress, which will play the role of test runner. We can think about Cypress as a car’s body, whereas Applitools is an engine that powers the vehicle and ultimately gets us to our destination: a well-tested Remix web app.
Cypress is an open-source JavaScript end-to-end testing framework developers can use to write fast, reliable, and maintainable tests. But rather than reinventing the wheel and talking about the basics of Cypress, we invite our readers to learn more about using this automation framework on the official site, or from this course at Test Automation University.
To install Cypress, we only need to run a single command:
npm install cypress
Then, we need to initialize the cypress
folder to write our tests. The easiest way to do it is by running the following:
npx cypress open
This command will open Cypress Studio, which we will cover later in the article, but for now we can safely close it. We also recommend deleting sample test suites that Cypress created for us under cypress/integration
.
Note: If npx
is missing on the local machine, follow these steps on how to update the Node package manager, or run ./node_modules/.bin/cypress open
instead.
Installing the Applitools Eyes SDK with Cypress is a very smooth process. In our case, because we already had Cypress installed, we only need to run the following:
npm install @applitools/eyes-cypress --save-dev
To run Applitools tests, we need to get the Applitools API key, so our test automation can use the Eyes platform, including recording the UI elements, validating any changes on the screen, and more. This page outlines how to get this APPLITOOLS_API_KEY
from the platform.
After getting the API key, we have two options for adding the key to our test suite: using a CLI or an Applitools configuration file. Later in this post, we explore how to scale Applitools tests, and the configuration file will play a significant role in that effort. Hence, we continue by creating applitools.config.js
in our root directory.
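For reference, the CLI route relies on the same environment variable used elsewhere in this post: the Eyes SDK picks up APPLITOOLS_API_KEY from the environment if it isn’t present in the config file. On macOS or Linux, that would be:
export APPLITOOLS_API_KEY=<your-api-key>
(Use set instead of export on Windows.) We’ll stick with the configuration file here because it will also hold the browser settings we add later.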
Our configuration file will begin with the most basic setup of running a single test thread (testConcurrency
) for one browser (browser
field). We also need to add our APPLITOOLS_API_KEY
under the apiKey field, which will look something like this:
module.exports = {
testConcurrency: 1,
apiKey: "DONT_SHARE_OUR_APPLITOOLS_API_KEY",
browser: [
// Add browsers with different viewports
{ width: 800, height: 600, name: "chrome" },
],
// set batch name to the configuration
batchName: "Remix Demo App",
};
Now, we are ready to move onto the next stage of writing our visual tests with Applitools and Cypress.
One of the best things about Applitools is that it integrates nicely with our existing tests through a straightforward API.
For this example, we visually test a simple form on the Actions page of our Remix app.
To begin writing our tests, we need to create a new file named actions-page.spec.js
in the cypress/integration
folder:
Since we rely on Cypress as our test runner, we will continue using its API for writing the tests. For the basic Actions page tests where we validate that the page renders visually correctly, we start with this code snippet:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
// ...
// Act
// ..
// Assert
// ..
// Cleanup
// ..
});
});
We continue following the same pattern of Arrange-Act-Assert, but now we also want to ensure that we close all the resources we used while performing the visual testing. To begin our test case, we need to visit the Action page:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
// ..
// Assert
// ..
// Cleanup
// ..
});
});
Now, we can begin the visual validation by using the Applitools Eyes framework. We need to “open our eyes,” so to speak, by calling cy.eyesOpen()
. It initializes our test runner for Applitools to capture critical visual elements just like we would with our own eyes:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
cy.eyesOpen({
appName: "Remix Demo App",
testName: "Validate Action Form",
});
// Assert
// ..
// Cleanup
// ..
});
});
Note: Technically speaking, cy.eyesOpen()
should be a part of the Arrange step of writing the test, but for educational purposes, we are moving it under the Act portion of the test case.
Now, to move to the validation phase, we need Applitools to take a screenshot and match it against the existing version of the same UI elements. These screenshots are saved on our Applitools account, and unless we are running the test case for the first time, the Applitools framework will match these UI elements against the version that we previously saved:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
cy.eyesOpen({
appName: "Remi Demo App",
testName: "Validate Action Form",
});
// Assert
cy.eyesCheckWindow("Action Page");
// Cleanup
// ..
});
});
Lastly, we need to close our test runner for Applitools by calling cy.eyesClose()
. With this step, we now have a complete Applitools test case for our Actions page:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
cy.eyesOpen({
appName: "Remi Demo App",
testName: "Validate Action Form",
});
// Assert
cy.eyesCheckWindow("Action Page");
// Cleanup
cy.eyesClose();
});
});
Note: Although we added a cleanup stage with cy.eyesClose()
in the test case itself, we highly recommend moving this method outside of the it()
function into the afterEach()
hook that will run for every test, avoiding code duplication.
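Here is a rough sketch of that refactor; the structure is assumed for illustration and is not taken from the demo repository:
describe("Actions page form", () => {
  afterEach(() => {
    // Close the Eyes runner after every test so no test forgets the cleanup
    cy.eyesClose();
  });

  it("Visually confirms action form renders", () => {
    // Arrange, Act, and Assert steps as before – no explicit cleanup needed here
  });
});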
After the hard work of planning and then writing our test suite, we can finally start running our tests. And it couldn’t be easier than with Applitools and Cypress!
We have two options for executing our tests: the Cypress CLI or Cypress Studio.
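For the CLI route, a headless run of just this spec looks something like the command below; the --spec filter is optional, and omitting it runs every spec in the folder:
npx cypress run --spec cypress/integration/actions-page.spec.js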
Cypress Studio is a great option when we first write our tests because we can walk through every case, stop the process at any point, or replay any failures. These reasons are why we should use Cypress Studio to demonstrate best how these tests function.
We begin running our cases by invoking the following from the project’s root directory:
npm run cypress-open
This operation opens Cypress Studio, where we can select what test suite to run:
To validate the result, we need to visit our Applitools dashboard:
To make it interesting, we can cause this test to fail by changing the text on the Actions page. We could change the heading to say “Failed Actions!” instead of the original “Actions!” and re-run our test.
This change will cause our original test case to fail because it will catch a difference in the UI (in our case, it’s because of the intentional renaming of the heading). This error message is what we will see in the Cypress Studio:
To further deal with this failure, we need to visit the Applitools dashboard:
As we can see, the latest test run is shown as Unresolved, and we might need to resolve the failure. To see what the difference in the newest test run is, we only need to click on the image in question:
A great thing about Applitools is that their visual AI algorithm is so advanced that it can test our application on different levels to detect content changes as well as layout or color updates. What’s especially important is that Applitools’ algorithm prevents false positives with built-in functionalities like ignoring content changes for apps with dynamic content.
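If dynamic content ever introduces noise into a test like ours, individual checks can also be relaxed explicitly. Here is a minimal sketch, assuming the object form of eyesCheckWindow from the eyes-cypress SDK:
cy.eyesCheckWindow({
  tag: "Action Page",
  // The Layout match level ignores differences in text and color and compares structure
  matchLevel: "Layout",
});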
In our case, the test correctly shows that the heading changed, and it’s now up to us to either accept the new UI or reject it and call this failure a legitimate bug. Applitools makes it easy to choose the correct course of action as we only need to press thumbs up to accept the test result or thumbs down to decline it.
Accepting or Rejecting Test Run in Applitools Dashboard
In our case, the test case failed due to a visual bug that we introduced by “unintentionally” updating the heading.
After finishing our work in the Applitools Dashboard, we can bring the test results back to the developers and file a bug on whoever made the UI change.
But are we done? What about testing our web app on different browsers and devices? Fortunately, Applitools has a solution to quickly scale our test automation and add cross-browser coverage.
Testing an application against one browser is great, but what about all others? We have checked our Remix app on Chrome, but we didn’t see how the app performs on Firefox, Microsoft Edge, and so on. We haven’t even started looking into mobile platforms and our web app on Android or iOS. Introducing this additional test coverage can get out of hand quickly, but not with Applitools and their Ultrafast Test Cloud. It’s just one configuration change away!
With this cloud solution from Applitools, we can test our app across different browsers without any additional code. We only have to update our Applitools configuration file, applitools.config.js
.
Below is an example of how to add coverage for desktop browsers like Chrome, Firefox, Safari, and IE11, plus two extra test cases for different models of mobile phones:
module.exports = {
testConcurrency: 1,
apiKey: "DONT_SHARE_YOUR_APPLITOOLS_API_KEY",
browser: [
// Add browsers with different viewports
{ width: 800, height: 600, name: "chrome" },
{ width: 700, height: 500, name: "firefox" },
{ width: 1600, height: 1200, name: "ie11" },
{ width: 800, height: 600, name: "safari" },
// Add mobile emulation devices in Portrait or Landscape mode
{ deviceName: "iPhone X", screenOrientation: "landscape" },
{ deviceName: "Pixel 2", screenOrientation: "portrait" },
],
// set batch name to the configuration
batchName: "Remix Demo App",
};
It’s important to note that when specifying the configuration for different browsers, we need to define their width
and height
, with an additional property for screenOrientation
to cover non-desktop devices. These settings are critical for testing responsive apps because many modern websites visually differ depending on the devices our customers use.
After updating the configuration file, we need to re-run our test suite with npm test
. Fortunately, with the Applitools Ultrafast Test Cloud, it only takes a few seconds to finish running our tests on all browsers, so we can visit our Applitools Dashboard to view the results right away:
As we can see, with only a few lines in the configuration file, we scaled our visual tests across multiple devices and browsers. We save ourselves time and money whenever we can get extra test coverage without explicitly writing new cases. Maintaining test automation that we write is one of the most resource-consuming steps of the Software Development Life Cycle. With solutions like Applitools Ultrafast Test Cloud, we can write fewer tests while increasing our test coverage for the entire app.
Hopefully, this article showed that the answer is yes; we can successfully visually test Remix-based apps with Applitools and Cypress!
Remix is a fantastic framework to take User Experience to the next level, and we invite you to learn more about it during the webinar by Kent C. Dodds “Building Excellent User Experiences with Remix”.
For more information about Applitools, visit their website, blog and YouTube channel. They also provide free courses through Test Automation University that can help take anyone’s testing skills to the next level.
The post How to Visually Test a Remix App with Applitools and Cypress appeared first on Automated Visual Testing | Applitools.
]]>The post Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser appeared first on Automated Visual Testing | Applitools.
]]>Learn how you can run cross browser tests against any stock browser using Playwright – not just the browser projects like Chromium, Firefox, and WebKit, and not just Chrome and Edge.
These days, there are a plethora of great web test automation tools. Although Selenium WebDriver seems to retain its top spot in popularity, alternatives like Playwright are quickly growing their market share. Playwright is an open source test framework developed at Microsoft by the same folks who worked on Puppeteer. It is notable for its concise syntax, execution speed, and advanced features. Things like automatic waiting and carefully-designed assertions protect tests against flakiness. And like Selenium, Playwright has bindings for multiple languages: TypeScript, JavaScript, Python, .NET, and Java.
However, Playwright has one oddity that sets it apart from other frameworks: Instead of testing browser applications, Playwright tests browser projects. What does this mean? Major modern browser applications like Chrome, Edge, and Safari are built on top of browser projects that serve as their internal foundations. For example, Google Chrome is based on the Chromium project. Typically, these internal projects are open source and provide a rendering engine for web pages.
The table below shows the browser projects used by major browser apps:
| Browser project | Browser app |
|---|---|
| Chromium | Google Chrome, Microsoft Edge, Opera |
| Firefox (Gecko) | Mozilla Firefox |
| WebKit | Apple Safari |
Browser projects offer Playwright unique advantages. Setup is super easy, and tests are faster using browser contexts. However, some folks need to test full browser applications, not just browser projects. Some teams are required to test specific configurations for compliance or regulations. Other teams may feel like testing projects instead of “stock” browsers is too risky. Playwright can run tests directly against Google Chrome and Microsoft Edge with a little extra configuration, but it can’t hit stock Firefox, Safari, or IE, and in my anecdotal experience, tests against Chrome and Edge run many times slower than the same tests against Chromium. Playwright’s focus on browser projects over browser apps is a double-edged sword: While it arguably helps most testers, it inherently leaves others out.
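For context, the “little extra configuration” mentioned above is Playwright’s browser channel setting. Below is a minimal sketch of a playwright.config.js that targets the stock Chrome and Edge apps; the project names are arbitrary:
// playwright.config.js
const config = {
  projects: [
    { name: 'stock-chrome', use: { channel: 'chrome' } }, // full Google Chrome
    { name: 'stock-edge', use: { channel: 'msedge' } },   // full Microsoft Edge
  ],
};
module.exports = config;
Even with channels, though, stock Safari, Firefox, and IE remain out of reach.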
Thankfully, there is a way to run Playwright tests against full browser apps, not just browser projects: using Applitools Visual AI with the Ultrafast Test Cloud. With the help of Applitools, you can achieve true cross-browser testing with Playwright at lightning speed, even for large test suites. Let’s see how it’s done. We’ll start with a basic Playwright test in JavaScript, and then we’ll add visual snapshots that can be rendered using any browser in the Applitools cloud.
Let’s define a basic web app login test for the Applitools demo site. The site mimics a basic banking app. The first page is a login screen:
You can enter any username or password to log in. Then, the main page appears:
Nothing fancy here. The steps for our test case are straightforward:
Scenario: Successful login
Given the login page is displayed
When the user enters their username and password
And the user clicks the login button
Then the main page is displayed
These steps would be the same for the login behavior of any other application.
Let’s automate our login test in JavaScript using Playwright. We could automate our test in TypeScript (which is arguably better), but I’ll use JavaScript for this example to keep the code plain and simple.
Create a new project, and install Playwright. Under the tests folder, create a new file named login.spec.js
, and add the following test stub:
const { test, expect } = require('@playwright/test');
test.describe.configure({ mode: 'parallel' })
test.describe('Login', () => {
test.beforeEach(async ({ page }) => {
await page.setViewportSize({width: 1600, height: 1200});
});
test('should log into the demo app', async ({ page }) => {
// Load login page
// ...
// Verify login page
// ...
// Perform login
// ...
// Verify main page
// ...
});
})
Playwright uses a Mocha-like structure for test cases. The test.beforeEach(...)
call sets an explicit viewport size for testing to make sure the responsive layout renders as expected. The test(...)
call includes sections for each step.
Let’s implement the steps using Playwright calls. Here’s the first step to load the login page:
// Load login page
await page.goto('https://demo.applitools.com');
The second step verifies that elements like username and password fields appear on the login page. Playwright’s assertions automatically wait for the elements to appear:
// Verify login page
await expect(page.locator('div.logo-w')).toBeVisible();
await expect(page.locator('id=username')).toBeVisible();
await expect(page.locator('id=password')).toBeVisible();
await expect(page.locator('id=log-in')).toBeVisible();
await expect(page.locator('input.form-check-input')).toBeVisible();
The third step actually logs into the site like a human user:
// Perform login
await page.fill('id=username', 'andy')
await page.fill('id=password', 'i<3pandas')
await page.click('id=log-in')
The fourth and final step makes sure the main page loads correctly. Again, assertions automatically wait for elements to appear:
// Verify main page
// Check various page elements
await expect.soft(page.locator('div.logo-w')).toBeVisible();
await expect.soft(page.locator('ul.main-menu')).toBeVisible();
await expect.soft(page.locator('div.avatar-w img')).toHaveCount(2);
await expect.soft(page.locator('text=Add Account')).toBeVisible();
await expect.soft(page.locator('text=Make Payment')).toBeVisible();
await expect.soft(page.locator('text=View Statement')).toBeVisible();
await expect.soft(page.locator('text=Request Increase')).toBeVisible();
await expect.soft(page.locator('text=Pay Now')).toBeVisible();
await expect.soft(page.locator(
'div.element-search.autosuggest-search-activator > input'
)).toBeVisible();
// Check time message
await expect.soft(page.locator('id=time')).toContainText(
/Your nearest branch closes in:( \d+[hms])+/);
// Check menu element names
await expect.soft(page.locator('ul.main-menu li span')).toHaveText([
'Card types',
'Credit cards',
'Debit cards',
'Lending',
'Loans',
'Mortgages'
]);
// Check transaction statuses
let statuses =
await page.locator('span.status-pill + span').allTextContents();
statuses.forEach(item => {
expect.soft(['Complete', 'Pending', 'Declined']).toContain(item);
});
The first three steps are nice and concise, but the code for the fourth step is quite long. Despite making several assertions for various page elements, there are still things left unchecked!
Run the test locally to make sure it works:
$ npx playwright test
This command will run the test against all three Playwright browsers – Chromium, Firefox, and WebKit – in headless mode and in parallel. You can append the “--headed
” option to see the browsers open and render the pages. The tests should take only a few short seconds to complete, and they should all pass.
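For example, a headed run of the same suite is simply:
npx playwright test --headed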
You could run this login test on your local machine or from your Continuous Integration (CI) service, but in its present form, it can’t run against certain “stock” browsers like Apple Safari or Internet Explorer. If you attempt to use a browser channel to test stock Chrome or Edge browsers, tests would probably run much slower compared to Chromium. To run against any browser at lightning speed, we need the help of visual testing techniques using Applitools Visual AI and the Ultrafast Test Cloud.
Visual testing is the practice of inspecting visual differences between snapshots of screens in the app you are testing. You start by capturing a “baseline” snapshot of, say, the login page to consider as “right” or “expected.” Then, every time you run the tests, you capture a new snapshot of the same page and compare it to the baseline. By comparing the two snapshots side-by-side, you can detect any visual differences. Did a button go missing? Did the layout shift to the left? Did the colors change? If nothing changes, then the test passes. However, if there are changes, a human tester should review the differences to decide if the change is good or bad.
Manual testers have done visual testing since the dawn of computer screens. Applitools Visual AI simply automates the process. It highlights differences in side-by-side snapshots so you don’t miss them. Furthermore, Visual AI focuses on meaningful changes that human eyes would notice. If an element shifts one pixel to the right, that’s not a problem. Visual AI won’t bother you with that noise.
If a picture is worth a thousand words, then a visual snapshot is worth a thousand assertions. We could update our login test to take visual snapshots using Applitools Eyes SDK in place of lengthy assertions. Visual snapshots provide stronger coverage than the previous assertions. Remember how our login test made several checks but still didn’t cover all the elements on the page? A visual snapshot would implicitly capture everything with only one line of code. Visual testing like this enables more effective functional testing than traditional assertions.
But back to the original problem: how does this enable us to run Playwright tests against any stock browser? That’s the magic of snapshots. Notice how I said “snapshot” and not “screenshot.” A screenshot is merely a grid of static pixels. A snapshot, however, captures full page content – HTML, CSS, and JavaScript – that can be re-rendered in any browser configuration. If we update our Playwright test to take visual snapshots of the login page and the main page, then we could run our test one time locally to capture the snapshots. Then, the Applitools Eyes SDK would upload the snapshots to the Applitools Ultrafast Test Cloud to render them in any target browser – including browsers not natively supported by Playwright – and compare them against baselines. All the heavy work for visual checkpoints would be done by the Applitools Ultrafast Test Cloud, not by the local machine. It also works fast, since re-rendering snapshots takes much less time than re-running full tests.
Let’s turn our login test into a visual test. First, make sure you have an Applitools account. You can register for a free account to get started.
Next, install the Applitools Eyes SDK for Playwright into your project:
$ npm install -D @applitools/eyes-playwright
Add the following import statement to login.spec.js
:
const {
VisualGridRunner,
Eyes,
Configuration,
BatchInfo,
BrowserType,
DeviceName,
ScreenOrientation,
Target,
MatchLevel
} = require('@applitools/eyes-playwright');
Next, we need to specify which browser configurations to run in Applitools Ultrafast Grid. Update the test.beforeEach(...)
call to look like this:
test.describe('Login', () => {
let eyes, runner;
test.beforeEach(async ({ page }) => {
await page.setViewportSize({width: 1600, height: 1200});
runner = new VisualGridRunner({ testConcurrency: 5 });
eyes = new Eyes(runner);
const configuration = new Configuration();
configuration.setBatch(new BatchInfo('Modern Cross Browser Testing Workshop'));
configuration.addBrowser(800, 600, BrowserType.CHROME);
configuration.addBrowser(700, 500, BrowserType.FIREFOX);
configuration.addBrowser(1600, 1200, BrowserType.IE_11);
configuration.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
configuration.addBrowser(800, 600, BrowserType.SAFARI);
configuration.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.Galaxy_S5, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.Nexus_10, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.iPad_Pro, ScreenOrientation.LANDSCAPE);
eyes.setConfiguration(configuration);
});
})
That’s a lot of new code! Let’s break it down:
The page.setViewportSize(...) call remains unchanged. It will set the viewport only for the local test run.
The runner object points visual tests to the Ultrafast Grid.
The testConcurrency setting controls how many visual tests will run in parallel in the Ultrafast Grid. A higher concurrency means shorter overall execution time. (Warning: if you have a free account, your concurrency limit will be 1.)
The eyes object watches the browser for taking visual snapshots.
The configuration object sets the test batch name and the various browser configurations to test in the Ultrafast Grid.
This configuration will run our visual login test against 10 different browsers: 5 desktop browsers of various viewports, and 5 mobile browsers of various orientations.
Time to update the test case. We must “open” Applitools Eyes at the beginning of the test to capture snapshots, and we must “close” Eyes at the end:
test('should log into the demo app', async ({ page }) => {
// Open Applitools Eyes
await eyes.open(page, 'Applitools Demo App', 'Login');
// Test steps
// ...
// Close Applitools Eyes
await eyes.close(false)
});
The load and login steps do not need any changes because the interactions are the same. However, the “verify” steps reduce drastically to one-line snapshot calls:
test('should log into the demo app', async ({ page }) => {
// ...
// Verify login page
await eyes.check('Login page', Target.window().fully());
// ...
// Verify main page
await eyes.check('Main page', Target.window().matchLevel(MatchLevel.Layout).fully());
// ...
});
These snapshots capture the full window for both pages. The main page also sets a match level to “layout” so that differences in text and color are ignored. Snapshots will be captured once locally and uploaded to the Ultrafast Grid to be rendered on each target browser. Bye bye, long and complicated assertions!
Finally, after each test, we should add safety handling and result dumping:
test.afterEach(async () => {
await eyes.abort();
const results = await runner.getAllTestResults(false);
console.log('Visual test results', results);
});
The completed code for login.spec.js
should look like this:
const { test } = require('@playwright/test');
const {
VisualGridRunner,
Eyes,
Configuration,
BatchInfo,
BrowserType,
DeviceName,
ScreenOrientation,
Target,
MatchLevel
} = require('@applitools/eyes-playwright');
test.describe.configure({ mode: 'parallel' })
test.describe('A visual test', () => {
let eyes, runner;
test.beforeEach(async ({ page }) => {
await page.setViewportSize({width: 1600, height: 1200});
runner = new VisualGridRunner({ testConcurrency: 5 });
eyes = new Eyes(runner);
const configuration = new Configuration();
configuration.setBatch(new BatchInfo('Modern Cross Browser Testing Workshop'));
configuration.addBrowser(800, 600, BrowserType.CHROME);
configuration.addBrowser(700, 500, BrowserType.FIREFOX);
configuration.addBrowser(1600, 1200, BrowserType.IE_11);
configuration.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
configuration.addBrowser(800, 600, BrowserType.SAFARI);
configuration.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.Galaxy_S5, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.Nexus_10, ScreenOrientation.PORTRAIT);
configuration.addDeviceEmulation(DeviceName.iPad_Pro, ScreenOrientation.LANDSCAPE);
eyes.setConfiguration(configuration);
});
test('should log into the demo app', async ({ page }) => {
// Open Applitools Eyes
await eyes.open(page, 'Applitools Demo App', 'Login');
// Load login page
await page.goto('https://demo.applitools.com');
// Verify login page
await eyes.check('Login page', Target.window().fully());
// Perform login
await page.fill('id=username', 'andy')
await page.fill('id=password', 'i<3pandas')
await page.click('id=log-in')
// Verify main page
await eyes.check('Main page', Target.window().matchLevel(MatchLevel.Layout).fully());
// Close Applitools Eyes
await eyes.close(false)
});
test.afterEach(async () => {
await eyes.abort();
const results = await runner.getAllTestResults(false);
console.log('Visual test results', results);
});
})
Now, it’s a visual test! Let’s run it.
Your account comes with an API key. Visual tests using Applitools Eyes need this API key for uploading results to your account. On your machine, set this key as an environment variable.
On Linux and macOS:
$ export APPLITOOLS_API_KEY=<value>
On Windows:
> set APPLITOOLS_API_KEY=<value>
Then, launch the test using only one browser locally:
$ npx playwright test --browser=chromium
(Warning: If your playwright.config.js
file has projects configured, you will need to use the “--project
” option instead of the “--browser
” option. Playwright may automatically configure this if you run npm init playwright
to set up the project.)
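In that case, the equivalent command looks like the following, where the project name must match one defined in your config (the default scaffold typically defines chromium, firefox, and webkit projects):
npx playwright test --project=chromium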
When this test runs, it will upload snapshots for both the login page and the main page to the Applitools test cloud. It needs to run only one time locally to capture the snapshots. That’s why we set the command to run using only Chromium.
Open the Applitools dashboard to view the visual results:
Notice how this one login test has one result for each target configuration. All results have “New” status because they are establishing baselines. Also, notice how little time it took to run this batch of tests:
Running our test across 10 different browser configurations with 2 visual checkpoints each at a concurrency level of 5 took only 36 seconds to complete. That’s ultra fast! Running that many test iterations with a Selenium Grid or similar scale-out platform could take several minutes.
Run the test again. The second run should succeed just like the first. However, the new dashboard results now say “Passed” because Applitools compared the latest snapshots to the baselines and verified that they had not changed:
This time, all variations took 32 seconds to complete – about half a minute.
Passing tests are great, but what happens if a page changes? Consider an alternate version of the login page:
This version has a broken icon and a different login button. Modify the Playwright call to load the login page to test this version of the site like this:
await page.goto('https://demo.applitools.com/index_v2.html');
Now, when you rerun the test, results appear as “Unresolved” in the Applitools dashboard:
When you open each result, the dashboard will display visual comparisons for each snapshot. If you click the snapshot, it opens the comparison window:
The baseline snapshot appears on the left, while the latest checkpoint snapshot appears on the right. Differences will be highlighted in magenta. As the tester, you can choose to either accept the change as a new baseline or reject it as a failure.
Playwright truly is a nifty framework. Thanks to the Applitools Ultrafast Grid, you can upgrade any Playwright test with visual snapshots and run them against any browsers, even ones not natively supported by Playwright. Applitools enables Playwright tests to become cross-browser tests. Just note that this style of testing focuses on cross-browser page rendering, not cross-browser page interactions. You may still want to run your Playwright tests locally against Firefox and WebKit in addition to Chromium, while using the Applitools Ultrafast Grid to validate rendering on different browser and viewport configurations.
Want to see the full code? Check out this GitHub repository: applitools/workshop-cbt-playwright-js.
Want to try visual testing for yourself? Register for a free Applitools account.
Want to see how to do this type of cross-browser testing with Cypress? Check out this article.
The post Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser appeared first on Automated Visual Testing | Applitools.
]]>The post Autonomous Testing: Test Automation’s Next Great Wave appeared first on Automated Visual Testing | Applitools.
The word “automation” has become a buzzword in pop culture. It conjures things like self-driving cars, robotic assistants, and factory assembly lines. Most people don’t think about automation for software testing. In fact, many non-software folks are surprised to hear that what I do is “automation.”
The word “automation” also carries a connotation of “full” automation with zero human intervention. Unfortunately, most of our automated technologies just aren’t there yet. For example, a few luxury cars out there can parallel-park themselves, and Teslas have some cool autopilot capabilities, but fully-autonomous vehicles do not yet exist. Self-driving cars need several more years to perfect and even more time to become commonplace on our roads.
Software testing is no different. Even when test execution is automated, test development is still very manual. Ironic, isn’t it? Well, I think the day of “full” test automation is quickly approaching. We are riding the crest of the next great wave: autonomous testing. It’ll arrive long before cars can drive themselves. Like previous waves, it will fundamentally change how we, as testers, approach our craft.
Let’s look at the past two waves to understand this more deeply. You can watch the keynote address I delivered at Future of Testing: Frameworks 2022, or you can keep reading below.
In their most basic form, tests are manual. A human manually exercises the behavior of the software product’s features and determines if outcomes are expected or erroneous. There’s nothing wrong with manual testing. Many teams still do this effectively today. Heck, I always try a test manually before automating it. Manual tests may be scripted in that they follow a precise, predefined procedure, or they may be exploratory in that the tester relies instead on their sensibilities to exercise the target behaviors.
Testers typically write scripted tests as a list of steps with interactions and verifications. They store these tests in test case management repositories. Most of these tests are inherently “end-to-end:” they require the full product to be up and running, and they expect testers to attempt a complete workflow. In fact, testers are implicitly incentivized to include multiple related behaviors per test in order to gain as much coverage with as little manual effort as possible. As a result, test cases can become very looooooooooooong, and different tests frequently share common steps.
Large software products exhibit countless behaviors. A single product could have thousands of test cases owned and operated by multiple testers. Unfortunately, at this scale, testing is slooooooooow. Whenever developers add new features, testers need to not only add new tests but also rerun old tests to make sure nothing broke. Software is shockingly fragile. A team could take days, weeks, or even months to adequately test a new release. I know – I once worked at a company with a 6-month-long regression testing phase.
Slow test cycles forced teams to practice Waterfall software development. Rather than waste time manually rerunning all tests for every little change, it was more efficient to bundle many changes together into a big release to test all at once. Teams would often pipeline development phases: While developers are writing code for the features going into release X+1, testers would be testing the features for release X. If testing cycles were long, testers might repeat tests a few times throughout the cycle. If testing cycles were short, then testers would reduce the number of tests to run to a subset most aligned with the new features. Test planning was just as much work as test execution and reporting due to the difficulty in judging risk-based tradeoffs.
Slow manual testing was the bane of software development. It lengthened time to market and allowed bugs to fester. Anything that could shorten testing time would make teams more productive.
That’s when the first wave of test automation hit: manual test conversion. What if we could implement our manual test procedures as software scripts so they could run automatically? Instead of a human running the tests slowly, a computer could run them much faster. Testers could also organize scripts into suites to run a bunch of tests at one time. That’s it – that was the revolution. Let software test software!
During this wave, the main focus of automation was execution. Teams wanted to directly convert their existing manual tests into automated scripts to speed them up and run them more frequently. Both coded and codeless automation tools hit the market. However, they typically stuck with the same waterfall-minded processes. Automation didn’t fundamentally change how teams developed software, it just made testing better. For example, during this wave, running automated tests after a nightly build was in vogue. When teams would plan their testing efforts, they would pick a few high-value tests to automate and run more frequently than the rest of the manual tests.
Unfortunately, while this type of automation offered big improvements over pure manual testing, it had problems. First, testers still needed to manually trigger the tests and report results. On a typical day, a tester would launch a bunch of scripts while manually running other tests on the side. Second, test scripts were typically very fragile. Both tooling and understanding for good automation had not yet matured. Large end-to-end tests and long development cycles also increased the risk of breakage. Many teams gave up attempting test automation due to the maintenance nightmare.
The first wave of test automation was analogous to cars switching from manual to automatic transmissions. Automation made the task of driving a test easier, but it still required the driver (or the tester) to start and stop the test.
The second test automation wave was far more impactful than the first. After automating the execution of tests, focus shifted to automating the triggering of tests. If tests are automated, then they can run without any human intervention. Therefore, they could be launched at any time without human intervention, too. What if tests could run automatically after every new build? What if every code change could trigger a new build that could then be covered with tests immediately? Teams could catch bugs as soon as they happen. This was the dawn of Continuous Integration, or “CI” for short.
Continuous Integration revolutionized software development. Long Waterfall phases for coding and testing weren’t just passé – they were unnecessary. Bite-sized changes could be independently tested, verified, and potentially deployed. Agile and DevOps practices quickly replaced the Waterfall model because they enabled faster releases, and Continuous Integration enabled Agile and DevOps. As some would say, “Just make the DevOps happen!”
The types of tests teams automated changed, too. Long end-to-end tests that covered “grand tours” with multiple behaviors were great for manual testing but not suitable for automation. Teams started automating short, atomic tests focused on individual behaviors. Small tests were faster and more reliable. One failure pinpointed one problematic behavior.
Developers also became more engaged in testing. They started automating both unit tests and feature tests to be run in CI pipelines. The lines separating developers and testers blurred.
Teams adopted the Testing Pyramid as an ideal model for test count proportions. Smaller tests were seen as “good” because they were easy to write, fast to execute, less susceptible to flakiness, and caught problems quickly. Larger tests, while still important for verifying workflows, needed more investment to build, run, and maintain. So, teams targeted more small tests and fewer large tests. You may personally agree or disagree with the Testing Pyramid, but that was the rationale behind it.
While the first automation wave worked within established software lifecycle models, the second wave fundamentally changed them. The CI revolution enabled tests to run continuously, shrinking the feedback loop and maximizing the value that automated tests could deliver. It gave rise to the SDET, or Software Development Engineer in Test, who had to manage tests, automation, and CI systems. SDETs carried more responsibilities than the automation engineers of the first wave.
If we return to our car analogy, the second wave was like adding cruise control. Once the driver gets on the highway, the car can just cruise on its own without much intervention.
Unfortunately, while the second wave enabled teams to multiply the value they can get out of testing and automation, it came with a cost. Test automation became full-blown software development in its own right. It entailed tools, frameworks, and design patterns. The continuous integration servers became production environments for automated tests. While some teams rose to the challenge, many others struggled to keep up. The industry did not move forward together in lock-step. Test automation success became a gradient of maturity levels. For some teams, success seemed impossible to reach.
Now, these two test automation waves I described do not denote precise playbooks every team followed. Rather, they describe the general industry trends regarding test automation advancement. Different teams may have caught these waves at different times, too.
Currently, as an industry, I think we are riding the tail end of the second wave, rising up to meet the crest of a third. Continuous Integration, Agile, and DevOps are all established practices. The next wave of innovation won’t come from there.
Over the past few years, a number of nifty test automation features have hit the scene, such as screen recorders and smart locators. I’m going to be blunt: those are not the next wave; they’re just attempts to fix aspects of the previous waves.
You may agree or disagree with my opinions on the usefulness of these tools, but the fact is that they all share a common weakness: they are vulnerable to behavioral changes. Human testers must still intervene as development churns.
These tools are akin to a car that can park itself but can’t fully drive itself. They’re helpful to some folks but fall short of the ultimate dream of full automation.
The first two waves covered automation for execution and scheduling. Now, the bottleneck is test design and development. Humans still need to manually create tests. What if we automated that?
Consider what testing is: Testing equals interaction plus verification. That’s it! You do something, and you make sure it works correctly. It’s true for all types of tests: unit tests, integration tests, end-to-end tests, functional, performance, load; whatever! Testing is interaction plus verification.
During the first two waves, humans had to dictate those interactions and verifications precisely. What we want – and what I predict the third wave will be – is autonomous testing, in which that dictation will be automated. This is where artificial intelligence can help us. In fact, it’s already helping us.
Applitools has already mastered automated validation for visual interfaces. Traditionally, a tester would need to write several lines of code to functionally validate behaviors on a web page. They would need to check for elements’ existence, scrape their texts, and make assertions on their properties. There might be multiple assertions to make – and other facets of the page left unchecked. Visuals like color and position would be very difficult to check. Applitools Eyes can replace almost all of those traditional assertions with single-line snapshots. Whenever it detects a meaningful change, it notifies the tester. Insignificant changes are ignored to reduce noise.
Automated visual testing like this fundamentally simplifies functional verification. It should not be seen as an optional extension or something nice to have. It automates the dictation of verification. It is a new type of functional testing.
The remaining problem to solve is dictation of interaction. Essentially, we need to train AI to figure out proper app behaviors on its own. Point it at an app, let it play around, and see what behaviors it identifies. Pair those interactions with visual snapshot validation, and BOOM – you have autonomous testing. It’s testing without coding. It’s like a fully-self-driving car!
Some companies already offer tools that attempt to discover behaviors and formulate test cases. Applitools is also working on this. However, it’s a tough problem to crack.
Even with significant training and refinement, AI agents still have what I call “banana peel moments:” times when they make surprisingly awful mistakes that a human would never make. Picture this: you’re walking down the street when you accidentally slip on a banana peel. Your foot slides out from beneath you, and you hit your butt on the ground so hard it hurts. Everyone around you laughs at both your misfortune and your clumsiness. You never saw it coming!
Banana peel moments are common AI hazards. Back in 2011, IBM created a supercomputer named Watson to compete on Jeopardy, and it handily defeated two of the greatest human Jeopardy champions at that time. However, I remember watching some of the promo videos at the time explaining how hard it was to train Watson how to give the right answers. In one clip, it showed Watson answering “banana” to some arbitrary question. Oops! Banana? Really?
While Watson’s blunder was comical, other mistakes can be deadly. Remember those self-driving cars? Tesla autopilot mistakes have killed at least a dozen people since 2016. Autonomous testing isn’t a life-or-death situation like driving, but testing mistakes could be a big risk for companies looking to de-risk their software releases. What if autonomous tests miss critical application behaviors that turn out to crash once deployed to production? Companies could lose lots of money, not to mention their reputations.
So, how can we give AI for testing the right training to avoid these banana peel moments? I think the answer is simple: set up AI for testing to work together with human testers. Instead of making AI responsible for churning out perfect test cases, design the AI to be a “coach” or an “advisor.” AI can explore an app and suggest behaviors to cover, and the human tester can pair that information with their own expertise to decide what to test. Then, the AI can take that feedback from the human tester to learn better for next time. This type of feedback loop can help AI agents not only learn better testing practices generally but also learn how to test the target app specifically. It teaches application context.
AI and humans working together is not just a theory. It’s already happened! Back in the 90s, IBM built a supercomputer named Deep Blue to play chess. In 1996, it lost 4-2 to grandmaster and World Chess Champion Garry Kasparov. One year later, after upgrades and improvements, it defeated Kasparov 3.5-2.5. It was the first time a computer beat a world champion at chess. After his defeat, Kasparov had an idea: What if human players could use a computer to help them play chess? Then, one year later, he set up the first “advanced chess” tournament. To this day, “centaurs,” or humans using computers, can play at nearly the same level as grandmasters.
I believe the next great wave for test automation belongs to testers who become centaurs – and to those who enable that transformation. AI can learn app behaviors to suggest test cases that testers accept or reject as part of their testing plan. Then, AI can autonomously run approved tests. Whenever changes or failures are detected, the autonomous tests yield helpful results to testers like visual comparisons to figure out what is wrong. Testers will never be completely removed from testing, but the grindwork they’ll need to do will be minimized. Self-driving cars still have passengers who set their destinations.
This wave will also be easier to catch than the first two waves. Testing and automation was historically a do-it-yourself effort. You had to design, automate, and execute tests all on your own. Many teams struggled to make it successful. However, with the autonomous testing and coaching capabilities, AI testing technologies will eliminate the hardest parts of automation. Teams can focus on what they want to test more than how to implement testing. They won’t stumble over flaky tests. They won’t need to spend hours debugging why a particular XPath won’t work. They won’t need to wonder what elements they should and shouldn’t verify on a page. Any time behaviors change, they rerun the AI agents to relearn how the app works. Autonomous testing will revolutionize functional software testing by lowering the cost of entry for automation.
If you are plugged into software testing communities, you’ll hear from multiple testing leaders about their thoughts on the direction of our discipline. You’ll learn about trends, tools, and frameworks. You’ll see new design patterns challenge old ones. Something I want you to think about in the back of your mind is this: How can these things be adapted to autonomous testing? Will these tools and practices complement autonomous testing, or will they be replaced? The wave is coming, and it’s coming soon. Be ready to catch it when it crests.
The post Autonomous Testing: Test Automation’s Next Great Wave appeared first on Automated Visual Testing | Applitools.
]]>