Learn why cross-browser testing is so important and an approach you can take to make cross-browser testing with Selenium much faster.
Cross-browser testing is a form of functional testing in which an application is tested on multiple browsers (Chrome, Firefox, Edge, Safari, IE, etc.) to validate that functionality performs as expected.
In other words, it is designed to answer the question: Does your app work the way it’s supposed to on every browser your customers use?
While modern browsers generally conform to key web standards today, important problems remain. Differences in interpretations of web standards, varying support for new CSS or other design features, and rendering discrepancies between the different browsers can all yield a user experience that is different from one browser to the next.
A modern application needs to perform as expected across all major browsers. Not only is this a baseline user expectation these days, but it is critical to delivering a positive user experience and a successful app.
At the same time, the number of screen combinations (across screen sizes, devices, and versions) is rising quickly. In recent years the number of screens required to test has exploded, rising to an industry average of 81,480 screens and reaching 681,296 for the top 30% of companies.
Ensuring complete coverage of each screen on every browser is a common challenge. Effective and fast cross-browser testing can help alleviate the bottleneck from all these screens that require testing.
Traditional approaches to cross-browser testing in Selenium have existed for a while, and while they still work, they have not scaled well to handle the challenge of complex modern applications. They can be time-consuming to build, slow to execute and challenging to maintain in the face of apps that change frequently.
Applitools Developer Advocate and Test Automation University Director Andrew Knight (AKA Pandy Knight) recently conducted a hands-on workshop where he explored the history of cross-browser testing, its evolution over time and the pros and cons of different approaches.
Andrew then explores a modern cross-browser testing solution with Selenium and Applitools. He walks you through a live demo (which you can replicate yourself by following his shared GitHub repo) and explains the benefits and how to get started. He also covers how you can accelerate test automation with integration into CI/CD to achieve Continuous Testing.
Check out the workshop below, and follow along with the GitHub repo here.
At Applitools we are dedicated to making software testing faster and easier so that testers can be more effective and apps can be visually perfect. That’s why we built our industry-leading Visual AI and the Applitools Ultrafast Grid, a key component of the Applitools Ultrafast Test Cloud that enables ultrafast cross-browser testing. If you’re looking to do cross-browser testing better but don’t use Selenium, be sure to check out these links too for more info on how we can help:
The post Ultrafast Cross Browser Testing with Selenium Java appeared first on Automated Visual Testing | Applitools.
In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options and how you can get started with cross-browser testing today.
Cross-browser testing is a testing method for validating that the application under test works as expected on different browsers and devices, and at varying viewport sizes. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.
When I began in QA I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and with different browser types. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.
Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop.
This should be possible while ensuring:
There are various aspects to consider while implementing your cross-browser testing strategy.
Different devices and browsers: Chrome, Safari, Firefox, Edge
Thankfully IE is not in the list anymore (for most)!
You should first figure out which combinations of devices, browsers, and viewport sizes your user base uses to access your application.
PS: Each team member should have access to the product’s analytics data to understand usage patterns. This data, which includes OS and browser details (type, version, viewport size), is essential for planning and testing proactively instead of reacting to situations (= defects) later.
This will tell you the different browser types, browser versions, devices, and viewport sizes you need to consider in your testing and test automation strategy.
There are various ways you can perform cross-browser testing. Let’s understand them.
We usually have multiple browsers on our laptops / desktops. While there are other ways to get started, it is probably simplest to start implementing your cross-browser tests here. You also need a local setup to enable debugging and maintaining / updating the tests.
If mobile-web is part of the strategy, then you also need to have the relevant setup available on local machines to enable that.
While this may seem the easiest, it can get out of control very quickly.
Examples:
The choices can actually vary based on the requirements of the project and on a case by case basis.
As alternatives, we have the liberty to either create an in-house testing solution or go for a platform, license, or third-party tool to support our device-farm needs.
You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices.
This infrastructure can potentially be used in the following ways:
You can also opt to run the tests against browsers / devices in a cloud-based solution. You can select different device / browser options offered by various providers in the market that give you the wide coverage as per your requirements, without having to build / maintain / manage the same. This can also be used to run tests triggered from local machines, or from CI.
It is important to understand the evolution of browsers in recent years.
We need to factor this change into our cross-browser testing strategy.
In addition, AI-based cross-browser testing solutions, which use machine learning to help scale your automation execution and provide deep insights into the results from functional, performance, and user-experience perspectives, are becoming quite popular.
To get hands-on experience with this, I signed up for a free Applitools account, which comes with its powerful Visual AI, and implemented a few tests using this tutorial as a reference.
Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.
Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.
Reference: https://applitools.com/tutorials/overview/how-it-works.html
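To make that flow concrete, here is a minimal Selenium Java sketch of what such a test can look like; the app name, test name, and URL are placeholders chosen for illustration, not values from the tutorial.

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Create the Eyes object and point it at your Applitools account
Eyes eyes = new Eyes();
eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

WebDriver driver = new ChromeDriver();
try {
    // App name and test name below are placeholders
    eyes.open(driver, "Insurance App", "View policy details");
    driver.get("https://example.com/policy");  // placeholder URL
    // A single call performs the functional and visual validation of the page
    eyes.checkWindow("Policy details page");
    eyes.close();
} finally {
    driver.quit();
    eyes.abortIfNotClosed();
}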
Now that you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.
What if I told you that, with just the addition of the different device combinations, you could leverage the same single script to get functional and visual test results for every combination specified, covering the cross-browser testing aspect as well?
Seems too far-fetched?
It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!
Adding the lines of code below will do the magic. You can also change the configuration as per your requirements.
(The example below is from the Selenium Java SDK. Similar configuration can be supplied for the other SDKs.)
// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);
// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
// Set the configuration object to eyes
eyes.setConfiguration(config);
Now when you run the test again, say against the Chrome browser on your laptop, you will see results in the Applitools dashboard for all the browser and device combinations provided above.
You may be wondering: the test ran just once, on the Chrome browser. How did the results for all the other browsers and devices come up? And so fast?
This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:
On each eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid, which renders it across the configured browsers and devices.
What I like about this AI-based solution is that:
Here is the screenshot of the Applitools dashboard after I ran my sample tests:
The Ultrafast Grid and Applitools Visual AI can be integrated with many popular free and open-source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.
As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.
Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.
Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.
| | Local Setup | In-House Setup | Cloud Solution | AI-Based Solution (Applitools) |
|---|---|---|---|---|
| Infrastructure | Pros: fast feedback on the local machine. Cons: needs to be repeated for each machine where the tests need to execute; not all configurations can be set up locally | Pros: no inbound / outbound connectivity required. Cons: needs considerable effort to set up, maintain, and update the infrastructure on a continued basis | Pros: no effort required to build / maintain / update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network; latency issues may be seen as requests go to cloud-based browsers / devices | Pros: no effort required to set up |
| Setup and Maintenance | To be taken care of by each team member from time to time, including OS / browser version updates | To be taken care of by the internal team from time to time, including OS / browser version updates | To be taken care of by the service provider | To be taken care of by the service provider |
| Speed of Feedback | Slowest, as all dependencies must be taken care of and the test needs to be repeated for each browser / device combination | Depends on concurrent usage due to multiple test runs | Depends on network latency; network issues may cause intermittent failures; also depends on the reliability and connectivity of the service provider | Fast and seamless scaling |
| Security | Best, as everything stays in-house, using internal firewalls, VPNs, network, and data storage | Best, as everything stays in-house, using internal firewalls, VPNs, network, and data storage | High risk: needs inbound network access from the service provider to the internal test environments; browsers / devices will have access to the data generated by running the tests, so cleanup is essential; no control over who has access to the cloud service provider's infrastructure or whether they access your internal resources | Low risk: there is no inbound connection to your internal infrastructure; tests run on your internal network, so no data lives on Applitools servers other than the screenshots used for comparison with the baseline |
Depending on your project strategy, scope, manual or automation requirements and, of course, the hardware or infrastructure combinations, you should make a choice that not only suits your requirements but also gives you the best returns and results.
Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code and got amazingly high test coverage with very high accuracy. I recommend that everyone try it and experience it for themselves!
Want to get started with Applitools today? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.
Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.
The post What is Cross Browser Testing? Examples & Best Practices appeared first on Automated Visual Testing | Applitools.
How can you choose the best cross-browser testing tool for your needs? We’ll review the challenges of cross-browser testing and consider some leading cross-browser testing solutions.
Nowadays, testing a website or an app using a single browser or device will lead to disastrous consequences, and testing the same website or app on multiple browsers using ONLY the traditional functional testing approach may lead to production issues and lots of visual bugs.
Combinations of browsers, devices, viewports, and screen orientations (portrait or landscape) can reach into the thousands. Manually testing this vast number of possibilities is no longer feasible, and neither is simply running the usual functional testing scripts and hoping to cover the most critical aspects, regions, or functionalities of our sites.
In this article, we are going to focus on the challenges and leading solutions for cross-browser testing.
Cross-browser testing makes sure that your web apps work across different web browsers and devices. Usually, you want to cover the most popular browser configurations or the ones specified as supported browsers/devices based on your organization’s products and services.
Basically, it is needed because rendering differs across browsers and modern web apps have responsive designs; you also have to consider that each web browser handles JavaScript differently and may render things differently depending on viewport or device screen size. These rendering differences can result in costly bugs and a negative user experience.
Cross-browser testing has been around for quite some time now. Traditionally, testers run multiple tests in parallel on different browsers, and this is fine from a functional point of view.
Today, we know for a fact that running only these kinds of traditional functional tests across a set of browsers does not guarantee your website or app’s integrity. But let’s define and understand the difference between Traditional Functional Testing and Visual Testing. Traditional functional testing is a type of software testing where the basic functionalities of an app are tested against a set of specifications. On the other hand, Visual Testing allows you to test for visual bugs, which are extremely difficult to uncover with the traditional functional testing approach.
As mentioned, traditional functional testing on its own will not capture the visual aspect and could lead to a lack of coverage. You have to take into consideration the possibility of visual bugs, regardless of the number of elements you actually test. Even if you tested all of them, you may encounter visual bugs that lead to false negatives: your testing was done, your tests passed, and yet you did not capture the bug.
Today we have mobile and IoT device proliferation, complex responsive design viewport requirements, and dynamic content. Since rendering the UI is subjective, the majority of cross-browser defects are visual.
To handle all these possibilities or scenarios, you need a tool or framework that not only runs tests but provides reliable feedback – and not just false positives or tests pending to be approved or rejected.
When it comes to cross-browser testing, you have several options, same as for visual testing. In this article, we will explore some of the most popular cross-browser testing tools.
If you have the resources, time, and knowledge, you can spin up your own Selenium Grid and do some cross-browser testing. This may be useful based on your project size and approach.
As mentioned, if you understand the components and steps to accomplish this, go for it!
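For context, pointing a test at a self-hosted grid typically looks something like the following Selenium Java sketch; the hub URL and target site are placeholders, and each additional browser means another full execution of the same test.

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

// Placeholder address for wherever your grid hub is running
URL hub = new URL("http://localhost:4444/wd/hub");

// Run the test on a Chrome node registered to the grid
WebDriver chrome = new RemoteWebDriver(hub, new ChromeOptions());
chrome.get("https://example.com");
// ... functional assertions here ...
chrome.quit();

// Then repeat the whole run for every other browser you support
WebDriver firefox = new RemoteWebDriver(hub, new FirefoxOptions());
firefox.get("https://example.com");
// ... the same assertions again ...
firefox.quit();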
Now, be aware: maintaining a home-grown Selenium Grid cluster is not an easy task. You may run into difficulties or issues when running and maintaining hundreds of browser nodes. Because of this, most companies end up outsourcing this task to vendors like BrowserStack or LambdaTest in order to save time and energy and bring more stability to their Selenium Grid infrastructure.
Most of these vendors are really expensive, which means that you will need a dedicated project budget just for running your UI tests on their cloud. Not to mention the packages or plans you’ll have to acquire to run a decent number of parallel tests.
When it comes to cross-browser testing and visual testing, you could use any of the available tools or frameworks, for instance LambdaTest or BrowserStack. But how can we choose? Which one is better? Are they all offering the same thing?
Before choosing any Selenium Grid solution, there are some key inherent issues that we must take into consideration:
Applitools Ultrafast Grid is the next generation of cross-browser testing. With the Ultrafast Grid, you can run functional and visual tests once, and it instantly renders all screens across all combinations of browsers, devices, and viewports.
Visual AI is a technology that improves snapshot comparisons. It goes deeper than pixel-to-pixel comparisons to identify changes that would be meaningful to the human eye.
Visual snapshots provide a much more robust, comprehensive, and simpler mechanism for automating verifications. Instead of writing hundreds of lines of assertions with locators, you can write a single-line snapshot capture using Applitools Eyes.
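As a rough before-and-after sketch in Selenium Java (assuming a test where driver and eyes are already set up, and with locators and expected values invented purely for illustration):

// Traditional functional checks: one locator and one assertion per element
assertEquals("My Policy", driver.findElement(By.id("policy-title")).getText());
assertTrue(driver.findElement(By.id("premium-amount")).isDisplayed());
assertEquals("Renew", driver.findElement(By.cssSelector(".renew-button")).getText());
// ...and so on, for every element and state you care about

// Visual snapshot: a single call validates the entire rendered page
eyes.check(Target.window().fully());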
When you compound the stability of visual snapshots with the modern cross-platform testing technology of the Ultrafast Grid, that stability multiplies. This improved efficiency guarantees delivery of high-quality apps, on time and without the need for multiple suites or test scripts.
Think about the time it currently takes to complete a full testing cycle on your end using traditional cross-browser testing solutions, going from installing, writing, and running tests to analyzing, reporting, and maintaining them. Engineers now have the Ultrafast Grid and Visual AI technology, which can easily be set up in your framework and is capable of testing large, modern apps across multiple environments in just minutes.
Traditional cross-browser testing solutions that offer visual testing usually provide it as a separate feature or add-on that you have to pay for. What this feature does is basically take screenshots for you to compare with screenshots taken previously. You can imagine the amount of time it takes to accept or reject all these tests, and most of them will not necessarily bring useful intel, as the website or app may not change from one day to the next.
The Ultrafast Grid goes beyond simple screenshots. Applitools SDKs upload DOM snapshots, not screenshots, to the Ultrafast Grid. Snapshots include all the resources needed to render a page (HTML, CSS, etc.) and are much smaller than screenshots, so they upload much faster.
To learn more about the Ultrafast Grid functionality and configuration, take a look at this article > https://applitools.com/docs/topics/overview/using-the-ultrafast-grid.html
Here are some of the benefits and differences you’ll find when using this framework:
Selenium Grid solutions are everywhere, and the price varies between vendors and features. If you had infinite time, infinite resources, and an infinite budget, it would be ideal to run all the tests on all the browsers and analyze the results on every code change/build. But for a company trying to optimize its velocity and run tests on every pull request/build, the Applitools Ultrafast Grid provides a compelling balance between performance, stability, cost, and risk.
The post Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid appeared first on Automated Visual Testing | Applitools.
Applitools is proud to announce its positioning in the Major Players category in the IDC MarketScape: Worldwide Cloud Testing 2022 Vendor Assessment — Empowering Business Velocity. Applitools was also positioned in the Major Players category in the IDC MarketScape: Worldwide Mobile Testing and Digital Quality 2022 Vendor Assessment — Enabling Multimodal Dynamism for Digital Innovation.
Written by Melinda-Carol Ballou, Research Director, Agile ALM, Quality & Portfolio Strategies at IDC Research, this IDC study uses the IDC MarketScape model to provide an assessment of 24 vendors for worldwide cloud testing and enterprise automated software quality (ASQ) SaaS solutions.
“Participating vendors needed to have sufficient cloud testing automated software quality capabilities available in key areas of concern (e.g., test infrastructure provisioning and configuration management; deep analytics for analysis of performance optimization, service virtualization, and architectural and other analysis to enable visibility into the health of applications deployed in native and hybrid cloud; readiness for software targeting the cloud; and/or delivery of their ASQ software solution in the cloud with partner integration for other capabilities) for IDC clients.”
Source: IDC
Applitools products considered as part of these vendor assessments include Applitools Eyes and Applitools Ultrafast Test Cloud (the combination of Applitools Eyes, Applitools Ultrafast Grid and Applitools Native Mobile Grid).
As software development teams rapidly deliver new products and services to market through more frequent and shorter release cycles, they struggle to fully test the customer experience due to increasing application complexity and an explosion of device/browser combinations. When development teams are confident they can fix functional and visual bugs faster, they can push more high-quality code faster than ever before.
To enable automated software quality consistently, quickly, and at a fraction of the cost, Applitools extends to its customers the power of Applitools Visual AI—the only AI-powered computer vision that accurately mimics the human eye and brain to avoid undetected functional and visual bugs, minimizing false positive bug alerts.
Visual AI is trained on more than 1 billion images and supports analysis across custom regions, with advanced comparison modes/match levels and auto-maintenance of test results. Tests infused with Visual AI (via Applitools Eyes) are created 5.8x faster, are 3.8x more stable, and catch 45% more bugs vs. traditional functional testing. In addition, tests powered by Visual AI can take advantage of the ultrafast speed and stability of Applitools Ultrafast Test Cloud.
The Ultrafast Test Cloud can instantly validate entire application pages and detect issues on even the most complex and dynamic pages. It allows users to write and execute tests once locally with support for more than 50 test frameworks and programming languages including: Cypress, Storybook, Selenium Java, Selenium JavaScript, Selenium C#, Selenium Python, Selenium Ruby, Selenium IDE, Webdriver.IO, TestCafe, and more.
A single functional test run captures the DOM & CSS rules for every browser state, automatically rendering it in parallel across all browsers (Chrome, Firefox, Safari, Edge, and Internet Explorer) and viewports using the Ultrafast Grid. Screenshots are then instantly analyzed by Applitools Eyes to find functional and visual bugs.
Applitools integrates with all the major CI/CD platforms, including GitHub, GitLab, Bitbucket, Jenkins, Azure DevOps, Travis CI, Circle CI, Semaphore, TeamCity, and Bamboo, as well as with defect-tracking/collaboration systems, including Jira, CA Rally, Microsoft Teams, and Slack. The Applitools Eyes dashboard also enables collaboration between design, product, development, testing, and DevOps teams.
Automated software testing powered by Visual AI can help developers test 18 times faster across the full test cycle including writing, running, analyzing, reporting, and maintaining tests. That’s because Visual AI-powered automated tests that leverage the Ultrafast Test Cloud can run 30-50x faster than traditional solutions, with 99.9999% accuracy.*
“References contacted by IDC found that Applitools’ solution led to greater efficiency in testing operations. They reported cost savings of up to 95% by making the switch to Applitools in combination with adoption of containers — additional testing tools were not required. Another customer reported that it was able to increase the speed of testing to reduce approval time for moving websites to production from 29 days to 1.5 hours. This is especially significant for a company maintaining 2,400 websites — Applitools helped ensure that sites maintained the same digital experience look and feel across pages as they changed dynamically, supporting up to five browsers and four viewpoints”.
Source: IDC
Applitools is helping more than 400 of the world’s top digital brands release, test, and monitor flawless mobile, web, and native apps in a fully automated way. We help our customers modernize critical test automation use cases — functional and visual regression testing, web and mobile UI/UX testing, cross browser / cross device testing, localization testing, PDF testing, digital accessibility and legal/compliance testing — to transform the way their businesses deliver innovation at the speed of DevOps without jeopardizing brand integrity.
Learn about the impact Applitools is having on these organizations by reading the customer stories in our case study library: https://applitools.com/case-studies/.
To support the community, Applitools also maintains and manages Test Automation University, a free online learning community with over 100,000 members, that hosts more than 50 courses on a wide range of test automation topics and best practices.
“Being recognized as a major player in the IDC MarketScape for worldwide cloud testing underscores our ability to bring speed and stability to automated software quality with Visual AI and Ultrafast Test Cloud. We’re the world’s fastest-growing software testing company because our customers are using the most advanced, yet simple way to ensure brand integrity across any digital end-user experience. Their success is our success.”
Moshe Milman, COO and co-founder, Applitools
It takes a village. Our fantastic team has worked long and hard to achieve this recognition from IDC Research, but this is also the direct result of the feedback and collaboration we’ve had with our customers – we could not have done it without you. Because of our strong community and our valued partnerships, the industry has also recognized Applitools as the:
Thank you for trusting Applitools to deliver flawless automated testing for you, and we’re excited to head into the future of testing together.
For more, schedule a demo and see for yourself how Visual AI is helping industry leaders deliver visually perfect digital experiences.
*Modern Cross Browser Testing Through Visual AI Report https://applitools.com/modern-cross-browser-testing-report/
The post Applitools Recognized as ‘Major Player’ in IDC MarketScape: Worldwide Cloud Testing 2022 Vendor Assessment appeared first on Automated Visual Testing | Applitools.
Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code.
Let’s face it: modern web apps are complex. If a team wants to provide a seamless user experience on a deadline, they need to squeeze the most out of the development resources they have. Component libraries help tremendously. Developers can build individual components for small things like buttons and big things like headers to be used anywhere in the frontend with a consistent look and feel.
Storybook is one of the most popular tools for building web components. It works with all the popular frameworks, like React, Angular, and Vue. With Storybook, you can view tweaks to components as you develop their “stories.” It’s awesome! However, manually inspecting components only works small-scale when you, as the developer, are actively working on any given component. How can a team test their Storybook components at scale? And how does that fit into a broader web app testing strategy?
What if I told you that you could automatically do cross-browser testing for Storybook components without needing to define any new tests or write any new automation code? And what if I told you that it could fit seamlessly into your existing development workflow? You can do this with the power of Applitools and your favorite CI tool! Let’s see how.
Historically, web app testing strategies divide functional testing into three main levels: unit tests, integration tests, and end-to-end tests.
These three levels make up the classic Testing Pyramid. Each level of testing mitigates a unique type of risk. Unit tests pinpoint problems in code, integration tests catch problems where entities meet, and end-to-end tests exercise behaviors like a user.
The rise of frontend component libraries raises an interesting question: Where do components belong among these levels? Components are essentially units of the UI. In that sense, they should be tested individually as “UI units” to catch problems before they become widespread across multiple app views. One buggy component could unexpectedly break several pages. However, to test them properly, they should be rendered in a browser as if they were “live.” They might even call APIs indirectly. Thus, arguably, component testing should be sandwiched between traditional integration and end-to-end testing.
Wait, another level of testing? Nobody has time for that! It’s hard enough to test adequate coverage at the three other levels, let alone automate those tests. Believe me, I understand the frustration. Unfortunately, component libraries bring new risks that ought to be mitigated.
Thankfully, Applitools provides a way to visually test all the components in a Storybook library with the Applitools Eyes SDK for Storybook. All you need to do is install the @applitools/eyes-storybook package into your web app project, configure a few settings, and run a short command to launch the tests. Applitools Eyes will turn each story for each component into a visual test case. On the first run, it will capture a visual snapshot for each story as a “baseline” image. Then, subsequent runs will capture “checkpoint” snapshots and use Visual AI to detect any changes. You don’t need to write any new test code – tests become a side effect of creating new components and stories!
In this sense, visual component testing with Applitools is like autonomous testing. Test generation and execution is completely automated, and humans review the results. Since testing can be done autonomously, component testing is easy to add to an existing testing strategy. It mitigates lots of risk for low effort. Since it covers components very well, it can also reduce the number of tests at other layers. Remember, the goal of a testing strategy is not to cover all the things but rather to use available resources to mitigate as much risk as possible. Covering a whole component library with an autonomous test run frees up folks to focus on other areas.
Let’s walk through how to set up visual component tests for a Storybook library. You can follow the steps below to add visual component tests to any web app that has a Storybook library. Give it a try on one of your own apps, or use my example React app that I’ll use as an example below. You’ll also need Node.js installed as a prerequisite.
To get started, you’ll need an Applitools account to run visual tests. If you don’t already have an Applitools account, you can register for free using your email or GitHub account. That will let you run visual tests with basic features.
Once you get your account, store your API key as an environment variable. On macOS or Linux, use this command:
export APPLITOOLS_API_KEY=<your-api-key>
On Windows:
set APPLITOOLS_API_KEY=<your-api-key>
Next, you need to add the eyes-storybook package to your project. To install this package into a new project, run:
npm install --save-dev @applitools/eyes-storybook
Finally, you’ll need to add a little configuration for the visual tests. Add a file named applitools.config.js to the project’s root directory, and add the following contents:
module.exports = {
  concurrency: 1,
  batchName: "Visually Testing Storybook Components"
}
The concurrency setting defines how many visual snapshot comparisons the Applitools Ultrafast Test Cloud will perform in parallel. (With a free account, you are limited to 1.) The batchName setting defines a name for the batch of tests that will appear in the Applitools dashboard. You can learn about these settings and more under Advanced Configuration in the docs.
That’s it! Now, we’re ready to run some tests. Launch them with this command:
npx eyes-storybook
Note: If your components use static assets like image files, then you will need to append the -s option with the path to the directory for static files. In my example React app, this would be -s public.
The command line will print progress as it tests each story. Once testing is done, you can see all the results in the Applitools dashboard:
Run the tests a second time for checkpoint comparisons:
If you change any of your components, then tests should identify the changes and report them as “Unresolved.” You can then visually compare differences side-by-side in the Applitools dashboard. Applitools Eyes will highlight the differences for you. Below is the result when I changed a button’s color in my React app:
You can give the changes a thumbs-up if they are “right” or a thumbs-down if they are due to a regression. Applitools makes it easy to pinpoint changes. It also provides auto-maintenance features to minimize the number of times you need to accept or reject changes.
When Applitools performs visual testing, it captures snapshots from tests running on your local machine, but it does everything else in the Ultrafast Test Cloud. It rerenders those snapshots – which contain everything on the page – against different browser configurations and uses Visual AI to detect any changes relative to baselines.
If no browsers are specified for Storybook components, Applitools will run visual component tests against Google Chrome running on Linux. However, you can explicitly tell Applitools to run your tests against any browser or mobile device.
You might not think you need to do cross-browser testing for components at first. They’re just small “UI units,” right? Well, however big or small, different browsers render components differently. For example, a button may have rectangular edges instead of round ones. Bigger components are more susceptible to cross-browser inconsistencies. Think about a navbar with responsive rendering based on viewport size. Cross-browser testing is just as applicable for components as it is for full pages.
Configuring cross-browser testing for Storybook components is easy. All you need to do is add a list of browser configs to your applitools.config.js file like this:
module.exports = {
  concurrency: 1,
  batchName: "Visually Testing Storybook Components",
  browser: [
    // Desktop
    {width: 800, height: 600, name: 'chrome'},
    {width: 700, height: 500, name: 'firefox'},
    {width: 1600, height: 1200, name: 'ie11'},
    {width: 1024, height: 768, name: 'edgechromium'},
    {width: 800, height: 600, name: 'safari'},
    // Mobile
    {deviceName: 'iPhone X', screenOrientation: 'portrait'},
    {deviceName: 'Pixel 2', screenOrientation: 'portrait'},
    {deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
    {deviceName: 'Nexus 10', screenOrientation: 'portrait'},
    {deviceName: 'iPad Pro', screenOrientation: 'landscape'},
  ]
}
This declaration includes ten unique browser configurations: five desktop browsers with different viewport sizes, and five mobile devices with both portrait and landscape orientations. Every story will run against every specified browser. If you run the test suite again, there will be ten times as many results!
As shown above, my batch included 90 unique test instances. Even though that’s a high number of tests, Applitools Ultrafast Test Cloud ran them in only 32 seconds! That really is ultrafast for UI tests.
Applitools Eyes makes it easy to run visual component tests, but to become truly autonomous, these tests should be triggered automatically as part of regular development workflows. Any time someone makes a change to these components, tests should run, and the team should receive feedback.
We can configure Continuous Integration (CI) tools like Jenkins, CircleCI, and others for this purpose. Personally, I like to use GitHub Actions because they work right within your GitHub repository. Here’s a GitHub Action I created to run visual component tests against my example app every time a change is pushed or a pull request is opened for the main branch:
name: Run Visual Component Tests

on:
  push:
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Node.js
        uses: actions/setup-node@v2
      - name: Install dependencies
        run: npm install
      - name: Run visual component tests
        run: npx eyes-storybook -s public
        env:
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}
The only extra configuration needed was to add my Applitools API key as a repository secret.
Components are just one layer of complex modern web apps. A robust testing strategy should include adequate testing at all levels. Thankfully, visual testing with Applitools can take care of the component layer with minimal effort. Unit tests can cover how the code works, such as a component’s play method. Integration tests can cover API requests, and end-to-end tests can cover user-centric behaviors. Tests at all these levels together provide great protection for your app. Don’t neglect any one of them!
The post Testing Storybook Components in Any Browser – Without Writing Any New Tests! appeared first on Automated Visual Testing | Applitools.