The post What is Cross Browser Testing? Examples & Best Practices appeared first on Automated Visual Testing | Applitools.
In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options, and how you can get started with cross-browser testing today.
Cross Browser Testing is a testing method for validating that the application under test works as expected across different browsers, devices, and viewport sizes. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.
When I began working in QA, I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and in different browsers. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.
Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop.
This should be possible while ensuring:
There are various aspects to consider while implementing your cross-browser testing strategy.
“Different devices and browsers: Chrome, Safari, Firefox, Edge”
Thankfully IE is not in the list anymore (for most)!
You should first figure out the important combinations of devices, browsers, and viewport sizes your user base uses to access your application.
PS: Each team member should have access to the analytics data of the product to understand its usage patterns. This data, which includes OS and browser details (type, version, viewport sizes), is essential for planning and testing proactively, instead of reacting to situations (= defects) later.
This will tell you the different browser types, browser versions, devices, and viewport sizes you need to consider in your testing and test automation strategy.
There are various ways you can perform cross-browser testing. Let’s understand them.
We usually have multiple browsers on our laptops / desktops. While there are other ways to get started, it is probably simplest to start implementing your cross browser tests here. You also need a local setup to enable debugging and maintaining / updating the tests.
If mobile-web is part of the strategy, then you also need to have the relevant setup available on local machines to enable that.
While this may seem the easiest, it can get out of control very quickly.
Examples:
The choices can actually vary based on the requirements of the project and on a case by case basis.
As alternatives, we can either build an in-house testing solution or adopt a platform / license / third-party tool to support our device farm needs.
You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices.
This infrastructure can potentially be used in the following ways:
You can also opt to run the tests against browsers / devices in a cloud-based solution. You can select from the device / browser options offered by various providers in the market to get the wide coverage your requirements demand, without having to build, maintain, or manage that infrastructure yourself. This can also be used to run tests triggered from local machines or from CI.
It is important to understand the evolution of browsers in recent years.
We need to factor this change in our cross browser testing strategy.
In addition, AI-based cross-browser testing solutions are becoming quite popular, which use machine learning to help scale your automation execution and get deep insights into the results – from a functional, performance and user-experience perspective.
To get hands-on experience with this, I signed up for a free Applitools account, which uses powerful Visual AI, and implemented a few tests using this tutorial as a reference.
Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.
Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.
Reference: https://applitools.com/tutorials/overview/how-it-works.html
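For illustration, here is a minimal sketch of what such a test could look like with the Selenium-Java SDK. The URL, locator, and app / test names are made-up examples, not taken from the tutorial:

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class InsurancePolicyVisualTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        // The API key can also be picked up from the APPLITOOLS_API_KEY environment variable
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            // Start the visual test; app and test names are illustrative
            eyes.open(driver, "Insurance App", "View policy details");
            driver.get("https://example.com/policy");          // hypothetical URL
            driver.findElement(By.id("view-policy")).click();  // hypothetical locator
            // Functional + visual checkpoint of the whole window
            eyes.checkWindow("Policy details page");
            eyes.close(); // ends the test; throws if differences from the baseline were found
        } finally {
            driver.quit();
            eyes.abortIfNotClosed(); // clean up if the test never reached eyes.close()
        }
    }
}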
Now that you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.
What if I told you that just by adding the different device combinations, you could leverage the same single script to get functional and visual test results on all of the specified combinations, covering the cross browser testing aspect as well?
Seems too far-fetched?
It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!
Adding the lines of code below will do the magic. You can also change the configurations as per your requirements.
(The example below is from the Selenium-Java SDK. Similar configuration can be supplied for the other SDKs.)
// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);

// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);

// Set the configuration object to eyes
eyes.setConfiguration(config);
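For completeness, the config and eyes objects used above are typically created with an Ultrafast Grid runner. Here is a minimal sketch of that wiring; the concurrency value of 10 and the class / package names reflect the Selenium-Java SDK as I understand it and may differ between SDK versions:

import com.applitools.eyes.selenium.Configuration;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.visualgrid.services.VisualGridRunner;

public class UltrafastGridSetup {
    public static Eyes createEyes() {
        // Run checkpoints on the Ultrafast Grid with up to 10 parallel renders
        VisualGridRunner runner = new VisualGridRunner(10);
        Eyes eyes = new Eyes(runner);
        Configuration config = eyes.getConfiguration();
        // ... the addBrowser / addDeviceEmulation calls shown above go here ...
        eyes.setConfiguration(config);
        return eyes;
    }
}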
Now when you run the test again, say against the Chrome browser on your laptop, you will see results in the Applitools dashboard for all the browser and device combinations provided above.
You may be wondering: the test ran just once, on the Chrome browser. How did the results from all the other browsers and devices come up? And so fast?
This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:
With every eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid.
What I like about this AI-based solution is that:
Here is the screenshot of the Applitools dashboard after I ran my sample tests:
The Ultrafast Test Grid and Applitools Visual AI can be integrated with many popular, free, and open-source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.
As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.
Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.
Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.
| | Local Setup | In-House Setup | Cloud Solution | AI-Based Solution (Applitools) |
|---|---|---|---|---|
| Infrastructure | Pros: fast feedback on the local machine. Cons: needs to be repeated for each machine where the tests need to execute; not all configurations can be set up locally | Pros: no inbound / outbound connectivity required. Cons: needs considerable effort to set up, maintain, and update the infrastructure on a continued basis | Pros: no effort required to build / maintain / update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network; latency issues may be seen as requests go to cloud-based browsers / devices | Pros: no effort required to set up |
| Setup and Maintenance | To be taken care of by each team member from time to time, including OS / browser version updates | To be taken care of by the internal team from time to time, including OS / browser version updates | To be taken care of by the service provider | To be taken care of by the service provider |
| Speed of Feedback | Slowest, as all dependencies have to be taken care of and the test needs to be repeated for each browser / device combination | Depends on concurrent usage due to multiple test runs | Depends on network latency; network issues may cause intermittent failures; depends on the reliability and connectivity of the service provider | Fast and seamless scaling |
| Security | Best, as everything is in-house, using internal firewalls, VPNs, network, and data storage | Best, as everything is in-house, using internal firewalls, VPNs, network, and data storage | High risk: needs inbound network access from the service provider to the internal test environments; browsers / devices have access to the data generated by running the tests, so cleanup is essential; no control over who has access to the cloud service provider's infrastructure or whether they access your internal resources | Low risk: no inbound connection to your internal infrastructure; tests run on your internal network, so no data is stored on Applitools servers other than the screenshots used for comparison with the baseline |
Depending on your project strategy, scope, manual or automation requirements, and of course the hardware or infrastructure combinations, you should make a choice that not only suits the requirements but also gives you the best returns and results.
Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code and got amazingly high test coverage with very high accuracy. I recommend that everyone try it and experience it for themselves!
Want to get started with Applitools today? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.
Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.
The post Modern Cross Device Testing for Android & iOS Apps appeared first on Automated Visual Testing | Applitools.
Learn the cross device testing practices you need to implement to get closer to Continuous Delivery for native mobile apps.
Modern cross device testing is the system by which you verify that an application delivers the desired results on a wide variety of devices and formats. Ideally this testing will be done quickly and continuously.
There are many articles explaining how to do CI/CD for web applications, and many companies are already doing it successfully, but there is not much information available out there about how to achieve the same for native mobile apps.
This post will shed light on the cross device testing practices you need to implement to get a step closer to Continuous Delivery for native mobile apps.
The number of mobile devices used globally is staggering. Based on the data from bankmycell.com, we have 6.64 billion smartphones in use.
Even if we are building and testing an app that impacts only a fraction of this number, that is still a huge number.
The below chart shows the market share by some leading smartphone vendors over the years.
One of the biggest challenges for testing mobile apps is that, across all manufacturers combined, there are thousands of device types in use today. Depending on the popularity of your app, this means there is a huge number of devices your users could be using.
These devices will have variations based on:
It is clear that you cannot run your tests on every type of device your users may be using.
So how do you get quick feedback and confidence from your testing that (almost) no user will get impacted negatively when you release a new version of your app?
Before we think about the strategy for running your automated tests for mobile apps, we need to have a good and holistic mobile testing strategy.
Along with testing the app functionality, mobile testing has additional dimensions, and hence complexities as compared with web-app testing.
You need to understand the impact of the aspects mentioned above and see what may, or may not be applicable to you.
Here are some high-level aspects to consider in your mobile testing strategy:
Once you have figured out your Mobile Testing Strategy, you now need to think about how and what type of automated tests can give you good, reliable, deterministic and fast feedback about the quality of your apps. This will result in you identifying the different layers of your test automation pyramid.
Remember: It is very important to execute all types of automated tests on every code change and every new app build. The functional / end-to-end / UI tests for your app should also be run at this time.
Additionally, you need to be able to run the tests on a local developer / QA machine, as well as in your Continuous Integration (CI) system. In the case of native / hybrid mobile apps, developers and QAs should be able to install the app on the (local) devices available to them and run the tests against it. For CI-based execution, you need some form of device farm, available locally in your network or cloud-based, to allow execution of the tests.
This continuous testing approach will provide you with quick feedback and allow you to fix issues almost as soon as they creep in the app.
Testing and automating mobile apps has additional complexities. You need to install the app on a device before your automated tests can be run against it.
Let’s explore your options for devices.
Real devices are ideal to run the tests. Your users / customers are going to use your app using a variety of real devices.
In order to allow proper development and testing to be done, each team member needs access to the relevant types of devices (based on their user base).
However, it is not easy for each team member (developer / tester) to have a variety of devices available for running the automated tests.
The challenges of having the real devices could be related to:
Hence we need a different strategy for executing tests on mobile devices. Emulators and Simulators come to the rescue!
Before we get into specifics about the execution strategy, it is good to understand the differences between an emulator and a simulator.
Android-device emulators and iOS-device simulators make it easy for any team member to spin up a device.
An emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system
An emulator can mimic the operating system, software, and hardware features of the Android device.
A Simulator runs on your Mac and behaves like a standard Mac app while simulating iPhone, iPad, Apple Watch, or Apple TV environments. Each combination of a simulated device and software version is considered its own simulation environment, independent of the others, with its own settings and files. These settings and files exist on every device you test within a simulation environment.
An iOS simulator mimics the internal behavior of the device. It cannot mimic the OS / hardware features of the device.
Emulators / Simulators are a great and cost-effective way to overcome the challenges of real devices. These can easily be created as per the requirements and needs by any team member and can be used for testing and also running automated tests. You can also relatively easily set up and use the emulators / simulators in your CI execution environment.
While emulators / simulators may seem like they will solve all the problems, that is not the case. Like with anything, you need to do a proper evaluation and figure out when to use real devices versus emulators / simulators.
Below are some guidelines that I refer to.
The above approach of using real-devices or emulators / simulators will help your team to shift-left and achieve continuous testing.
There is one challenge that still remains – scaling! How do you ensure your tests run correctly on all supported devices?
A classic, or rather traditional, way to solve this problem is to repeat the automated test execution on a carefully chosen variety of devices. This would mean that if you have 5 important types of devices and 100 automated tests, then you are essentially running 500 tests.
This approach has multiple disadvantages:
We all know these disadvantages; however, there has been no better way to overcome them. Or is there?
The Applitools Native Mobile Grid for Android and iOS apps can easily help you to overcome the disadvantages of traditional cross-device testing.
It does this by running your test on one device, but getting the execution results from all the devices of your choice, automatically. Well, almost automatically. This is how the Applitools Native Mobile Grid works:
Below is an example of how to specify Android devices for Applitools Native Mobile Grid:
Configuration config = eyes.getConfiguration(); //Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Pixel_4, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_ULTRA, ScreenOrientation.PORTRAIT));
eyes.setConfiguration(config);
Below is an example of how to specify iOS devices for Applitools Native Mobile Grid:
Configuration config = eyes.getConfiguration(); //Configure the 15 devices we want to validate asynchronously
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_mini));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XS));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_X));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XR));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_8));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_7));
eyes.setConfiguration(config);
Every call to Applitools to do a visual validation will automatically do the functional and visual validation for each device specified in the configuration above.
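To make this concrete, here is a rough sketch of how such a checkpoint might sit inside an Appium-based test. The Appium endpoint, capabilities, app path, and names are illustrative assumptions, and the device configuration is the one built above:

import com.applitools.eyes.appium.Eyes;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class NativeMobileGridTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("app", "/path/to/app.apk"); // hypothetical app path
        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);

        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        // eyes.setConfiguration(config) with the devices listed above would be applied here
        try {
            eyes.open(driver, "MyMobileApp", "Home screen across devices");
            eyes.checkWindow("Home screen"); // validated on every configured device
            eyes.close();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed();
        }
    }
}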
The Applitools Native Mobile Grid has many advantages.
Read this post on How to Scale Mobile Automation Testing Effectively for more specific details of this amazing solution!
Using the Applitools Visual AI allows you to extend coverage at the top of your Test Automation Pyramid by including AI-based visual testing along with your UI/UX testing.
Using the Applitools Native Mobile Grid for cross device testing of Android and iOS apps makes your CI loop faster by providing seamless scaling across all supported devices as part of the same test execution cycle.
You can watch my video on Mobile Testing 360deg (https://applitools.com/event/mobile-testing-360deg/), where I share many examples and details related to the above that you can include as part of your mobile testing strategy.
To start using the Native Mobile Grid, simply sign up at the link below to request access. You can read more about the Applitools Native Mobile Grid in our blog post or on our website.
Happy testing!
The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.
In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.
From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on having an incredible amount of computing power available to them and the ability to display any of millions of applications that are easy to use.
The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability of a computer to see and interpret images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.
Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.
Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.
Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.
In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.
As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.
Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI.
Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.
One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.
Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant.
Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.
Traditional testing methods for software testing often require a lot of manual testing. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – requiring functional testing, visual testing and cross browser testing – has long been difficult to achieve with automation.
Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use.
At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster – even continuous – releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test (see below). Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.
At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test.
That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.
Additionally, due to the high level of accuracy, and efficient validation of the entire screen, Visual AI opens the door to simplifying and accelerating the challenges of cross browser and cross device testing. Leveraging an approach for ‘rendering’ rather than ‘executing’ across all the device/browser combinations, teams can get test results 18.2x faster using the Applitools Ultrafast Test Cloud than traditional execution grids or device farms.
As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.
In the world of software testing, we’re excited to use Visual AI to move past simply improving automated testing – we are paving the way towards autonomous testing. For this vision (no pun intended), we have been repeatedly recognized as a leader by the industry and our customers.
What is Visual Testing (blog)
The Path to Autonomous Testing (video)
What is Applitools Visual AI (learn)
Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)
How AI Can Help Address Modern Software Testing (blog)
The Impact of Visual AI on Test Automation (report)
How Visual AI Accelerates Release Velocity (blog)
Modern Functional Test Automation Through Visual AI (free course)
Computer Vision defined (Wikipedia)
The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.
Native mobile testing can be slow and error-prone with questionable ROI. With Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps with stability, speed, and security – in parallel across dozens of devices. The new offering extends the innovation of the Ultrafast Cloud beyond browsers and into mobile applications.
Mobile testing has a long and difficult history. Many industry-standard tools and solutions have struggled with the challenge of testing across an extremely wide range of devices, viewports and operating systems.
The approach in use by much of the industry today is to utilize a lab made up of emulators, simulators, or even large farms of real devices. The tests must then be run on every device independently. The process is not only costly, slow, and insecure, but it is prone to errors as well.
At Applitools, we had already developed technology to solve a similar problem for web testing, and we were determined to solve this issue for mobile testing too.
Today, we are introducing the Ultrafast Test Cloud for Native Mobile. We built on the success of the Ultrafast Test Cloud Platform, which is already being used to boost the performance and quality of responsive web testing by 150 of the world’s top brands. The Ultrafast Test Cloud for Native Mobile allows teams to run automated tests on native mobile apps on a single device and instantly render the results across any desired combination of devices.
“This is the first meaningful evolution of how to test native mobile apps for the software industry in a long time,” said Gil Sever, CEO and co-founder of Applitools. “People are increasingly going to mobile for everything. One major area of improvement needed in delivering better mobile apps faster, is centered around QA and testing. We’re building upon the success of Visual AI and the Ultrafast Test Cloud to make the delivery and execution of tests for native mobile apps more consistent and faster than ever, and at a fraction of the cost.”
Last year we introduced our Ultrafast Test Grid, enabling teams to test web and responsive web applications against all combinations of browsers, devices, and viewports with blazing speed. We’ve seen how some of the largest companies in the world have used the power of Visual AI and the Ultrafast Test Grid to execute their visual and functional tests more rapidly and reliably on the web.
We’re excited to now be able to offer the same speed, agility, and security for native mobile applications. If you’re familiar with our current Ultrafast Test Grid offering, you’ll find the experience a familiar one.
Mobile usage continues to rise globally, and more and more critical activity – from discovery to research and purchase – is taking place online via mobile devices. Consumers are demanding higher and higher quality mobile experiences, and a poorly functioning site or visual bugs can detract significantly from the user’s experience. There is a growing portion of your audience you can only convert with a five-star quality app experience.
While testing has traditionally been challenging on mobile, the Ultrafast Test Cloud for Native Mobile increases your ability to test quickly, early and often. That means you can develop a superior mobile experience at less cost than the competition, and stand out from the crowd.
With this announcement, we’re also launching our free early access program, with access to be granted on a limited basis at first. Prioritization will be given to those who register early. To learn more, visit the link below.
The post Automating Functional / End-2-End Tests Across Multiple Platforms appeared first on Automated Visual Testing | Applitools.
This post talks about an approach to Functional (end-to-end) Test Automation that works for a product available on multiple platforms.
It shares details on the thought process and criteria involved in creating a solution, including how to write the tests and run them across multiple platforms without any code changes.
Lastly, the open-sourced solution also has examples of how to implement a test that orchestrates multiple devices / browsers to simulate multiple users interacting with each other as part of the same test.
We will cover the following topics.
How many times do we see products available only on a single platform? For example, Android app only, or iOS app only?
Organisations typically start building the product on a particular platform, but then they do expand to other platforms as well.
Once the product is available on multiple platforms, do they differ in their functionality? There would definitely be some UX differences, and in some cases the way to accomplish the functionality would be different, but the business objectives and features would still be similar across platforms. Also, one platform may be ahead of the other in terms of feature parity.
The above aspects of product development are not new.
The interesting question is – how do you build your Functional (End-2-End / UI / System) Test Automation for such products?
To answer this question, let’s take an example of any video conferencing application – something that we would all be familiar with in these times. We will refer to this application as “MySocialConnect” for the remainder of this post.
MySocialConnect is available on the following platforms:
In terms of functionality, the majority is the same across all these platforms. Example:
There are also some functionality differences that would exist. Example:
So, repeating the big question for MySocialConnect is – how do you build your Functional (End-2-End / UI / System) Test Automation for such products?
I would approach Functional automation of MySocialConnect as follows:
In addition, I need the following capabilities in my automation:
To help implement the criteria mentioned above, I built (and open-sourced on github) my automation framework – teswiz. The implementation is based on the discussion and guidelines in [Visual] Mobile Test Automation Best Practices and Test Automation in the World of AI & ML.
After a lot of consideration, I chose the following tech stack and toolset to implement my automated tests in teswiz.
Using Cucumber, the tests are specified with the following criteria:
Based on these criteria, here is a simple example of how the test can be written.
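The original example appears as an image and is not reproduced here; a hypothetical scenario in that style (the step wording and tag names are my own illustration, not taken verbatim from teswiz) could look like this:

@android @web
Scenario: Host starts a new meeting
  Given I sign in to MySocialConnect
  When I start a new meeting
  Then I should see the meeting screen with my video enabled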
The tags on the above test indicate that the test is implemented and ready for execution against the Android apk and the web browser.
Given the context of MySocialConnect, implementing tests that are able to simulate real meeting scenarios would add the most value – as that is the crux of the product.
Hence, there is support built into the teswiz framework to allow implementation of multi-user scenarios. The main criteria for implementing such scenarios are:
Here is a simple example of how this test can be specified.
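Again, the original example is an image; a hypothetical sketch of such a multi-user scenario (the tag and step wording are assumptions for illustration) might be:

@multiuser-android-web
Scenario: Two users join the same meeting from different platforms
  Given "I" start a new meeting on "android"
  When "you" join my meeting on "web"
  Then "I" should see 2 participants in the meeting
  And "you" should see 2 participants in the meeting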
In the above example, there are 2 users – “I” and “you”, each on a different platform – “android” and “web” respectively.
The automated tests are run in different ways – depending on the context.
For example, in CI we may want to run all the tests for each of the supported platforms.
However, on local machines, the QA / SDET / Developers may want to run only a specific subset of the tests, be it for debugging or for verifying a new test implementation.
Also, there may be cases where you want to run the tests against your application in a different environment.
The teswiz framework supports all these configurations, which can be controlled from the command line. This avoids having to make any code / configuration file changes to run a specific subset or type of tests.
This is the high-level architecture of the teswiz framework.
Based on the data from the study done on the “Impact of Visual AI on Test Automation,” Applitools Visual AI helps automate your Functional Tests faster, while making the execution more stable. Along with this, you will get increased test coverage and will be able to find significantly more functional and visual issues compared to the traditional approach.
You can also scale your Test Automation execution seamlessly with the Applitools UltraFast Test Cloud and use the Contrast Advisor capability to ensure the application-under-test meets the accessibility guidelines of the WCAG 2.0 / 2.1 standards very early in the development stage.
Read this blog post about “Visual Testing – Hype or Reality?” to see some real data of how you can reduce the effort, while increasing the test coverage from our implementation significantly by using Applitools Visual AI.
Hence it was a no-brainer to integrate Applitools Visual AI in the teswiz framework to support adding visual assertions to your implementation simply by providing the APPLITOOLS_API_KEY. Advanced configurations to override the defaults for Applitools can be done via the applitools_config.json file.
This integration works for all the supported browsers of WebDriver and all platforms supported by Appium.
It is very important to have good and rich reports of your test execution. These reports not only help pinpoint the reasons for failing tests, but should also give an understanding of execution trends and the quality of the product under test.
I have used ReportPortal.io as my reporting tool – it is extremely easy to set up and use, and it allows me to add screenshots, log files, and other important information along with the test execution to make root cause analysis easy.
I have open-sourced this teswiz framework so you do not need to reinvent the wheel. See this page to get started – https://github.com/znsio/teswiz#what-is-this-repository-about
Feel free to raise issues / PRs against the project for adding more capabilities that will benefit all.
The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.
If you think like the smartest people in software, you conclude that testing time detracts from software productivity. Investments in parallel test platforms pay off by shortening the time to validate builds and releases. But, you wonder about the limits of parallel testing. If you invest in infrastructure for fast testing across multiple browsers, do you capture failures that justify such an investment?
Back in the day, browsers used different code bases. In the 2000s and early 2010s, most application developers struggled to ensure cross browser behavior. There were known behavior differences among Chrome, Firefox, Safari, and Internet Explorer.
Annoyingly, each major version of Internet Explorer had its own idiosyncrasies. When do you abandon users who still run IE 6 beyond its end-of-support date? How do you handle the IE 6 through IE 10 behavioral differences?
While Internet Explorer differences could be tied to major versions of operating systems, Firefox and Chrome released updates multiple times per year. Behaviors could change slightly between releases. How do you maintain your product's behavior on browsers in your customers' hands that you might not have developed with or tested against?
Cross browser testing proved itself a necessary evil to catch potential behavior differences. In the beginning, app developers needed to build their own cross browser infrastructure. Eventually, companies arose to provide cross browser (and then cross device) testing as a service.
In the 2020s, speed can provide a core differentiator for app providers. An app that delivers features more quickly can dominate a market. Quality issues can derail that app, so coverage matters. But, how do app developers ensure that they get a quality product without sacrificing speed of releases?
In this environment, some companies invest in cross browser test infrastructure or test services. They invest in the large parallel infrastructure needed to create and maintain cross browser tests. And, the bulk of uncovered errors end up being rendering and visual differences. So, these tests require some kind of visual validation. But, do you really need to repeatedly run each test?
Applitools concluded that repeating tests required costly infrastructure as well as costly test maintenance. App developers intend that one server response work for all browsers. With its Ultrafast Grid, Applitools can capture the DOM state on one browser and then repeat it across the Applitools Ultrafast Test Cloud. Testers can choose among browsers, devices, viewport sizes and multiple operating systems. How much faster can this be?
In the Applitools Ultrafast Cross Browser Hackathon, participants used the traditional legacy method of running tests across multiple browsers to compare behavior results. Participants then compared their results with the more modern approach using the Applitools Ultrafast Grid. Read here about one participant’s experiences.
The time that matters is the time that lets a developer know the details about a discovered error after a test run. For the legacy approach, coders wrote tests for each platform of interest, including validating and debugging the function of each app test on each platform. Once the legacy test had been coded, the tests were run, analyzed, and reports were generated.
For the Ultrafast approach, coders wrote their tests using Applitools to validate the application behavior. These tests used fewer lines of code and fewer locators. Then, the coders called the Applitools Ultrafast Grid and specified the browsers, viewports, and operating systems of interest to match the legacy test infrastructure.
The report included this graphic showing the total test cycle time for the average Hackathon submission of legacy versus Ultrafast:
Here is a breakdown of the average participant time used for legacy versus Ultrafast across the Hackathon:
| Activity | Legacy | Ultrafast |
|---|---|---|
| Actual Run Time | 9 minutes | 2 minutes |
| Analysis Time | 270 minutes | 10 minutes |
| Report Time | 245 minutes | 15 minutes |
| Test Coding Time | 1062 minutes | 59 minutes |
| Code Maintenance Time | 120 minutes | 5 minutes |
The first three activities, test run, analysis, and report, make up the time between initiating a test and taking action. Across the three scenarios in the hackathon, the average legacy test required a total of 524 minutes. The average for Ultrafast was 27 minutes. For each scenario, then, the average was 175 minutes – almost three hours – for the legacy result, versus 9 minutes for the Ultrafast approach.
On top of the operations time for testing, the report showed the time taken to write and maintain the test code for the legacy and Ultrafast approaches. Legacy test coding took over 1,060 minutes (nearly 18 hours), while Ultrafast required only about an hour. And code maintenance for legacy took 2 hours, while Ultrafast required only 5 minutes.
As the Hackathon results showed, Ultrafast testing runs more quickly and gives results more quickly.
Legacy cross-browser testing imposes a long delay from test start to action. Long run and analysis times make these tests unsuitable for any kind of software build validation. Most of these legacy tests get run in final end-to-end acceptance, with the hope that no visual differences get uncovered.
Ultrafast approaches enable app developers to build fast testing across multiple browsers into software build. Ultrafast analysis catches unexpected build differences quickly so they can be resolved during the build cycle.
By running tests across multiple browsers during build, Ultrafast Grid users shorten their find-to-resolve cycle to branch validation even prior to code merge. They catch the rendering differences and resolve them as part of the feature development process instead of the final QA process.
Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?
Combine the other speed differences in coding and maintenance, and it becomes clear why Ultrafast testing across multiple browsers makes it possible for developers to run the Ultrafast Grid in development.
Next, we will cover code stability – the reason why Ultrafast tests take, on average, 5 minutes to maintain, instead of two hours.
The post The Many Uses of Visual Testing appeared first on Automated Visual Testing | Applitools.
Oftentimes, when we're talking about tools to help us with testing, specifically automation tools, we hear a lot of preaching about not misusing these tools.
For example, people often ask how to use Selenium WebDriver – which is a browser automation tool – to do API testing. This clearly isn’t the right tool for the job.
While I most certainly agree that using the wrong tool for the job is not really efficient, I can also appreciate creative uses of tools for other means.
People “misuse” tools every day to meet their needs and end up realizing that while this specific tool was not designed for a particular use case, it actually works extremely well!
For example, here is a clothes hanger. It is obviously designed to hang clothing.
But necessity is the mother of innovation. So when you lock yourself out of your car, this tool all of a sudden has a new use!
Coca-cola was actually created as a medicine but after the manufacturing company was purchased, they began selling Coca-cola as a soft drink.
As if that wasn't enough of a repurpose, Coke can also be used to clean corrosion from batteries! (I should probably stop drinking this…)
So as we see, misusing a tool isn’t always bad. Some tools can be used for more than their intended purpose.
As engineers, most of us are curious and creative. This is a recipe for innovation!
I’m working with automated visual testing a lot these days. It’s an approach that uses UI snapshots to allow you to verify that your application looks the way it’s supposed to.
Applitools does this by taking a screenshot of your application when you first write your test, and then comparing that screenshot with how the UI looks during regression runs. It is intended to find cosmetic bugs that could negatively impact your customer’s experience. It’s designed to find visual bugs that otherwise cannot be discovered by functional testing tools that query the DOM.
Take a look at a few examples of visual bugs:
While Applitools is second to none in finding cosmetic issues that may be costly for companies, I began to wonder how I could misuse this tool for good. I explored some of the hard problems in test automation to see if I can utilize the Applitools Eyes API to solve those as well!
Let’s look at a common e-commerce scenario where I want to test the flow of buying this nice dress. I select the color, size, and quantity. Then I add it to the cart and head over to the cart to verify.
And here’s the code to test this scenario:
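The original code sample appears as an image and isn't reproduced here, but a sketch of that style of test, written in Java with Selenium with a made-up URL and locators, might look like this:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.Assert.assertTrue;

public class AddDressToCartTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com/tokyo-talkies-dress"); // hypothetical URL
            driver.findElement(By.id("color-yellow")).click();          // hypothetical locators
            driver.findElement(By.id("size-s")).click();
            driver.findElement(By.id("quantity-2")).click();
            driver.findElement(By.id("add-to-cart")).click();
            driver.findElement(By.id("go-to-cart")).click();
            // Only the product name is verified; color, size, quantity, price, etc. go unchecked
            assertTrue(driver.findElement(By.className("cart-item-name"))
                    .getText().contains("Tokyo Talkies"));
        } finally {
            driver.quit();
        }
    }
}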
Looking at the shopping cart, we’ve only verified that it contains the Tokyo Talkies dress. And that verification is by name. There’s a LOT more on this screen that is going unverified. And not just the look and feel, but the color, size, quantity, price, buttons, etc.
Sure, we can write more assertions, but this starts getting really long. We have doubled the size of the test here, and this is just a subset of all the checks we could possibly do.
What if I used visual testing to not only make sure the app looks good, but to also increase my coverage?
Here, I replaced all those assertions with a single visual assertion. It covers everything I've thought about and even the stuff that I didn't. And the test is back to 11 lines – so less code and more coverage!
I worked on a localized product, and automating the tests was really tough. We originally only supported the English version of the product; but after the product was internationalized, we synced with the localized strings that development used, so we were at least able to assert on the text we needed.
However, not all languages are written left to right. Some are right to left, like Arabic. How could I verify this without visual testing?
Netflix internationalized their product and quickly saw UI issues. Their product assumed English string lengths, line heights, and graphic symbols. They translated the product into 26 languages – which is essentially 26 different versions of the product that need to be maintained and regression-tested.
And good localization also accounts for cultural variances and includes things like icon and image replacements. All of these are highly visual – which makes it a good case for visual testing.
Using Applitools, writing the test for different locales is not too bad, especially since you don't need to deal with the translated content in the assertions. And the visual tests will verify the sites of each locale.
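As a rough illustration of how this could be wired up with the Selenium-Java SDK (the locale codes, URL pattern, and names are assumptions; keeping one baseline per locale via setBaselineEnvName is just one possible approach):

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LocalizedHomePageTest {
    public static void main(String[] args) {
        String[] locales = {"en", "de", "ar", "ja"}; // hypothetical list of supported locales
        for (String locale : locales) {
            WebDriver driver = new ChromeDriver();
            Eyes eyes = new Eyes();
            eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
            try {
                // One baseline per locale, so each language is compared against its own baseline
                eyes.setBaselineEnvName("home-page-" + locale);
                eyes.open(driver, "My Localized App", "Home page [" + locale + "]");
                driver.get("https://example.com/" + locale + "/home"); // hypothetical URL pattern
                eyes.checkWindow("Home page - " + locale);
                eyes.close();
            } finally {
                driver.quit();
                eyes.abortIfNotClosed();
            }
        }
    }
}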
Looking at the English-translated version of this website, I can see a few bugs here.
Trying to verify everything on this page programmatically without visual testing would be painful and can easily miss some of these localization issues.
I’m sure anyone who has had to write test automation to work on multiple platforms would agree with me that this is not fun at all! In fact, this is quite the chore. And yet, our applications are expected to work on so many different configurations. Multiple browsers, multiple phones, tablets, you name it!
For example, here’s a web view and a mobile view of the Doordash application.
There are quite a few differences, such as:
Because of these differences, we either need to write totally different framework code for the various configurations, or include conditional logic for every place where the app differs. Like I said, painful!
But the worst part of it all is that the return on investment is really low. I hardly find any cross-platform bugs using this approach. And it's not because they don't exist. It's because most cross-platform bugs are visual bugs!
The viewport size changes, and all of a sudden, your app looks goofy!
So what if, instead of just using visual testing to make sure my app looks nice, I bent this technology a bit to execute my cross-platform tests more efficiently?
Like instead of a functional grid that executes my tests step by step across all of the devices I specify, what about a visual grid that allows me to write my test only once, without the conditional viewport logic? It then executes my test and blasts the application's state across all of my supported devices, browsers, and viewports in parallel so that I can find the visual bugs.
That’s pretty powerful and yes, we can use visual testing to do this too!
There’s a lot of talk about accessibility testing lately. It’s one of those important things that often gets missed by development teams.
You may have heard of the recent Supreme Court case where a blind man sued a pizza franchise because their site was not accessible.
This is not a game. We have to take this seriously. Can visual testing help with this at all?
Yep, what if we used visual testing to detect accessibility violations like the contrast between colors, font sizes, etc.? This could make a nice complement to other accessibility tools that analyze the DOM.
A/B testing is a nightmare for test automation, and sometimes impossible. It’s where you have two variations of your product as an experiment to see which one performs better with your users.
Let’s say Variation B did much better than Variation A. We’d assume that’s because our users really liked Variation B. But what if Variation A had a serious bug that prevented many users from converting?
The problem is that many teams don't automate tests for both variations because it's throwaway code, and you're not entirely sure which variation you'll get each time the test runs.
Instead of writing a bunch of conditionals and trying to maintain the locators for both variations, what if we used visual testing instead? Could that make things easier to automate?
Indeed! Applitools supports A/B testing by allowing you to save multiple baseline images for your app’s variations.
I could write one test and instead of coding all the differences between the two variations, I could simply do the visual check and provide it with photos of both variations.
All the cool apps are now providing a dark mode option. But how do you write automated tests for this? It’s kind of an A/B type of scenario where the app can be in either variation. But the content is the same. So that makes it relatively easy to write the code but then we miss stuff.
For example, when Slack first offered dark mode, I noticed that I couldn’t see any code samples.
As much as I work with visual testing, it didn’t dawn on me that I could use visual testing for this until Richard Bradshaw pointed it out to me. In hindsight, DUH of course this can be tackled by visual testing. But in the moment, it wasn’t apparent to me because visual tools don’t advertise this as a use case.
Which brings me back to my original point…
Most creators make a solution for a specific problem. They aren’t thinking of ALL of our use cases. So, I encourage you to not just explore your products, but explore your toolset and don’t be afraid to misuse (but not abuse) your tools where it makes sense.
The post Can Automated Cross Browser Testing Be Ultrafast? appeared first on Automated Visual Testing | Applitools.
Early January this year, Applitools announced the results of their Visual AI Rockstar Hackathon, and I was lucky to be included as one of the silver winners. I blogged about my experience, covering how I approached the hackathon and my honest feedback on why I think it's modernising the way we do test automation; you can find it in this post – Applitools: Modernising the way we do test automation.
Six months in, they announced another hackathon, but this time the focus was on cross browser testing via the Ultrafast Grid, comparing it with traditional solutions such as running the same functional tests on different browsers and viewports locally. I participated in the hackathon again and ended up as one of their gold winners this time, which I'm extremely pleased about, because not only did I win one of their amazing prizes, I also improved my technical skills and learned a lot about the true cost of cross browser testing.
First, let's talk about why cross browser testing matters. And why another hackathon?
If you're like me and have been testing web applications for some time, you'll know that cross browser testing is painful and time consuming. There is no way you can achieve 100% cross browser coverage unless you have a lot of time and want to devote all your testing effort to cross browser testing alone, and don't forget that you also need to check different viewports. In time, it gets really boring and tedious. This is why it's a great idea to automate cross browser tests as much as possible, so we can focus on other areas where we are also needed, such as exploratory and accessibility testing.
Nowadays, there are not many functional differences between browsers. The code that gets deployed is mostly the same on any platform, but the way it is rendered visually exposes differences that we need to catch. Rather than doing cross browser functional testing, where we test the functionality across different browsers, a better alternative is cross browser visual testing, where we validate the look of our pages, because this is what our users actually see.
The problem is that automated cross browser testing, whether functional or visual, can still take a considerable amount of time to set up, because you need to create an automation framework that can scale and is easy to maintain. This can be quite a challenge for testers who are relatively new to test automation.
Cross Browser Testing in a nutshell
The purpose of this hackathon was to show how easy and fast cross browser visual testing can be if you're using modern tools such as Applitools. It also highlighted that existing testing tools are great for cross browser functional testing but not so great for cross browser visual testing, which I'll expand on later.
The hackathon was split into automating three different scenarios for a sample e-commerce website called AppliFashion. The scenarios had to be automated twice: once using any testing tool of your choice, and once using the same tool with Applitools alongside it. The automated tests were then executed against two versions of the website – version 1, which is assumed to be bug free, and version 2, which has new bugs added in. On version 2, you had to update the automated tests to catch the bugs that were introduced, and then compare the effort of doing this with your chosen traditional testing tool versus with Applitools.
I decided to use Cypress as my testing tool and while it’s a great tool for automating the browser, I spent 5.5 hours doing cross browser visual testing with this approach and still felt that I missed a lot of visual bugs. Let’s look at this in more detail.
Writing the tests took quite some time. I needed to verify a lot of the elements and get their selectors so I could assert that they were visible on the page. Some of the elements were hidden on certain viewports, so this had to be reflected in the tests. You can find example code showing how I handled this in my applitools ultrafast grid github repo. The test execution time was also slightly longer because I had more viewports (desktop, tablet, mobile) and browsers (Chrome and Firefox) to cover locally.
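To give a flavour of that conditional viewport handling, here is a rough sketch in plain Cypress. The selectors and the `APP_URL` environment variable are illustrative, not lifted from the real AppliFashion markup:

```js
// Sketch: element-by-element assertions that change depending on the viewport
const viewports = [
  { device: 'desktop', width: 1200, height: 700 },
  { device: 'tablet', width: 768, height: 700 },
  { device: 'mobile', width: 375, height: 812 },
];

viewports.forEach(({ device, width, height }) => {
  it(`shows the right header elements on ${device}`, () => {
    cy.viewport(width, height);
    cy.visit(Cypress.env('APP_URL')); // AppliFashion version 1 or 2
    if (device === 'mobile') {
      // on small screens the nav links collapse behind a burger icon
      cy.get('#burger-menu').should('be.visible');
      cy.get('.nav-links').should('not.be.visible');
    } else {
      cy.get('.nav-links').should('be.visible');
    }
  });
});
```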
When it was time to run the same tests on version 2, I had to make some adjustments to my tests and log the bugs that my automated tests found. This took me an hour because I had to update the selectors to fix my tests, but I also wasn't confident that my tests had found all the visual bugs on version 2. I had to find some of the bugs manually, since verifying CSS changes was difficult for me with Cypress alone.
When it came to test reporting and project refactoring, I invested 2 hours in this task. As I knew from experience, good reporting helps everyone make sense of test data, so I wanted to integrate the Mochawesome reporter and present the test results nicely to the hackathon judges. I wrote a tutorial on how to do this, which you can find in this post – Test Reporting with Cypress and Mochawesome. I also started noticing a lot of duplication in my test code, so I did some refactoring to clean up my automation framework.
Now let’s look at how long it took me to do cross browser visual testing with Cypress and Applitools.
In total, I spent just over one hour writing the tests for both version 1 and version 2 when using Cypress with Applitools. The time difference was massive! There were a few visual bugs that I had missed but that Applitools caught, and even then I didn't have to rewrite my tests at all. All the adjustments were done directly on the Applitools dashboard, marking the bugs through its annotation feature.
Writing the tests was considerably faster. Instead of verifying individual selectors and checking that each one is visible, I took a screenshot of the whole page, which is a better approach for visual validation.
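A minimal sketch of what such a test can look like, assuming the hackathon's AppliFashion app; the app and batch names are illustrative and the actual URL comes from the hackathon instructions:

```js
// Sketch: one full-page visual checkpoint replaces dozens of selector assertions
it('Task 1 – cross device elements test', () => {
  cy.eyesOpen({
    appName: 'AppliFashion',        // illustrative names
    batchName: 'UFG Hackathon',
  });
  cy.visit(Cypress.env('APP_URL')); // AppliFashion version 1 or 2
  cy.eyesCheckWindow({
    tag: 'Home page',
    target: 'window',
    fully: true,                    // capture the whole page, not just the viewport
  });
  cy.eyesClose();
});
```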
Visual validation test with Cypress and Applitools
The code snippet above is simpler and will catch more visual bugs with less (or even no) test code maintenance.
So you might be wondering, looking at the above code, how did I handle the cross browser capabilities? This was easily achieved by creating a file called `applitools.config.js` at the root of the project and specifying the list of browsers that you want your tests to execute on. By utilising the Ultrafast Grid and setting my concurrency to 10, I was able to run the tests more quickly too.
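The config looked roughly like the sketch below. The browsers and viewports are examples, and the option names are per the eyes-cypress docs as I understand them (older SDK versions call `testConcurrency` simply `concurrency`, so check the version you're on):

```js
// applitools.config.js – a sketch; browsers and viewports are examples
module.exports = {
  testConcurrency: 10, // how many renders the Ultrafast Grid runs in parallel
  batchName: 'UFG Hackathon',
  browser: [
    { width: 1200, height: 700, name: 'chrome' },
    { width: 1200, height: 700, name: 'firefox' },
    { width: 1200, height: 700, name: 'edgechromium' },
    { width: 768, height: 700, name: 'chrome' },   // tablet viewport
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};
```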
Achieving Cross Browser Visual Testing with Applitools
Overall, this was another excellent hackathon from Applitools, and it showed that cross browser testing can be easy and fast. I've mentioned in the past that one of the trends I'm seeing is that more and more testing tools are becoming user friendly, and if you are new to test automation, this is great news!
Also, from my experience, the production bugs that get missed most frequently are visual bugs. A page that hasn't loaded any of its CSS files can still work functionally, and your automated functional test will still pass. Rather than doing cross browser functional testing, it's better to do cross browser visual testing to get maximum value.
Finally, the massive time saving that it provides means that we, as testers, have more time to explore the areas that automated tests can’t catch and that is a big win.
For More Information
The post Can Automated Cross Browser Testing Be Ultrafast? appeared first on Automated Visual Testing | Applitools.
]]>The post Modern Cross Browser Testing with Cypress and Applitools appeared first on Automated Visual Testing | Applitools.
]]>Cypress, among other things, validates the structure of your DOM. It verifies that a button is visible or that it has the correct text. But what about the look and styling of our app? How can we test that our application looks good visually? We can use Cypress to verify that elements have the correct CSS properties, but then our code would become very long and complex. It's almost guaranteed that engineers will avoid maintaining a test like that.
This is why we need visual testing as part of our test strategy. With visual testing, we are validating what the user sees on different browsers/viewports/mobile devices. However, it’s very time consuming when you do it manually.
Imagine if someone told you that you have to test the below image manually.
There are 30 differences. You could probably find them after quite some time, but that makes for a really slow feedback loop. The obvious solution, of course, is to automate this!
Now, automated visual testing is not new. There are many tools out there which can help you with visual testing. These tools use pixel-by-pixel comparison to compare two images. The idea is that you have a baseline image, which is your source of truth. Every time there is a code change and our automated tests run, a test image is taken. The visual testing tool then compares this test image with the baseline image and checks the differences. At the end, it reports whether our test passed or failed.
The problem with pixel-by-pixel visual testing, though, is that it's very sensitive, even to small changes. Even if there is only a 1px difference, your test will fail, even though to the human eye the two images look exactly the same.
You also get the issue that if you try to run these tests on your build pipelines, you might see a lot of pixel differences, especially if the base image was generated locally, such as the image above. Looking at this test image, if you ignore the mismatch image in the middle and compare the two images on the left and the right, some of the changes that were reported look fine. But because these images were taken on different machine setups, the tool has reported a lot of pixel differences. You can use Docker to solve this and generate the base image using the same configuration as the test image, but from personal experience, I still get flakiness with pixel-by-pixel comparison tools.
Also, what if you have dynamic data? In the test image above, the data has changed but the overall layout is similar. You could set the mismatch threshold slightly higher so your tests only fail once they reach the threshold you defined, but the problem with this is that you might miss actual visual bugs.
Most of the existing open source tools for visual testing only run on one or two browsers. For example, one of the tools that we were using before, BackstopJS, which is a popular visual testing framework, only runs visual tests on headless Chrome. AyeSpy, a tool that was actually created by one of the teams here at News UK, hooks into your Selenium Grid to run your visual tests on different browsers. But still, it's a bit limited: if you are using the Selenium Docker images, they only exist for Chrome and Firefox. What if you want to run your visual tests on Safari or Internet Explorer? You can definitely verify these browsers yourself, but again, as mentioned, it's time consuming.
How can we solve these different visual testing issues?
This is where Applitools comes in. Unlike existing visual tools, Applitools uses AI comparison algorithms, so images are compared the way a human would compare them. It was founded in 2013 and integrates with almost all testing frameworks out there. You name it – Selenium, Cypress, Appium, WebdriverIO, Espresso, XCUITest, even Storybook! With Applitools, you can validate visual differences on a wide range of browsers and viewports. By offering different comparison algorithms (exact, strict, layout and content), it gives you different options for comparing images and caters for scenarios such as dynamic data or animations.
Rather than taking a screenshot, Applitools extracts a snapshot of your DOM. This is one of the reasons why visual tests run fast in Applitools. Once the DOM snapshots have been uploaded to the Applitools cloud, the Applitools Ultrafast Grid renders them in multiple browsers and viewports simultaneously, generates the screenshots, and performs the AI-powered image comparison.
To get started, you need to install the following package in your project. This one is specific to Cypress, so you would need to install the corresponding package for your test framework of choice.
npm i -D @applitools/eyes-cypress
Once this has been installed, you need to configure the plugin. The easiest way to do this is to run the following command in your terminal:
npx eyes-setup
This will automatically add the necessary imports needed to get Applitools working in your Cypress project.
Let's start doing some coding and add some validations to a sample React image app that I created a while back. It is a simple image gallery which uses the Unsplash API for the backend. An example github repo containing the different code examples can be found here.
Our Cypress test can look like the following code snippet. Keep in mind, this only asserts the application to some extent. I could add more code to verify that elements have the correct CSS attributes, but I don't want to make the code too lengthy.
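Something along these lines; the selectors are illustrative rather than copied from the real project:

```js
// Sketch: structural assertions only – this says nothing about how the page looks
describe('Image gallery', () => {
  it('displays the search bar and a grid of images', () => {
    cy.visit('/');
    cy.get('input[name="search"]').should('be.visible');
    cy.get('button[type="submit"]').should('contain.text', 'Search');
    cy.get('.image-grid').find('img').should('have.length.greaterThan', 0);
    // verifying styling this way quickly becomes verbose:
    cy.get('.search-bar').should('have.css', 'border-radius', '4px');
  });
});
```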
Now, let’s look at how we can write the test using Cypress and Applitools.
Applitools provides the following commands to Cypress as a minimum setup. `cy.eyesOpen` initiates the Applitools Eyes SDK, and we pass it some arguments such as our app name, batch name and the browser size (which defaults to Chrome). The command `cy.eyesCheckWindow` takes a DOM snapshot, so every call to this command generates a snapshot. You can call it every time you perform an action, such as visiting your page under test or clicking buttons and dropdown menus. Finally, once you are finished, you just call `cy.eyesClose`. To learn more about the Cypress Eyes SDK, please visit the documentation here.
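Put together, the visual version of the gallery test might look like this sketch; the app and batch names are illustrative:

```js
// Sketch: the whole page is validated with a single DOM snapshot
describe('Image gallery – visual', () => {
  it('looks right on the home page', () => {
    cy.eyesOpen({
      appName: 'React Image Gallery',
      batchName: 'Image gallery visual tests',
      browser: { width: 1280, height: 720, name: 'chrome' },
    });
    cy.visit('/');
    cy.eyesCheckWindow('Home page'); // one checkpoint instead of many assertions
    cy.eyesClose();
  });
});
```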
In order to run this in Applitools, you need to export an API key, which is detailed in this article. Once you have the API key, run the following in your terminal:
export APPLITOOLS_API_KEY=<your_key>
npx cypress open
Once the test is finished, if you go to the Applitools dashboard, you should see your test run. The first time you run it, there will be no baseline image; when you rerun the test and everything looks good, you should see the following.
Since we are using the Unsplash API, we don't have control over what data gets returned. When we refresh the app, we might get different results. As an example, the request that I am making to Unsplash fetches the popular photos on a given day. If I rerun my test tomorrow, the images will be different, like the ones shown below.
The good thing is that we can apply a layout region so the actual data is ignored, or we can set the match level to Layout in our test, which we can preview on the dashboard. If the layout of our image gallery changes, Applitools will report it as an issue.
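In code, that could look something like the sketch below. The option names follow the eyes-cypress documentation as I understand it, so double-check the SDK reference for your version:

```js
// Sketch: compare layout only, so the ever-changing Unsplash photos don't fail the test
cy.eyesCheckWindow({
  tag: 'Home page with dynamic photos',
  target: 'window',
  fully: true,
  matchLevel: 'Layout',
  // or scope it to just the gallery region instead of the whole window:
  // layout: [{ selector: '.image-grid' }],
});
```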
Now, let's create some changes in our application (code references found here): a new footer component, an updated background colour for the search bar, and a removed header icon.
If we run the test where we only use Cypress, how many of these changes do you think the test will catch? Will it catch the new footer component? How about the updated background colour of the search bar? How about the missing header icon? Probably not, because we didn't write any assertions for them. Now, let's rerun the test written with Cypress and Applitools.
Looking at the image above, it caught all the changes, and we didn't have to update our test since all the maintenance is done on the Applitools side. Any issues can be raised directly on the dashboard, and you can also configure it to integrate with your JIRA projects.
To run the same test on different browsers, you just need to specify the browser options in your Applitools configuration. I've refactored the code a bit, created a file called `applitools.config.js` and moved some of the setup we initially added in our `cy.eyesOpen` call into this file.
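A sketch of what that refactored config might contain; again, the names and browsers are examples, with `appName` and `batchName` moved out of `cy.eyesOpen`:

```js
// applitools.config.js – setup shared by every spec instead of per-test cy.eyesOpen args
module.exports = {
  appName: 'React Image Gallery',
  batchName: 'Image gallery visual tests',
  browser: [
    { width: 1280, height: 720, name: 'chrome' },
    { width: 1280, height: 720, name: 'firefox' },
    { width: 1280, height: 720, name: 'safari' },
    { deviceName: 'iPhone X', screenOrientation: 'portrait' },
  ],
};
```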
Simply rerun your test and check the results in the Applitools dashboard.
This is just an introductory post on how you can use Applitools so if you want to know more about its other features, check out the following resources:
While open source pixel-by-pixel comparison tools can help you get started with visual testing, using Applitools can modernise the way you do testing. As always, do a thorough analysis of a tool to see if it will meet your testing needs.
The post Modern Cross Browser Testing with Cypress and Applitools appeared first on Automated Visual Testing | Applitools.
]]>