What is Visual Testing? https://applitools.com/blog/visual-testing/ Mon, 22 Nov 2021 15:48:00 +0000

Visual testing

Learn what visual testing is, why visual testing is important, the differences between visual and functional testing and how you can get started with automated visual testing today.

Editor’s Note: This post was originally published in 2019, and has been recently updated for accuracy and completeness.

What is Meant By Visual Testing?

Visual testing evaluates the visible output of an application and compares that output against the results expected by design. In other words, it helps catch “visual bugs” in the appearance of a page or screen, which are distinct from strictly functional bugs. Automated visual testing tools, like Applitools, can help speed up visual testing and reduce the errors that occur with manual verification.

You can run visual tests at any time on any application with a visual user interface. Most developers run visual tests on individual components during development, and on a functioning application during end-to-end tests.

In today’s world of HTML, web developers create pages that appear on a mix of browsers and operating systems. Because HTML and CSS are standards, frontend developers want to feel comfortable with a ‘write once, run anywhere’ approach to their software, which often translates to “Let QA sort out the implementation issues.” QA is still stuck checking each possible output combination for visual bugs.

This explains why, when I worked in product management, QA engineers would ask me all the time, “Which platforms are most important to test against?” If you’re like most QA team members, your test matrix has probably exploded: multiple browsers, multiple operating systems, multiple screen sizes, multiple fonts — and dynamic responsive content that renders differently on each combination.

If you are with me so far, you’re starting to answer the question: why do visual testing?

Why is Visual Testing Important?

We do visual testing because visual errors happen — more frequently than you might realize. Take a look at this visual bug on Instagram’s app:

The text and ad are crammed together. If this were your ad, do you think there would be a revenue impact? Absolutely.

Visual bugs happen at other companies too: Amazon. Google. Slack. Robinhood. Poshmark. Airbnb. Yelp. Target. Southwest. United. Virgin Atlantic. OpenTable. These aren’t cosmetic issues. In each case, visual bugs are blocking revenue.

If you need to justify spending money on visual testing, share these examples with your boss.

All these companies are able to hire some of the smartest engineers in the world. If it happens to Google, or Instagram, or Amazon, it probably can happen to you, too.

Why do these visual bugs occur? Don’t they do functional testing? They do — but it’s not enough.

Visual bugs are rendering issues. And rendering validation is not what functional testing tools are designed to catch. Functional testing measures functional behavior.

Why can’t functional tests cover visual issues?

Sure, functional test scripts can validate the size, position, and color scheme of visual elements. But if you do this, your test scripts will soon balloon in size due to checkpoint bloat.

To see what I mean, let’s look at an Instagram ad screen that’s properly rendered. There are 21 visual elements by my count — various icons, text. (This ignores iOS elements at the top like WiFi signal and time, since those aren’t controlled by the Instagram app.)


If you used traditional checkpoints in a functional testing tool like Selenium Webdriver, Cypress, WebdriverIO, or Appium, you’d have to check the following for each of those 21 visual elements:

  1. Visible (true/false)
  2. Upper-left x,y coordinates
  3. Height
  4. Width
  5. Background color

That means you’d need the following number of assertions:

21 visual elements x 5 assertions per element = 105 lines of assertion code
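
To make the bloat concrete, here is a rough sketch, using Selenium WebDriver in JavaScript, of the checkpoint code for just one of those 21 elements. The locator and expected values are hypothetical:

```javascript
const assert = require('assert');
const { By } = require('selenium-webdriver');

// Checkpoint code for ONE element -- you would repeat this for all 21.
// The locator and expected values are hypothetical.
async function checkLikeButton(driver) {
  const el = await driver.findElement(By.css('[aria-label="Like"]'));
  assert.strictEqual(await el.isDisplayed(), true);            // 1. visible
  const rect = await el.getRect();
  assert.strictEqual(rect.x, 16);                              // 2. upper-left x
  assert.strictEqual(rect.y, 540);                             //    upper-left y
  assert.strictEqual(rect.height, 24);                         // 3. height
  assert.strictEqual(rect.width, 24);                          // 4. width
  const bg = await el.getCssValue('background-color');
  assert.strictEqual(bg, 'rgba(0, 0, 0, 0)');                  // 5. background color
}
```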

Even with all this assertion code, you wouldn’t be able to detect all visual bugs. For example, you couldn’t tell whether a visual element is inaccessible because it’s covered up by another element, which is exactly what blocked revenue in the examples above from Yelp, Southwest, United, and Virgin Atlantic. And you’d miss subtleties like a missing brand logo, or the red dot under the heart.

But it gets worse: if the OS, browser, screen orientation, screen size, or font size changes, your app’s appearance will change as a result. That means you have to write another 105 lines of functional test assertions for EACH combination of OS, browser, font size, screen size, and screen orientation.

You could end up with thousands of lines of assertion code — any of which might need to change with a new release. Trying to maintain that would be sheer madness. No one has time for that.

You need visual testing because visual errors occur. And you need visual testing because you cannot rely on functional tests to catch visual errors.

What is Manual Visual Testing?

Because automated functional testing tools are poorly suited for finding visual bugs, companies find visual glitches using manual testers. Lots of them (more on that in a bit).

For these manual testers, visual testing behaves a lot like this spot-the-difference game:

To understand how time-consuming visual testing can be, get out your phone and time how long it takes for you to find all six visual differences. I took a minute to realize that the writing in the panels doesn’t count. It took me about 3 minutes to spot all six. Or, you can cheat and look at the answers.

Why does it take so long? Some differences are difficult to spot. In other cases, our eyes trick us into finding differences that don’t exist.

Manual visual testing means comparing two screenshots, one from your known good baseline image, and another from the latest version of your app. For each pair of images, you have to invest time to ensure you’ve caught all issues. Especially if the page is long, or has a lot of visual elements. Think “Where’s Waldo”…

Challenges of manual testing

If you’re a manual tester or someone who manages them, you probably know how hard it is to visually test.

If you are a test engineer reading this paragraph, you already know this: web page testing only starts with checking the visual elements and their function on a single combination of operating system, browser, browser orientation, and browser dimensions. Then you continue on to the other combinations. That’s where a huge amount of test effort lies: not in the functional testing, but in the inspection of visual elements across every combination of operating system, browser, screen orientation, and browser dimensions.

To put it in perspective, imagine you need to test your app on:

  • 5 operating systems: Windows, MacOS, Android, iOS, and ChromeOS.
  • 5 popular browsers: Chrome, Firefox, Internet Explorer (Windows only), Microsoft Edge (Windows only), and Safari (Mac only).
  • 2 screen orientations for mobile devices: portrait and landscape.
  • 10 standard mobile device display resolutions and 18 standard desktop/laptop display resolutions from XGA to 4K.

If you’re doing the math, that’s the browsers running on each platform (a total of 21 combinations), multiplied by the two orientations of the ten mobile resolutions (2 x 10 = 20) added to the 18 desktop display resolutions.

21 x (20+18) = 21 x 38 = 798 Unique Screen Configurations to test

That’s a lot of testing — for just one web page or screen in your mobile app.

Except that it’s worse. Let’s say your app has 100 pages or screens to test.

798 Screen Configurations x 100 Screens in-app = 79,800 Screen Configurations to test

Meanwhile, companies are releasing new app versions into production as frequently as once a week, or even once a day.

How many manual testers would you need to test 79,800 screen configurations in a week? Or a day? Could you even hire that many people?

Wouldn’t it be great if there was a way to automate this crazy-tedious process?

Well, yes there is…

What is Automated Visual Testing?

Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover visual defects.

Automated visual testing piggybacks on your existing functional test scripts running in a tool like Selenium Webdriver, Cypress, WebdriverIO, or Appium. As your script drives your app, each step changes the visual elements on the page, so each step of a functional test creates a new UI state you can visually test.

Automated visual testing evolved from functional testing. Rather than descending into the madness of writing assertions to check the properties of each visual element, automated visual testing tools check the visual appearance of an entire screen with just one assertion statement. This leads to test scripts that are MUCH simpler and easier to maintain.
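
Here is a minimal sketch of what that single assertion looks like with the Applitools JavaScript SDK for Selenium. The app name, test name, and URL are placeholders, and the API key is assumed to be set in the APPLITOOLS_API_KEY environment variable:

```javascript
const { Builder } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes(); // reads APPLITOOLS_API_KEY from the environment
  try {
    await eyes.open(driver, 'Demo App', 'Home page renders correctly');
    await driver.get('https://example.com');
    // One visual assertion covers the entire screen -- no per-element checkpoints.
    await eyes.check('Home page', Target.window().fully());
    await eyes.close();
  } finally {
    await eyes.abortIfNotClosed(); // clean up if the test bailed out early
    await driver.quit();
  }
})();
```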

But, if you’re not careful, you can go down an unproductive rat hole. I’m talking about Snapshot Testing.

What is Snapshot Testing?

First generation automated visual testing uses a technology called snapshot testing. With snapshot testing, a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap.

Snapshot testing algorithms are very simplistic: iterate through each pixel pair, then check if the color hex code is the same. If the color codes are different, raise a visual bug.
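
A minimal sketch of that naive comparison in JavaScript, assuming both screenshots have been flattened into equal-length arrays of hex color strings:

```javascript
// Naive snapshot comparison: any mismatched pixel is reported as a "visual bug".
function countPixelDiffs(baselinePixels, checkpointPixels) {
  let diffs = 0;
  for (let i = 0; i < baselinePixels.length; i++) {
    if (baselinePixels[i] !== checkpointPixels[i]) {
      diffs++;
    }
  }
  return diffs;
}

// A single anti-aliased pixel is enough to fail the comparison.
console.log(countPixelDiffs(['#ffffff', '#000000'], ['#ffffff', '#010101'])); // 1
```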

Because they can be built relatively easily, there are a number of open-source and commercial snapshot testing tools. Unlike human testers, snapshot testing tools can spot pixel differences quickly and consistently. And that’s a step forward. A computer can highlight the visual differences in the Hocus Focus cartoon easily. A number of these tools market themselves as enabling “pixel perfect testing”.

Sounds like a good idea, right?

What are Problems With Snapshot Testing?

Alas, pixels aren’t visual elements. Font smoothing algorithms, image resizing, graphics cards, and even rendering algorithms generate pixel differences. And that’s just static content; the actual content can also vary from one run to the next. As a result, a comparison that expects exact pixel matches between two images can be flooded with pixel differences.

If you want to see some examples of bitmap differences affecting snapshot testing, take a look at the blog post we wrote on this topic last year.

Unfortunately, while snapshot testing might seem to make intuitive sense, practitioners like you are finding that successful bitmap comparison requires a stationary target, while your company continues to develop dynamic websites across a range of browsers and operating systems. You can try to force your app to behave a certain way, but you may not always succeed.

Can you share some details of Snapshot Testing Problems?

For example, even when testing on a single browser and operating system, you must:

  • Identify and isolate (mute) fields that change over time, such as radio signal strength, battery state, and blinking cursors.
  • Ignore user data that might otherwise change over time, such as visitor count.
  • Determine how to support testing content on your site that must change frequently – especially if you are a media company or have an active blog.
  • Consider how different hardware or software affects antialiasing.

When doing cross-browser testing, you must also consider:

  • Text wrapping, because you cannot guarantee the locations of text wrapping between two browsers using the same specifications. The text can break differently between two browsers, even with identical screen size.
  • Image rendering software, which can affect the pixels of font antialiasing as well as images and can vary from browser to browser (and even on a single browser among versions).
  • Image rendering hardware, which may render bitmaps differently.
  • Variations in browser font size and other elements that affect the text.

If you choose to pursue snapshot testing in spite of these issues, don’t be surprised if you end up joining the group of experienced testers who have tried, and then ultimately abandoned, snapshot testing tools.

Can I See Some Snapshot Testing Problems In Real Life?

Here are some quick examples of these real-life bitmap issues.

If you use pixel testing for mobile apps, you’ll need to deal with the very dynamic data at the top of nearly every screen: network strength, time, battery level, and more:

Then there’s dynamic content that shifts over time (news, ads, user-submitted content), where you want to check that everything is laid out with proper alignment and no overlaps. Pixel comparison tools can’t test for these cases. Twitter’s user-generated content is even more dynamic, with new tweets and like, retweet, and comment counts changing by the second.

Your app doesn’t even need to change to confuse pixel tools. If your baselines and test screenshots were captured on different machines with different display settings for anti-aliasing, that can turn pretty much the entire page into a false positive, like this:

Source: storybook.js.org

If you’re using pixel tools and you still have to track down false positives and expose false negatives, what does that say about your testing efficiency?

For these reasons, many companies throw out their pixel tools and go back to manual visual testing, with all of its issues.

There’s a better alternative: using AI — specifically computer vision — for visual testing.

How Do I Use AI for Automated Visual Testing?

The current generation of automated visual testing uses a class of artificial intelligence algorithms called computer vision as the core engine for visual comparison. Typically these algorithms are used to identify objects within images, as in facial recognition. We call them visual AI testing tools.

AI-powered automated visual testing uses a learning algorithm to interpret the relationship between the intended display of visual elements and the elements and locations actually rendered on the page. Like pixel tools, AI-powered automated visual testing takes page snapshots as your functional tests run. Unlike pixel-based comparators, AI-powered automated visual test tools use these algorithms, rather than raw pixel comparisons, to determine when errors have occurred.

Unlike snapshot testers, AI-powered automated visual testing tools do not need special environments that remain static to ensure accuracy. Testing and real-world customer data show that AI testing tools have a high degree of accuracy even with dynamic content because the comparisons are based on relationships and not simply pixels.

Here’s a comparison of the kinds of issues that AI-powered visual testing tools can handle compared to snapshot testing tools:

Visual Testing Use Case | Snapshot Testing | Visual AI
Cross-browser testing | No | Yes
Account balances | No | Yes
Mobile device status bars | No | Yes
News content | No | Yes
Ad content | No | Yes
User submitted content | No | Yes
Suggested content | No | Yes
Notification icons | No | Yes
Content shifts | No | Yes
Mouse hovers | No | Yes
Cursors | No | Yes
Anti-aliasing settings | No | Yes
Browser upgrades | No | Yes

Some AI-powered test tools have been measured at a false positive rate of 0.001% (that is, about 1 in every 100,000 checks).

AI-Powered Test Tools In Action

An AI-powered automated visual testing tool can test a wide range of visual elements across a range of OS/browser/orientation/resolution combinations. Running the first baseline of rendering and functional tests on a single combination is sufficient to guide an AI-powered tool to test results across the full range of potential platforms.

Here are some examples of how AI-powered automated visual testing improves visual test results by awareness of content.

This is a comparison of two different USA Today homepage images. When an AI-powered tool looks at the layout comparison, the layout framework matters, not the content. Layout comparison ignores content differences; instead, it validates the existence of the content and its relative placement. Compare that with a bitmap comparison of the same two pages (also called “exact comparison”):

Literally, every non-white space (and even some of the white space) is called out.

Which do you think would be more useful in your validation of your own content?

When Should I Use Visual Testing?

You can do automated visual testing with each check-in of front-end code, after unit testing and API testing, and before functional testing — ideally as part of your CI/CD pipeline running in Jenkins, Travis, or another continuous integration tool.

How often? On days ending with “y”. 🙂

Because of the accuracy of AI-powered automated visual testing tools, they can be deployed beyond pre-production functional and visual testing. AI-powered automated visual testing can help developers understand how visual element components will render across various systems. In addition to running in development, test engineers can also validate new code against existing platforms and new platforms against running code.

AI-powered tools like Applitools allow different levels of smart comparison.

AI-powered visual testing tools are a key validation tool for any app or web presence that requires regular changes in content and format. For example, media companies that change their content as frequently as twice per hour use AI-powered automated testing to isolate the real errors that affect paying customers. And AI-powered visual test tools are a key part of the test arsenal for any app or web presence going through a brand revision or merger, as the low error rate and high accuracy let companies identify and fix problems associated with the major DOM, CSS, and JavaScript changes that are core to those updates.

Talk to Applitools

Applitools is the pioneer and leading vendor in AI-powered automated visual testing. Applitools has a range of options to help you become incredibly productive in application testing. We can help you test components in development. We can help you find the root cause of the visual errors you have encountered. And we can run your tests on an Ultrafast Grid that lets you recreate a visual test from one environment across a number of other browser and OS configurations. Our goal is to help you realize the vision we share with our customers: you create functional tests for only one environment, and Applitools runs the validation across all your customer environments after your first test has passed. We’d love to talk testing with you, so feel free to contact us anytime.

More To Read About Visual Testing

If you liked reading this, here are some more Applitools posts and webinars for you.

  1. Visual Testing for Mobile Apps by Angie Jones
  2. Visual Assertions – Hype or Reality? – by Anand Bagmar
  3. The Many Uses of Visual Testing by Angie Jones
  4. Visual UI Testing as an Aid to Functional Testing by Gil Tayar
  5. Visual Testing: A Guide for Front End Developers by Gil Tayar
  6. Visual Testing FAQ

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

16 reasons why to use Selenium IDE in 2021 (and 1 why not) https://applitools.com/blog/why-selenium-ide-2019/ Tue, 27 Apr 2021 16:31:56 +0000


(Editor’s Note: This post has been recently updated for accuracy and completeness. It was originally published in March 2019 by Al Sargent.) 

Have you tried using Selenium IDE for your QA test automation?

You can find lots of feedback from users around the world.

Still skeptical? That makes sense.

There’s been plenty of stigma around using record and replay tools like Selenium IDE rather than scripted QA automation tools like Selenium Webdriver, Cypress, and WebdriverIO. And, for seemingly good reason.

Traditionally, record and playback tools suffer from a litany of issues, including:

  1. No cross-browser support
  2. Brittle tests
  3. Difficult to wait for app under test
  4. No conditional logic
  5. Chaining one test script to call another not available
  6. Unable to embed code into recorded scripts
  7. No way to edit scripts once recorded
  8. Lacking a script debugger
  9. No way to run scripts in parallel
  10. No way to run tests from Continuous Integration build scripts
  11. Lack of integration with source code control systems
  12. No plugins to extend functionality
  13. No way to do visual UI testing
  14. Poor support for responsive web
  15. No way to quickly diagnose front-end bugs
  16. Unable to export tests to languages like Java
  17. No way to enable data-driven tests

Revising Selenium IDE

Back in 2019, Applitools helped revise the Selenium IDE project. Two years earlier, the project had effectively died. Selenium IDE only ran on Firefox. With Firefox 55, Selenium IDE broke, and there seemed to be no motivation to fix it.

Plenty of articles back then explained why Selenium IDE was bad. There was this Quora thread comparing Selenium IDE with Selenium Webdriver. And plenty of issues listed in the Selenium IDE questions on Stackoverflow. Plus this top 10 list of issues with record & replay.

However, Applitools engineers got involved and addressed the bugs – as well as some of the shortcomings. In a major enhancement, Applitools made it possible to run Selenium IDE on both Chrome and Firefox. The team expanded the code export functionality from IDE-captured tests. Also, the team provided code hooks allowing others to write their own export hooks.

With great Applitools integration, Selenium IDE can help engineers with or without coding skills build effective tests quickly.

Sixteen Reasons Outlined

Here’s a list of 16 reasons why — and one why not – to try Selenium IDE. Read them, and let Applitools know what you think.

Let’s dive in.

#1: Selenium IDE is cross-browser

Selenium IDE first came out in 2006.

It was a different time. iPhones didn’t exist, the Motorola Razr flip phone was the must-have device, and Borat topped the movie box office. Firefox was the shiny new browser, and Chrome wouldn’t come out for two more years.

So it’s no surprise that Selenium IDE hitched its wagon to Firefox. Unfortunately, it remained that way for over a decade, frustrating the heck out of users with its single-browser support.

No more.

Selenium IDE runs as a Google Chrome Extension

….and Firefox Add-on:

Even better, Selenium IDE can run its tests on Selenium WebDriver servers. You can do this using Selenium IDE’s command line test runner, called SIDE Runner.

You can think of SIDE Runner as blending elements of Selenium IDE and Selenium Webdriver. It takes a Selenium IDE script, saved as a .side file, and runs it using browser drivers such as ChromeDriver, EdgeDriver, Firefox’s geckodriver, IEDriver, and SafariDriver.

SIDE Runner, and the drivers above, are available as straightforward npm installs. Here’s what it looks like in action:

#2 Robust Tests

For years, brittle tests have been an issue for functional tests — whether you record them or code them by hand. A huge contributor to this problem has been object locators. These are how your QA automation tool identifies which field to fill, or which button to click. These can be a button label, an XPath expression, or something else.

Developers are constantly sadistically tormenting QA teams releasing new features, and as a result, their UI code is constantly changing as well. When UI changes, object locators often do as well.

Selenium IDE fixes that by capturing multiple object locators when you record your script. During playback, if Selenium IDE can’t find one locator, it tries each of the other locators until it finds one that works. Your test will fail only if none of the locators work.

This doesn’t guarantee scripts will always play back, but it does insulate scripts against many changes. Here’s a screenshot of how it works. As you can see, Selenium IDE captures linkText, an XPath expression, and CSS-based locators.

Imagine building this functionality in Selenium Webdriver. You’d have to first gather up all potential Xpath locators, then all CSS locators, then iterate through each until you find an object match. It’d be a huge chunk of time to automate just one interaction, and you’d be left with a mess of hard-to-maintain code.
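
To see why, here is a rough sketch of hand-rolled fallback locator logic in Selenium WebDriver (JavaScript); the locators are illustrative, and you would need something like this for every recorded interaction:

```javascript
const { By } = require('selenium-webdriver');

// Try each recorded locator until one resolves -- roughly what Selenium IDE
// does for you automatically. The locators here are illustrative.
async function findWithFallback(driver) {
  const locators = [
    By.linkText('Sign in'),
    By.xpath('//*[@id="header"]/nav/a[3]'),
    By.css('#header nav a.sign-in'),
  ];
  for (const locator of locators) {
    const matches = await driver.findElements(locator);
    if (matches.length > 0) {
      return matches[0];
    }
  }
  throw new Error('None of the recorded locators matched');
}
```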

Selenium IDE provides an alternative that is fast, resilient, and easy-to-maintain.

#3 Wait For Your App

When running tests, it’s essential to give your application time to catch up to your test automation tool. This can include time for backend operations, fetching page elements, and rendering the page. It’s especially necessary when running on staging servers that are under-resourced.

Why does waiting matter? If your test script tries to interact with some page element (field, button, etc.) that hasn’t loaded, it will stop running.

Thankfully, the new Selenium IDE knows to automatically wait for your page to load. Also, commands that interact with an element wait for that element to appear on the page. This should eliminate most, if not all, of your explicit waits.
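
For contrast, here is the kind of explicit wait code you would otherwise write by hand in Selenium WebDriver (JavaScript); the locator and timeouts are illustrative:

```javascript
const { By, until } = require('selenium-webdriver');

// Wait up to 10 seconds for the element to exist, then for it to become
// visible, before clicking it. The locator and timeouts are illustrative.
async function clickWhenReady(driver) {
  const submit = await driver.wait(until.elementLocated(By.css('#submit')), 10000);
  await driver.wait(until.elementIsVisible(submit), 10000);
  await submit.click();
}
```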

But, if that’s not enough, the new Selenium IDE gives you other options.

In the new Selenium IDE there’s a global set speed command that you can use to pause after every test step. Even better, you can set this from the toolbar in the new Selenium IDE. Check it out below.

Between automatic waits and global set speed, you should have a lot fewer pause commands. That means your tests will be simpler and easier to maintain.

If you need more fine-grained control, Selenium IDE lets you insert steps to wait for an element to meet some condition: editable, present, or visible — or the opposite (not editable, not present, or not visible).

Finally, there’s the pause command that you can insert after individual steps. Selenium IDE has had this for a long time; feel free to use it if you’re feeling nostalgic.

#4 Conditional Logic

When testing web applications, your scripts have to handle intermittent user interface elements that can randomly appear in your app. These are those oh-so-helpful cookie notices, as well as popups for special offers, quote requests, newsletter subscriptions, paywall notifications, and adblocker requests.

Conditional logic is a great way to handle these intermittent UI annoyances. You want your scripts to say: if X appears, click the link to make it go away.

You can easily insert conditional logic — also called control flow —  into your Selenium IDE scripts. Here are details, and how it looks:

#5 Modular Test Scripts

Just like application code, test scripts need to be modular. Why?

Many of your test scripts will have steps to sign into your app, sign up for an account, and sign out of an app. It’s a waste of time to re-create those test steps over and over.

Selenium IDE lets one script run another. Let’s say you have a login script that all your other scripts call. You can easily insert this step into Selenium IDE. Here’s how it looks:

This way, if your sign in, sign up, or sign out functionality changes, you only have one test script to change. That makes test maintenance a lot easier.

Here’s a quick demo of this in action:

#6 Selenium IDE supports embedded code

As broad as the Selenium IDE API is, it doesn’t do everything. For this reason, Selenium IDE has execute script and execute async script commands that let your script call a JavaScript snippet.

This gives you a tremendous amount of flexibility, letting you take advantage of JavaScript itself and its wide range of libraries.

To use it, click on the test step where you want JavaScript to run, choose Insert new command, and type execute script or execute async script in the command field, as shown below:
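
For example, an execute script step might generate test data on the fly. Selenium IDE wraps the snippet in a function, so a bare return is valid, and the returned value can be stored in the variable named in the step’s value field (the domain below is a placeholder):

```javascript
// Snippet for an "execute script" step: build a unique email address for a
// sign-up form. The domain is a placeholder.
return 'user+' + Date.now() + '@example.com';
```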

#7 Scripts Can Be Edited

In the old Selenium IDE, scripts couldn’t be edited. For this reason, Selenium IDE tests were considered throwaway scripts: if they didn’t work, you’d have to delete them and re-record a test.

With the new Selenium IDE, you can easily modify your tests. Insert, modify, and delete commands, as you can see below. No more throwaway scripts!

#8 Available Debugger

Pretty much every IDE on the market has combined an editor and a debugger. (That is, after all, what’s meant by Integrated Development Environment.)

But not the old Selenium IDE. It had no debugger. (Whoops.)

The new Selenium IDE lives up to its name, and provides a way for you to set breakpoints in your script. Just click on the left margin of your test.

This way, you can inspect your browser’s state when your script stops due to a breakpoint. Here’s how it looks:

This makes it a lot easier to troubleshoot issues. (Speaking of troubleshooting, check out #16 below.)

#9 Run Scripts In Parallel

With the old Selenium IDE tests could only be run one at a time. This made tests take much longer. Alternatives like Selenium Grid were only available when used with Selenium WebDriver.

Selenium IDE can run tests in parallel. This lets you get through your test suites much faster.

To run multiple SIDE Runner tests in parallel, just tell it the number of parallel processes you want. Here’s an example of running three tests at once:

No, that’s not a Bandersnatch reference…

Here’s a quick video of this in action (view in full screen since the fonts are small):

#10 Run From CI Build Scripts

Because SIDE Runner is called from the command line, you can easily fit it into your continuous integration build scripts, so long as your CI server can call selenium-side-runner and upload the .side file (your test script) as a build artifact. For example, here’s how to upload an input file in Jenkins, Travis, and CircleCI.

This means that Selenium IDE can be better integrated into your DevOps toolchain. The scripts created by your less-technical QA team members, including business analysts, can be run with every build. This helps align QA with the rest of the business and ensures that fewer bugs escape into production.

#11 Selenium IDE scripts can be managed in a code repository

Other record and replay tools store their tests in a range of binary file formats. (For example, here are UFT’s binary file formats.) You could check these into a source code repo, such as GitHub or GitLab, but it wouldn’t be all that useful since you couldn’t inspect test scripts, compare differences, or selectively pull in changes.

In contrast, the new Selenium IDE stores test scripts as JSON files. This makes them easy to inspect, diff, and modify. Here’s a script I recorded, viewed in Sublime text editor. You can easily change the starting URL, window size, and object locators.

If you manage your Selenium Webdriver scripts in GitHub, GitLab, Bitbucket, Azure DevOps, AWS CodeCommit, Google Cloud Source, or some other source code repo, you can now do the same for your Selenium IDE scripts.

#12 Extensible With Plugins

Unlike the old Selenium IDE, the new Selenium IDE supports third-party plugins to extend its functionality. Here’s how to build your own Selenium IDE plugin.

This is pretty exciting. You can imagine companies building plugins to have Selenium IDE do all kinds of things — upload scripts to a functional testing cloud, a load testing cloud, or a production application monitoring service like New Relic Synthetics.

Plenty of companies have integrated Selenium Webdriver into their offerings. I bet the same will happen with Selenium IDE as well.

Speaking of new plugins…

#13 Do Visual UI Testing

We here at Applitools have built a Selenium IDE plugin that adds AI-powered visual validation to Selenium IDE, called Applitools for Selenium IDE. (Imaginative, right?)

To get it, head to the Chrome or Firefox store, do the three-second install, plug in your Applitools API key, and you’re ready to go.

Create a Selenium IDE script, choose Insert new command, type eyes (that’s the name of our product), and insert a visual checkpoint into your test script. Like this:

Visual checkpoints are a great way to ensure that your UI renders correctly. Rather than a bunch of assert statements on all your UI elements — which would be a pain to maintain — one visual checkpoint checks all your page elements.

Best of all, Applitools uses visual AI to look at your web app the same way a human does, ignoring minor differences. This means fewer fake bugs to frustrate you and your developers — a problem that often leads simple pixel comparison tools to fail. When Applitools finds a visual bug, it’s worth paying attention to.

Here’s an example of Applitools Visual AI in action, finding a missing logo on a GitHub page. We didn’t have to create an assert statement on the logo; Applitools visual AI figured this out on its own.

#14 Visually Test Responsive Web Apps

When you’re testing the visual layout of your responsive web apps, it’s a good idea to do it on a wide range of screen sizes (also called viewports) to ensure that nothing appears out of whack. It’s all too easy for a responsive web bug to creep in.

And when they do, the results can range from merely cosmetic to business-stopping. Here’s Southwest Airlines putting the kibosh on their checkout process with a responsive bug that covers up the Continue button:

Not good, right?

When you use Applitools for Selenium IDE, you can visually test your webpages on Applitools Ultrafast Grid. This cloud-based testing service has over 100 combinations of browsers, emulated devices, and viewport sizes. This lets you do thorough visual testing on all your web apps.

Here’s how you specify which combinations to test on:
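
In Applitools for Selenium IDE you pick these combinations in the plugin’s settings panel. For comparison, here is a rough sketch of the equivalent Ultrafast Grid configuration in the Applitools JavaScript SDK; exact names can vary between SDK versions, and the browsers, viewports, and device are illustrative:

```javascript
const { Eyes, VisualGridRunner, Configuration, BrowserType,
        DeviceName, ScreenOrientation } = require('@applitools/eyes-selenium');

const runner = new VisualGridRunner(10);   // up to 10 renders in parallel
const eyes = new Eyes(runner);

const config = new Configuration();
config.addBrowser(1200, 800, BrowserType.CHROME);    // desktop Chrome at 1200x800
config.addBrowser(1200, 800, BrowserType.FIREFOX);   // desktop Firefox at 1200x800
config.addBrowser(1024, 768, BrowserType.SAFARI);    // desktop Safari at 1024x768
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
eyes.setConfiguration(config);
```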

Once your tests run on Ultrafast Grid, you can easily check your test results on all the various combinations, like this:

Your responsive web bugs can run but they cannot hide…

#15 Pinpoint The Cause Of Front-end Bugs

Every Selenium IDE script you run with Ultrafast Grid can be analyzed with our Root Cause Analysis.

This matters because, to bastardize Jerry Seinfeld, it’s not enough to FIND a bug. You have to FIX the bug.

Like the Seinfeld car rental company, every testing tool I know of finds bugs, but doesn’t tell you how to fix them.

Except Applitools.

When you find a visual bug in Applitools, click on it, and view the relevant DOM and CSS diffs, as shown below:

I want to point out that we don’t show all DOM and CSS diffs — just the handful that are likely to have caused a visual bug. This makes debugging visual bugs go much faster.

We covered a ton of different ways Selenium IDE and Applitools work together. Here’s a visual summary:

#16 Export Webdriver scripts 

Originally, Selenium IDE could export to Webdriver Java, but the 2019 refresh required additional coding. That code has been written for the following exports:

  • C# NUnit
  • C# xUnit
  • Java JUnit
  • JavaScript Mocha
  • Python pytest
  • Ruby RSpec

Additionally, you can create and contribute your own code export package. You can find the instructions in the Selenium IDE documentation.

Selenium IDE Limitations

Since this document first got posted, the two limitations have been addressed substantially. Originally, code export needed to be completed, and it was – with Java support in early 2019. As mentioned above, anyone can contribute scripting export code to the project, which is how the export set has grown.

Selenium IDE doesn’t support data-driven scripts directly

In the original design, Selenium IDE could not import a bunch of tabular data, like a CSV file or database table, and then run a parameterized test once for each row of data. The direct feature is still of interest – but remains blocked by a bug. You can track progress here.

However, intrepid engineers have proposed a workaround using SIDE Runner; a rough sketch in Node follows the quoted steps below. Contributor PawelSuwinski writes:

“With SIDE runner is a just matter of side file preprocessing before running. I did it in some php project as part of a composer script, I do not have any JS npm run-script working example but would use templates concept this way:

  1. For CSV data use something like csv2json to get data in JSON format
  2. Creating template SIDE file use store json with Target like ex. %DATA%
  3. In preprocessor replace all ‘%DATA%’ in template side file with target data (ex. using rexreplace) and save it as a target side file (in cache/ tmp area)
  4. Run side runner on target side file”
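
Here is a minimal Node sketch of that preprocessing idea; the file names, the %DATA% token, and the data shape are illustrative and follow the contributor’s outline rather than any official Selenium IDE feature:

```javascript
const fs = require('fs');

// Read the data rows (e.g. the output of csv2json) and the template .side file.
const rows = JSON.parse(fs.readFileSync('data.json', 'utf8'));
const template = fs.readFileSync('login-template.side', 'utf8');

fs.mkdirSync('tmp', { recursive: true });
rows.forEach((row, i) => {
  // Replace every %DATA% token with this row's JSON and save a target file.
  const side = template.split('%DATA%').join(JSON.stringify(row));
  fs.writeFileSync(`tmp/login-${i}.side`, side);
  // Then run each one with: selenium-side-runner tmp/login-<i>.side
});
```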

Work on this feature continues. Let Applitools know if you have tried the workaround successfully.

Summary

Here’s how Selenium IDE compares to traditional record & replay:

Capability | Traditional Record & Replay | Selenium IDE
Cross-browser support | No | Yes
Resilient tests | No | Yes
Automatically wait for app under test | No | Yes
Conditional logic | No | Yes
Run one test from another | No | Yes
Embed code into scripts | No | Yes
Edit scripts | No | Yes
Debug scripts | No | Yes
Run scripts in parallel | No | Yes
Run scripts during CI builds | No | Yes
Manage scripts in source code repo | No | Yes
Plugins to extend functionality | No | Yes
Visual UI testing | No | Yes (w/ plugin)
Responsive web support | No | Yes (w/ plugin)
Diagnose root cause of visual bugs | No | Yes (w/ plugin)
Export tests to code | No | Yes
Data-driven tests | No | Workaround proposed

Less is more

Selenium IDE is part of a larger trend of software making life simpler for technical folks. One example: the broad range of codeless tools for developing applications.

Other examples: Serverless offerings like AWS Lambda make it easier to write just the code you need to get a job done. And Schemaless databases like MongoDB provide architects with much more flexibility to innovate versus tightly constricted SQL databases.

Codeless, serverless, schemaless — and now scriptless, with Selenium IDE. We might be seeing a trend here.

Go deeper

To get started, check out this tutorial on Selenium IDE. It’s done by Raja Rao, a former QA manager who’s been using Selenium Webdriver for over a decade. So he knows what he’s talking about.

Beyond that, here’s a fairly complete list of resources to learn the new Selenium IDE in 2021:

Selenium IDE pages

Applitools for Selenium IDE pages

Videos

How do you plan on using Selenium IDE? Let us know!

From Selenium To Robotics with Jason Huggins https://applitools.com/blog/jason-huggins/ Mon, 05 Apr 2021 21:43:21 +0000


Jason Huggins has combined wicked brilliance, great experience, serendipity and perseverance. Jason is a luminary of software testing. And, he will be one of the key speakers at this week’s Future of Testing Mobile North America Conference, sponsored by Applitools. 

Jason serves today as founder and CEO at Tapster Robotics, but you may know him better as the co-founder and/or creator of amazing software testing tools. Jason co-created Selenium, and he co-created Appium. And, Jason co-founded Sauce Labs. 

Jason has chronicled his experiences in numerous places online. Here is a short guide to some cool recordings. 

Jason Huggins – Starting Selenium

Joe Colantonio and Jason have this great discussion from 2017 discussing the origins of the Selenium project. You might already know the story. Jason and his team had been developing a time and expense application at ThoughtWorks in 2003. 

Back then, ThoughtWorks had a global presence, but anyone outside of headquarters dealt with huge latency issues just to log their timesheets. The round-trip time to add another row to an expense report clearly slowed the work of someone in San Francisco and seemed positively glacial to someone in India. To overcome this limitation, Jason’s team decided to use JavaScript in the browser to do this work – instead of going back to the server.

However, JavaScript had not become a standard. Code Jason wrote would run on Internet Explorer but break on Mozilla. A fix for the Mozilla code might break IE. And updates to both browsers might break everything. 

Jason’s team needed a reliable way to test their application across these browsers. So, Jason and two colleagues at ThoughtWorks did research on the available tools. Finding nothing that met their needs, they began writing what became Selenium. Selenium could enter data and click buttons on a series of web pages to run through different test scenarios. And, Selenium could do this across multiple browsers.

Jason talks a bit more about this in his keynote address from the 2011 Selenium Conference.

Test Project Grows

After building the test software, Jason thought he would go back to the time and expense application. But, as people inside ThoughtWorks found out about his work, they wanted to know more about the web application testing tool instead. Other teams wanted to use the tool for their own projects.

Eventually, ThoughtWorks realized that ThoughtWorks clients would want this kind of testing tool. ThoughtWorks concluded that the test code would aid their projects if it could get easily into the hands of their clients. As a result, ThoughtWorks made the test software project open-source.

From there, it took five years for Selenium to become a 1.0 product, and Jason had long since left ThoughtWorks. In the intervening years, Selenium has come to dominate much of web application testing, thanks to Jason’s desire to automate application tests back in 2003.

Robotics and Sauce Labs

You can watch Jason’s interview with Tim O’Brien of O’Reilly Media where he talks about the first robot he built to play Angry Birds, which he showed off at the JavaOne conference in San Francisco in 2011. He also talks about founding Sauce Labs, and his experience at Google that led up to joining the Sauce team.

As he discusses his Sauce experience, Jason talks about leaving ThoughtWorks and joining Google. He helped Google build their Selenium farm. This infrastructure would test web apps developed at Google. 

Jason realized that this test infrastructure could reside anywhere on the Internet. He also understood that companies no longer needed dedicated test infrastructure. He took this insight and joined the team founding Sauce Labs. Sauce provides that infrastructure in the cloud.

Jason also talks about his experience with robotics. He built what he calls a “bitbeambot” and his idea of building a robot that could play Angry Birds. Then, he demonstrates his home-built robot doing this.

Appium with Dan Cuellar and Jason Huggins

A third great video comes from the 2018 Appium Conference in London. Jason joins Dan Cuellar, the founder of Appium, to discuss how Appium almost did not come to be – and how Jason contributed to the creation of what became Appium.

First, Jason talks about the creation of Selenium Remote Control, Selenium Grid, and Selenium Webdriver. Finally, he talks about the need for a standard – and how WebDriver got submitted to W3C for standardization.

Next, Dan talks about the need to test mobile applications running on iOS. As he goes through the initial iOS specification he talks about running into a command:

host.performtaskwithpathargumentstimeout()

This command would take JavaScript from a file, apply it to the iOS application, and then take the response and save it to a file. As Jason says, “Ludicrous.” But it fits with the iOS model. Everything in iOS development had to be done in Xcode, except for this command. And this ugly command made the Appium project possible.

Jason and Dan were working together at this point. Jason came up with the name “Appium.” It wasn’t the original idea – but they couldn’t use what they had wanted. So, Appium – Selenium for Apps. And, eventually, Android as well as iOS.

You will find a lot more fun history in Dan and Jason’s talk.

Tapster Robotics

From the O’Reilly video, Jason makes it clear that he loves robotics and the world of makers. He founded Tapster Robotics to help companies that want to validate their user interface physically. 

From his own hand-built robot playing Angry Birds, he now has a Tapster robot that can do the same. Tapster robots can test smartphones and tablets, as well as other push-button devices. The devices can interact with touch screens as well as side buttons.

Jason continues to develop great tools to help people test. And, he continues to participate in the world of software testing.

Get Ready For A Great Talk

Jason joins the Future of Testing Mobile North America conference with great experience, a lot of stories, and his current passion. We at Applitools thank him for joining our conference. We look forward to his presentation.

Test Automation University is now 75,000 students strong https://applitools.com/blog/tau-contributors/ Tue, 23 Feb 2021 09:32:28 +0000


What does it take to make a difference in the lives of 75,000 people?

Applitools has reached 75,000 students enrolled in Test Automation University, a global online platform led by Angie Jones that provides free courses on all things test automation. Today, more engineers understand how to create, manage, and maintain automated tests.

What Engineers Have Learned on TAU

Engineers have learned how to automate UI, mobile, and API tests. They have learned to write tests in specific languages, including Java, JavaScript, Python, Ruby, and C#. They have applied tests through a range of frameworks including Selenium, Cypress, WebdriverIO, TestCafe, Appium, and Jest.

A group of 75,000 engineers would exceed the population of some 19,000 cities and towns in the United States. They work at large, established companies and growing startups. They work on every continent, with the possible exception of Antarctica.

What makes Test Automation University possible? Contributors, who create all the coursework.

Thank You, Instructors

As of this writing, Test Automation University consists of 54 courses taught by 39 different instructors. Each instructor has contributed knowledge and expertise. You can find the list of authors on the Test Automation University home page.

Here are the instructors of the most recently added courses to TAU.

Author | Course | Details | Chapters
Corina Pip | JUnit 5 | Learn to execute and verify your automated tests with JUnit 5 | 17
Matt Chiang | WinAppDriver | Learn how to automate Windows desktop testing with WinAppDriver | 10
Marie Drake | Test Automation for Accessibility | Learn the fundamentals of automated accessibility testing | 8
Lewis Prescott | API Testing In JavaScript | Learn how to mock and test APIs in JavaScript | 5
Andrew Knight | Introduction to pytest | Learn how to automate tests using pytest | 10
Moataz Nabil | E2E Web Testing with TestCafe | Learn how to automate end-to-end testing with TestCafe | 15
Aparna Gopalakrishnan | Continuous Integration with Jenkins | Learn how to use Jenkins for Continuous Integration | 5
Moataz Nabil | Android Test Automation with Espresso | Learn how to automate Android tests with Espresso | 11
Mark Thompson | Introduction to JavaScript | Learn how to program in JavaScript | 6
Dmitri Harding | Introduction to NightwatchJS | Learn to automate web UI tests with NightwatchJS | 8
Rafaela Azevedo | Contract Tests with Pact | Learn how to implement contract tests using Pact | 8
Simon Berner | Source Control for Test Automation with Git | Learn the basics of source control using Git | 8
Paul Merrill | Robot Framework | Learn to use Robot Framework for robotic process automation (RPA) | 7
Brendan Connolly | Introduction to NUnit | Learn to execute and verify your automated tests with NUnit | 8
Gaurav Singh | Automated Visual Testing with Python | Learn how to automate visual testing in Python with Applitools | 11

Thank You, Students

As engineers and thinkers, the students continue to expand their knowledge through TAU coursework.

Each course contains quizzes of several questions per chapter. Each student who completes a course gets credit for questions answered correctly. Students who have completed the most courses and answered the most questions successfully make up the TAU 100.

Some of the students who lead on the TAU 100 include:

Student | Role | Location | Credits | Rank
Osanda Nimalarathna | Founder @MaxSoft | Ambalangoda, Sri Lanka | 44,300 | Griffin
Patrick Döring | Sr. QA Engineer @Pro7 | Munich, Germany | 44,300 | Griffin
Darshit Shah | Sr. QA Engineer @N/A | Ahmedabad, India | 40,250 | Griffin
Adha Hrustic | QA Engineer @Klika | Bosnia and Herzegovina | 39,575 | Griffin
Ho Sang | Principal Technical Test Engineer @N/A | Kuala Lumpur, Malaysia | 38,325 | Griffin
Gopi Srinivasan | Senior SDET Lead @Trimble Inc | Chennai, India | 38,075 | Griffin
Ivo Dimitrov | Sr. QA Engineer @IPD | Sofia, Bulgaria | 37,875 | Griffin
Malith Karunaratne | Technical Specialist – QE @Pearson Lanka | Sri Lanka | 36,400 | Griffin
Stéphane Colson | Freelancer @Testing IT | Lyon, France | 35,325 | Griffin
Tania Pilichou | Sr. QA Engineer @Workable | Athens, Greece | 35,025 | Griffin

Join the 75K!

Get inspired by the engineers around the world who are learning new test automation skills through Test Automation University.

Through the courses on TAU, you’ll not only learn how to automate tests, but more importantly, you’ll learn to eliminate redundant tests, add automation into your continuous integration processes, and make your testing an integral part of your build and delivery processes.

Learn a new language. Pick up a new testing framework. Know how to automate tests for each part of your development process – from unit and API tests through user interface, on-device, and end-to-end tests.

No matter what you learn, you will become more valuable to your team and company with your skills on how to improve quality through automation.

Thunderhead Speeds Quality Delivery with Applitools https://applitools.com/blog/thunderhead-speeds-quality-delivery-with-applitools/ Tue, 16 Feb 2021 07:15:36 +0000


Thunderhead is the recognised global leader in the Customer Journey Orchestration and Analytics market. The ONE Engagement Hub helps global brands build customer engagement in the era of digital transformation.  

Thunderhead provides its users with great insights into customer behavior. To continue to improve user experience with their highly-visual web application, Thunderhead develops continuously. How does Thunderhead keep this visual user experience working well? A key component is Applitools.

Before – Using Traditional Output Locators

Prior to using Applitools, Thunderhead drove its UI-driven tests with Selenium for browser automation and Python as the primary test language. They used traditional web element locators both for setting test conditions and for measuring the page responses.

Element locators have been state-of-the-art for measuring page response because of precision. Locators get generated programmatically. Test developers can find any visual structure on the page as an element.

Depending on page complexity, a given page can have dozens, or even hundreds, of locators. Because test developers can inspect individual locators, they can choose which elements they want to check. But, locators limit inspection. If a change takes place outside the selected locators, the test cannot find the change.

These output locators must be maintained as the application changes. Unmaintained locators can cause test problems by reporting errors because the locator value has changed while the test has not. Locators may also remain the same but reflect a different behavior not caught by a test.

Thunderhead engineers knew about pixel diff tools for visual validation. They also had experience with those tools; they had concluded that pixel diff tools would be unusable for test automation because of the frequency of false positives.

Introducing Applitools at Thunderhead

When Thunderhead started looking to improve their test throughput, they came across Applitools. Thunderhead had not considered a visual validation tool, but Applitools made some interesting claims. The engineers thought that AI might be marketing buzz, but they were intrigued by a tool that could abstract pixels into visible elements.

As they began using Applitools, Thunderhead engineers realized that Applitools gave them the ability to inspect an entire page.  Not only that, Applitools would capture visual differences without yielding bogus errors. Soon they realized that Applitools offered more coverage than their existing web locator tests, with less overall maintenance because of reduced code.

The net benefits included:

  • Coverage – Thunderhead could write tests for each visible on-page element on every page
  • Maintainability – By measuring the responses visually, Thunderhead did not have to maintain all the web element locator code for the responses – reducing the effort needed to maintain tests
  • Visual Validation – Applitools helped Thunderhead engineers see the visual differences between builds under test, highlighting problems and aiding problem-solving.
  • Faster operation – Visual validation analyzed more quickly than traditional web element locators.

Moving Visual Testing Into Development

After using Applitools in end-to-end testing, Thunderhead realized that Applitools could help in several areas.

First, Applitools could help with development. Often, when developers made changes to the user interface, unintended consequences crept in at check-in time. By waiting for end-to-end tests to expose these issues, developers often had to stop their current work and shift context to repair older code. By moving visual validation to check-in, Thunderhead could make developers more effective.

Second, developers often waited until the final build to run their full suite of element locator tests. These tests ran against multiple platforms, browsers, and viewports. The net test run would take several hours. The equivalent test using Applitools took five minutes. So, Thunderhead could run these tests with every build.

For Thunderhead, the net result was both greater coverage and tests run at the right time for developer productivity.

Adding Visual Testing to Component Tests

Most recently, Thunderhead has seen the value of using a component library in their application development. By standardizing on the library, Thunderhead looks to improve development productivity over time. Components ensure that applications provide consistency across different development teams and use cases.

To ensure component behavior, Thunderhead uses Applitools to validate the individual components in the library. Thunderhead also tests the components in mocks that demonstrate them in typical deployment use cases.

By adding visual validation to components, Thunderhead expects to see visual consistency validated much earlier in the application development cycle.

Other Benefits From Applitools

Beyond the benefits listed above, Thunderhead has seen the virtual elimination of visual defects found through end-to-end testing. The check-in and build tests have exposed the vast majority of visual behavior issues during the development cycle. They have also made developers more productive by eliminating the context switches previously needed if bugs were discovered during end-to-end testing. As a result, Thunderhead has gained greater predictability in the development process.

In turn, Thunderhead engineers have gained greater agility. They can try new code and behaviors and know they will visually catch all unexpected behaviors. As a result, they are learning previously-unexplored dependencies in their code base. As they expose these dependencies, Thunderhead engineers gain greater control of their application delivery process.

With predictability and control comes confidence. Using Applitools has given Thunderhead increased confidence in the effectiveness of their design processes and product delivery. With Applitools, Thunderhead knows how customers will experience the ONE platform and how that experience changes over time.

Featured photo by Andreas Steger on Unsplash

The post Thunderhead Speeds Quality Delivery with Applitools appeared first on Automated Visual Testing | Applitools.

]]>
Skeptics Who Recommend Cross Browser Testing https://applitools.com/blog/skeptics-who-recommend-cross-browser-testing/ Fri, 12 Feb 2021 15:50:42 +0000 https://applitools.com/?p=26860 Whether you classify yourself as a cross browser skeptic or a grudging participant, it is clear that Applitools Ultrafast Grid offers something that differs from your current conception of cross browser testing.

The post Skeptics Who Recommend Cross Browser Testing appeared first on Automated Visual Testing | Applitools.

]]>

Who recommends cross browser testing to their organizations? 

In this series, we discuss the results of the Applitools Ultrafast Cross Browser Hackathon. Today, we will explain how those who were skeptics about cross browser testing would recommend that their organizations run Applitools Ultrafast Grid for cross browser testing.

Reviewing Past Posts In This Series

In this series we have covered the results of the Applitools Ultrafast Cross Browser Hackathon. 

  • My first post covered the overall results. I wrote specifically about the ease of creating cross browser tests with Applitools and introduced the topics to follow. 
  • My second post covered how the speed of Applitools Ultrafast Grid makes cross browser testing a reality within the application build process.
  • In the third, I explained how Applitools Visual AI tests provide much greater code stability compared with legacy cross browser tests, making the code easier to develop and maintain over time.

With this post, I will cover the survey topic from the hackathon: would Hackathon participants recommend the legacy cross browser testing approach to their peers and organizations, and would they recommend Applitools Ultrafast Grid for that purpose?

Methodology

For this survey, Applitools used an approach called Net Promoter Score (NPS). Net Promoter Score uses a survey question with the highest correlation to satisfaction:

“On a scale of 0 to 10, with 0 being not likely and 10 being highly likely, how likely are you to recommend [the survey object] to others?”

Researchers have shown that this question correlates most highly with satisfaction. Respondents who give a 9 or 10 (highly likely to recommend) get classified as “promoters.” Promoters have high satisfaction. Those who give a 7 or 8 get classified as passives – they are neither satisfied nor dissatisfied. Those who give a 6 or below get classified as detractors. Detractors are dissatisfied with something and have no willingness to recommend the survey object.

To compute the score, tally the responses: add 1 for each 9 or 10, add 0 for each 7 or 8, and subtract 1 for each 6 or below. Then normalize to a 100-point scale by dividing the tally by the number of respondents and multiplying by 100.
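As a concrete illustration of that arithmetic, here is a small JavaScript sketch of the calculation described above.

```javascript
// Net Promoter Score from a list of 0–10 responses, following the scoring
// described above: +1 for a 9 or 10, 0 for a 7 or 8, -1 for a 6 or below,
// normalized to a -100..100 scale.
function netPromoterScore(responses) {
  const tally = responses.reduce((sum, r) => {
    if (r >= 9) return sum + 1; // promoter
    if (r >= 7) return sum;     // passive
    return sum - 1;             // detractor
  }, 0);
  return (tally / responses.length) * 100;
}

// Example: 3 promoters, 1 passive, 1 detractor -> (3 - 1) / 5 * 100 = 40
console.log(netPromoterScore([10, 9, 9, 8, 3])); // 40
```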

Results can range from -100 to 100. According to Bain & Co, the developers of NPS:

  • 80 rates as ‘world class’
  • 50 rates as ‘excellent’
  • 20 rates as ‘favorable’
  • Around 0 rates as ‘neutral’
  • -10 and lower rates as ‘negative’

Asking Hackathon Participants

Applitools surveyed the 2,224 Hackathon participants about their willingness to recommend the use of Applitools Ultrafast Grid to their peers. The survey also asked about their willingness to recommend the use of legacy cross browser testing. Of the participants, 203 engineers were able to run both the legacy and the Ultrafast cross browser tests.

Here were the survey responses:

Few Fans Of Traditional Cross Browser Testing

For legacy cross browser testing, using a traditional test application and validation process, 68% of participants got classified as detractors. 17% got classified as passives. Only 15% promoted the legacy approach. This gave an NPS of -54 (rounded down). Participants were, in general, not fans of legacy cross browser testing.

This result mirrors how little the legacy approach to cross browser testing actually gets used. The companies listed in the graphic above provide the infrastructure to run tests, but they don't reduce the test load or the code load. Their pricing reflects what it costs them to set up and maintain that infrastructure of devices, browsers, operating systems, and viewport sizes. Given the cost of setting up and maintaining legacy cross browser tests, it makes sense that few companies use cross browser testing actively.

Willing to Recommend Applitools Ultrafast Grid for Cross Browser Testing

The survey also asked participants about their willingness to recommend Applitools Ultrafast Grid for cross browser testing. Here, 75% gave a 9 or 10 and got classified as promoters. Another 20% responded 7 or 8 and got classified as passives.

The promoters valued:

  • Speed of tests – they could run their tests within the build process
  • Simplicity of management – no tests needed to be run and tuned across multiple platforms
  • Simplified code management – fewer locators made tests easier to set up and manage
  • Greater accuracy – the underlying Visual AI wasn't plagued by false positives and caught all the code errors

The promoters discovered the ease of creating and maintaining cross browser tests using Applitools Ultrafast Grid. The promoters also realized that, with tests completed and analyzed accurately in well under 10 minutes, Applitools Ultrafast Grid made it possible to run cross browser tests within the scope of a build or during unit tests. Legacy tests, even when run in parallel, took tens of minutes to complete and analyze.

Implications of Recommending Cross Browser Testing

If you read my earlier posts, you know that two camps existed related to cross browser testing. One camp ran cross browser tests because they had encountered issues in the past and saw cross browser tests as a safe approach. The other camp avoided it altogether and thought cross browser testing unnecessary.

There is a third camp using Applitools Ultrafast Grid. This camp recognizes that the combination of:

  • test speed appropriate for software build processes
  • ease of deployment
  • lack of infrastructure to manage, and 
  • simplified test code management, 

makes cross browser testing feasible. This third camp can deploy Applitools Ultrafast Grid to run and validate rendering behavior for any combination of browser, operating system, and viewport size, and run this test set quickly at the unit, build, merge, and final test stages.

What then are the implications for the Applitools Ultrafast Cross Browser Hackathon? 75% of the 208 engineers who completed both sets of tests could see the value of Applitools Ultrafast Grid just by using it, and they would be willing to recommend it. They realized that, whether or not they had previously released a bug to the field that cross browser testing could have caught, Ultrafast Grid changed their understanding of this kind of testing completely.

Applitools users know that Applitools Visual AI makes it possible to run visual tests as part of their unit tests. These users can incorporate visual tests at every build and merge. And with Applitools Ultrafast Grid, they can incorporate cross browser tests as well.

Importantly, Hackathon participants learned these lessons just through their time on the Hackathon. 

What These Recommendations Mean For You

When you have 75% of engineers recommending something, it might be worth trying. Whether you classify yourself as a cross browser skeptic or a grudging participant, it is clear that Applitools Ultrafast Grid offers something that differs from your current conception of cross browser testing. 

At the very least, read the Applitools results in detail. You will learn why the engineers gave these recommendations.

More importantly, why not give Applitools a try? Sign up for a free account or, if you prefer, request a demo from an Applitools representative.

Next Week

Next week, in my last blog post in this series, I will help you draw your own conclusions about Modern Cross Browser Testing.

The post Skeptics Who Recommend Cross Browser Testing appeared first on Automated Visual Testing | Applitools.

]]>
Stability In Cross Browser Test Code https://applitools.com/blog/stability-in-cross-browser-test-code/ Thu, 04 Feb 2021 23:56:57 +0000 https://applitools.com/?p=26674 If you read my previous blog, Fast Testing Across Multiple Browsers, you know that participants in the Applitools Ultrafast Cross Browser Hackathon learned the following: Applitools Ultrafast Grid requires an...

The post Stability In Cross Browser Test Code appeared first on Automated Visual Testing | Applitools.

]]>
Test Code Stability

If you read my previous blog, Fast Testing Across Multiple Browsers, you know that participants in the Applitools Ultrafast Cross Browser Hackathon learned the following:

  • Applitools Ultrafast Grid requires an application test to be run just once. Legacy approaches require repeating tests for each browser, operating system, and viewport size of interest.
  • Cross browser tests and analysis typically complete within 10 minutes, meaning that test times match the scale of application build times. Legacy test and analysis cycles take several hours to generate results.
  • Applitools makes it possible to incorporate cross browser tests into the build process, with both speed and accuracy.

Today, we’re going to talk about another benefit of using Applitools Visual AI and Ultrafast Grid: test code stability.

What is Test Code Stability?

Test code stability is the property of test code continuing to give consistent and appropriate results over time. With stable test code, tests that pass continue to pass correctly, and tests that fail continue to fail correctly. Stable tests do not generate false positives (reporting a failure in error) or false negatives (missing a real failure).

Stable test code produces consistent results. Unstable test code requires maintenance to address the sources of instability. So, what causes test code instability?

Anand Bagmar did a great review of the sources of flaky tests. Some of the key sources of instability:

  • Race conditions – you apply inputs too quickly to ensure a consistent output
  • Ignoring settling time – your output becomes stable only after your sampling time
  • Network delay – your network infrastructure causes unexpected behavior
  • Dynamic environments – your inputs cannot guarantee all the outputs
  • Incompletely scoped test conditions – you have not specified the correct changes
  • Myopia – you only look for expected changes and actual changes occur elsewhere
  • Code changes – your code uses obsolete controls or measures obsolete output.

When you develop tests for an evolving application, code changes introduce the most instability in your tests. UI tests, whether they exercise the UI alone or complete end-to-end behavior, depend on the underlying UI code. You use your knowledge of the app code to build the test interfaces. Locator changes – whether to coded identifiers, CSS selectors, or XPath locators – can cause your tests to break.

When test code depends on the app code, each app release will require test maintenance. Otherwise, no engineer can be confident that a “passing” test did not miss an actual failure, or that a “failing” test indicates a real failure rather than a locator change.

Test Code Stability and Cross Browser Testing

Considering the instability sources, a tester like you takes on a huge challenge with cross browser tests. You need to ensure that your cross browser test infrastructure addresses these sources of instability so that your cross browser behavior matches expected results.

If you use a legacy approach to cross browser testing, you need to ensure that your physical infrastructure does not introduce network delays or other sources of test flakiness. Part of your maintenance effort goes to ensuring that the test infrastructure itself does not become a source of false positives or false negatives.

Another check you make relates to responsive app design. How do you ensure responsive app behavior? How do you validate page location based on viewport size?

If you use legacy approaches, you spend a lot of time ensuring that your infrastructure, your tests, and your results all match expected app user behavior. In contrast, the Applitools approach does not require debugging and maintaining multiple test infrastructures, since the purpose of the test is to ensure proper rendering of the server response.

Finally, you have to account for the impact of every new app coding change on your tests. How do you update your locators? How do you ensure that your test results match your expected user behavior?

Improving Stability: Limiting Dependency on Code Changes

One thing we have observed over time: code changes drive test code maintenance. We demonstrated this dependency relationship in the Applitools Visual AI Rockstar Hackathon, and again in the Applitools Ultrafast Cross Browser Hackathon. 

The legacy approach uses locators to both apply test conditions and measure application behavior. As locators can change from release to release, test authors must consider appropriate actions.

Many teams have tried to address the locator dependency in test code. 

Some test developers sit inside the development team. They create their tests as they develop their application, and they build the dependencies into the app development process. This approach can ensure that locators remain current. On the flip side, these tests provide little information on how application behavior changes over time.

Other developers provide a known set of identifiers during development and work to ensure that the UI tests use that consistent set. These tests run the risk of myopic inspection. By depending on supplied identifiers – especially to measure application behavior – these tests run the risk of false negatives: the identifiers may not change, yet they may no longer reflect the actual behavior of the application.

The modern approach limits identifier use to applying test conditions, while Applitools Visual AI measures the UI response. This approach still depends on identifier consistency – but on far fewer identifiers. In both hackathons, participants cut their dependence on identifiers by 75% to 90%. Their code ran more consistently and required less maintenance.
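A hedged sketch of that shift, again using Selenium WebDriver with the Applitools Eyes SDK for JavaScript (hypothetical page and selectors): the remaining locators only drive the application, while the response is measured visually.

```javascript
// Sketch: locators only apply the test conditions; the response is measured
// visually. (Hypothetical page and selectors.)
const { Builder, By } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');

(async function searchAndCheck() {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  try {
    await eyes.open(driver, 'My App', 'Search results render correctly');
    await driver.get('https://example.com/search'); // hypothetical page

    // The only locators left are the ones that drive the application...
    await driver.findElement(By.name('q')).sendKeys('visual testing');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // ...while the full response is validated visually, with no per-element
    // assertions to update when the markup changes.
    await eyes.check('Search results', Target.window().fully());
    await eyes.close();
  } finally {
    await driver.quit();
    await eyes.abortIfNotClosed();
  }
})();
```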

Modern cross browser testing reduces locator dependence by up to 90%, resulting in more stable tests over time.

Implications of Modern Cross Browser Testing

Applitools Ultrafast Grid overcomes many of the hurdles that testers experience running legacy cross browser test approaches. Beyond the pure speed gains, Applitools offers improved stability and reduced test maintenance.

Modern cross browser testing reduces dependency on locators. By using Visual AI instead of locators to measure application response, Applitools Ultrafast Grid can show when an application behavior has changed – even if the locators remain the same. Or, alternatively, Ultrafast Grid can show when the behavior remains stable even though locators have changed. By reducing dependency on locators, Applitools ensures a higher degree of stability in test results.

Also, Applitools Ultrafast Grid reduces infrastructure setup and maintenance for cross browser tests. In the legacy setup, each unique browser requires its own setup and connection to the server. Each setup can have physical or other failure modes that must be identified and isolated independent of the application behavior. By capturing the response from a server once and validating the DOM across other target browsers, operating systems, and viewport sizes, Applitools reduces the infrastructure debug and maintenance efforts.

Conclusions

Participant feedback from the Hackathon provided us with consistent views on cross browser testing. Participants viewed legacy cross browser tests as:

  • Likely to break on an app update
  • Susceptible to infrastructure problems
  • Expensive to maintain over time

In contrast, they saw Applitools Ultrafast Grid as:

  • Less expensive to maintain
  • More likely to expose rendering errors
  • Providing more consistent results.

You can read the entire report here.

What’s Next

What holds companies back from cross browser testing? Bad experiences getting results. But what if they could get good test results and have a good experience at the same time? Next, we will look at what participants said about their experience in the Applitools Cross Browser Hackathon.

The post Stability In Cross Browser Test Code appeared first on Automated Visual Testing | Applitools.

]]>
Fast Testing Across Multiple Browsers https://applitools.com/blog/fast-testing-multiple-browsers/ Thu, 28 Jan 2021 08:22:47 +0000 https://applitools.com/?p=26281 Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?

The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.

]]>

If you think like the smartest people in software, you conclude that testing time detracts from software productivity. Investments in parallel test platforms pay off by shortening the time to validate builds and releases. But, you wonder about the limits of parallel testing. If you invest in infrastructure for fast testing across multiple browsers, do you capture failures that justify such an investment?

The Old Problem: Browser Behavior

Back in the day, browsers used different code bases. In the 2000s and early 2010s, most application developers struggled to ensure cross browser behavior. There were known behavior differences among Chrome, Firefox, Safari, and Internet Explorer. 

Annoyingly, each major version of Internet Explorer had its own idiosyncrasies. When do you abandon users who still run IE 6 beyond its end-of-support date? How do you handle the IE 6 through IE 10 behavioral differences?

While Internet Explorer differences could be tied to major versions of operating systems, Firefox and Chrome released updates multiple times per year. Behaviors could change slightly between releases. How do you maintain your product's behavior on browser versions in your customers' hands that you never developed with or tested against?

Cross browser testing proved itself a necessary evil to catch potential behavior differences. In the beginning, app developers needed to build their own cross browser infrastructure. Eventually, companies arose to provide cross browser (and then cross device) testing as a service.

The Current Problem: Speed Vs Coverage

In the 2020s, speed can provide a core differentiator for app providers. An app that delivers features more quickly can dominate a market. Quality issues can derail that app, so coverage matters. But, how do app developers ensure that they get a quality product without sacrificing speed of releases?

In this environment, some companies invest in cross browser test infrastructure or test services – the large parallel infrastructure needed to create and maintain cross browser tests. And the bulk of the errors those tests uncover end up being rendering and visual differences, so the tests require some kind of visual validation. But do you really need to run each test repeatedly?

Applitools concluded that repeating tests required costly infrastructure as well as costly test maintenance. App developers intend that one server response work for all browsers. With its Ultrafast Grid, Applitools can capture the DOM state on one browser and then re-render it across the Applitools Ultrafast Test Cloud. Testers can choose among browsers, devices, viewport sizes, and operating systems. How much faster can this be?
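As a rough sketch of how that choice is expressed in test code, the Eyes Selenium JavaScript SDK exposes a Configuration object for declaring render targets. The class and enum names below may differ between SDK versions, so treat them as illustrative.

```javascript
// Sketch: the functional steps run once; the Ultrafast Grid renders the
// captured page against every declared target. Names may differ between
// SDK versions – treat them as illustrative.
const {
  Eyes,
  VisualGridRunner,
  Configuration,
  BrowserType,
  DeviceName,
} = require('@applitools/eyes-selenium');

const runner = new VisualGridRunner(10); // up to 10 renders in parallel
const eyes = new Eyes(runner);

const config = new Configuration();
config.setAppName('My App');
config.setTestName('Home page across browsers');
config.addBrowser(1200, 800, BrowserType.CHROME);
config.addBrowser(1200, 800, BrowserType.FIREFOX);
config.addBrowser(1200, 800, BrowserType.SAFARI);
config.addBrowser(768, 1024, BrowserType.EDGE_CHROMIUM); // tablet-sized viewport
config.addDeviceEmulation(DeviceName.iPhone_X);          // mobile emulation target
eyes.setConfiguration(config);
```

From there, the test body looks like any single-browser Eyes test; the grid handles the per-target rendering and comparison.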

Hackathon Goal – Fast Testing With Multiple Browsers

In the Applitools Ultrafast Cross Browser Hackathon, participants used the traditional legacy method of running tests across multiple browsers to compare behavior results. Participants then compared their results with the more modern approach using the Applitools Ultrafast Grid. Read here about one participant’s experiences.

The time that matters is the time between starting a test run and a developer knowing the details of a discovered error. For the legacy approach, coders wrote tests for each platform of interest, including validating and debugging each app test on each platform. Once the legacy tests had been coded, the tests were run, analyzed, and reports were generated.

For the Ultrafast approach, coders wrote their tests using Applitools to validate the application behavior. These tests used fewer lines of code and fewer locators. Then, the coders called the Applitools Ultrafast Grid and specified the browsers, viewports, and operating systems of interest to match the legacy test infrastructure.

Hackathon Results – Faster Tests Across Multiple Browsers

The report included this graphic showing the total test cycle time for the average Hackathon submission of legacy versus Ultrafast:

Here is a breakdown of the average participant time used for legacy versus Ultrafast across the Hackathon:

Activity | Legacy | Ultrafast
Actual Run Time | 9 minutes | 2 minutes
Analysis Time | 270 minutes | 10 minutes
Report Time | 245 minutes | 15 minutes
Test Coding Time | 1062 minutes | 59 minutes
Code Maintenance Time | 120 minutes | 5 minutes

The first three activities, test run, analysis, and report, make up the time between initiating a test and taking action. Across the three scenarios in the hackathon, the average legacy test required a total of 524 minutes. The average for Ultrafast was 27 minutes. For each scenario, then, the average was 175 minutes – almost three hours – for the legacy result, versus 9 minutes for the Ultrafast approach.

On top of the operations time for testing, the report showed the time taken to write and maintain the test code for the legacy and Ultrafast approaches. Legacy test coding took over 1060 minutes (17 hours, 40 minutes), while Ultrafast only required an hour. And, code maintenance for legacy took 2 hours, while Ultrafast only required 5 minutes.

Why Fast Testing Across Multiple Browsers Matters

As the Hackathon results showed, Ultrafast testing runs more quickly and gives results more quickly. 

Legacy cross-browser tests impose a long delay from test start to action. Their long run and analysis times make them unsuitable for any kind of software build validation. Most of these legacy tests get run during final end-to-end acceptance, with the hope that no visual differences get uncovered.

Ultrafast approaches enable app developers to build fast testing across multiple browsers into software build. Ultrafast analysis catches unexpected build differences quickly so they can be resolved during the build cycle.

By running tests across multiple browsers during build, Ultrafast Grid users shorten their find-to-resolve cycle to branch validation even prior to code merge. They catch the rendering differences and resolve them as part of the feature development process instead of the final QA process. 

Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?

Combine these with the other speed differences in coding and maintenance, and it becomes clear why Ultrafast testing across multiple browsers makes it practical for developers to run Ultrafast Grid during development.

What’s Next

Next, we will cover code stability – the reason why Ultrafast tests take, on average, 5 minutes to maintain, instead of two hours. 

The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.

]]>
Fast, Efficient and Effective Cross Browser Testing https://applitools.com/blog/effective-cross-browser-testing/ Fri, 22 Jan 2021 19:36:30 +0000 https://applitools.com/?p=25845 What do you think about cross browser testing? Developers likely develop on only one browser – and maybe only one operating system. How does an app maker ensure that defects...

The post Fast, Efficient and Effective Cross Browser Testing appeared first on Automated Visual Testing | Applitools.

]]>

What do you think about cross browser testing?

Developers likely develop on only one browser – and maybe only one operating system. How does an app maker ensure that defects on other browsers will not escape to their user base? In theory, cross browser testing can help companies catch product defects before products get released to customers. But the legacy approach – setting up a bunch of parallel devices and running tests across each – demands significant engineering skill and incurs significant resource cost.

Choose Your Cross Browser Testing Camp

With this legacy infrastructure requirement, most engineering managers find themselves in one of two camps:

CAMP 1: Cross browser testing – necessary if inefficient

CAMP 2: Cross browser testing – not worth the effort

Managers in Camp 1 run cross browser tests. These managers have been held accountable for missed bugs due to an untested platform in the past. They know that cross browser testing provides coverage, even though it requires resources for infrastructure deployment and test maintenance.

Managers in Camp 2 have not experienced significant field bugs that were caused by untested behavior in different browsers. Camp 2 managers have evaluated cross browser testing. They have concluded that cross browser testing has a low likelihood of exposing unique bugs.

So, yes, Camp 1 and Camp 2 share a common view of cross browser testing – slow, cumbersome to manage, and of limited effectiveness in catching defects. Camp 1 people have been burned by a bug they could have caught with cross browser testing. Members of Camp 2 have not.

Camp 3 – Use Modern Cross Browser Testing

The first two camps get stifled by the legacy approach to cross browser testing. In the legacy approach, you take your existing test infrastructure and port it from your browser of choice (Chrome, Firefox, Edge, Safari) on your test operating system and check out combinations of browsers and operating systems. You might even check out different viewport sizes to validate responsive app behavior.

Camp 3 uses Modern Cross Browser Testing, made possible by Applitools Visual AI and Applitools Ultrafast Grid. Applitools Visual AI provides highly accurate visual comparisons between a captured screen and a previous baseline. With Applitools Ultrafast Grid, the Visual AI screen capture also returns the DOM of the captured screen.  Then, Applitools Ultrafast Grid reruns the DOM against each target browser/operating system/viewport combination specified by the tester. For each target, Applitools captures the resulting screen and compares it against the relevant baseline.

Modern cross browser testing leverages key insights:

  • The bulk of apps generate a single DOM response regardless of the target browser;
  • Cross browser tests expose visual defects, as functional defects get caught elsewhere;
  • Legacy test code, which inspects elements of the DOM, requires lots of code to capture and compare all the visual elements for an app response;
  • Visual AI makes cross browser test automation feasible and effective.

Proving the Value – Applitools Cross Browser Testing Hackathon

At Applitools, we know that our Ultrafast Grid, combined with Visual AI, creates a simple approach to cross browser testing. It overcomes the objections and complaints from both Camp 1 and Camp 2 – and it contradicts the conventional wisdom of the engineers who sit in those camps.

We found ourselves stuck with a question:

How do you get a group of really smart people who think one way to change their minds?

Evidence? Maybe. Let them try it for themselves? Possibly – but why would someone want to try something out?

What if you give them an incentive? Like a contest?

Applitools had a lot of experience from the 2019 Visual AI Rockstar Hackathon. We knew we could get engineers to try their hand at using Visual AI. Perhaps, we could get a similar response with another hackathon pitting legacy cross browser testing against Visual AI plus Ultrafast Grid.

So, we ran the Applitools Cross Browser Testing Hackathon in June 2020. My next few blog posts will go through the hackathon in detail and present you with some of the results and conclusions. They include:

  • Ultrafast Grid provides faster test execution
  • You need fewer test runs with Ultrafast Grid
  • Your tests require less coding (and less time debugging test code)
  • Ultrafast Grid needs no additional hardware (whether onsite or provided by a service)
  • Cross browser tests with Ultrafast Grid are easier to maintain
  • A large number of the participants who were not previously inclined towards cross browser testing would recommend Visual AI and Ultrafast Grid.

You can skip the blog posts and read the full report instead.

You can read Marie Drake’s summary of her experience as a Hackathon participant and winner.

Or, you can wait for my next blog post.

The post Fast, Efficient and Effective Cross Browser Testing appeared first on Automated Visual Testing | Applitools.

]]>
9 Test Automation Predictions for 2021 https://applitools.com/blog/9-test-automation-predictions-2021/ Thu, 07 Jan 2021 20:24:40 +0000 https://applitools.com/?p=25359 Every year, pundits and critics offer their predictions for the year ahead. Here are my predictions for Test Automation in 2021.

The post 9 Test Automation Predictions for 2021 appeared first on Automated Visual Testing | Applitools.

]]>

Every year, pundits and critics offer their predictions for the year ahead. Here are my predictions for test automation in 2021. (Note: these are my personal predictions)

Prediction 1: Stand-alone QA faces challenges of dev teams with integrated quality engineering

Photo by CHUTTERSNAP on Unsplash

Teams running Continuous Integration/Continuous Deployment (CICD) have learned that developers must own the quality of their code. In 2021, everyone else will figure that out, too. Engineers know that the delay between developing code and finding bugs produces inefficient development teams. Companies running standalone QA teams find bugs later than teams with integrated quality. In 2021, this difference will begin to become painful as more companies adopt quality engineering in the midst of development.

Prediction 2: Development teams will own core test automation

Photo by Sara Cervera on Unsplash

Dev owns test automation across many CICD teams. With more quality responsibility, more of the development teams will build test automation. Because they use JavaScript in development, front-end teams will choose JavaScript as the prime test automation language. As a result, Selenium JavaScript and Cypress adoption will grow, with Cypress seeing the most increase.

Prediction 3: Primary test automation moves to build

Photo by ThisisEngineering RAEng on Unsplash

In 2021, core testing will occur during code build. In past test approaches, unit tests ran independently of system-level and full end-to-end integration tests. Quality engineers wrote much of the end-to-end test code. When bugs got discovered at the end, developers had to stop what they were doing to jump back and fix code. With bugs located at build time, developer productivity increases because developers fix what they just checked in, in real time.

Prediction 4: Speed+Coverage as the driving test metric

Photo by Marc Sendra Martorell on Unsplash

As more testing moves to the build, speed matters. Every minute needed to validate the build wastes engineering time. Check-in tests will require parallel testing for the unit, system, and end-to-end tests. But speed isn't everything – what about redundant tests? Each test must validate unique aspects of the code. Developers will need existing or new tools to measure the fraction of unexercised code in their test suites.

Prediction 5: AI assists in selecting tests to ensure coverage

Photo by Hitesh Choudhary on Unsplash

To speed up testing, development teams will look to eliminate redundant tests. They will look to AI tools to generate test conditions, standardize test setup, and identify both untested code and redundancy in tests. You can look up a range of companies adding AI to test flows for test generation and refactoring. Companies adopting this technology will attempt to maximize test coverage as they speed up testing.

Prediction 6: Visual AI Page Checks Grows 10x

Photo by Johen Redman on Unsplash

I’m making this prediction based on feedback from Applitools Visual AI customers. Each year, Applitools tracks the number of pages using Visual AI for validation. We continue to see exponential growth in visual AI use within our existing customers. The biggest driver for this growth in usage follows through from the next two predictions about Visual AI utility.

Prediction 7: Visual tests on every check-in

Photo by Larissa Gies on Unsplash

When companies adopt visual testing, they often add visual validation to their end-to-end tests. At some point, every company realizes that bug discovery must come sooner. They want to uncover bugs at check-in, so developers can fix their code while it remains fresh in their minds. Visual AI provides the accuracy to provide visual validation on code build and code merge – letting engineers validate both the behavior and rendering of their code within the build process.

Prediction 8: Visual tests run with unit tests

Photo by Wesley Tingey on Unsplash

Engineers treat their unit tests as sanity checks. They run unit tests regularly and only check results when the tests fail. Why not automate unit tests for the UI? Many Applitools customers have been running visual validation alongside standard unit tests. Visual AI, unlike pixel diffs and DOM diffs, provides high-accuracy validation for visual components and mocks. With the Ultrafast Test Platform, these checks can run across multiple platforms with just a single code run. Many more Applitools customers will adopt visual unit testing in 2021.

Prediction 9: The gap between automation haves and have-nots will grow

Photo by Brett Jordan on Unsplash

As more development teams own test automation, we will see a stark divide between legacy and modern approaches. Modern teams will deliver features and capabilities more quickly with the quality demanded by users. Legacy teams will struggle to keep up; they will choose between quality and speed and continue to fall behind in reputation.

Where Do You See The Future?

These are nine predictions I see. What do you see? How will you get ahead of your competition in 2021? How will you keep from falling behind? What will matter to your business?

Each of us makes predictions and then sees how they come to fruition. Let’s check back in a year and see how each of us did.

Featured Photo by Sasha • Stories on Unsplash

The post 9 Test Automation Predictions for 2021 appeared first on Automated Visual Testing | Applitools.

]]>
Thriving Through Visual AI – Applitools Customer Insights 2020 https://applitools.com/blog/customer-insights-2020/ Wed, 23 Dec 2020 20:48:10 +0000 https://applitools.com/?p=25318 In this blog post, we share what we learned about how Applitools helps to reduce test code, shorten code rework cycles, and shrink test time.

The post Thriving Through Visual AI – Applitools Customer Insights 2020 appeared first on Automated Visual Testing | Applitools.

]]>

In this blog post, I cover our customers' insights into their successes with Applitools. I share what we learned about how our users speed up their application delivery by reducing test code, shortening code rework cycles, and reducing test time.

Customer Insight – Moving To Capture Visual Issues Earlier

We now know that our customers go through a maturity process when using Applitools. A typical progression looks like this:

  1. End-to-end test validation on one application
  2. [OPTIONAL] Increasing the end-to-end validation across other applications (where they exist)
  3. Moving validation to code check-in
  4. Build validation
  5. Validating component and component mock development

End to End Validation

In automating application tests, our customers realize that they need a way to validate the layout and rendering of their applications as part of that automation. They have learned what kinds of problems can escape when not even a manual check occurs. But manual testing is both expensive and error-prone.

Every one of our customers has experience with pixel diffs for visual validation, and they uniformly reject pixel diffs for end-to-end testing. In their experience, pixel diffing reports too many false positives to be useful for automation.

So, our customers begin by running a number of visual use cases through Applitools to understand its accuracy and consistency. They realize that Applitools will capture visual issues without reporting false positives. And, they begin adding Applitools to other production applications for end-to-end tests.

Check-In Validation

As Applitools users become comfortable with Applitools in their end-to-end tests, they begin to see inefficiencies in their workflow. End-to-end validation occurs well after developers finish and check in their code. To repair any uncovered errors, developers must switch context from their current task to rework the failing code. Rework impacts developer productivity and slows product release cycles.

Once users uncover this workflow inefficiency, they look to move visual validation to an earlier point in app development. Check-in makes a natural point for visual validation. At check-in, all functional and visual changes can be validated. Any uncovered errors can go immediately back to the appropriate developers for rework.

So, our customers add visual validation to their check-in process. Their developers become more efficient. And, developers become attuned to the interdependencies of their code with shared resources used across the team.

Regular Build Validation

As our customers appreciate the dependability of Applitools Visual AI, they realize that Applitools can be part of their regular build validation process. These customers use Applitools as a “visual unit test”, which should run green after every build. When Applitools fails with an unexpected error, they uncover an unexpected change. In this mode, our users generally expect passing tests.

At this level of maturity, end-to-end validation tests provide a sanity check. Our customers who have reached this level tell us that they never discover visual issues late in their product release process anymore.

Component and Mock Validation

Our most mature customers have moved validation into visual component construction and validation.

To ensure visual consistency, many app developers have adopted some kind of visual component library. The library defines a number of visual objects or components. Developers can assign properties to an object, or define a style sheet from which the component inherits properties.

To validate the components and show them in use, developers create mock-ups. These mock-ups let developers manipulate the components independent of a back-end application. Tools like Storybook serve up these mock-ups. Developers can test components and see how they behave through CSS changes.

Applitools’ most mature customers use Visual AI in their mock-up testing to uncover unexpected behavior and isolate unexpected visual dependencies. They find visual issues much earlier in the development process – leading to better app behavior and reduced app maintenance costs.
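For illustration, here is a minimal Storybook story for a hypothetical Button component, written in JavaScript. Each named export is one rendered state of the component; the component import and its variants are assumptions, and setup details depend on your Storybook version.

```javascript
// Button.stories.js – a minimal Storybook story for a hypothetical Button
// component. Each named export is one rendered state of the component.
import React from 'react';
import { Button } from './Button'; // hypothetical component library import

export default {
  title: 'Design System/Button',
  component: Button,
};

export const Primary = () => <Button variant="primary">Save</Button>;
export const Disabled = () => <Button variant="primary" disabled>Save</Button>;
export const Destructive = () => <Button variant="danger">Delete</Button>;
```

A runner such as Applitools eyes-storybook (typically invoked as `npx eyes-storybook`) can then render each story and compare it against its visual baseline on every build, which is how component-level visual validation reaches the earlier point in the cycle described above.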

Customer Insight – Common Problems

Our customers' problems fall into three categories:

  • Behavior Signals Trustworthiness
  • Visual Functionality
  • Competitive Advantage

Behavior Signals Trustworthiness

When buyers spend money, they do so with organizations they trust. Similarly, when investors or savers deposit money, they expect their fiduciary or institution to behave properly.

Take a typical bill paying application from a bank. The payees may be organized in alphabetical order or by the amount previously paid. A user enters the amount to pay for a current bill. The app automatically calculates and reports the bill pay date. How would an online bank customer react to any of these misbehaviors:

  • Missing payees
  • Lack of information on prior payments
  • The inability to enter a payment amount
  • Misaligned pay date

How do buyers or investors react to these misbehaviors? As they tell it, some ignore issues, some submit bug reports, some call customer support. And, some just disappear. Misbehavior erodes trust. In the face of egregious or consistent misbehavior, customers go elsewhere.

App developers understand that app misbehavior erodes trust. So, how can engineers uncover misbehavior? Functional testing can ensure that an app functions correctly. It can even ensure that an app has delivered all elements on a page by identifier. But, functional testing can overlook color problems that render text or controls invisible, or rendering issues that result in element overlap.

When they realize they uncover too many visual issues late in development, app developers look for a solution. They need accurate visual validation to add to their existing end-to-end testing.

Visual Functionality

Another group of users builds applications with visual functionality. In these applications, on-screen tools let users draw, type, connect, and analyze content. These applications can use traditional test frameworks to apply test conditions. The hard part comes when developers want to automate validation of what appears on screen.

Sure, engineers can use identifiers for some of the on-screen content. However, identifiers cannot capture graphical elements. To test their app, some engineers use free pixel diff tools to validate screenshots or screen regions. How do these translate to cross-browser behavior?  Or, how about responsive application designs on different viewport sizes?
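To show why teams start down this path – and why it becomes painful – here is a minimal sketch of a home-grown pixel diff check using the open-source pixelmatch and pngjs libraries (file names are placeholders). Every anti-aliasing, font rendering, or dynamic-content difference counts as changed pixels, which is why these comparisons tend to produce false positives across browsers and viewport sizes.

```javascript
// Sketch of a home-grown pixel diff check (placeholder file names). Any
// rendering difference – anti-aliasing, font smoothing, dynamic content –
// shows up as changed pixels, so cross-browser runs fail for benign reasons.
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

const baseline = PNG.sync.read(fs.readFileSync('baseline.png'));
const current = PNG.sync.read(fs.readFileSync('current.png'));
const { width, height } = baseline; // assumes both screenshots share dimensions
const diff = new PNG({ width, height });

const changedPixels = pixelmatch(
  baseline.data, current.data, diff.data, width, height,
  { threshold: 0.1 } // per-pixel color tolerance
);

fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${changedPixels} pixels differ`);
```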

At some point, all these teams realize they have wasted engineering resources on home-grown visual validation systems. So, they look for a commercial visual validation solution.

Competitive Advantage

The final issue we hear from our users involves competitive advantage. They tell us they seek an engineering advantage to overcome technical and structural limitations. For example, as teams grow and change, the newest members face the challenge of learning the dependencies that cause errors. Also, development teams build up technical debt based on pragmatic release decisions.

Over time, existing code becomes a set of functionality with an unknown thread of dependencies. Developers are loath to touch existing code for fear of incurring unknown defects that will result in unexpected code delays. As you might imagine, visual defects make up a large percentage of these risks.

Developers recognize the need to work through this thread of dependencies. They look for a solution to help identify unexpected behavior, and its root cause, well before code gets released to customers. They need a highly-accurate visual validation solution to uncover and address visual dependencies and defects.

Hear From Our Customers

A number of Applitools customers shared their insight in our online webinars in 2020. All these webinars have been recorded for your review.

Full End-To-End Testing

In a two-talk webinar, David Corbett of Pushpay described how Pushpay runs full end-to-end testing to achieve the quality they need, and walked through how they use their various tools – especially Applitools. Later in that same webinar, Alexey Shpakov of Atlassian laid out the full testing model for Jira, including their use of visual validation. Both talks covered the shift of quality validation into the hands of developers.

Alejandro Sanchez-Giraldo of Vodafone spoke about his company’s focus on test innovation over time – including the range of test strategies he has tried. As a veteran, he knows that some approaches fail while others succeed, and he recognizes that learning often makes the difference between a catastrophic failure and a setback. He describes the full range of Vodafone’s testing.

Testing Design Systems

Marie Drake of News UK explained how News UK had deployed a design system to make its entire news delivery system more productive. In her webinar, Marie explained how they depended on Applitools for testing, from the component level all the way to the finished project. She showed how the design system resulted in faster development. And, she showed how visual validation provided the quality needed at News UK to achieve their business goals.

Similarly, Tyler Krupicka of Intuit described their design system in detail. He showed how they developed their components and testing mocks. He described the design system as giving Intuit the ability to make rapid mock-up changes in their applications. Tyler explained how Intuit used their design system to make small visual tweaks that they could evaluate in A/B testing to determine which tweak resulted in more customers and greater satisfaction.

Testing PDF Reports

Priyanka Halder, head of quality at GoodRx, describes her process for delegating quality across the entire engineering team as a way to accelerate the delivery of new features to market. She calls this “High-Performance Testing.” In her webinar, she explains that one of the keys to GoodRx is its library of drug description and interaction pages. GoodRx uses Applitools to validate this content, even as company logos, web banners, and page functionality get tweaked constantly.

Similarly, Fiserv uses Applitools to test a range of PDF content generated by Fiserv applications.  In their webinar, David Harrison and Christopher Kane of Fiserv describe how Applitools makes their whole workflow run more smoothly.

These are just some of the customer stories shared in their own words in 2020.

Looking Ahead to 2021

As I mentioned earlier, we plan to publish a series of case studies outlining customer successes with Applitools in 2021.

When you read the published stories, you might be surprised by the results. For example, one company's test suite for its graphical application now runs in 5 minutes. It previously used a home-grown visual test suite that took 4 hours to complete. Instead of running its tests infrequently, the company can now run its application test suite as part of every software build. That's what a 48x improvement can do.

Another company had tests that every developer used to run, analyze, and evaluate on their own, and the visual tests in their suite had to be validated manually. Today, the tests run automatically, and a single engineer reviews the results and either approves them or delegates them for rework.

You might be surprised to find competitors in your industry using Applitools. And, you might find that they feel guarded about sharing that information among competitors. Some companies see Applitools as a secret weapon in making their teams more efficient.

We look forward to sharing more with you in the weeks and months ahead.

Happy Testing, and Happy 2021.

Featured photo by alex bracken on Unsplash

The post Thriving Through Visual AI – Applitools Customer Insights 2020 appeared first on Automated Visual Testing | Applitools.

]]>
Leading With Visual AI – Applitools Achievements In 2020 https://applitools.com/blog/applitools-achievements-2020/ Wed, 23 Dec 2020 07:41:04 +0000 https://applitools.com/?p=25300 As we complete 2020, we want to share our take on the past year. We had a number of achievements in 2020. And, we celebrated a number of milestones.

The post Leading With Visual AI – Applitools Achievements In 2020 appeared first on Automated Visual Testing | Applitools.

]]>

As we complete 2020, we want to share our take on the past year. We had a number of achievements in 2020. And, we celebrated a number of milestones.

Any year-in-review article must include the effects of the pandemic, along with the threats to social justice. We also want to thank our customers for their support.

Achievements: Product Releases in 2020

Ultrafast Grid

Among our achievements in 2020, Applitools launched the production version of Ultrafast Grid and the Ultrafast Test Cloud Platform. With Ultrafast Grid, you can validate your UI across multiple desktop client operating systems, browsers, and viewport sizes using only a single test run. We take care of the validation and image management, and you don’t need to set up and manage that infrastructure.

Ultrafast Grid works so quickly because we assume your application uses a common server response for all your clients. You only need to capture one server response. Ultrafast Grid captures the DOM state at each snapshot and compares that snapshot in parallel across every client/operating system/viewport combination you wish to test. A single test run means less server time. Parallel validation means less test time. Ultrafast Grid simultaneously increases your test coverage while reducing both your test run time and infrastructure requirements.

“Accelerating time to production without sacrificing quality has become table stakes for Agile and DevOps professionals, the team at Applitools has taken a fresh approach to cross browser testing with the Ultrafast Grid. While traditional cloud testing platforms are subject to false positives and slow execution, Applitools’ unique ability to run Visual AI in parallel containers can give your team the unfair advantage of stability, speed, and improved coverage. This modern approach to testing is something that all DevOps professionals should strongly consider.”

Igor Draskovic, VP, Developer Specialist at BNY Mellon

A/B Testing

We introduced a new feature to support A/B testing. As more of our customers use A/B testing to conduct live experiments on customer conversion and retention, Applitools now supports the deployment and visual validation of parallel application versions.

“A/B testing is a business imperative at GoodRx – it helps our product team deliver the absolute best user experience to our valued customers. Until now, our quality team struggled to automate tests for pages with A/B tests – we’d encounter false positives and by the time we wrote complex conditional test logic, the A/B test would be over. Applitools implementation of A/B testing is incredibly easy to set up and accurate. It has allowed our quality team to align and rally behind the business needs and guarantee the best experience for our end-users.”

Priyanka Halder, Sr. Manager, Quality Engineering at GoodRx

GitHub, Microsoft, and Slack Integrations

Applitools now integrates with Slack, adding to our range of application and collaboration integrations. Applitools can now send alerts to your engineering team members, including highlights of changes and the test runs on which they occurred.

As a company, we also announced integrations with GitHub Actions and Microsoft Visual Studio App Center.  The integrations allow developers to seamlessly add Visual AI-powered testing to every build and pull request (PR), resulting in greater UI version control and improved developer workflows. As we have seen, this integration into the software build workflow provides visual testing at code check-in time. Instead of waiting for end-to-end tests to expose rendering problems and conflicts, developers can use Applitools to validate prior to code merge.

“We’re excited to welcome Applitools to the GitHub Partner Program and for them to expand their role within the GitHub ecosystem. Applitools’ Visual AI powered testing platform and GitHub’s automated, streamlined developer workflow pair perfectly to support our shared vision of making it easier to ship higher quality software, faster.”

Jeremy Adams, Director of Business Development and Alliances at GitHub

Auto Maintenance and Smart Assist

We also introduced major improvements with Auto Maintenance and Smart Assist. With Smart Assist, we help you deploy your tests to address unique visual test challenges, such as dynamic data and graphical tests. With Auto Maintenance, you can validate an intended visual change in one page of your application and then approve that change on every other page where that change occurs. If you update your logo or your color scheme, you can validate identical changes across your entire application in one click. Smart Assist and Auto Maintenance reduce the time and effort you need to maintain your visual tests – saving hours of effort in your development and release process.

“We use Applitools extensively in our regression testing at Branch. Visual AI is incredibly accurate, but equally impressive are the AI-powered maintenance features. With the volume of tests that we run, the time savings that the AI auto-maintenance features afford us are extensive.”

Joe Emison, CTO at Branch Financial

Achievements: Milestones in 2020

Applitools also achieved a number of major milestones in 2020.

1,000,000,000 Page Images Collected

We recorded one billion page images collected across our customer base. Many of our customers now include Applitools validation as part of every CICD check-in and build. You will find out more in our customer insights discussion, below. We celebrated that achievement earlier in 2020.

Test Automation University

We launched Test Automation University (TAU) as a way to help expand test knowledge among practitioners. Among our achievements in 2020, TAU now has over 50 courses to teach test techniques and programming languages. You can take any of these courses free of charge. Whether you are an experienced test programmer or just getting started, you will find a range of courses to match your interests and abilities. We introduced 19 new courses in 2020. We also saw significant numbers of new students using Test Automation University. In early 2020, we announced that we had 35,000 students taking courses. Later in the year we celebrated reaching the 50,000 user milestone. Look forward to another announcement in early 2021.

Hackathons

In 2019, Applitools launched our Visual AI Rockstar Hackathon, in which participants ran a series of test cases comparing legacy locator-based functional testing with Applitools visual validation. In 2020, we shared the results: engineers wrote tests faster, produced test code that ran more quickly, and created tests that required less maintenance over time. Those results let us showcase the participants' achievements in 2020.

Also in 2020, we hosted a cross-browser testing hackathon. Participant results demonstrated that Ultrafast Grid sets up more easily than a traditional farm of multiple browsers; the real value of Ultrafast Grid, though, comes with test maintenance as applications update over time. In November, we hosted a hackathon based on a retail shopping application, and we look forward to sharing the insights from it in early 2021.

Future of Testing

Lastly, in 2020 Applitools launched the Future of Testing Conference, gathering engineering luminaries from a range of industries and companies, from brand names like Microsoft and Sony to tech leaders like GoodRx and Everfi. Their stories show how companies continue to deliver quality products more quickly by using the right approaches and the right tools. We have planned more Future of Testing Conferences for 2021.

Achievement: Customer Growth In 2020

Another of our achievements in 2020 involved our customers, and we want to thank them for their continued commitment to Applitools. Not only did we pass the 1,000,000,000 page image mark, but we also learned about the many exciting ways our customers are using the product.

During the COVID-19 pandemic, our customers have appreciated how we worked to ensure that they continued to get full use and value from Applitools. Though our support team worked largely from home during the year, our customers still got the support they needed to succeed.

We continued to see our existing customers run more and more page checks over time. A number of companies use Applitools to validate daily, and even hourly, code builds at check-in. Our customers are also using Applitools to validate component libraries they are building and modifying in React, Angular, and Vue.
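As an illustration, and not a prescription, component-library validation often pairs Storybook with the @applitools/eyes-storybook package, which renders each story and compares it against its saved baseline. The story below is a hypothetical React example written in TypeScript; the Button component and its variant names are invented for the sketch.

```typescript
// Button.stories.tsx
// Hypothetical Storybook stories for a React component library.
// With @applitools/eyes-storybook installed, running `npx eyes-storybook`
// renders every story and visually compares it with its saved baseline.

import React from 'react';
import { Button } from './Button'; // hypothetical component under test

export default {
  title: 'Components/Button',
  component: Button,
};

// Each named export becomes a story, and therefore a visual checkpoint.
export const Primary = () => <Button variant="primary">Save</Button>;
export const Disabled = () => (
  <Button variant="primary" disabled>
    Save
  </Button>
);
```

Because every story is a checkpoint, a change to a shared component surfaces as a visual diff across all the stories that use it.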

We also saw a large number of companies experimenting with and adopting Cypress for development validation. Some used Cypress to complement an existing Selenium test infrastructure; others started their Cypress validation in new areas or on new products.

Our World in Review – 2020

While many issues affected the world in 2020, two dominated life at Applitools.

The first, the COVID-19 pandemic, required our team to work from home for much of the year. Dan Levy offered his suggestions on how to work from home more efficiently. As we continued to work remotely, we saw how the pandemic affected our team and the world around us. At this point, all of us know people who have been infected. Some in our circles have been hospitalized. And some have died.

We are fortunate that Applitools has been able to provide its employees with the ability to work from home. As a company, we want to thank the first responders and health care workers who cannot shelter in safety; we thank them for risking their lives to keep all of us safe.

We also share our condolences with those of you who lost family, friends, and other loved ones in 2020.

The second issue, social justice, has continued to capture the spirit of our company and its employees. For 8 minutes and 46 seconds, the world watched one human kneel on another human's neck with casual indifference. While not the only such incident of 2020, the video of George Floyd's struggle to live affected all of us. How can there be justice if our civil guardians cannot treat all of us equally? If we want a just world, we need to support those who advocate for social justice.

Applitools and its employees support creating a more just world for all. We continue to encourage our employees to back social justice movements, whether Black Lives Matter or any other organization actively combating racism and injustice.

We know there are some who sow division for their own gain. As a company, we think we are stronger together.

Next Up – Customer Insights in 2020

In our next blog post, we will share what we learned about how Applitools drives our customers' productivity. In 2021, we will follow up with a series of customer success stories. Before those arrive, read the customer insights post to learn how your peers and colleagues benefit from our highly accurate Visual AI infrastructure.

The post Leading With Visual AI – Applitools Achievements In 2020 appeared first on Automated Visual Testing | Applitools.
