
Visual regression testing – the process of validating user interfaces – is a critical part of DevOps and CI/CD pipelines. The UI often determines an application’s drop-off rate and directly shapes the customer experience. A misbehaving front end is detrimental to a tech brand and must be avoided like the plague.
Manual testing procedures are not enough to catch intricate UI modifications, and automation scripts can help but are often tedious to write and deploy. Visual testing, therefore, is a crucial layer that detects changes to the UI and helps developers flag unwanted modifications.
Every visual regression testing cycle has a similar structure: baseline images or screenshots of a UI are captured and stored. After every change to the source code, a visual testing tool takes fresh snapshots of the interface and compares them with the baseline repository. If the images do not match, the test fails and a report is generated for your dev team.
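To make that structure concrete, here is a minimal sketch of the cycle in Python, using Selenium for screenshots and Pillow for the comparison. The directory layout and function name are illustrative rather than taken from any particular tool, and the pixel-exact comparison is deliberately naive – it is exactly the part the AI-based tools below improve on.

```python
# A minimal sketch of the baseline/compare cycle described above.
# Assumes `pip install selenium pillow`; names are illustrative.
from pathlib import Path

from PIL import Image, ImageChops
from selenium import webdriver

BASELINE_DIR = Path("baselines")
RESULTS_DIR = Path("results")

def check_page(driver, url, name):
    """Capture a screenshot of `url` and compare it to the stored baseline."""
    RESULTS_DIR.mkdir(exist_ok=True)
    driver.get(url)
    current_path = RESULTS_DIR / f"{name}.png"
    driver.save_screenshot(str(current_path))

    baseline_path = BASELINE_DIR / f"{name}.png"
    if not baseline_path.exists():
        BASELINE_DIR.mkdir(exist_ok=True)
        current_path.replace(baseline_path)  # first run: adopt as the baseline
        return True

    # Naive pixel-exact diff; both images must share the same viewport size.
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    diff = ImageChops.difference(baseline, current)
    if diff.getbbox() is None:  # None means the images are identical
        return True
    diff.save(RESULTS_DIR / f"{name}-diff.png")  # evidence for the report
    return False

if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        ok = check_page(driver, "https://example.com", "home")
        print("PASS" if ok else "FAIL - see results/home-diff.png")
    finally:
        driver.quit()
```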
Revolutionizing visual testing is Visual AI – a game-changing technology that automates the detection of visual issues in user interfaces. It also enables software testers to improve the accuracy and speed of testing. With machine learning algorithms, Visual AI can analyze visual elements and compare them to an established baseline to identify changes that may affect user experience.
From font size and color to layout inconsistencies, Visual AI can detect issues that would otherwise go unnoticed. Automated visual testing tools powered by Visual AI, such as Applitools, improve testing efficiency and provide faster and more reliable feedback. The future of visual testing lies in Visual AI, and it has the potential to significantly enhance the quality of software applications.
Visual testing is a critical aspect of software testing that involves analyzing the user interface and user experience of an application. It aims to ensure that the software looks and behaves as expected, and all elements are correctly displayed on different devices and platforms. Visual testing detects issues such as layout inconsistencies, broken images, and text overlaps that can negatively impact the user experience.
Automated visual testing tools like Applitools can scan web and mobile applications and identify any changes to visual elements. Effective visual testing can help improve application usability, increase user satisfaction, and ultimately enhance brand loyalty.
Visual testing and functional testing are two essential components of software testing that complement each other. While functional testing ensures the application’s features work as expected, visual testing verifies that the application’s visual elements, such as layout, fonts, and images, are displayed correctly. Visual testing benefits functional testing by enhancing test coverage, reducing testing time and resources, and improving the accuracy of the testing process.
Further reading: https://applitools.com/solutions/functional-testing/
The following section consists of 10 visual testing tools that you can integrate with your current testing suite.
Aye Spy, an often underrated open-source visual regression tool, is heavily inspired by BackstopJS and Wraith. At its core, its creators set out to tackle one issue: performance. Most visual regression tools on the market neglect this key element; Aye Spy finally incorporates it, running 40 UI comparisons in under 60 seconds (with an optimal setup, of course).
One of the most popular tools in the market, Applitools, is best known for employing AI in visual regression testing. It offers feature-rich products like Eyes, Ultrafast Test Cloud, and Ultrafast Grid for efficient, intelligent, and automated testing.
Applitools is 20x faster than conventional test clouds, is highly scalable for your growing enterprise, and is super simple to integrate with all popular frameworks, including Selenium, WebDriver IO, and Cypress. The tool is state of the art for all your visual testing requirements, with the ‘smarts’ to know what minor changes to ignore, without any prior settings.
Applitools’ Auto-Maintenance and Auto-Grouping features are especially handy. According to the World Quality Report 2022-23, maintainability is the most important factor in determining test automation approaches, yet it typically demands a sea of testers and DevOps professionals on their toes, ready to resolve a wave of bugs.
Cumbersome and expensive, this can derail your strategy and harm your reputation. This is where Applitools comes in: Auto-Grouping categorizes the bugs while Auto-Maintenance resolves them, leaving you the flexibility to jump in wherever needed.
Applitools Eyes is a Visual AI product that dramatically minimizes coding effort while maximizing bug detection and streamlining test updates. Eyes mimics the human eye to catch visual regressions with every app release. It can identify dynamic elements like ads or other customizations and ignore or compare them as desired.
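To give a flavor of what this looks like in practice, here is a sketch of a basic Eyes check using the Python SDK (eyes-selenium). The app name, test name, and URL are placeholders, and the exact API surface can vary between SDK versions, so treat this as an outline rather than copy-paste-ready code.

```python
# A sketch of an Applitools Eyes check with the Python SDK
# (`pip install eyes-selenium`); method names follow the public SDK,
# but check the current docs for your version.
from applitools.selenium import Eyes, Target
from selenium import webdriver

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_API_KEY"  # placeholder

try:
    eyes.open(driver, "Demo App", "Login page looks right")
    driver.get("https://example.com/login")
    # One visual checkpoint replaces many element-level assertions;
    # Visual AI decides which differences are meaningful.
    eyes.check("Login page", Target.window())
    eyes.close()  # raises if unresolved visual differences were found
finally:
    eyes.abort()  # no-op if close() already succeeded
    driver.quit()
```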
Read more: Applitools makes your cross-browser testing 20x faster. Sign up for a free account to try this feature.
Hermione, an open-source tool, streamlines integration and visual regression testing, albeit only for more straightforward websites. Getting started is easier with prior knowledge of Mocha and WebdriverIO, and the tool facilitates parallel testing across multiple browsers. Hermione effectively uses subprocesses to tackle the computational cost of parallel testing, and it lets you run a subset of tests from a suite simply by adding a path to the test folder.
Needle, built on Selenium and nose, is an open-source tool that is free to use. It follows the conventional visual testing structure, comparing an app’s layout against a standard suite of previously collected images.
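A Needle test is an ordinary Python test case. The sketch below follows Needle’s documented pattern of subclassing NeedleTestCase; the selector and baseline name are hypothetical.

```python
# A sketch of a Needle test case; selector and baseline name are
# hypothetical. Record baselines first (nose's --with-save-baseline
# flag), then later runs compare against them.
from needle.cases import NeedleTestCase

class NavbarLayoutTest(NeedleTestCase):
    def test_navbar_matches_baseline(self):
        self.driver.get("https://example.com")
        # Compares a screenshot of the matched element to the stored
        # baseline image named "main-menu".
        self.assertScreenshot("nav.main-menu", "main-menu")
```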
Vizregress, a popular open-source tool, was created as a research project based on AForge.NET. Colin Williamson, the tool’s creator, set out to resolve a crucial issue: Selenium WebDriver (which Vizregress uses in the background) could not distinguish between layouts if the CSS elements stayed the same and only the visual representation was modified – a problem that could disrupt a website.
Vizregress uses AForge routines to compare every pixel of the new and baseline images to determine whether they are equal – an exacting approach that is, by its nature, fragile.
Created by Jonathan Dann and Todd Krabach, iOSSnapshotTestCase was previously known as FBSnapshotTestCase and developed within Facebook – although Uber now maintains it. The tool uses the visual testing structure, where test screenshots are compared with baseline images of the UI.
iOSSnapshotTestCase uses tools like Core Animation and UIKit to generate screenshots of an iOS interface. These are then compared to specimen images in a repository. The test inevitably fails if the snapshots do not match.
VisualCeption uses a straightforward, 5-step process to perform visual regression testing. It uses WebDriver to capture a snapshot, JavaScript for calculating element sizes and positions, and Imagick for cropping and comparing visual components. An exception, if raised, is handled by Codeception.
It is essential to note that VisualCeption is an extension built for Codeception. Hence, you cannot use it as a standalone tool – you need access to Codeception, Imagick, and WebDriver to make the most of it.
BackstopJS is a testing tool that can be seamlessly integrated with CI/CD pipelines for catching visual regressions. Like others mentioned above, BackstopJS compares webpage screenshots with a standard test suite to flag any modifications exceeding a minimum threshold.
A popular visual testing tool, BackstopJS has formed the basis of similar tools like Aye Spy.
Visual Regression Tracker is an exciting tool that goes the extra mile to protect your data. It is self-hosted, meaning your information never leaves your own network.
In addition to the usual visual testing procedure, the tool helps you track your baseline images to understand how they change over time. Moreover, Visual Regression Tracker supports multiple languages including Python, Java, and JavaScript.
Galen Framework is an open-source tool for testing web UI, used primarily for interactive websites. Although developed in Java, the tool offers multi-language support, letting you write tests in JavaScript or Java alongside its CSS-like layout specs. Galen Framework runs on Selenium Grid and can be integrated with any cloud testing platform.
Here is a quick recap of the 10 tools mentioned above, with an overview of all crucial features at a glance. Note how most tools have attributes that are ambiguous or undocumented; Applitools stands out in this list, giving you a clear view of its properties.
This summary gives you a good idea of the critical features of all the tools mentioned in this article. However, if you are looking for one tool that does it all with minimal resources and effort, select Applitools. Not only did they spearhead Visual AI testing, but they also fully automate cross-browser testing, requiring little to no intervention from you.
Customers have reported excellent results – 75% less time spent on testing and a 50% reduction in maintenance effort. To learn how Applitools can seamlessly integrate with your DevOps pipeline, request a demo today.
Register for a free Applitools account.
The post Top 10 Visual Testing Tools appeared first on Automated Visual Testing | Applitools.
There are many metrics that drive the efficiency of an engineering team. They are easier to meet while your team is small, but once it crosses 50 engineers, managing engineering productivity gets genuinely hard. Most engineering managers spend their time making sure the team is free of bottlenecks, and at this stage a set of well-defined metrics usually becomes the team’s north star. We interviewed 20 engineering managers from leading companies in Australia and India to find out which metrics matter most to their success. Here are the ones they named.
Cycle time is a universal engineering metric of how effective a team is. It is the time a team spends on a feature from start to finish – usually planning, development, and testing. The metric measures how quickly the development team can deliver the feature, though the feature may not necessarily be deployed to production.
Faster cycle time is a goal for every development team, and monitoring it allows engineering managers to identify potential bottlenecks in the delivery process. Because agility matters so much to every business today, teams sometimes make significant compromises to shorten their cycle time.
You can determine how often your team can release code into production by calculating the deployment frequency. Note that cycle time does not include deployment time. Development teams aim to ship code more frequently and in smaller batches: smaller deployments are more manageable to test and release, which also improves your overall efficiency.
This appears to be the most important metric for many teams, and it also happens to be a big area of concern. The rework ratio indicates how much code must be changed after the team delivers it to production, whether for a bug fix or a feature enhancement. A high rework percentage reduces your overall efficiency.
Chasing high deployment frequency and short cycle times can cut into the amount of testing done before pushing to production. This leads to a higher rework ratio, as users raise issues later in the cycle, and every bug raised late means time lost fixing old code – reducing the overall efficiency of the team.
An insufficient level of communication or a flawed review process could lead to quality issues in the future.
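For concreteness, here is one hedged way to compute the three metrics above. The data shape is hypothetical – real teams would pull these numbers from their issue tracker and deployment logs – but the arithmetic is the same.

```python
# A hedged sketch of the three metrics above, computed from a
# hypothetical list of delivered features and a deployment log.
from datetime import datetime, timedelta

features = [
    # (work started, delivered, lines shipped, lines reworked after release)
    (datetime(2023, 3, 1), datetime(2023, 3, 6), 400, 60),
    (datetime(2023, 3, 2), datetime(2023, 3, 9), 900, 300),
    (datetime(2023, 3, 7), datetime(2023, 3, 10), 250, 10),
]
deployments = [datetime(2023, 3, 6), datetime(2023, 3, 9), datetime(2023, 3, 10)]

# Cycle time: start-to-delivery duration, averaged across features.
cycle_times = [done - start for start, done, _, _ in features]
avg_cycle = sum(cycle_times, timedelta()) / len(cycle_times)

# Deployment frequency: releases per week over the observed period.
period_days = 30
deploys_per_week = len(deployments) / (period_days / 7)

# Rework ratio: share of shipped code that had to change after release.
rework_ratio = sum(f[3] for f in features) / sum(f[2] for f in features)

print(f"Average cycle time:   {avg_cycle.days} days")        # 5 days
print(f"Deployment frequency: {deploys_per_week:.1f}/week")  # 0.7/week
print(f"Rework ratio:         {rework_ratio:.0%}")           # 24%
```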
As a result of various obstacles, team members must switch context between issues, and a team that switches context frequently is not working efficiently; appropriate adjustments should be made to maintain focus. A huge driver of context switching is the process around fixing bugs: development processes sometimes force teams onto tools that make it hard to stay in context, and most of the time it is the testing cycle that causes the switching, owing to a lack of adequate integrations across the development lifecycle.
As you may have observed, one common obstacle stands in the way of better engineering productivity metrics: the desire to cut cycle time and deploy frequently usually comes at the cost of testing coverage. Eventually, that means a higher rework ratio and more context switching for the team. Most teams that have scaled their engineering process started by paying acute attention to the testing process. The idea is to automate what can be automated, with tools that allow developers to move faster.
Testing fast and at scale is the key to engineering efficiency – Spotify coined a term for this: “Quality at Speed.” Maintaining Quality at Speed requires a Quality Engineering counterforce. At Applitools, we have helped our customers achieve quality at ultrafast speed: Visual AI from Applitools extends human eyes across the testing process without having to increase the QA/dev ratio on the team. Some quotes from engineering managers who have used Applitools to build products:
“Any engineering team can reduce the manual testing resource and time by at least 70%. It also avoids overloading of SDETs.”
Engineering Manager at Dunzo (India)
“At Pushpay, our success stems from a technology-forward culture which drives our behavior, how we solve problems, and what tools we use to solve them. Since partnering with Applitools over 5 years ago, we have been able to improve quality, gain productivity and thus save time and money. We could not be more pleased with the efficiency boost our team has experienced since adopting Applitools and more recently, the Ultrafast Grid.”
Engineering Manager at Pushpay (New Zealand)
If the above is something you wish to improve within your team, you may be surprised to learn that it takes just a few days to reach this degree of quality at speed.
The picture below shows how Applitools integrates with your application.
I will not get into how to install Applitools, as that is fairly well described in the tutorials, which also cover integrating Applitools into your CI/CD pipeline. In the remainder of this article, I would like to walk through some great examples of improving these metrics using Applitools.
The most efficient teams start their day with the dashboard below. It gives you a comprehensive view of all the tests executed across your entire coverage list. Achieving high coverage is made easy by the Ultrafast Grid, which reduces your rework ratio later for devices or browsers that may not have been covered before – after all, a lot of rework happens because of poor testing coverage in the first phase of development. There is, of course, also an element of scope creep that leads to rework, which can be avoided by involving cross-functional teams in the development process. Applitools provides a visual abstraction of your application that everyone can access through a GUI, which drastically reduces scope creep unless the business requirements have changed entirely.
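As an illustration, a cross-browser run against the Ultrafast Grid takes only a few lines with the Python SDK. The browsers, device, and URL below are examples, and exact names may differ between SDK versions:

```python
# A sketch of cross-browser coverage via the Ultrafast Grid, based on
# the public eyes-selenium Python SDK; consult current docs for exact
# class and enum names.
from applitools.selenium import (
    BrowserType, Configuration, DeviceName, Eyes, Target, VisualGridRunner,
)
from selenium import webdriver

runner = VisualGridRunner(10)  # render up to 10 environments concurrently
eyes = Eyes(runner)

config = Configuration()
config.app_name = "Demo App"
config.test_name = "Home page across environments"
# The test executes once locally; the grid renders the captured page
# against every environment listed here.
config.add_browser(1200, 800, BrowserType.CHROME)
config.add_browser(1200, 800, BrowserType.FIREFOX)
config.add_browser(1200, 800, BrowserType.SAFARI)
config.add_device_emulation(DeviceName.iPhone_X)
eyes.set_configuration(config)

driver = webdriver.Chrome()
try:
    eyes.open(driver)
    driver.get("https://example.com")
    eyes.check("Home", Target.window())
    eyes.close_async()
finally:
    driver.quit()
    print(runner.get_all_test_results())
```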
When you are testing for high coverage, it is important not to get slowed down by the process of reviewing bugs. A big reason deployment frequency drops is the time it takes to review and fix issues. This is exactly where Visual AI plays a big role.
Visual AI also lets you troubleshoot the defects really quickly.
Finally, developers and testers can use the same platform which integrates seamlessly with Jira or their preferred communication channel for faster feedback. Email and Slack notifications help the team get the feedback fast without any context switching.
To conclude, engineering managers need to look ever more closely at how engineering processes are structured as the team grows. Businesses are demanding faster releases of consistently high-quality products, and Visual AI is an effective way to improve both the efficiency and the coverage of testing.
Learn more about Visual AI with Applitools Eyes.
The post Improving Engineering Productivity with Visual AI appeared first on Automated Visual Testing | Applitools.
This article is based on our recent webinar, How to Enhance UI/UX Testing by Leveraging AI, led by Chris Rolls from TTC and Andrew Knight from Applitools. Editing by Marii Boyken.
Last week, I hosted a webinar with Chris Rolls from TTC. In the webinar, Chris and I talked about the current state of software testing, where it’s going, and how visual AI and automation will impact the future of software testing. In this article, I’ll be recapping the insights shared from the webinar.
Software testing is often seen in businesses as a necessary evil at the end of the software development lifecycle, a way to find issues before they reach production. Chris and I strongly agree that software quality is crucial to achieving modern businesses’ goals and needs to be considered throughout the software development lifecycle.
The largest and most relevant companies today have embraced digital transformation and technology to run their businesses and meet their customers’ needs. To keep up with digital transformation, you need modern software development practices like DevOps and continuous delivery. DevOps requires continuous testing to be successful, but we’re seeing that reliance on manual testing is the biggest challenge that organizations face when adopting DevOps.
The software world is changing, so we need to change how we deliver technology. That requires modern software development approaches, which requires modern software testing and software quality approaches. Thankfully, testing and quality are far more top of mind now than in previous years.
From the audience poll results from our webinar, we see that continuous delivery is here to stay. When asked how often they deploy changes to production, over 40% of respondents stated that they deploy either daily or multiple times per day. This wasn’t the case 10 to 20 years ago for most organizations unless technology was the product. Now, daily deployments are pretty common for most organizations.
However, we’re seeing that getting high test automation coverage is still a huge challenge. In the survey, 55% of the respondents automate less than half of their testing.
The numbers may carry a bit of sample bias, because Applitools users automate on average 1.7 times more than other respondents. Still, the responses align with anecdotal experience: a lot of organizations remain at lower test automation coverage, around 20% to 50%.
Testing complexity and the amount of testing needed keep going up, and this shows in our survey as well. More than 50% of the respondents test two languages or more, three browsers or more, three devices or more, and two applications or more.
With two-thirds of the respondents saying that one of the hardest parts of testing UIs is that they are constantly changing, traditional automation tools can’t handle testing at that speed, scale, and complexity.
We know that there’s a lot of excitement around AI tools, and the survey shows that.
When asked what parts of the testing process they are supporting or planning to support with AI, respondents mentioned test authoring, test prioritization, test execution, test management, and visual regression. The top two answers were test execution and visual regression.
It’s important to remember that continuous testing is about more than just test automation.
While test automation is key to the process, you still need to incorporate other software testing and software quality practices like load testing, security, user experience, and accessibility.
What we’re trying to achieve in the future of testing is to support modern software development practices. The best way we’re seeing to do this is to have software testing and software quality more tightly integrated into the process. Let’s talk about what a modern software approach looks like.
To get tighter integration of quality into the process, testing can’t just be an activity at the end of the development lifecycle. Testing has to happen continuously for teams to be able to provide fast feedback across disciplines and ensure a quality product. When this is done, we see increased speed not just of testing, but of overall software development and deployment. We also see reduced costs and increased quality. More defects are found before production, and we see quicker responses when finding defects in production.
Traditional testing approaches tend to be done mostly manually. Increasing test coverage doesn’t mean manual testing goes away.
Automating your test cases frees up time to do more exploratory testing. Exploratory testing should be assisted by different tools, so AI has a good role to play here. Tools like ChatGPT are useful to brainstorm things like what to test next. Obviously we want to increase test automation coverage at all levels, including unit, API, and UI. Intelligent automated UI tests provide us more information than functional tests alone.
What does the future of testing with AI look like? It’s a combination of people, processes, and technology. Software testers need to be thinking about what skills we need to have to support these new ways of testing and delivering quality software.
We need to uncover whether a use case is better served by AI and machine learning than by an algorithmic solution.
“It’s quite trendy to talk about artificial intelligence, but the reason why we’re partnered with Applitools is that they apply real machine learning and artificial intelligence to a problem that is not well solved by other types of solutions on the market.”
Chris Rolls, CEO, Americas, TTC
Let’s talk about how we can integrate AI into our testing to get some of those advantages of increased speed and coverage discussed earlier.
I like to explain visual AI visually. Do you remember those spot-the-difference pictures we had in our activity books from when we were kids?
As humans, we could sit around and play with this to find the differences manually. But what we want is to be able to find these differences immediately. And that is what visual AI has the power to do.
Even when the images are a little bit skewed or off by a couple pixels here or there, visual AI can pinpoint any differences between one view and another.
Now you might be thinking, Andy, that’s cute, but how’s this technology gonna help me in the real world? Is it just gonna solve little activity book problems? Well, think about all the apps that you would develop – whether web, mobile, desktop, whatever you have – and all the possible ways that you could have visual bugs in your apps.
Here we’ve got three different views from mobile apps. One for Chipotle, one for a bank, and another one for a healthcare provider.
You can see that visual bugs are pervasive and they come in all different shapes and sizes. Sometimes the formatting is off, sometimes a particular word or phrase or title is just nulled out. What’s really pesky is that sometimes you might have overlapping text.
Traditional automation struggles to find these issues because it usually hinges purely on text content or on particular attributes of elements on a page. So as long as something appears and is interactable, most traditional scripts will pass – even though we as humans can visually inspect the page and see when something is completely broken and unusable.
This is where visual AI can help us: we can take snapshots of our app over time and use visual AI to detect when we have visual regressions. If, let’s say, one day the title went from being your bank’s name to null, your continuous testing would pick it up right away.
In the webinar, I gave a live demo of automated visual testing using Applitools Eyes. In case you missed the demo, you can check it out here:
So all this really cool stuff is powered by visual AI – a real-world application of AI that looks at images and finds things in them the way a human would. Now you may think this is really cool, but what’s even cooler is that this is just the beginning of what we can do with the power of AI and machine learning in the testing and automation space.
What we’re going to see in the next couple of years is a new thing called autonomous testing, where not only are we automating our tests, but we’re automating the process of developing and maintaining our tests – the tests are almost writing themselves. Visual AI is going to be a key part of that, because if testing is interaction and verification, then what we want to make autonomous is both interaction and verification. And visual AI has already made verification autonomous. We’re halfway there, folks.
Be sure to check out our upcoming events page for new webinars coming soon! Learn how Applitools Eyes uses AI to catch visual differences between releases while reducing false positives. Happy testing!
The post Enhancing UI/UX Testing with AI appeared first on Automated Visual Testing | Applitools.
The word “automation” has become a buzzword in pop culture. It conjures up things like self-driving cars, robotic assistants, and factory assembly lines – not automation for software testing. In fact, many non-software folks are surprised to hear that what I do is “automation.”
The word “automation” also carries a connotation of “full” automation with zero human intervention. Unfortunately, most of our automated technologies just aren’t there yet. For example, a few luxury cars out there can parallel-park themselves, and Teslas have some cool autopilot capabilities, but fully-autonomous vehicles do not yet exist. Self-driving cars need several more years to perfect and even more time to become commonplace on our roads.
Software testing is no different. Even when test execution is automated, test development is still very manual. Ironic, isn’t it? Well, I think the day of “full” test automation is quickly approaching. We are riding the crest of the next great wave: autonomous testing. It’ll arrive long before cars can drive themselves. Like previous waves, it will fundamentally change how we, as testers, approach our craft.
Let’s look at the past two waves to understand this more deeply. You can watch the keynote address I delivered at Future of Testing: Frameworks 2022, or you can keep reading below.
In their most basic form, tests are manual. A human manually exercises the behavior of the software product’s features and determines if outcomes are expected or erroneous. There’s nothing wrong with manual testing. Many teams still do this effectively today. Heck, I always try a test manually before automating it. Manual tests may be scripted in that they follow a precise, predefined procedure, or they may be exploratory in that the tester relies instead on their sensibilities to exercise the target behaviors.
Testers typically write scripted tests as a list of steps with interactions and verifications. They store these tests in test case management repositories. Most of these tests are inherently “end-to-end:” they require the full product to be up and running, and they expect testers to attempt a complete workflow. In fact, testers are implicitly incentivized to include multiple related behaviors per test in order to gain as much coverage with as little manual effort as possible. As a result, test cases can become very looooooooooooong, and different tests frequently share common steps.
Large software products exhibit countless behaviors. A single product could have thousands of test cases owned and operated by multiple testers. Unfortunately, at this scale, testing is slooooooooow. Whenever developers add new features, testers need to not only add new tests but also rerun old tests to make sure nothing broke. Software is shockingly fragile. A team could take days, weeks, or even months to adequately test a new release. I know – I once worked at a company with a 6-month-long regression testing phase.
Slow test cycles forced teams to practice Waterfall software development. Rather than waste time manually rerunning all tests for every little change, it was more efficient to bundle many changes together into a big release to test all at once. Teams would often pipeline development phases: While developers are writing code for the features going into release X+1, testers would be testing the features for release X. If testing cycles were long, testers might repeat tests a few times throughout the cycle. If testing cycles were short, then testers would reduce the number of tests to run to a subset most aligned with the new features. Test planning was just as much work as test execution and reporting due to the difficulty in judging risk-based tradeoffs.
Slow manual testing was the bane of software development. It lengthened time to market and allowed bugs to fester. Anything that could shorten testing time would make teams more productive.
That’s when the first wave of test automation hit: manual test conversion. What if we could implement our manual test procedures as software scripts so they could run automatically? Instead of a human running the tests slowly, a computer could run them much faster. Testers could also organize scripts into suites to run a bunch of tests at one time. That’s it – that was the revolution. Let software test software!
During this wave, the main focus of automation was execution. Teams wanted to directly convert their existing manual tests into automated scripts to speed them up and run them more frequently. Both coded and codeless automation tools hit the market. However, they typically stuck with the same waterfall-minded processes. Automation didn’t fundamentally change how teams developed software, it just made testing better. For example, during this wave, running automated tests after a nightly build was in vogue. When teams would plan their testing efforts, they would pick a few high-value tests to automate and run more frequently than the rest of the manual tests.
Unfortunately, while this type of automation offered big improvements over pure manual testing, it had problems. First, testers still needed to manually trigger the tests and report results. On a typical day, a tester would launch a bunch of scripts while manually running other tests on the side. Second, test scripts were typically very fragile. Both tooling and understanding for good automation had not yet matured. Large end-to-end tests and long development cycles also increased the risk of breakage. Many teams gave up attempting test automation due to the maintenance nightmare.
The first wave of test automation was analogous to cars switching from manual to automatic transmissions. Automation made the task of driving a test easier, but it still required the driver (or the tester) to start and stop the test.
The second test automation wave was far more impactful than the first. After automating the execution of tests, focus shifted to automating the triggering of tests. If tests are automated, then they can run without any human intervention. Therefore, they could be launched at any time without human intervention, too. What if tests could run automatically after every new build? What if every code change could trigger a new build that could then be covered with tests immediately? Teams could catch bugs as soon as they happen. This was the dawn of Continuous Integration, or “CI” for short.
Continuous Integration revolutionized software development. Long Waterfall phases for coding and testing weren’t just passé – they were unnecessary. Bite-sized changes could be independently tested, verified, and potentially deployed. Agile and DevOps practices quickly replaced the Waterfall model because they enabled faster releases, and Continuous Integration enabled Agile and DevOps. As some would say, “Just make the DevOps happen!”
The types of tests teams automated changed, too. Long end-to-end tests that covered “grand tours” with multiple behaviors were great for manual testing but not suitable for automation. Teams started automating short, atomic tests focused on individual behaviors. Small tests were faster and more reliable. One failure pinpointed one problematic behavior.
Developers also became more engaged in testing. They started automating both unit tests and feature tests to be run in CI pipelines. The lines separating developers and testers blurred.
Teams adopted the Testing Pyramid as an ideal model for test count proportions. Smaller tests were seen as “good” because they were easy to write, fast to execute, less susceptible to flakiness, and caught problems quickly. Larger tests, while still important for verifying workflows, needed more investment to build, run, and maintain. So, teams targeted more small tests and fewer large tests. You may personally agree or disagree with the Testing Pyramid, but that was the rationale behind it.
While the first automation wave worked within established software lifecycle models, the second wave fundamentally changed them. The CI revolution enabled tests to run continuously, shrinking the feedback loop and maximizing the value that automated tests could deliver. It gave rise to the SDET, or Software Development Engineer in Test, who had to manage tests, automation, and CI systems. SDETs carried more responsibilities than the automation engineers of the first wave.
If we return to our car analogy, the second wave was like adding cruise control. Once the driver gets on the highway, the car can just cruise on its own without much intervention.
Unfortunately, while the second wave enabled teams to multiply the value they can get out of testing and automation, it came with a cost. Test automation became full-blown software development in its own right. It entailed tools, frameworks, and design patterns. The continuous integration servers became production environments for automated tests. While some teams rose to the challenge, many others struggled to keep up. The industry did not move forward together in lock-step. Test automation success became a gradient of maturity levels. For some teams, success seemed impossible to reach.
Now, these two test automation waves I described do not denote precise playbooks every team followed. Rather, they describe the general industry trends regarding test automation advancement. Different teams may have caught these waves at different times, too.
Currently, as an industry, I think we are riding the tail end of the second wave, rising up to meet the crest of a third. Continuous Integration, Agile, and DevOps are all established practices; the innovation to come won’t be found there.
Over the past years, a number of nifty test automation features have hit the scene, such as screen recorders and smart locators. I’m going to be blunt: those are not the next wave, they’re just attempts to fix aspects of the previous waves.
You may agree or disagree with my opinions on the usefulness of these tools, but the fact is that they all share a common weakness: they are vulnerable to behavioral changes. Human testers must still intervene as development churns.
These tools are akin to a car that can park itself but can’t fully drive itself. They’re helpful to some folks but fall short of the ultimate dream of full automation.
The first two waves covered automation for execution and scheduling. Now, the bottleneck is test design and development. Humans still need to manually create tests. What if we automated that?
Consider what testing is: Testing equals interaction plus verification. That’s it! You do something, and you make sure it works correctly. It’s true for all types of tests: unit tests, integration tests, end-to-end tests, functional, performance, load; whatever! Testing is interaction plus verification.
During the first two waves, humans had to dictate those interactions and verifications precisely. What we want – and what I predict the third wave will be – is autonomous testing, in which that dictation will be automated. This is where artificial intelligence can help us. In fact, it’s already helping us.
Applitools has already mastered automated validation for visual interfaces. Traditionally, a tester would need to write several lines of code to functionally validate behaviors on a web page. They would need to check for elements’ existence, scrape their texts, and make assertions on their properties. There might be multiple assertions to make – and other facets of the page left unchecked. Visuals like color and position would be very difficult to check. Applitools Eyes can replace almost all of those traditional assertions with single-line snapshots. Whenever it detects a meaningful change, it notifies the tester. Insignificant changes are ignored to reduce noise.
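To make the contrast concrete, here is a sketch of the two styles side by side. The selectors and expected values are hypothetical, and driver and eyes are assumed to be set up as in any Selenium-plus-Eyes test:

```python
# An illustration of the paragraph above; selectors and expected values
# are hypothetical, and `driver`/`eyes` are assumed to be set up as in
# any Selenium + Applitools Eyes test.
from selenium.webdriver.common.by import By
from applitools.selenium import Target

def check_account_page_traditionally(driver):
    # Element-by-element functional assertions...
    header = driver.find_element(By.CSS_SELECTOR, "h1.account-title")
    assert header.is_displayed()
    assert header.text == "Your Bank Name"
    assert header.value_of_css_property("color") == "rgba(20, 20, 20, 1)"
    balance = driver.find_element(By.ID, "balance")
    assert balance.text.strip() != ""
    # ...while layout, overlap, and anything not explicitly asserted
    # go unchecked.

def check_account_page_visually(eyes):
    # One snapshot: Visual AI flags any meaningful change on the page.
    eyes.check("Account page", Target.window())
```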
Automated visual testing like this fundamentally simplifies functional verification. It should not be seen as an optional extension or something nice to have. It automates the dictation of verification. It is a new type of functional testing.
The remaining problem to solve is dictation of interaction. Essentially, we need to train AI to figure out proper app behaviors on its own. Point it at an app, let it play around, and see what behaviors it identifies. Pair those interactions with visual snapshot validation, and BOOM – you have autonomous testing. It’s testing without coding. It’s like a fully-self-driving car!
Some companies already offer tools that attempt to discover behaviors and formulate test cases. Applitools is also working on this. However, it’s a tough problem to crack.
Even with significant training and refinement, AI agents still have what I call “banana peel moments:” times when they make surprisingly awful mistakes that a human would never make. Picture this: you’re walking down the street when you accidentally slip on a banana peel. Your foot slides out from beneath you, and you hit your butt on the ground so hard it hurts. Everyone around you laughs at both your misfortune and your clumsiness. You never saw it coming!
Banana peel moments are common AI hazards. Back in 2011, IBM created a supercomputer named Watson to compete on Jeopardy, and it handily defeated two of the greatest human Jeopardy champions at that time. However, I remember watching some of the promo videos at the time explaining how hard it was to train Watson how to give the right answers. In one clip, it showed Watson answering “banana” to some arbitrary question. Oops! Banana? Really?
While Watson’s blunder was comical, other mistakes can be deadly. Remember those self-driving cars? Tesla autopilot mistakes have killed at least a dozen people since 2016. Autonomous testing isn’t a life-or-death situation like driving, but testing mistakes could be a big risk for companies looking to de-risk their software releases. What if autonomous tests miss critical application behaviors that turn out to crash once deployed to production? Companies could lose lots of money, not to mention their reputations.
So, how can we give AI for testing the right training to avoid these banana peel moments? I think the answer is simple: set up AI for testing to work together with human testers. Instead of making AI responsible for churning out perfect test cases, design the AI to be a “coach” or an “advisor.” AI can explore an app and suggest behaviors to cover, and the human tester can pair that information with their own expertise to decide what to test. Then, the AI can take that feedback from the human tester to learn better for next time. This type of feedback loop can help AI agents not only learn better testing practices generally but also learn how to test the target app specifically. It teaches application context.
AI and humans working together is not just a theory. It’s already happened! Back in the 90s, IBM built a supercomputer named Deep Blue to play chess. In 1996, it lost 4-2 to grandmaster and World Chess Champion Garry Kasparov. One year later, after upgrades and improvements, it defeated Kasparov 3.5-2.5. It was the first time a computer beat a world champion at chess. After his defeat, Kasparov had an idea: What if human players could use a computer to help them play chess? Then, one year later, he set up the first “advanced chess” tournament. To this day, “centaurs,” or humans using computers, can play at nearly the same level as grandmasters.
I believe the next great wave for test automation belongs to testers who become centaurs – and to those who enable that transformation. AI can learn app behaviors to suggest test cases that testers accept or reject as part of their testing plan. Then, AI can autonomously run approved tests. Whenever changes or failures are detected, the autonomous tests yield helpful results to testers like visual comparisons to figure out what is wrong. Testers will never be completely removed from testing, but the grindwork they’ll need to do will be minimized. Self-driving cars still have passengers who set their destinations.
This wave will also be easier to catch than the first two waves. Testing and automation was historically a do-it-yourself effort. You had to design, automate, and execute tests all on your own. Many teams struggled to make it successful. However, with the autonomous testing and coaching capabilities, AI testing technologies will eliminate the hardest parts of automation. Teams can focus on what they want to test more than how to implement testing. They won’t stumble over flaky tests. They won’t need to spend hours debugging why a particular XPath won’t work. They won’t need to wonder what elements they should and shouldn’t verify on a page. Any time behaviors change, they rerun the AI agents to relearn how the app works. Autonomous testing will revolutionize functional software testing by lowering the cost of entry for automation.
If you are plugged into software testing communities, you’ll hear from multiple testing leaders about their thoughts on the direction of our discipline. You’ll learn about trends, tools, and frameworks. You’ll see new design patterns challenge old ones. Something I want you to think about in the back of your mind is this: How can these things be adapted to autonomous testing? Will these tools and practices complement autonomous testing, or will they be replaced? The wave is coming, and it’s coming soon. Be ready to catch it when it crests.
The post Autonomous Testing: Test Automation’s Next Great Wave appeared first on Automated Visual Testing | Applitools.
In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.
From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on having an incredible amount of computing power available to them and the ability to display any of millions of applications that are easy to use.
The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability for computer vision to see images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.
Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.
Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.
Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.
In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.
As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.
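A few lines of Python illustrate how blunt raw pixel comparison is. This sketch assumes Pillow is installed and that the two screenshots share the same dimensions:

```python
# A naive pixel comparison of the kind contrasted with Visual AI above.
# Assumes `pip install pillow` and two same-sized screenshots on disk.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
candidate = Image.open("candidate.png").convert("RGB")
diff = ImageChops.difference(baseline, candidate)

exact = sum(1 for px in diff.getdata() if px != (0, 0, 0))
# A common patch is a per-channel tolerance to quiet the noise...
fuzzy = sum(1 for px in diff.getdata() if max(px) > 25)
print(f"{exact} pixels differ at all; {fuzzy} exceed the tolerance")
# ...but a one-pixel anti-aliasing shift can still light up thousands
# of "differences" no human could see, while a generous threshold
# starts masking real, visible defects. Judging which differences
# matter to a person is precisely what Visual AI adds.
```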
Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI.
Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.
One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.
Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant.
Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.
Traditional testing methods for software testing often require a lot of manual testing. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – requiring functional testing, visual testing and cross browser testing – has long been difficult to achieve with automation.
Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use.
At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster – even continuous – releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test; for example, 100 pages rendered on 5 browsers and 20 devices across 10 releases a month is already 100,000 screen checks. Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.
At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test.
That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.
Additionally, due to the high level of accuracy, and efficient validation of the entire screen, Visual AI opens the door to simplifying and accelerating the challenges of cross browser and cross device testing. Leveraging an approach for ‘rendering’ rather than ‘executing’ across all the device/browser combinations, teams can get test results 18.2x faster using the Applitools Ultrafast Test Cloud than traditional execution grids or device farms.
As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.
In the world of software testing, we’re excited to use Visual AI to move past simply improving automated testing – we are paving the way toward autonomous testing. For this vision (no pun intended), we have been repeatedly recognized as a leader by the industry and our customers.
What is Visual Testing (blog)
The Path to Autonomous Testing (video)
What is Applitools Visual AI (learn)
Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)
How AI Can Help Address Modern Software Testing (blog)
The Impact of Visual AI on Test Automation (report)
How Visual AI Accelerates Release Velocity (blog)
Modern Functional Test Automation Through Visual AI (free course)
Computer Vision defined (Wikipedia)
The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.
From facial recognition to self-driving cars, Artificial Intelligence (AI) and machine learning (ML) have become commonplace for many industries in recent years. In parallel, the software development industry has undergone a transformation of its own.
As customers look to engage more through digital experiences, businesses have been forced to evolve faster than ever before. Enticing and delighting customers in every aspect of product delivery has become “business critical,” determining whether the customer chooses, and continues, to do business with you over a competitor. Although the goals of Quality Engineering have remained unchanged, every aspect of how quality is delivered has evolved. Businesses can no longer trade off quality against speed: modern digital-first businesses must achieve both.
Recently, two reports came out that speak directly to the intersection of these two trends and discuss how industry leaders are leveraging AI to modernize their approach to Quality Engineering in 2021 and beyond.
The first, from EMA (Enterprise Management Associates), is titled Disrupting the Economics of Software Testing Through AI. In this report, author Torsten Volk, Managing Research Director at EMA, discusses the reasons why traditional approaches to software quality cannot scale to meet the needs of modern software delivery. He highlights 5 key categories of AI and 6 critical pain points of test automation that AI addresses.
In addition, over the last couple of months Sogeti has been releasing sections of their State of AI applied to Quality Engineering 2021-22 report (with still more sections to come through February 2022). This comprehensive report is created in partnership with leading technology providers to provide a detailed examination of the current state of artificial intelligence across many use-cases in the field of Quality Engineering and centers around a key question — how can AI make quality validation smarter?
As the application of AI to testing continues to advance, it is important to understand its potential and how it can help improve the quality, velocity, and efficiency of Quality Engineering activities. Below, I’ll discuss some of the highlights from both of these reports and what they define as the future of Quality Engineering.
First, let’s talk about why traditional approaches to software quality and test automation are no longer sufficient. The first section of the Sogeti report gives an overview of the business pressure to release faster amid increasingly complex technical environments. As Torsten discusses in his report, modern software development teams face many challenges that have driven up the complexity and cost of quality, such as the explosion of device/browser combinations and application complexity. Multiply this by the number of releases per month and you can quickly see that traditional test automation tools can no longer scale to the challenges of modern software delivery.
The biggest problem with the traditional approach to test automation is that it scales linearly: the more, or faster, you need to test, the more human and non-human resources you need – which only works if you have an infinite amount of resources (do you?).
With this in mind, the EMA and Sogeti reports discuss the ways modern organizations can leverage AI/ML to streamline their test automation practices and scale to meet the increased pace of software delivery.
When it comes to Quality Engineering, certain tools are capable of controlling the graphical user interface (GUI) or an application programming interface (API). Others analyze coverage and recommend additional actions, and some analyze log files in search of specific behaviors. These are just a few examples. But to increase developer productivity, there needs to be an understanding of each of these tasks and how they can be optimized.
How does this relate to AI? AI can apply the algorithms and approaches used in these tools to perform human-like tasks. For example, a developer can reason about an application to determine whether it has been properly tested and, if the testing cycle has fallen short, determine what additional testing needs to be done. AI has the potential to act in a similar manner. Although it may require some training, once trained it can continue to test the function even as the application evolves.
The EMA report details five key AI capabilities that can help organizations streamline and automate parts of their quality and testing workflow: smart crawling, self-healing, anomaly detection, coverage detection, and visual inspection.
The report highlights the key advantage of each capability and then details how the capabilities can bridge the gap between the “ideal scenario” and the “in real life” situation for six critical pain points of test automation: false positives, test maintenance, insufficient feedback, application complexity, device/use-case coverage, and toolchain complexity.
Each capability is assigned a rating, ranking its current impact in 2021 and predicting its future impact in 2024. Visual inspection, implemented with Visual AI, has the highest rating for both current and future impact, with the key advantage that it “Provides complete and accurate coverage of the user experience. It learns and adapts to new situations without the need to write and maintain code-based rules.”
The EMA report goes on to add that “Smart crawling, self-healing, anomaly detection, and coverage detection each are point solutions that help organizations lower their risk of blind spots while decreasing human workload. Visual inspection (Visual AI) goes further compared to these point solutions by aiming to understand application workflows and business requirements.”
See how Applitools Visual AI can make your automated testing activities easier, more efficient and more scalable. Get a free demo or sign up for a free account today.
As discussed in the most recent section of the Sogeti report, Shorten Release Cycles with Visual AI, Visual AI is already a mature technology, currently being adopted by leading brands across industries to accelerate the delivery of their digital experiences. The high levels of accuracy, and the ability to handle dynamic and shifting content, ensure teams do not get overwhelmed with false positives. The automated grouping and categorization of regressions, coupled with root cause analysis, accelerates feedback and reduces test maintenance effort. Visual AI provides test engineers with an additional “pair of eyes,” leaving them free to focus on areas that really need human intelligence — the power and impact of this approach is enormous.
Currently the industry is focused on having AI remove repetitive and mundane tasks, freeing humans to focus on the creative/complex tasks that require human intelligence. And as Torsten mentions in his report, “AI-based test automation technologies can deliver real ROI today and have the potential to address, and ultimately eliminate, today’s critical automation bottlenecks.”
The ROI will further increase as we look to the future and the next big innovation for Quality Engineering, Autonomous Testing. Autonomous Testing will change the role of developers and testers from testing the application to training the AI how to use the application, leaving it to perform the testing activities, and then reviewing the results. This change will deliver a fundamental increase in team efficiency, reducing the overall cost of quality and enabling businesses to establish truly scalable Quality Engineering practices.
Want to see how Applitools Visual AI can help you improve the quality of your test automation as you scale up? Schedule a free demo or sign up for a free account today.
Editor’s Note: This post first appeared on devopsdigest.com.
The post How AI is Making Test Automation Smarter appeared first on Automated Visual Testing | Applitools.
]]>We’re honored to be co-authors with Sogeti on their “State of AI applied to Quality Engineering 2021-22” report. In the latest chapter, learn how you can use Visual AI today to release software faster and with fewer bugs.
In the world of software development, there is a very clear trend – greater application complexity and a faster release cadence. This presents a massive (and growing) challenge for Quality Engineering teams, who must keep up with the advancing pace of development. We think about this a lot at Applitools, and we were glad to collaborate with Sogeti on the latest chapter of their landmark “State of AI applied to Quality Engineering 2021-22” report, entitled Shorten release cycles with Visual AI. This chapter focuses on this QE challenge and offers a vision for how Visual AI can help organizations that have not yet adopted it – not far in the future, but today.
Visual AI is the ability for machine learning and deep learning algorithms to truly mimic a human’s cognitive understanding of what is seen. This may seem fantastical, but it’s far from science fiction. Our own Visual AI has already been trained on over a billion images, providing 99.9999% accuracy, and leading digital brands are already using it today to accelerate their delivery of innovation.
Visual AI can be used in a number of ways, and it may be tempting to think of it as a tool that can help you conduct your automated end-to-end tests at the end of development cycles more quickly. Yes, it can do that, but its biggest strength lies elsewhere. Visual AI allows you to shift left and begin to conduct testing “in-sprint” as part of an Agile development cycle.
Testing “in-sprint” means conducting visual validation alongside data validation and gaining complete test coverage of UI changes and visual regressions at every check-in. Bottlenecks are removed and releases are both faster and contain fewer errors, delivering an uncompromised user experience without jeopardizing your brand.
Teams that incorporate automated visual testing throughout their development process simply release faster and higher quality software.
Wondering how you can move your organization or your team into that faster-releasing group? Fortunately, it’s not hard to get started, and this chapter from Sogeti is an excellent place to begin. Keep reading to learn more.
Most users start out by applying Applitools’ Visual AI to their end-to-end tests and quickly discover several things. First, it is highly accurate, meaning it finds real differences – not pixel differences. Second, the compare modes give the flexibility needed to handle expected differences no matter what kind of page is being tested. And third, the application of AI goes beyond visual verification and includes capabilities such as auto-maintenance and root cause analysis.
– State of AI applied to Quality Engineering 2021-22
Ultimately, what we’re all looking for is to be able to deliver quality code faster, even as complexity grows. Keeping up with the growing pace of change can feel daunting when you’re relying on traditional test automation that only scales linearly with the resources allocated – AI-powered automation is the only way to scale your team’s productivity at the pace today’s software development demands.
Applitools’ Visual AI integrates into your existing test automation practice and is already being used by the world’s leading companies to greatly accelerate their ability to deliver innovation to their clients, customers and partners, while protecting their brand and ensuring digital initiatives have the right business outcomes. And it’s only getting better. Visual AI continues to progress as it advances the industry towards a future of truly Autonomous Testing, when the collaboration between humans and AI will change. Today, we’re focused on an AI that can handle repetitive/mundane tasks to free up humans for more creative/complex tasks, but we see a future where Visual AI will be able to handle all testing activities, and the role of humans will shift to training the AI and then reviewing the results.
Check out the full chapter, “Shorten release cycles with Visual AI,” below.
The post How Visual AI Accelerates Release Velocity appeared first on Automated Visual Testing | Applitools.
]]>AI is needed to meet the scale and complexities of modern software delivery. According to the EMA report, traditional test automation tools will continue to struggle to keep up.
A growing challenge for organizations reliant on software (in other words, just about every organization today) is the ever-rising scale and speed of software delivery. Software is growing more complex, users are demanding more from their experiences, and release cycles are getting shorter and shorter. All of this puts an enormous strain on the testing teams charged with ensuring that applications are error-free and delivering the desired user experience.
AI is one technology that can ease this burden on today’s testers, according to EMA Research, which has just released a research paper on the topic. The need to create better software, faster, has never been greater.
“Business’s ability to accelerate the delivery of customer value through software innovation, at lower cost, has become critical for achieving competitive advantages,” said Torsten Volk, Managing Research Director at Enterprise Management Associates (EMA).
The report highlights a number of data points showing the increasing complexity of testing environments. The number of test automation-related questions posted to StackOverflow has nearly doubled over the past year. Smartphones continue to proliferate at a very high rate (30% CAGR since 2017 for Android alone), adding yet more configurations that need to be tested. And the number of apps residing in cloud services has risen 225% since 2015, compounding the complexity of software delivery.
In an article on the topic, VentureBeat recently highlighted another report on enterprise software development by Gatepoint Research which emphasizes some of the same struggles. According to that report, 77% of respondents said that they experience setbacks in releasing new software. A smaller but still high 34% said that fixing bugs takes anywhere from days to months.
Overall, EMA found that the increase in the complexity of technology, combined with faster release cycles and the daily tasks that already exist for test engineers, yields an exponential increase in the testing effort required. It doesn’t help that, as EMA puts it, “test automation frameworks typically rely on a jungle of test scripts written in different languages, using different sets of runtime parameters, and lacking consistent compliance test capabilities.”
The research by Torsten Volk and his team makes clear in no uncertain terms that “you cannot scale automated testing without AI.” The paper outlines how the latest AI solutions help in five key categories today, and digs into how AI can help address six of the biggest test automation pain points.
“AI-based test automation technologies can deliver real ROI today,” said Volk, “and come with the potential of addressing, and ultimately eliminating, today’s critical automation bottlenecks that stifle modern software delivery.”
EMA’s research discusses several essential AI capabilities that can be combined and customized according to an organization’s requirements. Visual inspection (with Visual AI) was rated as having the highest overall impact of these capabilities, both today and in the future:
Smart crawling, self-healing, anomaly detection, and coverage detection each are point solutions that help organizations lower their risk of blind spots while decreasing human workload. Visual inspection goes further compared to these point solutions by aiming to understand application workflows and business requirements.
– Disrupting the Economics of Software Testing Through AI
To learn more, download the complimentary report, “Disrupting the Economics of Software Testing Through AI.”
The post How AI Can Help Address Modern Software Testing appeared first on Automated Visual Testing | Applitools.
]]>Applitools was invited to share our expertise in applying AI to quality engineering, and we’re honored to be co-authors of this comprehensive report by Sogeti.
Sogeti has just released the first section of their State of AI applied to Quality Engineering 2021-22 report, including two chapters co-authored by Applitools. The report is a detailed examination of the current state of artificial intelligence in the field of quality engineering. It centers around a key question – how can AI make our quality validation smarter? In the words of the executive introduction:
This report aims to assist you in understanding the potential of AI and how it can help improve the quality, velocity, and efficiency of your quality engineering activities.
As one of the pioneers in the application of AI to quality engineering through Visual AI, we were honored to be asked to participate in this report and share our expertise. We co-authored several chapters, including two that have been released today in the first section.
In the first chapter, you’ll get an overview of the business and technical environment which has led us to where we are today and the current need for assistance from AI. It discusses the shortcomings of traditional testing practices and the emergence of modern quality engineering. What does a successful Quality Engineer do today? What are the challenges faced? What is the future of quality engineering, and what role could AI play in it? Check out this opening chapter for a great introduction to the topic of AI in QE.
The second chapter digs a little deeper into how you can get started on your journey with AI. Moshe starts by relating a personal story about a customer service experience that left him frustrated. How can we use AI to eliminate waste from our days, spend more time on quality engineering, and address issues before they impact end users? The chapter goes on to cover the difference between routine and error-prone tasks and opens up the discussion of how we can optimize each type. You’ll also get some great info on how to define AI, understand possible use cases, and thoroughly research your options. Head over to the second chapter to read more.
In chapter 3 and chapter 4, you can explore further with technical deep dives into machine learning and deep learning.
Sogeti has put together a strong report on this important topic and we’re excited to share the opening section with you today. Starting in September, you can expect to find new sections released bi-weekly, including another chapter from Applitools that will be out in the coming months. To learn more, check out the full “State of AI applied to Quality Engineering 2021-22” report.
The post The “State of AI applied to Quality Engineering 2021-2022” Report Released appeared first on Automated Visual Testing | Applitools.
]]>Component libraries and design systems are important development and design tools that allow teams to focus on building consistent and high quality experiences. Storybook is a tool that helps us build those experiences, but as your library and system grow, it becomes more difficult to maintain that level of high quality.
With the Applitools Eyes Storybook SDK, we’re able to provide visual testing coverage for all stories in your Storybook. The cool thing is this works for whatever UI framework you choose! Whether you’re using Storybook with React, Vue, Angular, Svelte, or even React Native, Applitools is able to support any framework Storybook supports.
Let’s dig in and find out what exactly Storybook is, where it helps, and how Applitools can easily provide coverage for any number of stories that a Storybook library supports.
Storybook is a JavaScript-based tool that helps teams build component libraries, design systems, and beyond.
It does this by providing a library-like interface that shows different components and pages in isolation, allowing developers, designers, testers, and users of the library to consume and work on each “story” in a focused environment.
Each Storybook story is composed of a component or set of components that represent one piece of what someone would interact with or visualize in an application.
While most stories don’t show an entire page built with these components, the components fit together in various ways to build the actual interfaces that are used by visitors of that app.
This gives the development team or users of the library an easy and flexible way to make sure each component works as expected from the start, giving them the confidence to use it however they’d like in the application. A minimal example story is sketched below.
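To make this concrete, here is a minimal sketch of a story file in Storybook’s Component Story Format (CSF) – the Button component, file name, and story names are hypothetical stand-ins rather than code from any particular library:

```js
// Button.stories.js – a hypothetical story file in Component Story Format
import React from 'react';
import { Button } from './Button'; // the component under test (hypothetical)

// The default export is metadata telling Storybook how to group these stories
export default {
  title: 'Components/Button',
  component: Button,
};

// Each named export is one story: a single state of the component in isolation
export const Primary = () => <Button variant="primary">Save</Button>;
export const Disabled = () => <Button disabled>Save</Button>;
```

Each named export shows up as its own entry in the Storybook sidebar, so a state like “disabled button” can be inspected – and, as we’ll see, visually tested – on its own.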
But the tricky part is providing proper coverage of each component in a way that builds that confidence, while also finding a solution that scales as the library grows.
Solutions exist to provide code-based testing, where we might validate the string-based output of a component or test the components virtually mounted, but they don’t capture what users of the components are actually seeing.
Visual testing helps us solve that. We can capture what the user is seeing right in the browser, and use that to make sure our components and stories are working exactly like they should, giving us that needed confidence.
With the Applitools Eyes Storybook SDK, we’re able to easily provide coverage for any number of stories that our Storybook library supports.
After installing @applitools/eyes-storybook, all you need to do is run the following command:
```sh
npx eyes-storybook
```
Once run, Applitools will find all stories and start to capture each story much like other SDKs available for Eyes.
Because Storybook is ultimately web-based – it renders any UI framework it supports right in a browser – the Eyes SDK is able to capture a DOM snapshot of that rendered output.
That means the Eyes Storybook SDK can support any framework that Storybook supports!
The SDK uses Puppeteer, similar to the Eyes Puppeteer SDK, where each story is loaded in a new Puppeteer tab.
It then collects a DOM snapshot which is used to render each page cross-browser in the Applitools cloud, where you’ll then get the same AI-powered Visual Testing coverage that you would with any other Applitools integration.
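As a sketch of how those cloud renders are typically configured, eyes-storybook reads an applitools.config.js file from the project root. The option names below reflect common usage of the SDK, but treat them as assumptions to verify against the current documentation:

```js
// applitools.config.js – a minimal sketch (verify option names against the docs)
module.exports = {
  // The API key can also be supplied via the APPLITOOLS_API_KEY env variable
  apiKey: 'YOUR_API_KEY',
  // Every story is rendered against each of these targets in the Applitools cloud
  browser: [
    { width: 1280, height: 800, name: 'chrome' },
    { width: 1280, height: 800, name: 'firefox' },
    { deviceName: 'iPhone X' }, // mobile device emulation
  ],
  // How many renders to run in parallel
  testConcurrency: 10,
};
```

With a single local run, each story is then checked across every browser and viewport in the list.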
This helps us visually identify issues with our application based on what people are actually seeing, saving us time when trying to work out where an issue occurred – or whether it’s a false positive in the first place.
If you want to get started with integrating Applitools Eyes with your Storybook library, we have a variety of resources including a walkthrough for adding Applitools Eyes to Storybook along with dedicated tutorials for React, React using CSF, Vue, and Angular.
You can also learn more about the importance of design systems with my free webinar Bringing Quality Design Systems to Life with Storybook & Applitools.
The post How to Maintain High Quality Design Systems with Storybook by Leveraging Visual AI appeared first on Automated Visual Testing | Applitools.
]]>There is a lot of buzz around Visual Testing these days. You might have read or heard stories about the benefits of visual testing. You might have heard claims like, “more stable code,” “greater coverage,” “faster to code,” and “easier to maintain.” And you might be wondering: is this hype or reality?
So I conducted an experiment to see how true this really is.
I used the instructions from this recently concluded hackathon to conduct my experiment.
I was blown away by the results of this experiment. Feel free to try out my code, which I published on GitHub, for yourself.
Before I share the details of this experiment, here are the key takeaways I had from this exercise:
Let us now look at the details of the experiment.
We need to implement the following tests – a version check, a filter check, and a product details check – against https://demo.applitools.com/tlcHackathonMasterV1.html
For this automation, I chose to use Selenium-Java for automation with Gradle as a build tool.
The code used for this exercise is available here: https://github.com/anandbagmar/visualAssertions
Once I had spent time understanding the functionality of the application, I was quickly able to automate the above-mentioned tests.
Here is some data from that exercise.
Refer to HolidayShoppingWithSeTest.java
Activity | Data (Time / LOC / etc.) |
---|---|
Time taken to understand the application and expected tests | 30 min |
Time taken to implement the tests | 90 min |
Number of tests automated | 3 |
Lines of code (actual test method code) | 65 lines |
Number of locators used | 23 |
Test execution time: Part 1: Chrome browser | 32 sec |
Test execution time: Part 2: Chrome browser | 57 sec |
Test execution time: Part 3: Chrome browser | 29 sec |
Test execution time: Part 3: Firefox browser | 65 sec |
Test execution time: Part 3: Safari browser | 35 sec |
A few interesting observations from this test execution:
When I added Applitools Visual AI to the functional automation already created in Step 1, the data was very interesting.
Refer to HolidayShoppingWithEyesTest.java
Activity | Data (Time / LOC / etc.) |
---|---|
Time taken to add Visual Assertions to existing Selenium tests | 10 min |
Number of tests automated | 3 |
Lines of code (actual test method code) | 7 lines |
Number of locators used | 3 |
Test execution time: Part 1: Chrome browser | 81 sec (test execution) + 38 sec (Applitools processing) |
Test execution time: Part 2: Chrome browser | 92 sec (test execution) + 42 sec (Applitools processing) |
Test execution time: Part 3 (using Applitools Ultrafast Test Cloud): Chrome + Firefox + Safari + Edge + iPhone X | 125 sec (test execution) + 65 sec (Applitools processing) |
Here are the observations from this test execution:
See the examples below of the kinds of validations that were reported by Applitools:
Version Check – Test 1:
Filter Check – Test 2:
Product Details – Test 3:
Lastly, an activity I thoroughly enjoyed in Step 2 was deleting code that had become irrelevant once Visual Assertions were in place – a before-and-after sketch of this follows.
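To illustrate, here is a minimal JavaScript sketch of that collapse (the experiment’s own code is Selenium-Java – see the repository linked above – and the locators and expected values here are hypothetical stand-ins):

```js
// A sketch of replacing element-by-element assertions with one visual check
const { Builder, By } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');
const assert = require('assert');

async function run() {
  const driver = await new Builder().forBrowser('chrome').build();
  await driver.get('https://demo.applitools.com/tlcHackathonMasterV1.html');

  // Before: functional assertions, each with its own locator to maintain
  // (hypothetical stand-ins for the 23 locators used in the Java tests)
  assert.ok(await driver.findElement(By.css('.filter-panel')).isDisplayed());
  assert.strictEqual((await driver.findElements(By.css('.product-card'))).length, 9);
  // ...and a dozen more like these, deleted in Step 2

  // After: a single full-page visual assertion replaces the block above
  const eyes = new Eyes();
  await eyes.open(driver, 'Holiday Shopping', 'Home page');
  await eyes.check('Home page', Target.window().fully());
  await eyes.close();

  await driver.quit();
}

run().catch(console.error);
```

One check validates the whole screen, which is why the locator count drops from 23 to 3 and the test method code from 65 lines to 7.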
To conclude, the experiment made it clear – Visual Assertions are not hype. The table below summarizes the differences between the two approaches discussed earlier in the post.
Activity | Pure Functional Testing | Using Applitools Visual Assertions |
---|---|---|
Number of tests automated | 3 | 3 |
Time taken to implement tests | 90 min (implement + add relevant assertions) | – |
Time taken to add Visual Assertions to existing Selenium tests | – | 10 min (includes deleting the assertions and locators that became irrelevant) |
Lines of code (actual test method code) | 65 lines | 7 lines |
Number of locators used | 23 | 3 |
Number of assertions in test implementation | 16 (validates only specific behavior; the first failing assertion stops the test, so the remaining assertions never execute) | 3 (1 per test; validates the full screen and captures all regressions and new changes in 1 validation) |
Test execution time: Chrome + Firefox + Safari browsers | 129 sec (for 3 browsers) | – |
Test execution time (using Applitools Ultrafast Test Cloud): Chrome + Firefox + Safari + Edge + iPhone X | – | 125 sec (test execution) + 65 sec (Applitools processing) (for 4 browsers + 1 device) |
As the table shows, Visual Assertions help in the following ways: far less test code and far fewer locators to write and maintain, full-screen validation that captures every regression in a single check, and fast, scalable execution across browsers and devices.
You can get started with Visual Testing by registering for a free account here. Also, you can take this course from the Test Automation University on “Automated Visual Testing: A Fast Path To Test Automation Success”
The post Visual Assertions – Hype or Reality? appeared first on Automated Visual Testing | Applitools.
]]>Why Learn Modern Cross Browser Testing?
100 Cross Browser Testing Hackathon Winners Share the Answer.
Today, we celebrate the over 2,200 engineers who participated in the Applitools Ultrafast Cross Browser Hackathon. To complete this task, engineers needed to create their own cross-browser test environment using the legacy multi-client, repetitive test approach. Then, they ran modern cross browser tests using the Applitools Ultrafast Grid, which required just a single test run that Applitools re-rendered on the different clients and viewports specified by the engineers.
Participants discovered what you can discover as well:
Applitools Ultrafast Grid changes your approach from, “How do I justify an investment in cross-browser testing?” to “Why shouldn’t I be running cross-browser tests?”
Of the 2,200 participants, we are pleased to announce 100 winners. These engineers provided the best, most comprehensive responses to each of the challenges that made up the Hackathon.
Before we go forward, let’s celebrate the winners. Here is the table of the top prize winners:
Each of these engineers provided a high-quality effort across the hackathon tests. They demonstrated that they understood how to run both legacy and modern cross-browser tests successfully.
Collectively the 2,200 engineers provided 1,600 hours of engineering data as part of their experience with the Ultrafast Grid Hackathon. Over the coming weeks we will be sharing conclusions about modern cross-browser testing based on their experiences.
At its core, cross-browser testing guards against client-specific failures.
Let’s say you write your application code and compile it to run in containers on a cloud-based service. For your end-to-end tests, you use Chrome on Windows. You write your end-to-end browser test automation using Cypress (or Selenium, etc.). You validate for the viewport size of your display. What happens if that is all you test?
A lot depends on your application. If you have a responsive application, how do you ensure that it resizes properly around specific viewport breakpoints? If your customers use mobile devices, have you validated the application on those devices? But if HTML, CSS, and JavaScript are standards, who needs cross-browser testing?
Until Applitools Ultrafast Grid, that question defined the approach organizations took to cross-browser testing. Some organizations ran cross-browser tests. Others avoided them.
If you have thought about cross-browser testing, you know that most quality teams assumed cross-browser infrastructure was expensive. If asked, most engineers would cite the cost and complexity of setting up a multi-client and mobile device lab, the effort to define and maintain cross-browser test software, and the tools to measure application behavior across multiple devices.
When you look back on how quality teams approached cross-browser testing, most avoided it. Given the assumed expense, teams needed justification to run cross-browser tests. They approached the problem like insurance: if the expected loss exceeded the cost of cross-browser testing, they did it. Otherwise, no.
Even when companies provided the hardware and infrastructure as a cross-browser testing service, the costs still ran high enough that most organizations skipped cross-browser testing.
Some of our first customers recognized that Applitools Visual AI provides huge productivity gains for cross-browser tests. Some of our customers used popular third-party services for cross-browser infrastructure. All the companies that ran cross-browser tests did have significant risk associated with an application failure. Some had experienced losses associated with browser-specific failures.
We had helped our customers use Applitools to validate the visual output of cross-browser tests. We even worked with some of the popular third-party services that let teams run cross-browser tests without having to install or maintain an on-premises cross-browser lab.
Our experience with cross-browser testing gave us several key insights.
First, we rarely saw applications that had been coded separately for different clients. The vast majority of applications depended on HTML, CSS and JavaScript as standards for user interface. No matter which client ran the tests, the servers responded with the same code. So, each browser at a given step in the test had the same DOM.
Second, if differences arose in cross-browser tests, they were visual differences. Often, they were rendering differences – either due to the OS, browser, or for a given viewport size. But, they were clearly differences that could affect usability and/or user experience.
This led us to realize that organizations were trying to uncover visual behavior differences for a common server response. Instead of running the server multiple times, why not grab the DOM state on one browser and then duplicate the DOM state on every other browser? You need less server hardware. And you need less software – since you only need to automate a single browser.
Using these insights, we created Applitools Ultrafast Grid. For each visual test, we capture the DOM state and reload it on every other browser/OS/viewport combination we wish to test. We use cloud-based clients, but they do not need to access the server to generate test results. All we need to do is reload the server response on those cloud-based clients.
Ultrafast Grid provides a cloud-based service with multiple virtual clients. As a user, you specify the browser and viewport size to test against as part of the test specification. Applitools captures a visual snapshot and a DOM snapshot at each point you tell it to make a capture in an end-to-end, functional, or visual test. Applitools then applies the captured DOM on each target client and captures the visual output. This approach requires fewer resources and increases flexibility.
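As a sketch of what that test specification can look like, the JavaScript Eyes SDK lets you list target browsers, viewports, and devices in a configuration object; the test then executes once, and each target is rendered from the captured DOM. The option and enum names below reflect common usage of @applitools/eyes-selenium and should be verified against the current documentation:

```js
// A sketch of specifying Ultrafast Grid targets with the JavaScript Eyes SDK
const {
  Eyes,
  VisualGridRunner,
  Configuration,
  BrowserType,
  DeviceName,
} = require('@applitools/eyes-selenium');

const runner = new VisualGridRunner(10); // up to 10 renders in parallel
const eyes = new Eyes(runner);

const config = new Configuration();
// The test runs once; each target below is rendered from the same captured DOM
config.addBrowser(1280, 800, BrowserType.CHROME);
config.addBrowser(1280, 800, BrowserType.FIREFOX);
config.addBrowser(1280, 800, BrowserType.SAFARI);
config.addDeviceEmulation(DeviceName.iPhone_X); // mobile emulation
eyes.setConfiguration(config);
```

Every eyes.check() call in the test then produces results for all the targets without re-running the test itself.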
This infrastructure provides huge savings for anyone used to a traditional approach to cross-browser testing. And, Applitools is by far the most accurate visual testing solution, meaning we are the right solution for measuring cross-browser differences.
You might also be using a flexible but limited test framework. For example, Cypress.io has been a Chrome-only JavaScript browser driver. Would you rewrite tests in Selenium to run them on Firefox, Safari, or Android? No way.
We knew that many organizations might benefit from a low-cost, highly-accurate cross-browser testing solution. If cost had held people back from trying cross-browser testing, a low-cost, easy-to-deploy, accurate cross-browser solution might succeed. But how could we get the attention of organizations that had avoided cross-browser testing because their risks could not justify the costs?
We came up with the idea of a contest – the Ultrafast Grid Hackathon. This is our second Hackathon. In the first, the Applitools Visual AI Rockstar Hackathon, we challenged engineers who used assertion code to validate their functional tests to use Applitools Visual AI for the assertion instead. The empirical data we uncovered from our first Hackathon made it clear to participants that using Applitools increased test coverage even as it reduced coding time and code maintenance effort.
We hoped to upskill a similar set of engineers by getting them to learn Ultrafast Grid with a hackathon. So, we announced the Applitools Ultrafast Grid Hackathon. Today, we announced the winners. Shortly, we will share some of the empirical data and lessons gleaned from the experiences of hackathon participants.
These participants are engineers just like you. We think you will find their experiences insightful.
Here are two of the insights.
“The efforts to implement a comprehensive strategy using traditional approaches are astronomical. Applitools has TOTALLY changed the game with the Ultrafast Grid. What took me days of work with other approaches, only took minutes with the Ultrafast Grid! Not only was it easier, it’s smarter, faster, and provides more coverage than any other solution out there. I’ll be recommending the Ultrafast Grid to all of the clients I work with from now on.” – Oluseun Olugbenga Orebajo, Lead Test Practitioner at Fujitsu
“It was a wonderful experience which was challenging in multiple aspects and offered a great opportunity to learn cross browser visual testing. It’s really astounding to realize the coding time and effort that can be saved. Hands down, Applitools Ultrafast Grid is the tool to go for when making the shift to modern cross environment testing. Cheers to the team that made this event possible.” – Tarun Narula, Technical Test Manager at Naukri.com
Look out for more insights and empirical data about the Applitools Ultrafast Grid Hackathon. And, think about how running cross-browser tests could help you validate your application and reduce some support costs you might have been incurring because you couldn’t justify the cost of cross-browser testing. With Applitools Ultrafast Grid, adding an affordable cross-browser testing solution to your application test infrastructure just makes sense.
The post Why Learn Modern Cross Browser Testing? appeared first on Automated Visual Testing | Applitools.
]]>