We are excited to share some fantastic news with our valued customers and the broader testing community. Applitools has acquired Preflight, a pioneering no-code platform that streamlines the creation, execution, and management of complex end-to-end tests. This acquisition marks a significant step in our journey to provide you with breakthrough technology that empowers your teams to increase test coverage, reduce test execution time, and deliver superior applications that your customers will love.
Preflight is a robust no-code testing tool that empowers teams of all skill levels to automate complex testing scenarios. It runs these tests at an impressive scale across various browsers and screen sizes. Preflight’s user-friendly web recorder captures every element accurately and includes a data generator to simulate even the most complex test cases. This is a game-changer for manual testers, QA engineers, and product teams as it empowers them to automate test scenarios regardless of their skillset, effectively multiplying their QA abilities instantly.
Preflight ensures businesses achieve the test coverage necessary to consistently delight customers with each new experience, all without writing a single line of code.
Simplified Test Creation: With Preflight, anyone on the team can create and run tests, democratizing the testing process. This inclusivity leads to more thorough testing and faster feedback cycles.
Expanded Test Coverage: Preflight enables teams to create comprehensive test suites that cover more functionality in less time. It can easily create UI tests, API tests, verify emails during sign-up, generate synthetic data, and more. This means teams can test more scenarios and edge cases that may have been overlooked with manual testing or traditional automated testing.
Enhanced Maintainability and Reusability: Preflight allows customers to reuse sections of test suites, workflows, login profiles, data, and more across different tests, reducing redundancy. It also simplifies test maintenance with a powerful test editor and live test replay that makes editing tests fast and intuitive, reducing one of the biggest gripes of record-and-replay tools.
While Preflight will continue to be available as a standalone product, we are actively integrating it into the Applitools platform to bring Visual AI to the masses! To get an exclusive first look at Preflight today, we invite you to sign up for a demo with one of our engineers.
The post Welcome Preflight To The Applitools Family appeared first on Automated Visual Testing | Applitools.
Introducing Applitools Execution Cloud: The World’s First Self-Healing Test Infrastructure for Open-Source Test Frameworks
We are excited to announce the launch of Applitools Execution Cloud, a revolutionary self-healing, cloud-based testing platform that enables teams to run their existing tests against AI-powered testing infrastructure. This new addition to the Applitools Ultrafast Test platform is designed to give teams that use open-source frameworks like Selenium or WebDriver.io best-in-class AI capabilities, such as self-healing, that are currently available only in proprietary tools.
For years, Applitools Eyes has brought Visual AI to the validation portion of tests, helping engineers reduce assertion code while boosting test coverage. While Eyes has continued to grow as the industry leader in AI validation, we have also been working closely with our customers to solve problems in the other portion of testing: interaction.
Test flakiness most often occurs during the interaction phase of tests – more specifically, when a test navigates using a locator that has changed for some reason. This can be due to dynamic class or ID generation on certain builds, or simply changes the dev team made to the framework. Either situation can wreak havoc on otherwise sound tests.
With Execution Cloud, teams can run tests at infinite scale in parallel while broken tests heal themselves as they run, reducing flakiness and execution time. Small UI changes – to text, color, or layout – that would normally fail a Selenium test can now be healed automatically.
The platform also supports testing at extreme scale, letting teams run tests in the cloud in parallel for faster CI/CD pipelines.
And remember: with Execution Cloud, teams can easily run both functional and visual tests, as well as any Selenium and WebDriver.io tests, using any binding language.
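To make that concrete, here is a minimal sketch of what pointing an existing Selenium test at Execution Cloud can look like with the JavaScript Eyes SDK. It assumes the SDK exposes an Eyes.getExecutionCloudUrl() helper and that the APPLITOOLS_API_KEY environment variable is set; check the SDK docs for your version and language, and treat the demo URL as a placeholder.

```typescript
// Sketch only: assumes @applitools/eyes-selenium exposes Eyes.getExecutionCloudUrl()
// and that the APPLITOOLS_API_KEY environment variable is set.
import { Builder } from 'selenium-webdriver';
import { Eyes, Target } from '@applitools/eyes-selenium';

async function checkoutSmokeTest(): Promise<void> {
  // Ask the SDK for the self-healing Execution Cloud endpoint...
  const cloudUrl = await Eyes.getExecutionCloudUrl();

  // ...and point an ordinary remote Selenium session at it.
  const driver = await new Builder()
    .forBrowser('chrome')
    .usingServer(cloudUrl)
    .build();

  const eyes = new Eyes();
  try {
    await eyes.open(driver, 'Demo Shop', 'Checkout flow');
    await driver.get('https://demo-shop.example.com/checkout'); // placeholder URL
    await eyes.check('Checkout page', Target.window().fully());
    await eyes.close();
  } finally {
    await driver.quit();
  }
}

checkoutSmokeTest();
```

The point is that the test body does not change: only the remote server the session talks to is swapped for the Execution Cloud endpoint.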
The platform also features implicit waits, which automatically wait for all critical elements to load before running the next step, drastically reducing test flakiness. Furthermore, teams can access test logs, including video, command logs, and browser console logs, to help debug faster.
Unlike its competitors, Execution Cloud is the world’s first intelligent test infrastructure for running open-source test frameworks. It is not locked in with a specific test creation tool, and it operates on a pay-as-you-go model that is cost-effective for developers and test engineers. Additionally, Execution Cloud is designed with AI capabilities, which its open-source competitors lack, making it the smart choice for teams that want to accelerate their product delivery speed and improve testing resilience.
Overall, Applitools Execution Cloud offers a complete testing solution that helps teams improve their testing process and accelerate their product delivery speed. Run faster, more resilient Selenium tests with the Applitools Self-Healing Execution Cloud. Try it today and see the difference for yourself!
Learn more about Applitools Execution Cloud in our upcoming webinar.
The post Introducing Applitools Execution Cloud Self-Healing Test Infrastructure appeared first on Automated Visual Testing | Applitools.
The user interface (UI) is the last frontier of differentiation for companies of all sizes. When you think about financial institutions, a lot of the services that they offer digitally are exactly the same. A lot of the services and the data they all tap into have been commoditized. What hasn’t been commoditized is the actual digital online experience – what it looks like and how you complete actions.
“Examined at an organizational level, a mature design thinking practice can achieve an ROI between 71% and 107%, based on a consistent series of inputs and outputs.”
The ROI Of Design Thinking, Forrester Business Case Report
Modern UIs today are built by a diverse set of teams that work together at different parts of the process. The pace at which these design, development, QA, operations, marketing, and product teams ship their work is continuing to accelerate – creating new challenges around communication, collaboration, and validation across the workflow.
Getting from design mock-ups in Figma to live UI is a process that includes a lot of feedback and testing. It starts with the designer, who passes the design to the product manager for approval before the developer can start building. Feedback during development means rework to make those updates before the product manager can approve again. This is all before the testing team has even started their review.
You can see the game of telephone that’s played through different stakeholders on the way to production, and we end up with something that’s slightly different at multiple levels. This makes measuring what actually happened and what actually needs to change incredibly hard, putting a huge burden on teams trying to ship clean UIs at a fast pace. Some of our main challenges here are:
Applitools’ newest product, Centra, is a collaboration platform for teams of all sizes that alleviates these challenges. Applitools Centra enables organizations to track, validate, and collaborate on UIs from design to production. Centra uploads application designs from tools like Figma to the Applitools Test Cloud. Then, Centra compares the designs against current baselines in local, staging, or production environments. Designers, developers, testers, and digital leaders then validate that their application interface looks exactly as it was intended.
Check out the full demo of Centra in our announcement webinar. Centra is free for teams to use, and you can sign up for the waitlist to start using it with your team.
The post Announcing Applitools Centra: UI Validation From Design to Implementation appeared first on Automated Visual Testing | Applitools.
Last month, we hosted a webinar about ensuring a reliable digital eCommerce experience for the holiday season. This blog post will expand upon what we talked about in the webinar. In case you missed it, the webinar is available on-demand.
In this blog, we’ll talk about:
Traditional testing methods don’t always test your eCommerce applications the same way your customers shop. Properly capturing user scenarios based on how customers behave in your app is challenging, and writing stable tests that cover these scenarios at scale is tedious and can slow releases.
There’s a lot of data involved in eCommerce apps – product data, user data, app state, and so many different combinations of this data for different users as they shop in your virtual store. All of this data creates a lot of different potential user paths and testing scenarios to cover the different combinations of states based on product information and buyer information.
Shoppers may be on different browsers, on desktop or different mobile devices, or even using them in tandem to find the product they want – finding the product on their mobile device but completing the purchase on their laptop. A truly omnichannel experience requires that your app looks good and works on every combination of these screen sizes and operating systems.
Experimenting with the look and feel of your app can increase conversions and purchases. This puts a lot of pressure on QA to ensure the A/B experiments are properly tested, but these experiments often have complex logic of their own. To test an A/B experiment, the test has to mimic that complex experiment logic and understand the context of the situation. Your team ends up doing the work of a permanent test case for an experiment that may be fleeting and won’t stay in production long if it fails.
There are a lot of global markets and vendors that sell online in many different countries across many different languages. Testing the languages supported is not only a requirement but also a challenge. Building the test cases and resourcing the right people who can test the implemented languages in context takes a lot of time and specific skill sets. To multiply the challenge, each time we add a language, we need to create and maintain new tests across browsers.
Your product or marketing teams may be updating or adding content that enters the app outside of the development process through a CMS, or analysts may be entering information about a new product into an ERP system. Something like a new headline or a new discount percentage on a product may truncate on a smaller screen, going from one line to two lines, which affects spacing elsewhere on the screen or even overlaps other elements. You’ll need a set of tests that can be flexibly kicked off and monitored against the small pieces of content that may change without triggering a CI/CD build.
Customer-generated content like reviews and product photos is not going to be in your development pipeline. With post-production content, you may have a style guide or requirements for copy and images, but customers aren’t adhering to a style guide. Ensuring that this content appears correctly within your templates is important, and setting character limits and image resolution requirements can keep it more consistent.
The traditional way to test frontend functionality includes hundreds of assertions of what the app should do in what we deem are the most important aspects of the experience – like product labels, buy-now buttons, or add-to-cart buttons. Even just testing the priority aspects of your app with these assertions creates technical debt, as things change and tests need to be maintained.
We are big believers in doing testing in a layered approach to provide proper coverage of your app. No one way of testing can cover everything. Here are some common methods to include in frontend testing your eCommerce app or website.
Component testing is essentially atomic unit testing of the components of your frontend. Due to the repeated nature of eCommerce apps – with elements being reused across product pages and category pages – testing the components of a design system can save a lot of time over checking assertions for individual elements. These components still lack the context of the entire application, though, which leaves room for errors when they come together in production.
There’s a lot you can do from a validation standpoint, both locally and in test environments, to continually make sure that all of your products are showing up. Smoke testing consists of fast tests that validate that the app runs without failure, at the cost of missing bugs that aren’t mission critical. This can include simple crawling of URLs to make sure they don’t return an error, or checking that uniquely generated query parameters are created as expected. These tests are the fastest way to get quick uptime coverage for multi-page eCommerce sites.
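As a rough illustration of that kind of uptime check – the base URL and paths below are placeholders for your own routes – a small script can request each page and fail on any error status (this sketch uses the fetch API built into Node 18+):

```typescript
// Minimal URL smoke check. The base URL and paths are placeholders.
const BASE_URL = 'https://shop.example.com';
const PATHS = ['/', '/category/shoes', '/product/running-shoe-42', '/cart'];

async function smokeCrawl(): Promise<void> {
  const failures: string[] = [];
  for (const path of PATHS) {
    const response = await fetch(`${BASE_URL}${path}`, { redirect: 'follow' });
    // Any 4xx/5xx status means the page isn't serving correctly.
    if (!response.ok) {
      failures.push(`${path} -> HTTP ${response.status}`);
    }
  }
  if (failures.length > 0) {
    throw new Error(`Smoke check failed:\n${failures.join('\n')}`);
  }
  console.log(`All ${PATHS.length} pages responded OK`);
}

smokeCrawl();
```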
A common final approach in testing is end-to-end testing around specific scenarios, but even those have challenges. You have to experience end-to-end scenarios to understand them, which means that you’re only able to test for what you know. There may be common bugs that come up in customer tickets or problem areas that you see a lot in QA tickets during your sprints. We can only run so many scenario tests, so you need to take time to establish testing priorities.
When it comes to prioritizing what to test, it can help to follow the Pareto principle, where 80% of the consequences come from 20% of the causes. In other words, you want to focus testing on the parts of your eCommerce experience with the most impact. By this principle, around 20% of your templates are going to drive 80% of your actual revenue, so it’s really important to prioritize that most important 20% of your app or website.
For eCommerce, the most important parts of the app revolve around things like the:
You start to see that you’re testing a lot of the same things. You just need to make sure you’re either using data-driven testing to get the most scenarios, or cross-browser testing the most important parts across all the experiences an end user is getting. Visual testing can combine all those aspects – templated pages, localization, cross-browser testing – at the same time in a much faster way.
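For instance, a single product-page flow can be driven by a small table of test data so the same test exercises many products. The URLs, selectors, and product names in this sketch are hypothetical placeholders:

```typescript
// Data-driven sketch: one flow, many products. URLs, selectors, and data are hypothetical.
import { Builder, By, until } from 'selenium-webdriver';

const PRODUCTS = [
  { slug: 'running-shoe-42', expectedName: 'Running Shoe' },
  { slug: 'rain-jacket-m', expectedName: 'Rain Jacket' },
  { slug: 'gift-card-50', expectedName: 'Gift Card' },
];

async function checkProductPages(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    for (const product of PRODUCTS) {
      await driver.get(`https://shop.example.com/product/${product.slug}`);
      // Wait for the product title and verify it matches the expected data.
      const title = await driver.wait(
        until.elementLocated(By.css('h1.product-name')), 5000);
      const text = await title.getText();
      if (!text.includes(product.expectedName)) {
        throw new Error(`Unexpected title on ${product.slug}: "${text}"`);
      }
    }
  } finally {
    await driver.quit();
  }
}

checkProductPages();
```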
With eCommerce being more accessible than ever, more people are shopping online, which brings in different kinds of user behavior. We’ve put together a few different shopper personas that cover some of this user behavior. These personas help us research and design test cases that better match how customers interact with an eCommerce app.
These shoppers browse items and pages before purchasing. You can potentially see this trend of users through the pages per visit. These shoppers use categories and filters to find items. They may have multiple tabs of their browser open to different pages of your site or web app.
These shoppers are brought in from social ads, paid media, or nurture emails. You can potentially see this trend of users who come in with referral cookies or through tracked links. These shoppers interact with paid ads in your app, and they find other products on your site through internal ads. They give product ratings and write reviews for other shoppers.
These shoppers take their time making their decision. They may take days between first viewing a product and purchasing, revisiting the site or app multiple times. Those return visits may even happen on different devices and browsers.
Visual testing captures screenshots of your app at various stages and compares them to known baselines to catch unexpected visual changes. Adding this approach to your testing pipeline speeds up your frontend testing by giving you a way to test entire screens at a time, instead of individual components with individual assertions.
While visual testing allows us to cover aspects of functional testing with less code, using visual AI will unlock even more test automation capabilities like:
To learn more about visual testing, read our Enhance your testing strategy with visual testing two-pager. Now that we’ve covered different methods of testing frontends and the kinds of shoppers to keep in mind, let’s look at how Applitools can help.
Applitools is a test automation platform that uses AI to help teams ship flawless digital experiences without the hassle of traditional testing practices.
With Applitools Eyes and our next-gen testing cloud, Ultrafast Grid, developers and QA engineers can run tests to quickly validate frontend functionality, accessibility, and visual correctness with unprecedented speed and accuracy.
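As a rough sketch of what that looks like in code with the JavaScript SDK – the site URL is a placeholder, the exact class names may differ by SDK version, and an APPLITOOLS_API_KEY environment variable is assumed – one visual test can be rendered across several browsers and a mobile viewport at once:

```typescript
// Sketch: one visual test rendered across several environments via the Ultrafast Grid.
// Assumes @applitools/eyes-selenium and an APPLITOOLS_API_KEY environment variable.
import { Builder } from 'selenium-webdriver';
import {
  Eyes, Target, VisualGridRunner, Configuration,
  BrowserType, DeviceName, ScreenOrientation,
} from '@applitools/eyes-selenium';

async function crossBrowserHomePageTest(): Promise<void> {
  const runner = new VisualGridRunner(5); // render up to 5 environments in parallel
  const eyes = new Eyes(runner);

  const config = new Configuration();
  config.addBrowser(1280, 800, BrowserType.CHROME);
  config.addBrowser(1280, 800, BrowserType.FIREFOX);
  config.addBrowser(1280, 800, BrowserType.SAFARI);
  config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
  eyes.setConfiguration(config);

  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await eyes.open(driver, 'Demo Shop', 'Home page renders everywhere');
    await driver.get('https://shop.example.com'); // placeholder URL
    await eyes.check('Home page', Target.window().fully());
    await eyes.close();
  } finally {
    await driver.quit();
  }

  // Summarize the results for every browser/device combination.
  const allResults = await runner.getAllTestResults();
  console.log(allResults.toString());
}

crossBrowserHomePageTest();
```

The test itself runs once locally; the grid takes care of rendering and checking the page against the baseline for each configured environment.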
Applitools enables you to test for dynamic content, but also unlocks a variety of AI-powered visual testing capabilities:
The live demo in the webinar covered visual regression testing, accessibility testing, and multi-baseline testing. To check out the live demo of using Applitools Eyes to test an eCommerce site, you can view the on-demand webinar. If you’re ready to try it yourself, you can create a free account or reach out to our sales team. Happy testing!
The post Ensuring a Reliable Digital Shopping Experience appeared first on Automated Visual Testing | Applitools.
As teams get bigger and mature their testing strategy alongside the needs of the business, new challenges often arise in their process. One of those challenges is that analyzing and maintaining tests and their results at scale can be incredibly cumbersome and time-consuming.
While a lot of emphasis gets put on creating new tests and reducing the time it takes to run them across different environments, there doesn’t seem to be the same emphasis on dealing with their results and repercussions.
Let’s say you have a test that validates a checkout experience and you want to expand that testing to the top 10 browsers. Just two bugs along that test scenario would produce 20 errors that need to be analyzed and then actioned on. This entire back and forth can become untenable in the rapid CI/CD environments present in many businesses. We basically have to choose to ignore our test results at this point if we want to get anything productive done.
This is where Auto-Grouping and Auto-Maintenance from Applitools come in: they let AI quickly and accurately assess results just as an army of testers would!
Applitools Auto-Grouping helps group together similar bugs that occur in different environments like browsers, devices, and screen sizes. Applitools even allows you to group these bugs across entire test runs, test steps, or specific environments, letting you really fine-tune your automation.
In the above scenario, let’s assume we found 2 bugs across our 10 browsers for a total of 20 errors. When we enable Auto-Grouping, those errors are grouped together and presented as only 2 bugs – making it much easier to analyze what is actually going wrong in our interface and cutting down on chasing repeat bugs.
Auto-Maintenance builds on Auto-Grouping by automating the process of updating tests based on their test results. Auto-Maintenance also enables users to set granular controls over what gets updated automatically between checkpoints, test runs, and more.
Again, taking the above example, if we accepted a new baseline on one browser, we’d have to accept it on the other 9 browsers manually – taking up a ton of time. When a new baseline is accepted, Auto-Maintenance can apply that acceptance across all similar environments, saving you hours of updating tests to accommodate those new baselines.
Jamie Whitehouse and everyone on the development team at Sonatype spent time on each release working to uncover and address new failures and bugs across different browsers. Often, this work took the form of spot checks across the 1,000+ pages of the application during development. In reality, this work, and the inherent risk of unintended changes, slowed the delivery of the product to market.
Now, if Sonatype engineers make a change in their margins across a number of pages, all the differences show up as highlights in Applitools. Features in Applitools like Auto-Maintenance make visual validation a time saver. Auto-Maintenance lets engineers quickly accept identical changes across a number of pages – leaving only the unanticipated differences. As Jamie says, Applitools takes the guesswork out of testing the rendered pages.
To get started with automatically maintaining and analyzing your tests, you can check out our documentation here.
You’ll need a free account, so be sure to sign up for Applitools.
The post How Applitools Eyes Uses AI To Automate Test Analysis and Maintenance at Scale appeared first on Automated Visual Testing | Applitools.
We’re excited to announce the latest release of Applitools Eyes, which comes with a number of new enhancements that our customers have been asking for. Applitools Eyes 10.15 is now available and can be accessed in the dashboard.
Applitools now supports monorepos for all major Git providers, allowing teams to add Visual AI to large, complex, single code repositories by using tags and PR titles to separate teams and logic inside Applitools. A monorepo is a popular method of repository organization for teams looking for maximum speed and collaboration across their codebase, but it can introduce complexity for the tools that work with the repo. Applitools now has the ability to granularly run and test sections of the repo as if they were separate.
Continuing our Git hot streak, Applitools now also supports integrating multiple GitHub organizations into a single Applitools team. Partners, agencies, or large organizations that split work across separate GitHub organizations can now work on multiple projects with one Applitools account.
When using coded regions based on an element identifier, Applitools Eyes 10.15 can now adjust the region automatically and make sure it covers the most up-to-date element dimensions. This ignores irrelevant diffs and saves more of your time!
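For context, a coded region is usually declared right in the check call against an element identifier. Here is a small fluent-API sketch – the selector is hypothetical, and the exact matcher names may vary by SDK version, so check the docs for your SDK:

```typescript
// Sketch: a coded ignore region tied to an element identifier (hypothetical selector).
import { By } from 'selenium-webdriver';
import { Target } from '@applitools/eyes-selenium';

// Eyes 10.15 keeps this region aligned with the element's current dimensions,
// so a promo banner that grows or shrinks no longer produces irrelevant diffs.
const homePageCheck = Target.window()
  .fully()
  .ignoreRegions(By.css('#promo-banner'));

// Used later in a test: await eyes.check('Home page', homePageCheck);
```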
The Applitools REST API has a few new endpoints that enable teams to interact with Applitools at scale. In Applitools Eyes 10.15 we’ve added the ability to validate API keys and edit batches programmatically.
The post Introducing Monorepo Support, New APIs, and more with Applitools 10.15 appeared first on Automated Visual Testing | Applitools.
Summer is a time for new things and a time for play. We’re excited to announce that the Applitools Storybook SDK now supports Play Functions, giving modern frontend teams even more power when it comes to testing their component systems before production. Play Functions enable rich functionality in your Storybook stories, letting you interact with components and test scenarios that previously required user intervention. This means testing interactions such as form fills or date pickers in your component system is now possible! This capability was made available in Storybook version 6.4 and is now supported in Applitools Storybook SDK version 3.28.
Applitools Storybook SDK can now consume these interactions through Play Functions and apply Visual AI to help your team spot any visual regression or defect in a component. For stories that have the play function, Applitools will automatically take a screenshot after the play function is finished.
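If you haven’t used Play Functions yet, here is roughly what one looks like in a story file. The SignupForm component and its field labels are hypothetical, and the interaction helpers come from @storybook/testing-library:

```typescript
// SignupForm.stories.tsx — SignupForm and its field labels are hypothetical.
import React from 'react';
import { ComponentMeta, ComponentStory } from '@storybook/react';
import { within, userEvent } from '@storybook/testing-library';
import { SignupForm } from './SignupForm';

export default {
  title: 'Forms/SignupForm',
  component: SignupForm,
} as ComponentMeta<typeof SignupForm>;

const Template: ComponentStory<typeof SignupForm> = (args) => <SignupForm {...args} />;

// The play function runs after the story renders, driving the same interactions a
// user would. Applitools captures its screenshot once the play function finishes.
export const FilledOut = Template.bind({});
FilledOut.play = async ({ canvasElement }) => {
  const canvas = within(canvasElement);
  await userEvent.type(canvas.getByLabelText('Email'), 'user@example.com');
  await userEvent.click(canvas.getByRole('button', { name: 'Sign up' }));
};
```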
To learn more about this specific feature, you can read our Storybook readme on NPM or the official Storybook Play Article.
Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code in Testing Storybook Components in Any Browser by Andrew Knight.
Happy testing!
The post Storybook Play Functions Now Supported in Applitools appeared first on Automated Visual Testing | Applitools.
Last year, Applitools launched the Ultrafast Grid, the next generation of browser testing clouds for faster testing across multiple browsers in parallel. The success of the new grid with our customer base has been nothing short of amazing, with over 200 customers using Ultrafast Grid in the last year. But our customers are hungry for more innovation, and we wanted to extend the Applitools Test Cloud to the next frontier: native mobile apps.
Today, Applitools is excited to announce that the Native Mobile Grid is now ready for general availability – giving companies’ engineering and QA teams access to the next generation of cross-device testing.
For those developing native mobile apps, there are often many challenges with testing across multiple devices and orientations, resulting in a high number of bugs slipping into production. Local devices are hard to set up, and owning a vast collection doesn’t work well for remote companies in a post-Covid world. Not to mention, each device takes a bit of custom configuration and wizardry to get running without flakiness. And mobile test frameworks are often flaky on the big cloud providers.
Applitools Native Mobile Grid is a cloud-based testing grid that allows testers and developers to automate testing of their mobile applications across different iOS and Android devices quickly, accurately, and without hassle. After running just one test locally, the Applitools Native Mobile Grid will asynchronously run the tests in parallel using Visual AI, speeding up total execution tremendously and reducing flakiness. We’ve seen test time reduced by over 80% compared to other popular testing clouds.
With access to over 40 devices, Applitools’ revolutionary async parallel test execution can reduce testing time by up to 90% compared to traditional device clouds while still expanding coverage beyond that single device you’ve been testing with.
Visual AI helps power Applitools’ industry-leading stability and reliability, with flakiness and false positives reduced by 99%.
Testing faster, on more devices, with Visual AI means that more bugs & defects are caught without having to write more tests.
The Native Mobile Grid does not need to open a tunnel into your network, so your application stays safe and secure.
To get started with Native Mobile Grid, just head on over and fill out this form.
The post Introducing Applitools Native Mobile Grid appeared first on Automated Visual Testing | Applitools.
On March 31st, Spring announced a critical vulnerability in the popular Spring MVC and Spring WebFlux frameworks for Java (now also known as “Spring4Shell”, CVE-2022-22965).
Security has always been a top priority for Applitools, and our engineers are fully aware of this recent RCE vulnerability, which affects numerous applications running on JDK 9+. Our security specialists immediately conducted a complete impact assessment and validated that, throughout our environment, neither Spring MVC nor Spring WebFlux is used or depended on by any services we run.
Therefore, Applitools services including the Eyes and Ultrafast Grid services are unaffected. Customers with on-premise installations of Applitools are also unaffected, and won’t need to upgrade or patch any components to address this particular vulnerability. Our security specialists are confident that Applitools products can continue to be safely used without exposure to the Spring4Shell RCE vulnerability.
Our engineers and security team continue to monitor emerging security vulnerabilities and threats and are ready for rapid response should any new vulnerabilities emerge in the future.
For more information about our security, read our security guarantees.
Running into issues with Applitools Eyes or Ultrafast Grid? Submit a request through our support page, and our team will get back to you.
Thanks and happy testing!
The Applitools Team
The post Spring4Shell Vulnerability does not affect Applitools Eyes appeared first on Automated Visual Testing | Applitools.
We are excited to announce the latest update to Applitools Eyes, 10.14. Applitools Eyes 10.14 ensures that users in large and small organizations are able to get their work done – and to do that as fast as possible. With this new release, you’ll be able to easily onboard new team members, manage your teams’ test results, and share them with all team members. We’ve also implemented several usability and accessibility enhancements to meet your organization’s compliance needs. We hope you’ll find these enhancements useful!
The ‘Assign test’ functionality has been extended and improved in Applitools Eyes 10.14 so that users can follow up on their assigned tests, not only on entire sessions. Users can now filter by assignee and efficiently manage their tests.
The “Reject” functionality has been updated so that when users reject a specific checkpoint image, Eyes will automatically mark that checkpoint image as rejected on subsequent test runs as well. This will reduce the amount of time needed for reviewing test results.
You can now export full Batch Results via our API, making it easy for teams to pull down large sets of test results from the Applitools Test Cloud to use in their own internal systems and workflows.
One of our most requested features: you can now filter test results and batches by who ran the test. A new “Run by” filter makes it easy to dive into a particular team member’s tests quickly.
Users who are new to Applitools Eyes or looking to brush up on their knowledge will benefit from a new video tours section in the learning center. This section includes short video tours for both new and advanced users.
To upgrade your version of Applitools Eyes, just log in to the Applitools Test Cloud, and you’ll be updated to the latest version, 10.14.
The post New Release: Applitools Eyes 10.14 appeared first on Automated Visual Testing | Applitools.