
Functional Testing’s New Friend: Applitools Execution Cloud
In the fast-paced and competitive landscape of software development, ensuring the quality of applications is of utmost importance. Functional testing plays a vital role in verifying the robustness and reliability of software products. As applications grow more complex, with ever-longer lists of use cases, and release cycles get shorter, organizations are challenged to conduct thorough functional testing across different platforms, devices, and screen resolutions.
This is where Applitools, a leading provider of functional testing solutions, becomes a must-have with its innovative offering: the Execution Cloud.
Applitools’ Execution Cloud is a game-changing platform that revolutionizes functional testing practices. By harnessing the power of cloud computing, the Execution Cloud eliminates the need for resource-heavy local infrastructure, providing organizations with enhanced efficiency, scalability, and reliability in their testing efforts. The cloud-based architecture integrates with existing testing frameworks and tools, empowering development teams to execute tests across various environments effortlessly.
This article explores how the Execution Cloud and its self-healing capabilities can be used to run our functional test coverage. We demonstrate the platform’s features, like automatically fixing selectors broken by a change in the production code.
The Applitools Execution Cloud is a great tool to enhance any team’s quality pipeline.
One of the main features of this cloud platform is that it can “self-heal” our tests using AI. For example, if, during refactoring or debugging, one of the web elements had its selectors changed and we forgot to update related test coverage, the Execution Cloud would automatically fix our tests. This cloud platform would use one of the previous runs to deduce another relevant selector and let our tests continue running.
This self-healing capability of the Execution Cloud allows us to focus on actual production issues without getting distracted by outdated tests.
It’s fair to say that Applitools has been one of the leading innovators and pioneers in visual testing with its Eyes platform. However, with the Execution Cloud in place, Applitools offers its users broader, more scalable test capabilities. This cloud platform lets us focus on all types of functional testing, including non-visual testing.
One of the best features of the Execution Cloud is that it’s effortless to integrate into any test case with just one line. There is also no requirement to use the Applitools Eyes framework. In other words, we can run any functional test without creating screenshots for visual validation while utilizing the self-healing capability of the Execution Cloud.
As we mentioned earlier, the Execution Cloud can be integrated with most test cases we already have in place! The only consideration is that, at the time of writing, the Execution Cloud only supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. However, more test frameworks will be supported in the near future.
Fortunately, Selenium is a highly used testing framework, giving us plenty of room to demonstrate the power of the Execution Cloud and functional testing.
Our demo application will be a documentation site built using the Vercel Documentation template. It’s a simple app that uses Next.js, a React framework created by Vercel, a cloud platform that lets us deploy web apps quickly and easily.
All the code for our version of the application is available here.
First, we need to clone the demo app’s repository:
git clone git@github.com:dmitryvinn/docs-demo-app.git
We will need Node.js version 10.13 or later to work with this demo app; Node.js can be installed by following the steps here.
After we set up Node.js, we should open a terminal, navigate into the project’s directory, and run the following command to install the necessary dependencies:
cd docs-demo-app
npm install
The next step is to start the app locally:
npm run dev
Now our demo app is accessible at ‘http://localhost:3000/’ and ready to be tested.
Docs Demo App
While the Execution Cloud allows us to run the tests against a local deployment, we will simulate the production use case by running our demo app on Vercel. The steps for deploying a basic app are very well outlined here, so we won’t spend time reviewing them.
After we deploy our demo app, it will appear as running on the Vercel Dashboard:
Demo App Deployed on Vercel
Now, we can write our tests for a production URL of our demo application available at `https://docs-demo-app.vercel.app/`.
Execution Cloud offers great flexibility when it comes to working with our tests. Rather than rewriting our test suites to run against this self-healing cloud platform, we simply need to update a few lines of code in the setup part of our tests.
For this article, our test case will validate navigating to a specific page and pressing a counter button.
To make our work even more effortless, Applitools offers a great set of quickstart examples that were recently updated to support the Execution Cloud. We will start with one of these samples using JavaScript with Selenium WebDriver and Jest as our baseline.
We can use any Integrated Development Environment (IDE) to write tests, such as IntelliJ IDEA or Visual Studio Code. Since we use JavaScript as our programming language, we will rely on npm as our build system and test runner.
Our tests will use Jest as the primary testing framework, so we must add a configuration file called `jest.config.js`. We can copy-paste a basic setup from here, but in its shortest form, the required configuration is the following.
module.exports = {
  clearMocks: true,
  coverageProvider: "v8",
};
Our tests will require a `package.json` file that includes the Jest, Selenium WebDriver, and Applitools packages. The dependencies section of our `package.json` file should eventually look like the one below:
"dependencies": {
"@applitools/eyes-selenium": "^4.66.0",
"jest": "^29.5.0",
"selenium-webdriver": "^4.9.2"
},
After we install the above dependencies, we are ready to write and execute our tests.
Since we are running a purely functional Applitools test with Eyes disabled (meaning there is no visual component), we need to initialize the test and wrap it up properly ourselves.
In `beforeAll()`, we can set our test batching and naming along with configuring an Applitools API key.
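A minimal sketch of that setup might look like the following (the app and batch names here are illustrative; the API key is read from the APPLITOOLS_API_KEY environment variable we export later):

const { Eyes, BatchInfo } = require('@applitools/eyes-selenium');
const { Builder, By } = require('selenium-webdriver');

const APP_NAME = 'Docs Demo App'; // illustrative application name
let batch;
let driver;

beforeAll(() => {
  // Group every test in this suite under one batch in the dashboard
  batch = new BatchInfo('Docs Demo App Tests');
});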
To enable Execution Cloud for our tests, we need to ensure that we activate this cloud platform on the account level. After that’s done, in our tests’ setup, we will need to initialize the WebDriver using the following code:
// Point Selenium at the Execution Cloud instead of a local browser driver
let url = await Eyes.getExecutionCloudUrl();
driver = new Builder().usingServer(url).withCapabilities(capabilities).build();
For our test case, we will open a demo app, navigate to another page, press a counter button, and validate that the click incremented the value of clicks by one.
describe('Documentation Demo App', () => {
  …
  test('should navigate to another page and increment its counter', async () => {
    // Arrange - go to the home page
    await driver.get('https://docs-demo-app.vercel.app/');

    // Act - go to another page and click a counter button
    await driver.findElement(By.xpath("//*[text() = 'Another Page']")).click();
    await driver.findElement(By.className('button-counter')).click();

    // Assert - validate that the counter was clicked
    const finalClickCount = await driver.findElement(By.className('button-counter')).getText();
    await expect(finalClickCount).toContain('Clicked 1 times');
  })
…
Another critical aspect of running our test is that it’s a non-Eyes test. Since we are not taking screenshots, we need to tell the Execution Cloud when a test begins and ends.
To start the test, we should add the following snippet inside the `beforeEach()` that will name the test and assign it to a proper test batch:
await driver.executeScript(
  'applitools:startTest',
  {
    'testName': expect.getState().currentTestName,
    'appName': APP_NAME,
    'batch': { 'id': batch.getId() }
  }
)
Lastly, we need to tell our automation when the test is done and what its results were. We will add the following code that sets the status of our test in the `afterEach()` hook:
await driver.executeScript(
  'applitools:endTest',
  { 'status': testStatus }
)
Now, our test is ready to be run on the Execution Cloud.
To run our test, we need to set the Applitools API key. We can do it in a terminal or have it set as a global variable:
export APPLITOOLS_API_KEY=[API_KEY]
In the above command, we need to replace [API_KEY] with the API key for our account. The key can be found in the Applitools Dashboard, as shown in this FAQ article.
Now, we need to navigate to the directory where our tests are located and run the following command in the terminal:
npm test
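For this to work, the `test` script in `package.json` simply needs to invoke Jest, for example:

"scripts": {
  "test": "jest"
},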
It will trigger the test suite that can be seen on the Applitools Dashboard:
Applitools Dashboard with Execution Cloud enabled
It’s a well-known fact that apps go through a lifecycle. They get created, get bugs, change, and ultimately shut down. This ever-changing lifecycle is what causes our tests to break. Whether it’s due to a bug or an accidental regression, it’s very common for a test to fail after a change in an app.
Let’s say a developer working on a counter button component changes its class name to `button-count` from the original `button-counter`. There could be many reasons this change could happen, but nevertheless, these modifications to the production code are extremely common.
What’s even more common is that the developer who made the change might forget or not find all the tests using the original class name, `button-counter`, to validate this component. As a result, these outdated tests would start failing, distracting us from investigating real production issues, which could significantly impact our users.
Execution Cloud and its self-healing capabilities were built specifically to address this problem. This cloud platform would be able to “self-heal” our tests that were previously running against a class name `button-counter`, and rather than failing these tests, the Execution Cloud would find another selector that hasn’t changed. With this highly scalable solution, our test coverage would remain the same and let us focus on correcting issues that are actually causing a regression in production.
Although we are running non-Eyes tests, the Applitools Dashboard still allows us to see several valuable materials, like a video recording of our test or to export WebDriver commands!
Want to see more? Request a free trial of Applitools Execution Cloud.
Whether you are a small startup that prioritizes quick iterations, or a large organization that focuses on scale, Applitools Execution Cloud is a perfect choice for any scenario. It offers a reliable way for tests to become what they should be – the first line of defense in ensuring the best customer experience for our users.
With the self-healing capabilities of the Execution Cloud, we get to focus on real production issues that actively affect our customers. With this cloud platform, we are moving towards a space where tests don’t become something we accept as constantly failing or a detriment to our developer velocity. Instead, we treat our test coverage as a trusted companion that raises problems before our users do.
With these functionalities, Applitools and its Execution Cloud quickly become a must-have for any developer workflow that can supercharge the productivity and efficiency of every engineering team.
Top 10 Visual Testing Tools
Visual regression testing, a process to validate user interfaces, is a critical aspect of the DevOps and CI/CD pipelines. UI often determines the drop-off rate of an application and is directly concerned with customer experience. A misbehaving front end is detrimental to a tech brand and must be avoided like the plague.
Manual testing procedures are not enough to understand intricate UI modifications. Automation scripts could be a solution but are often tedious to write and deploy. Visual testing, therefore, is a crucial element that determines changes to the UI and helps devs flag unwanted modifications.
Every visual regression testing cycle has a similar structure – some baseline images or screenshots of a UI are captured and stored. After every change to the source code, a visual testing tool takes snapshots of the visual interface and compares them with the initial baseline repository. The test fails if the images do not match and a report is generated for your dev team.
Revolutionizing visual testing is Visual AI – a game-changing technology that automates the detection of visual issues in user interfaces. It also enables software testers to improve the accuracy and speed of testing. With machine learning algorithms, Visual AI can analyze visual elements and compare them to an established baseline to identify changes that may affect user experience.
From font size and color to layout inconsistencies, Visual AI can detect issues that would otherwise go unnoticed. Automated visual testing tools powered by Visual AI, such as Applitools, improve testing efficiency and provide faster and more reliable feedback. The future of visual testing lies in Visual AI, and it has the potential to significantly enhance the quality of software applications.
Visual testing is a critical aspect of software testing that involves analyzing the user interface and user experience of an application. It aims to ensure that the software looks and behaves as expected, and all elements are correctly displayed on different devices and platforms. Visual testing detects issues such as layout inconsistencies, broken images, and text overlaps that can negatively impact the user experience.
Automated visual testing tools like Applitools can scan web and mobile applications and identify any changes to visual elements. Effective visual testing can help improve application usability, increase user satisfaction, and ultimately enhance brand loyalty.
Visual testing and functional testing are two essential components of software testing that complement each other. While functional testing ensures the application’s features work as expected, visual testing verifies that the application’s visual elements, such as layout, fonts, and images, are displayed correctly. Visual testing benefits functional testing by enhancing test coverage, reducing testing time and resources, and improving the accuracy of the testing process.
Some more benefits of visual testing for functional testing are as follows:
Further reading: https://applitools.com/solutions/functional-testing/
The following section consists of 10 visual testing tools that you can integrate with your current testing suite.
An often underrated, open-source visual regression tool, Aye Spy is heavily inspired by BackstopJS and Wraith. At its core, its creators wanted to tackle one issue: performance. Visual regression tools in the market were missing this key element, which Aye Spy finally incorporated, delivering 40 UI comparisons in under 60 seconds (with an optimal setup, of course)!
Features:
Advantages:
One of the most popular tools in the market, Applitools, is best known for employing AI in visual regression testing. It offers feature-rich products like Eyes, Ultrafast Test Cloud, and Ultrafast Grid for efficient, intelligent, and automated testing.
Applitools is 20x faster than conventional test clouds, is highly scalable for your growing enterprise, and is super simple to integrate with all popular frameworks, including Selenium, WebdriverIO, and Cypress. The tool is state of the art for all your visual testing requirements, with the ‘smarts’ to know which minor changes to ignore, without any prior settings.
Applitools’ Auto-Maintenance and Auto-Grouping features are handy. According to the World Quality Report 2022-23, maintainability is the most important factor in determining test automation approaches, but it often requires a sea of testers and DevOps professionals on their toes, ready to resolve a wave of bugs.
Cumbersome and expensive, this can break your strategies and harm your reputation. This is where Applitools comes in: Auto-Grouping categorizes the bugs while Auto-Maintenance resolves them, offering you the flexibility to jump in wherever needed.
Applitools Eyes is a Visual AI product that dramatically minimizes coding while maximizing bug detection and test maintenance. Eyes mimics the human eye to catch visual regressions with every app release. It can identify dynamic elements like ads or other customizations and ignore or compare them as desired.
Features:
Advantages:
Read more: Applitools makes your cross-browser testing 20x faster. Sign up for a free account to try this feature.
Hermione, an open-source tool, streamlines integration and visual regression testing, although only for simpler websites. It is easier to kickstart Hermione with prior knowledge of Mocha and WebdriverIO, and the tool facilitates parallel testing across multiple browsers. Additionally, Hermione effectively uses subprocesses to tackle the computation issues associated with parallel testing. The tool also allows you to run a subset of a test suite by simply adding a path to the test folder.
Features:
Advantages:
Needle, supported by Selenium and Nose, is an open-source tool that is free to use. It follows the conventional visual testing structure and uses a standard suite of previously collected images to compare the layout of an app.
Features:
Advantages:
Vizregress, a popular open-source tool, was created as a research project based on AForge.NET. Colin Williamson, the creator of the tool, tried to resolve a crucial issue: Selenium WebDriver (which Vizregress uses in the background) could not distinguish between layouts if the CSS elements stayed the same and only the visual representation was modified. This was a problem that could disrupt a website.
Vizregress uses AForge attributes to compare every pixel of the new and baseline images to determine if they are equal, a complex and admittedly fragile task.
Features:
Advantages:
Created by Jonathan Dann and Todd Krabach, iOSSnapshotTestCase was previously known as FBSnapshotTestCase and developed within Facebook – although Uber now maintains it. The tool uses the visual testing structure, where test screenshots are compared with baseline images of the UI.
iOSSnapshotTestCase uses tools like Core Animation and UIKit to generate screenshots of an iOS interface. These are then compared to specimen images in a repository. The test inevitably fails if the snapshots do not match.
Features:
Advantages:
VisualCeption uses a straightforward, 5-step process to perform visual regression testing. It uses WebDriver to capture a snapshot, JavaScript for calculating element sizes and positions, and Imagick for cropping and comparing visual components. An exception, if raised, is handled by Codeception.
It is essential to note here that VisualCeption is a module created for Codeception. Hence, you cannot use it as a standalone tool; you must have access to Codeception, Imagick, and WebDriver to make the most of it.
Features:
Advantages:
BackstopJS is a testing tool that can be seamlessly integrated with CI/CD pipelines for catching visual regressions. Like others mentioned above, BackstopJS compares webpage screenshots with a standard test suite to flag any modifications exceeding a minimum threshold.
A popular visual testing tool, BackstopJS has formed the basis of similar tools like Aye Spy.
Features:
Advantages:
Visual Regression Tracker is an exciting tool that goes the extra mile to protect your data. It is self-hosted, meaning your information is unavailable outside your intranet network.
In addition to the usual visual testing procedure, the tool helps you track your baseline images to understand how they change over time. Moreover, Visual Regression Tracker supports multiple languages including Python, Java, and JavaScript.
Features:
Advantages:
Galen Framework is an open-source tool for testing web UI. It is primarily used for interactive websites. Although developed in Java, the tool offers multi-language support, including CSS and JavaScript. Galen Framework runs on Selenium Grid and can be integrated with any cloud testing platform.
Features:
Advantages:
Here is a quick recap of all 10 tools mentioned above:
The following comparison chart gives you an overview of all the crucial features at a glance. Note how most tools have attributes that are ambiguous or undocumented. Applitools stands out in this list, giving you a clear view of its properties.
This summary gives you a good idea of the critical features of all the tools mentioned in this article. However, if you are looking for one tool that does it all with minimal resources and effort, select Applitools. Not only did they spearhead Visual AI testing, but they also fully automate cross-browser testing, requiring little to no intervention from you.
Customers have reported excellent results: 75% less time required for testing and a 50% reduction in maintenance effort. To learn how Applitools can seamlessly integrate with your DevOps pipeline, request your demo today.
Register for a free Applitools account.
Future-Proofing Your Test Automation Pipeline
Learn how to future-proof your test automation pipeline with Cypress and Applitools by adding tests that run from GitHub Actions. In this article, we’ll share how to ensure your test automation pipeline can scale while staying reliable and easy to maintain.
To illustrate our different types of test automation, we’ll be using the example project Cypress Heroes. In this full-stack TypeScript app, users can take the following actions:
ICYMI: Watch the on-demand recording of Future-Proofing Your Automation Pipeline to see Ely Lucas from Cypress demo the example project.
Cypress is traditionally known for end-to-end testing. You automate user interactions for specific scenarios from start to finish in the browser, and then run functional assertions to check the state of elements at each step. End-to-end tests run against an actual web server and hit the site just like a user would.
Measurable stats for code coverage of your end-to-end testing can act as a health metric for your website or app. Adding coverage reports to your automation pipeline on each commit can help ensure you’re testing all parts of your code.
If you’re using a component-based framework like React or Angular or a design system like Storybook, you can also do component testing to test UI components. In this example, we have a button component with a few tests that pass, the hero card test, and a test for the login form. These components are being mounted in isolation outside of your typical web server.
Think of component tests as “UI unit” tests. While they don’t give end-to-end coverage, they’re quick and easy to run.
For your back end, you’ll need to automate API tests. The example project uses a community-built plugin called cypress-plugin-api. This plugin provides an interface inside the Cypress app to test APIs, letting you automate checks that you would otherwise perform manually in a tool like Postman.
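For a sense of what such a test looks like, here is a rough sketch (the endpoint and payload are hypothetical, not the actual Cypress Heroes API):

it('creates a hero', () => {
  // cy.api() behaves like cy.request(), but also renders the request
  // and response details inside the Cypress app
  cy.api({
    method: 'POST',
    url: '/heroes',
    body: { name: 'Test Hero' },
  }).then((response) => {
    expect(response.status).to.eq(201);
  });
});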
Fun fact: Cypress Ambassador Filip Hric developed the cypress-plugin-api. Check out Filip’s Test Automation University courses.
The API tests in our example are in the separate server project. We can use the command npx cypress open, and then we can run those tests in Chrome. We can see all of our results, including the response status codes. We can view a post request, the headers that were sent, the headers that were returned, and other details that you normally get from a tool like Postman.
And it’s all just baked into the app, which is really nice. Cypress is basically a web app that tests a web app, so you can extend it with plugins like this to help you do your testing, all seamlessly integrated.
The example project uses GitHub Actions to set up the test automation pipeline. When working on smaller projects, it’s easy to have CI interactions baked into your repository, all in one place.
With GitHub Actions, you declare everything you need in a YAML file in the .github/workflows folder. Your actions become part of your repository and are covered by version control. If you make any changes, you can review them easily with a simple line-by-line diff. GitHub Actions make it easy to automate processes alongside other interactions you make with your repository. For example, if you open a pull request, you can have it automatically kick off your tests and do linting. You can even perform static code analysis before merging changes.
Some environment variables are set at the top of the YAML file. The API URL is what the client app uses to communicate with the API. The example app is hooked up to send test results to Cypress Cloud. Those results can then be used for analytics, diagnostics, reporting, and optimizing our test workflows. Cypress Cloud also requires a GitHub token so it can do things like correctly identify which pull request is being merged.
For those new to GitHub Actions: You can define environment variables per step in a job, but declaring them at the top helps you update them painlessly.
To keep things simple, there is only one job in this GitHub Action right now. First, it checks out the code straight from GitHub. Next, it builds the project using the Cypress GitHub Action, which handles common chores like building your application and installing dependencies with npm or yarn.
Building first means that subsequent jobs don’t have to build the app again. We’ve set runTests to false, a parameter of the Cypress GitHub Action, because we don’t want to run the tests here; we’ll be running them separately below.
We have our component tests in our GitHub Action. We set install to false, since dependencies were already installed in the step above. Then we run our custom test command, which opens Cypress in run mode and initiates component testing. The record flag tells the GitHub Action to send the results to Cypress Cloud.
Then we have to start the client and server. Unlike component tests, end-to-end tests and API tests hit live servers, so the application must be up and running for them.
This run command will start both the React app and the Node server, and then it will run the end-to-end tests. We’re telling it again not to install dependencies, since they were already installed. Then, we’re running the command to start the end-to-end testing. The wait command will wait to make sure that both the client URL and the API URL are up and running before it starts the tests. If the tests start before both URLs are up and running, some tests will fail.
The Cypress GitHub Action gives you the option to wait for these services to be live before the testing starts. By default, the npm run test commands are going to use the Chromium browser built into Electron. If you want to test on other browsers, you must make sure those browsers are installed on the runner. Cypress provides Docker images that you can add to your configuration to download the different browsers. However, downloading additional browsers increases the file size and makes the runs take longer.
Make sure that the Cypress binary itself is downloaded and installed. The tests are going to run headlessly, because the command set up in these scripts uses run mode, which is headless, whereas open mode is with the UI.
And then it will run the API test, which is very similar to end-to-end tests, except that since we’re not hitting the actual client app – only hitting the API app – we’re only waiting to make sure that the API URL is up and running.
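Putting those pieces together, a trimmed-down workflow along these lines is plausible (the script names, ports, and URLs are illustrative assumptions, not the exact Cypress Heroes files):

name: Tests
on: [push]

env:
  API_URL: http://localhost:3001
  CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Install dependencies and build once; skip running tests here
      - uses: cypress-io/github-action@v5
        with:
          build: npm run build
          runTests: false
      # Component tests: mounted in isolation, no servers needed
      - uses: cypress-io/github-action@v5
        with:
          install: false
          command: npm run test:component -- --record
      # End-to-end tests: start client and server, wait for both URLs
      - uses: cypress-io/github-action@v5
        with:
          install: false
          start: npm run start:client, npm run start:server
          wait-on: 'http://localhost:3000, http://localhost:3001'
          command: npm run test:e2e -- --record
      # API tests: only the server needs to be up
      - uses: cypress-io/github-action@v5
        with:
          install: false
          start: npm run start:server
          wait-on: 'http://localhost:3001'
          command: npm run test:api -- --record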
If your end-to-end test cases are written against a local database, they could fail when someone else runs them on their system or when they run in GitHub Actions. In whatever kind of test automation you develop, you’ll need to handle test data properly to avoid collisions. There are many different strategies you can follow. For more information on this and solving sample data dependency, watch my talk Managing the Test Data Nightmare.
When running your tests with Cypress and GitHub Actions, the results are uploaded to Cypress Cloud. You can go into Cypress Cloud and actually watch replays of all these tests that happened. The entire pipeline run in the example was 3 minutes and 50 seconds for all three test suites.
The individual test suites we ran took the following times:
Since all the Cypress tests run inside of the browser window, you can see them and visually inspect them to make sure they look correct. But this type of review is a manual step. If someone accidentally makes a change to the stylesheet, the site could no longer render properly, but if we run the tests, they’ll still pass.
We can use Applitools Eyes to fix this issue.
Visual testing is meant to automate the things that traditional automation is not so good at. For example, as long as particular IDs on your page are in the DOM somewhere, your traditional automation scripts with something like Cypress are still going to find and interact with the elements. Applitools Eyes uses visual AI to look at an app and be able to detect these kinds of visual differences that traditional assertions struggle to capture. Let’s add some visual snapshots to these end-to-end tests.
First, you’ll need an Applitools account. You can register a free Applitools account with your GitHub username or your email, and you’ll be good to go. You’ll need your Applitools API key for when we run tests with Applitools.
Next, we’ll need to install the Applitools SDK using npm install @applitools/eyes-cypress.
It can be a dev dependency or a regular dependency, whichever you prefer; the example project uses a dev dependency. Here we’re using the Applitools Eyes SDK for Cypress, but there are Applitools SDKs for virtually every major test framework.
Next, we’ll need to create an Applitools configuration file. Just as Cypress projects have a cypress.config.js file, we want one called applitools.config.js.
In the Applitools config file, we will specify the configuration for running visual tests. There’s a separation between declaring configuration and actually adding test steps.
One of the settings we want is called batchName, and we’re going to set that to “cy heroes visual tests” to reflect the name of our demo app. The batch name will appear in the Eyes Test Manager (or the Applitools “dashboard”) after we run our visual tests.
Next, we’ll set the browsers. This will be a list, with each item being an entry that specifies a browser configuration, including name, width, and height.
Typically, since Cypress runs inside of an Electron app, it can be challenging to test mobile browsers. However, the Applitools Ultrafast Grid enables us to render our visual snapshots on mobile devices. The settings for mobile devices are going to be a bit different than those for browsers. Instead of having a name, we’re going to have a device name.
Our applitools.config.js file is complete. When we run our tests – either locally or in the GitHub Action – Applitools will render the snapshots it captures on these four browser configurations in the Ultrafast Grid and report results using the batch name. Furthermore, the local platform doesn’t matter. Even if you run this test on Windows, the Ultrafast Grid can still render snapshots on Safari and mobile emulators. A snapshot is just going to be a capture of that full page. Applitools will do the re-rendering with the appropriate size, the appropriate browser configuration, and all that will happen in the cloud. Essentially you can do multi-browser and multi-platform testing with simple declarations.
Now that we have completed the configuration, let’s update the tests to capture visual snapshots, starting with the homepage.
You need to make sure that your tests aren’t interfering with other tests. In these tests, we’re going through and modifying some of the heroes that are in the application. The state of the application changes per test, so to get around that, we’re creating a new hero just for working with our tests and deleting the hero after the tests.
In the example, we’re using Cypress tasks, which is code that actually runs on the Node process part of Cypress. It’s directly communicating with our database to add the hero, delete the hero, and all the other types of setup tasks that we want to do before we actually run our test.
This setup happens for each of the tests: we visit the homepage and get access to the new hero.
After the tests, we call cy.deleteHero, which calls the database to delete the hero. In the describe block, at the start of every test, we get our hero. Then, finally, we find the hero card by its name, locate the button with the right selector, and click it.
This test is making sure that you’re logged in before you can like this hero. We’re making sure that the modal popped up, clicking the okay on the modal, and then making sure that modal disappears and does not exist anymore.
Down below, we have another suite for when a normal user is logged in, and we’re using a custom Cypress command to log in with a username and password. Custom commands are like defining your own functions, encapsulating a little bit of logic so that it can be reused.
So what we’re doing to test the login is going to the homepage, running the login process, and verifying the login was actually successful. cy.session caches a session for us so it can be restored later from cookies. This helps speed up your tests, since you’re not going through the whole login flow again each time.
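A rough sketch of such a custom command might look like this (the selectors and routes are illustrative assumptions, not the exact Cypress Heroes implementation):

// cypress/support/commands.js
Cypress.Commands.add('login', (email, password) => {
  // Cache the session per user so repeated logins are restored from cookies
  cy.session([email, password], () => {
    cy.visit('/login');
    cy.get('[name=email]').type(email);
    cy.get('[name=password]').type(password, { log: false });
    cy.get('button[type=submit]').click();
    cy.location('pathname').should('eq', '/');
  });
});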
We have another suite here for when an admin user is logged in, because an admin user can edit users and delete heroes.
In the example, negative login tests – where you use the wrong username and/or password – are under the component tests.
In the login form component test, when an email and password are invalid, an error should show. The example uses cy.intercept to mock the API request that goes to the server’s auth endpoint, returning a status code 401, which represents an invalid login.
You can either write a component test or an end-to-end test. In this case, a component test makes it easier to set up the mock data.
With the test suites set up, we’re ready to add some visual snapshots. We need to open an Eyes session using the Applitools Eyes SDK. The idea is that we open our eyes and take visual snapshots. Then, at the end of the test, we close our eyes to say that we’ve captured all the snapshots for that session or test. At that point, Applitools Eyes uploads the captured snapshots to the Applitools Eyes server and re-renders them on those four browser configurations in the Ultrafast Grid. Then we can log into the Applitools dashboard and see exactly what happened with our testing.
To get autocomplete for the Eyes commands, we need to wire Applitools Eyes into the Cypress project. We already ran npm install on the package, so we’ll need to run npx eyes-setup.
We’ll want to use the command cy.eyesOpen in the homepage describe block, inside the beforeEach method. We want to pass an app name and test name for logging and reporting purposes. Putting the cy.eyesOpen call in beforeEach means it doesn’t need to be duplicated in every test case.
Then, in the afterEach block, you’ll call cy.eyesClose.
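A minimal sketch of those hooks, with an illustrative app name, might look like this:

beforeEach(() => {
  cy.eyesOpen({
    appName: 'Cypress Heroes',           // groups results by application
    testName: Cypress.currentTest.title, // one Eyes test per spec test
  });
});

afterEach(() => {
  // Ends the Eyes session and uploads the captured snapshots
  cy.eyesClose();
});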
In this test, we make sure that the login modal pops up, click okay in the modal, and then make sure the modal disappears, so we’ll need a snapshot when the modal is up and one when the modal goes away. In this case, we’ll capture the whole window.
If we didn’t want to capture everything, we could actually capture a region, like a div or even an individual element. On a small scale, using the region option does not make a measurable difference in execution speed, but it gives you a way to tune the type of snapshot we want.
For capturing the next step, we can basically copy the whole call there and paste it, changing the tag to homepage with the modal dismissed.
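Sketched out, those calls might look like this (the tags are just labels, and the region selector is an assumption for illustration):

cy.eyesCheckWindow({
  tag: 'Homepage with the modal open',
  target: 'window',
  fully: true,
});

// ... dismiss the modal, then capture the second checkpoint ...
cy.eyesCheckWindow({
  tag: 'Homepage with the modal dismissed',
  target: 'window',
  fully: true,
});

// Capturing only a region instead of the whole window:
cy.eyesCheckWindow({
  tag: 'Modal only',
  target: 'region',
  selector: '.modal',
});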
These snapshots are very straightforward to write, and arguably they let you remove some of your other assertions. The visual snapshot captures everything in that window, so if an element is there and visible, we’ll capture it and track it over time.
You would still need to keep all of your interactions, but you can remove most of the assertions that merely check for visible elements. That said, if you want to check a very specific numeric value, you should still keep those assertions.
All we need to do to run this test is make sure that we have our Applitools API key from our account saved as an environment variable of the Cypress application.
Note: If you happen to steal someone’s API key, it doesn’t really help you. It just means they’ll see your results, and you won’t. API keys should be kept secret and safe.
So to see the visual testing results, we will need to view them in the Applitools Eyes dashboard.
You can view your test results in a grid to see the UI quickly, or you can view your results in a list to see your configurations quickly.
On the left, you’ve got the batch name that was set. Then on the main part of the body, you’ll see there are actually four tests. We only wrote one test, but each test is run once per browser configuration we specified, providing cross-browser and cross-platform testing without additional steps.
If we open up the snapshots, you can see the two snapshots that we captured. These results are new, because this is the first time we’ve run the test.
We’ve established the snapshot as a baseline image, meaning anything in the future will be checked against that.
That’s where that visual aspect of the testing comes in. Your Cypress results will essentially tell you if it was bare bones basic functional, and then Eyes will tell you what it actually looked like. You get richer results together.
Let’s see what this looks like if we make that visual change.
In the main file, we’ve updated the stylesheet and run the test again. There is no need to do anything in the Applitools Eyes dashboard before re-running the test.
The new test batch is in an unresolved state because Eyes detected a visual difference. In theory, a visual difference could be good or bad. You could be making an intentional change. Visual AI is basically a change detector that makes it obvious to you, the human, to decide what is good or bad. Then anytime Applitools Eyes sees the same kind of passing or failing behavior in the future, it’ll remember.
It’s important to note that the unresolved test results won’t stop your test automation job or your automation flow. Test automation would complete normally. You as the human tester would review visual test results in the Eyes Test Manager (the “dashboard”) afterwards. The pipeline would not wait for you to manually mark visual test results.
Let’s open up one of those snapshots so we can see it full screen.
In the upper left, below the View menu in the ribbon, there’s a dropdown to show both so that you can see the baseline and test side by side.
In the example, we had removed the stylesheet, so we can see very clearly that it’s very different. It’s not always this obvious. In this case, pretty much the whole screen is different. But if it were like a single button that was missing or something shunted a little bit, it would show that a specific area was different. That’s the power of the visual AI check.
Whenever Applitools detects a visual change, you can mark it as “passing” with a thumbs-up. That snapshot then automatically becomes the new baseline against which future checkpoints are compared. Applitools will also track similar images in the background and automatically update them appropriately as well.
Note: If you ever want to “reset” snapshots, you can also delete the baselines and run your tests “fresh” as if for the first time. The snapshots they capture will automatically become new baseline images.
Once we’ve resolved all test results, we’ll need to save. And now if we were to rerun our test again, Applitools Eyes would see the new snapshots and pass tests as appropriate. If you have dynamic content or test data, you add region annotations, which will ignore anything in the region box.
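Ignore regions can also be declared in code on a checkpoint; a sketch with an assumed selector:

cy.eyesCheckWindow({
  tag: 'Homepage with dynamic content',
  ignore: [{ selector: '.click-counter' }], // excluded from visual comparison
});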
It is possible to compare your production and staging environments. You can use our GitHub Integration to manage different branches or versions of your application. We also support different baselines for A/B testing.
That’s basically how you would do visual testing with Applitools and Cypress. There are two big points to remember if you want to add visual testing to your own test suites:
We hope this guide has helped you to build out your test automation pipeline to be more reliable and scalable. If you liked the guide, check out our Applitools tutorials for other guides on building your test automation pipeline. Watch the on-demand recording of Future-Proofing Your Automation Pipeline to see the full walkthrough. To keep up-to-date with test automation, you can peruse our latest courses taught by industry-leading testing experts on Test Automation University. Happy testing!
What’s New in Cypress 12
Right before the end of 2022, Cypress surprised us with their new major release: version 12. There wasn’t too much talk around it, but in terms of developer experience (DX), it’s arguably one of their best releases of the year. It removes some of the biggest friction points, adds new features, and provides better stability for your tests. Let’s break down the most significant ones and talk about why they matter.
If you are a daily Cypress user, chances are you have seen an error that said something like, “the element was detached from DOM”. This is often caused by the fact that the element you tried to select was re-rendered, disappeared, or detached some other way. With modern web applications, this is something that happens quite often. Cypress could deal with this reasonably well, but the API was not intuitive enough. In fact, I listed this as one of the most common mistakes in my talk earlier this year.
Let’s consider the example from my talk. In a test, we want to do the following:
As we type into the search box, an HTTP request is sent with every keystroke. Every response from that HTTP request then triggers re-rendering of the results.
The test will look like this:
it('Searching for item with the text "abc"', () => {
cy.visit('/')
cy.realPress(['Meta', 'k'])
cy.get('[data-cy=search-input]')
.type('abc')
cy.get('[data-cy=result-item]')
.first()
.should('contain.text', 'abc')
})
The main problem here is that we ignore the HTTP requests that re-render our results. Depending on the moment when we call cy.get() and cy.first() commands, we get different results. As the server responds with search results (different with each keystroke), our DOM is getting re-rendered, making our “abc” item shift from second position to first. This means that our cy.should() command might make an assertion on a different element than we expect.
Typically, we rely on Cypress’ built-in retry-ability to do the trick. The only problem is that the cy.should() command will retry itself and the previous command, but it will not climb up the command chain to the cy.get() command.
It was fairly easy to solve this problem in v11 and earlier, but the newest Cypress update has brought much more clarity to the whole flow. Instead of the cy.should() command retrying only itself and the previous command, it will retry the whole chain, including our cy.get() command from the example.
In order to keep retry-ability sensible, the Cypress team has split commands into three categories:
These categories are reflected in Cypress documentation. The fundamental principle brought by version 12 is that a chain of queries is retried as a whole, instead of just the last and penultimate command. This is best demonstrated by an example comparing versions:
// Cypress v11:
cy.get('[data-cy=result-item]')    // not retried
  .first()                         // retried
  .should('contain.text', 'abc')   // retried

// Cypress v12:
cy.get('[data-cy=result-item]')    // retried
  .first()                         // retried
  .should('contain.text', 'abc')   // retried
cy.get() and cy.first() are commands that both fall into queries category, which means that they are going to get retried when cy.should() does not pass immediately. As always, Cypress is going to keep on retrying until the assertion passes or until a time limit runs up.
One of the biggest criticisms of Cypress.io has been the limited ability to visit multiple domains during a test. This is a huge blocker for many test automation engineers, especially if you need to use a third-party domain to authenticate into your application.
Cypress has advised using programmatic login and generally avoiding testing applications you are not in control of. While this is good advice, it is much harder to execute in real life, especially when you are in a hurry to get good test coverage. It is much easier (and more intuitive) to navigate your app like a real user and automate a flow similar to their behavior.
This is why it seems so odd that it took so long for Cypress to implement the ability to navigate through multiple domains. The reason for this is actually rooted in how Cypress is designed. Instead of calling browser actions the same way tools like Playwright and Selenium do, Cypress inserts the test script right inside the browser and automates actions from within. There are two iframes, one for the script and one for the application under test. Because of this design, browser security rules limit how these iframes interact and navigate. The groundwork for solving these limitations was actually laid in earlier Cypress releases, and the solution has finally landed in full with the version 12 release. If you want to read more about this, you should check out Cypress’ official blog on this topic – it’s an excellent read.
There are still some specifics on how to navigate to a third party domain in Cypress, best shown by an example:
it('Google SSO login', () => {
  cy.visit('/login') // primary app login page
  cy.getDataCy('google-button')
    .click() // clicking the button will redirect to another domain
  cy.origin('https://accounts.google.com', () => {
    cy.get('[type="email"]')
      .type(Cypress.env('email')) // google email
    cy.get('[type="button"]')
      .click()
    cy.get('[type="password"]')
      .type(Cypress.env('password')) // google password
    cy.get('[type="button"]')
      .click()
  })
  cy.location('pathname')
    .should('eq', '/success') // check that we have successfully logged in
})
As you see, all the actions that belong to another domain are wrapped in the callback of cy.origin() command. This separates actions that happen on the third party domain.
The Cypress team actually developed this feature alongside another one that came out from beta, cy.session(). This command makes authenticating in your end-to-end tests much more effective. Instead of logging in before every test, you can log in just once, cache that login, and re-use it across all your specs. I recently wrote a walkthrough of this command on my blog and showed how you can use it instead of a classic page object.
This command is especially useful for the use case from the previous code example. Third-party login services usually have security measures in place that prevent bots or automated scripts from trying to login too often. If you attempt to login too many times, you might get hit with CAPTCHA or some other rate-limiting feature. This is definitely a risk when running tens or hundreds of tests.
it('Google SSO login', () => {
  cy.visit('/login') // primary app login page
  cy.getDataCy('google-button')
    .click() // clicking the button will redirect to another domain
  cy.session('google login', () => {
    cy.origin('https://accounts.google.com', () => {
      cy.get('[type="email"]')
        .type(Cypress.env('email')) // google email
      cy.get('[type="button"]')
        .click()
      cy.get('[type="password"]')
        .type(Cypress.env('password')) // google password
      cy.get('[type="button"]')
        .click()
    })
  })
  cy.location('pathname')
    .should('eq', '/success') // check that we have successfully logged in
})
When running a test, Cypress will make a decision when it reaches the cy.session() command:
You can create multiple of these sessions and test your application using different accounts. This is useful if you want to test different account privileges or just see how the application behaves when seen by different accounts. Instead of going through the login sequence through UI or trying to log in programmatically, you can quickly restore the session and reuse it across all your tests.
This also means that you will reduce your login attempts to a minimum and prevent getting rate-limited on your third party login service.
The Cypress GUI is a great companion for writing and debugging your tests. The version 10 release dropped support for the “Run all specs” button in the GUI. The community was not very happy about this change, so Cypress decided to bring it back.
The reason it was removed in the first place is that it could produce some unexpected results. Simply put, this functionality merges all your tests into one single file. This can get tricky, especially if you use before(), beforeEach(), after() and afterEach() hooks in your tests, as these would often get stacked and executed in an unexpected order. Take the following example:
// file #1
describe('group 1', () => {
  it('test A', () => {
    // ...
  })
})

it('test B', () => {
  // ...
})

// file #2
before(() => {
  // ...
})

it('test C', () => {
  // ...
})
If this runs as a single file, the order of actions would go like this:
This is mainly caused by how the Mocha framework executes blocks of code. If you properly wrap every test in describe() blocks, you get far fewer surprises, but that’s not always what people do.
On the other hand, running all specs can be really useful when developing an application. I use this feature to get immediate feedback on changes I make in my code when I work on my Cypress plugin for testing APIs. Whenever I make a change, all my tests re-run and I can see all the bugs that I’ve introduced.
Running all specs is now behind an experimental flag, so you need to set experimentalRunAllSpecs to true in your cypress.config.js configuration file.
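In a Cypress 12 project, that configuration would look like this:

// cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // Opt in to the experimental "Run all specs" button
    experimentalRunAllSpecs: true,
  },
})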
It is always a good idea to keep your tests isolated. If your tests depend on one another, it may create a domino effect: the first test’s failure will make all the subsequent tests fail as well. Things get even more hairy when you bring parallelisation into the equation.
You could say that Cypress is an opinionated testing framework, but my personal take on this is that this is a good opinion to have. The way Cypress enforces test isolation with this update is simple. In between every test, Cypress will navigate from your application to a blank page. So in addition to all the cleaning up Cypress did before (clearing cookies, local storage), it will now make sure to “restart” the tested application as well.
In practice the test execution would look something like this:
it('test A', () => {
  cy.visit('https://staging.myapp.com')
  // ...
  // your test doing stuff
})

// navigates to about:blank

it('test B', () => {
  cy.get('#myElement') // nope, will fail, we are at about:blank
})
This behavior is configurable, so if you need some time to adjust to this change, you can set testIsolation to false in your configuration.
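Again in cypress.config.js, the override looks like this:

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // Keep the pre-v12 behavior; tests share page state between them
    testIsolation: false,
  },
})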
Some of the APIs and commands reached end of life with the latest Cypress release. For example, cy.route() and cy.server() have been replaced by the much more powerful cy.intercept() command that was introduced back in version 6.
The more impactful change was the deprecation of the Cypress.Cookies.defaults() and Cypress.Cookies.preserveOnce() APIs that were used for controlling the clearing and preserving of cookies. With the introduction of cy.session(), these APIs didn’t fit well into the system. The migration from these commands to cy.session() might not seem straightforward, but it is quite simple once you look at it.
For example, instead of using Cypress.Cookies.preserveOnce() function to prevent deletion of certain cookies you can use cy.session() like this:
beforeEach(() => {
  cy.session('importantCookies', () => {
    cy.setCookie('authentication', 'top_secret');
  });
});

it('test A', () => {
  cy.visit('/');
});

it('test B', () => {
  cy.visit('/');
});
Also, instead of using Cypress.Cookies.defaults() to set up default cookies for your tests, you can go to your cypress/support/e2e.js support file and set up a global beforeEach() hook that will do the same as shown in the previous example.
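That global hook would live in the support file, along these lines:

// cypress/support/e2e.js
beforeEach(() => {
  // Runs before every test in every spec; restores the cached session
  cy.session('importantCookies', () => {
    cy.setCookie('authentication', 'top_secret');
  });
});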
Besides these, there were a couple of bug fixes and smaller tweaks, which can all be viewed in the Cypress changelog. Overall, I think the v12 release of Cypress is one of the unsung heroes. The rewriting of query commands and the availability of the cy.session() and cy.origin() commands may not seem like a big deal on paper, but they will make the experience much smoother than it was before.
The new command queries might require some rewriting in your tests, but I would advise you to upgrade as soon as possible, as this update will bring much more stability to your tests. I’d also advise rethinking your test suite and integrating cy.session() into your tests, as it might not only handle your login actions more elegantly but also shave minutes off your test run.
If you want to learn more about Cypress, you can come visit my blog, subscribe to my YouTube channel, or connect with me on Twitter or LinkedIn.
UI Testing: A Getting Started Guide and Checklist
Learn everything you need to know about how to perform UI testing, including why it’s important, a demo of a UI test, and tips and tricks to make UI testing easier.
When users explore web, mobile or desktop applications, the first thing they see is the User Interface (UI). As digital applications become more and more central to the way we all live and work, the way we interact with our digital apps is an increasingly critical part of the user experience.
There are many ways to test an application: Functional testing, regression testing, visual testing, cross-browser testing, cross-device testing and more. Where does UI testing fit into this mix?
UI testing is essential to ensure that the usability and functionality of an application performs as expected. This is critical for delivering the kinds of user experiences that ensure an application’s success. After all, nobody wants to use an app where text is unreadable, or where buttons don’t work. This article will explain the fundamentals of UI testing, why it’s important, and supply a UI testing checklist and examples to help you get started.
UI testing is the process of validating that the visual elements of an application perform as expected. In UI Testing, graphical components such as text, radio buttons, checkboxes, buttons, colors, images and menus are evaluated against a set of specifications to determine if the UI is displaying and functioning correctly.
UI testing is an important way to ensure an application has a reliable UI that always performs as expected. It’s critical for catching visual and even functional bugs that are almost impossible to detect using other kinds of testing.
Modern UI testing, which typically utilizes visual testing, works by validating the visual appearance of an application, but it does much more than make sure things simply look correct. Your application’s functionality can be drastically affected by a visual bug. UI testing is critical for verifying the usability of your UI.
Note: What’s the difference between UI testing and GUI testing? Modern applications are heavily dependent on graphical user interfaces (GUIs). Traditional UI testing can include other forms of user interfaces, including CLIs, or can use DOM-based coded locators to try and verify the UI rather than images. Modern UI testing today frequently involves visual testing.
Let’s take an example of a visual bug that slipped into production on the Southwest Airlines website.
Under a traditional functional testing approach this would pass the test suite. All the elements are present on the page and successfully loaded. But for the user, it’s easy to see the visual bug.
This does more than deliver a negative user experience that may harm your brand. In this example, the Terms and Conditions are directly overlapping the ‘continue’ button. It’s literally impossible for the user to check out and complete the transaction. That’s a direct hit to conversions and revenue.
With good UI testing in place, bugs like these will be caught before they become visible to the user.
Manual UI testing is performed by a human tester, who evaluates the application’s UI against a set of requirements. This means the manual tester must perform a set of tasks to validate that the appearance and functionality of every UI element under test meets expectations. The downsides of manual testing are that it is a time-consuming process and that test coverage is typically low, particularly when it comes to cross-browser or cross-device testing or in CI/CD environments (using Jenkins, etc.). Effectiveness can also vary based on the knowledge of the tester.
Record and Playback UI testing uses automation software and typically requires limited or no coding skill to implement. The software first records a set of operations executed by a tester, and then saves them as a test that can be replayed as needed and compared to the expected results. Selenium IDE is an example of a record and playback tool, and there is even one built directly into Google Chrome.
Model-based UI testing uses a graphical representation of the states and transitions that an application may undergo in use. This model allows the tester to better understand the system under test, which means tests can be generated and potentially automated more efficiently. In its simplest form, the approach involves building a model of the application’s states and transitions, generating test cases from that model, executing them, and comparing the application’s actual behavior against what the model predicts.
Manual testing, as we have seen above, has a few severe limitations. Because the process relies purely on humans performing tasks one at a time, it is a slow process that is difficult to scale effectively. Manual testing does, however, have advantages: it is flexible, requires no upfront automation investment, and brings human intuition to scenarios that are hard to script.
In most cases automation will help testing teams save time by executing pre-determined tests repeatedly. Automation testing frameworks aren’t prone to human errors and can run continuously. They can be parallelized and executed easily at scale. With automated testing, as long as tests are designed correctly they can be run much more frequently with no loss of effectiveness.
Automation testing frameworks may be able to increase efficiency even further with specialized capabilities for things like cross-browser testing, mobile testing, visual AI and more.
On the surface, UI testing is simple – just make sure everything “looks” good. Once you poke beneath that surface, testers can quickly find themselves encountering dozens of different types of UI elements that require verification. Here is a quick checklist you can use to make sure you’ve considered all the most common items.
Each item on the checklist must be tested across every page, table, form and menu that your application contains.
It’s also a good practice to test the UI for specific critical end-to-end user journeys. For example, making sure that it’s possible to journey smoothly from: User clicks Free Trial Signup (Button) > User submits Email Address (Form) > User Logs into Free Trial (Form) > User has trial access (Product)
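To make that concrete, here is a rough sketch of how such a journey might be automated. Cypress is used purely for illustration, and every selector and URL below is invented for the example:

// Hypothetical end-to-end journey: signup -> login -> trial access.
it('lets a new visitor reach the free trial', () => {
  cy.visit('/');
  cy.contains('Free Trial Signup').click();               // Button
  cy.get('[data-test="email"]').type('user@example.com'); // Form
  cy.get('[data-test="submit-email"]').click();
  cy.get('[data-test="password"]').type('s3cret!');       // Login form
  cy.get('[data-test="login"]').click();
  cy.url().should('include', '/trial');                   // Trial access
});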
UI testing can be a challenge for many reasons. With the proper tooling and preparation these challenges can be overcome, but it’s important to understand them as you plan your UI testing strategy.
Let’s take an example of an app with a basic use case, such as a login screen.
Even a relatively simple page like this one will have numerous important test cases.
Simply testing each scenario on a single page can be a lengthy process. Then, of course, we encounter one of the challenges listed above – the UI changes quickly, requiring frequent regression testing.
Performing this regression testing manually while maintaining the level of test coverage necessary for a strong user experience is possible, but would be a laborious and time-consuming process. One effective strategy to simplify this process is to use automated tools for visual regression testing to verify changes to the UI.
Visual regression testing is a method of ensuring that the visual appearance of the application’s UI is not negatively affected by any changes that are made. While this process can be done manually, modern tools can help you automate your visual testing to verify far more tests far more quickly.
Let’s return to our login screen example from earlier. We’ve verified that it works as intended, and now we want to make sure any new changes don’t negatively impact our carefully tested screen. We’ll use automated visual regression testing to make this as easy as possible.
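With a tool like the Applitools Eyes SDK for Cypress, the visual check itself stays short. Here is a sketch; the app and test names are arbitrary:

// Visual regression test for the login screen.
// cy.eyesOpen/eyesCheckWindow/eyesClose are added by @applitools/eyes-cypress.
it('login screen has no visual regressions', () => {
  cy.eyesOpen({ appName: 'Demo App', testName: 'Login screen' });
  cy.visit('/login');
  cy.eyesCheckWindow('Login page'); // compared against the stored baseline
  cy.eyesClose();
});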
Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.
The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.
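As an illustration of the run-once, render-everywhere idea, the eyes-cypress SDK reads a declared browser matrix from applitools.config.js. The combinations below are only a sample:

// applitools.config.js: one local run, rendered across this whole matrix.
module.exports = {
  testConcurrency: 5,
  browser: [
    { width: 1280, height: 800, name: 'chrome' },
    { width: 1280, height: 800, name: 'firefox' },
    { width: 1440, height: 900, name: 'safari' },
    { deviceName: 'iPhone X' }, // emulated mobile viewport
  ],
};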
You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.
Happy Testing!
The post UI Testing: A Getting Started Guide and Checklist appeared first on Automated Visual Testing | Applitools.
]]>The post What is Regression Testing? Definition, Tutorial & Examples appeared first on Automated Visual Testing | Applitools.
]]>In this detailed guide, learn everything you need to know about what regression testing is, along with best practices and examples. Learn how you can apply regression testing in your own organization and much more.
While regression testing is practiced in almost every organization, each team may have its own procedures and approaches. This article is a starter kit for organizations seeking a solid regression testing strategy. It also helps teams dig into the missing links in their current regression testing technique and evolve their test strategy.
Regression testing is a type of software testing that verifies an application continues to work as intended after any code revisions, updates, or optimizations. As the application continues to evolve by adding new features, the team must perform regression testing to evaluate that the existing features work as expected and that there are no bugs introduced with the new feature(s).
In this post, we will discuss various regression testing techniques and which to use depending on your team’s way of working.
However, before we jump to the how, let us understand why having a regression test suite is essential.
A software application gets directly modified due to new enhancements (functional, performance or even improved security), tweaks or changes to existing features, bug fixes, and updates. It is also indirectly affected by the third-party services it consumes to provide features through its interface.
Changes in the application’s source code, both planned and unintended, demand verification. Additionally, the impact of modifications to external services used by the application should be verified.
Teams must ensure that the modified component of the application functions as expected and that the change had no adverse effect on the other sections of the application.
A comprehensive regression testing technique aids the team in identifying regression issues, which are subsequently corrected and retested to ensure that the original faults are not present.
Let us quickly understand with the help of an example: login functionality. If the login page is updated, say to add a new sign-in option, regression testing must confirm that the existing username-and-password flow still works exactly as before.
People commonly use the terms smoke, sanity, and regression interchangeably in testing, which is misleading. These test types differ not only in scope but also in when they are carried out.
Smoke testing is done at the outset of a fresh build. The main goal is to see if the build is good enough to start testing. Some examples include being able to launch the site by simply entering the URL, or being able to run the app after installing a new executable.
Sanity testing is surface-level testing on newly deployed environments. For instance, features are broadly tested on staging environments before being passed on to User Acceptance Testing. Another example could be verifying that fonts have loaded correctly on the web page, that the expected components are interactive, and that overall things appear to be in order without a detailed investigation.
Regression testing goes deeper: the potentially impacted areas are thoroughly tested in the environment where the new changes have been introduced.
Existing stable features are rigorously tested on a regular basis to ensure their accuracy in the face of purposeful and unintended changes.
The techniques can be grouped into the following categories:
As the name suggests, partial regression testing is an approach where a subset of the entire regression suite is selected and executed as part of regression testing.
This subset selection results from a combination of several logical criteria, such as the areas affected by recent changes, historically defect-prone modules, and business-critical workflows.
Partial regression testing works excellently when the team successfully identifies the impacted areas and the corresponding test cases through proven ways like the Requirement Traceability Matrix (RTM henceforth) or any other form of metadata approved by the team.
Situations where the scope of a change is small, well understood, and clearly mapped to existing tests are more conducive to partial regression testing.
While this method is effective, it is possible to overlook issues if the impact analysis is incomplete or if indirect dependencies between modules are missed.
In many cases, reasons like significant software updates or changes to the tech stack demand that the team perform comprehensive regression testing to uncover problems introduced by the changes.
In this approach, the whole test suite is run every time new code is committed, or at agreed time intervals, to uncover issues.
This is a significantly time-consuming approach compared to the other techniques and should ideally be adopted only when the situation demands.
To keep the feedback cycle faster, one must embrace automated testing to enable productive complete regression testing in their teams.
Irrespective of the technique adopted, I always suggest that teams prioritize the most business-critical cases and the common use cases performed by end-users when it comes to execution.
Remember, the main goal of regression testing is to ensure that the end-user is not impacted due to an unavailable/incorrect feature, which could affect business outcomes in many ways.
To achieve better testing coverage of your application, plan your regression testing with a combination of technology and business scenarios. Apply the practices across the Test Pyramid.
Arranging the information in the form of a matrix enables the team to quickly identify the potentially impacted areas.
Alternatively, many test case management tools now have started providing inbuilt support to build a regression test suite with the help of appropriate tags and modules. These tools also let you systematically track and identify patterns in the regression test execution to dig into more related areas.
I have seen teams be most effective when they have automated most of their regression suite, with the non-automatable tests organised and represented in a way that allows quick filtering and surfaces meaningful information.
We should leverage the power of automation to create test data instantly across different test environments. We need to ascertain that the updated feature is evaluated against both old and new data.
For example, a new field added to a user profile should work consistently for both existing and newly created accounts.
Production test data plays a vital role in identifying issues that might have been missed during the initial delivery.
In cases where possible, replicate the production environment to identify edge cases and add those scenarios to the regression test suite.
Using production data isn’t always viable, and it can lead to non-compliance issues. Teams frequently mask sensitive information in production data and use the result to fulfil the need for real-world scenario analysis.
If you have multiple environments, verify that the application works as intended in each of them.
Every time a new person joined the team when the development was already in progress, they asked meaningful questions about the long-forgotten stable features. I also prefer young guns to be part of my regression team to get a raw and holistic testing perspective.
Automate the regression test suite! If you have the budget, great, or else, create supporting mechanisms to utilise the team’s idle time to implement automated tests.
Simply automating the business-critical scenarios or the most used workflows is a good enough start. Initiate this activity and work incrementally.
Either tag/annotate your automated scenarios per feature or segregate them into appropriate folders so that you can run particular automated regression scenarios on demand, as in the sketch below.
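For instance, assuming a folder-per-suite layout (one possible convention, not prescribed by this post), the scripts section of package.json can expose each slice of the suite via Cypress’s --spec path filter:

{
  "scripts": {
    "test:smoke": "cypress run --spec 'cypress/e2e/smoke/**/*.cy.js'",
    "test:regression": "cypress run --spec 'cypress/e2e/regression/**/*.cy.js'"
  }
}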
Even though automated test execution is faster, sequential execution won’t scale with a rising number of test environments and permutations. As a result, concurrent test execution across various environments is required to meet scalability requirements. Selenium Grid and cloud solutions like the Applitools Ultrafast Test Cloud enable you to execute automated tests in parallel across different configurations.
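In the JavaScript bindings, for example, pointing selenium-webdriver at a Grid is a one-line change. The Grid URL below is a placeholder:

// Run the same test against a Selenium Grid node instead of a local browser.
const { Builder } = require('selenium-webdriver');

async function runOnGrid() {
  const driver = await new Builder()
    .usingServer('http://localhost:4444/wd/hub') // placeholder Grid endpoint
    .forBrowser('chrome')
    .build();
  try {
    await driver.get('https://example.com');
  } finally {
    await driver.quit();
  }
}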
In addition to adhering to best practices when creating the test automation framework, these tests must run at high speed and in parallel to provide faster feedback.
Always! One cannot ignore business constraints and client demands to meet the delivery. Based on your context, adopt the most suitable regression testing techniques.
I have seen regression backlogs take a long time to automate. To keep making progress, always account for regression testing effort explicitly when estimating Sprint tasks, or you might be increasing your technical debt in the form of uncovered bugs.
Changes are not always directly related to client needs, nor are they always conveyed. Internally, the development team continually optimises the code for reusability, performance, and other factors. Ensure that these source-code modifications are documented/tracked in a ticket so that the team can perform regression testing accordingly.
An enterprise product results from multiple teams’ contributions across geographies. While the teams will independently conduct regression testing for their part, it mustn’t be done only in silos. The teams must also set up cadence structures and processes to test all integration regression scenarios.
Crowdsourced testing can help find brand new flaws in the programme, such as functionality, usability, and localization concerns, thereby improving the product’s quality.
Non-functional elements like performance, security, accessibility, and usability must all be examined as part of your regression testing plan, in addition to functionality.
Benchmarking test execution results from past sessions and comparing them against results after the most recent modifications is a simple but effective technique for detecting performance, accessibility, and other degradations, as sketched below.
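A toy sketch of that benchmarking idea; both timing files are hypothetical artifacts that your test runner would need to produce:

// Flag tests that ran noticeably slower than the stored baseline.
const baseline = require('./timings-baseline.json'); // hypothetical: { "test name": ms }
const latest = require('./timings-latest.json');     // hypothetical: same shape

for (const [test, ms] of Object.entries(latest)) {
  const before = baseline[test];
  if (before !== undefined && ms > before * 1.2) { // 20% slower counts as a regression
    console.warn(`${test}: ${before}ms -> ${ms}ms`);
  }
}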
Due to substantial faults in non-functional areas, applications with the best functionality have either failed to make it to production or have been shelved despite launching successfully.
In a similar vein, application security and accessibility issues have cost businesses millions of dollars in addition to a tarnished reputation.
Regardless of your application architecture or development methodology, the importance of automating the regression tests can never fade away. Be it a small-scale application or an enterprise product, having automated tests will save you time, people’s energy and money in the longer run.
Let’s understand some reasons to automate the regression test suite:
Automated software verification is exponentially faster than manual verification. Automated continuous testing in the CI/CD pipeline is a powerful approach for identifying regression bugs as close to their introduction as possible, because of the increased speed and frequency at which it operates.
Equally important is to look at the test results from each automated suite execution and take meaningful steps to get the product and the test suite progressively better.
Timely identification of issues will avoid defect leakage in the most significant parts of the application and later stages of testing.
Consequently, even a slight shift left benefits the organisation in many ways beyond cost.
Before getting to the actual testing, the testing teams spend a significant amount of time generating test data. Automation aids not only in the execution of tests but also in the rapid generation of large amounts of test data. The functional testing team may leverage data generated by scripts (SQL, APIs), allowing them to focus on testing rather than worrying about the data.
Testing features like pagination, infinite scroll, tabular representations, and the performance of the app are a few examples where rapid test data generation helps the team.
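Here is a sketch of what script-driven data generation can look like; the endpoint and payload are entirely hypothetical:

// Seed enough records to exercise pagination and infinite scroll.
async function seedUsers(count = 100) {
  for (let i = 0; i < count; i++) {
    await fetch('https://staging.example.com/api/users', { // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: `Test User ${i}`, email: `user${i}@example.com` }),
    });
  }
}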
Banking and insurance are regulated sectors with several complex operations and subtleties. To exercise and address the data models and flows, a variety of test data is required. The ability to automate test data management has shown to be a critical component of successful testing.
The automated test suite’s parallel execution answers the need for faster feedback and does it rapidly. Teams can generate test results across a variety of environments, browsers, devices, and operating systems with the right infrastructure and the prerequisite of having built a scalable automated test suite.
The Applitools Ultrafast Test Cloud is the next step forward in cross-browser testing. You run your functional and visual tests once locally using Ultrafast Grid (part of the Ultrafast Test Cloud), and the grid instantaneously generates all screens across whatever combination of browsers, devices, and viewports you choose.
Repetitive tasks are handled efficiently and consistently through automation. It does not make errors in the same way that people do.
It also allows humans to concentrate their ingenuity on exploratory testing, which machines cannot accomplish. You can deploy new features with a reduced time-to-market thanks to automation.
Now, let’s complete the cycle by ensuring that the corresponding test cases (manual and automated) are also modified immediately with every modification and change request to any existing part of the application. These modified test cases should now be part of the regression suite.
Failing to adjust the test cases would create chaos in the teams involved. The resulting confusion might lead to incorrect testing of the underlying application and introduce unintended behavior and rollbacks.
Maintaining the regression test suite consists of adding new tests, modifying existing tests, and deleting irrelevant tests. These changes should be reflected in the manual and automated test suites.
There aren’t separate testing tools categorised as “regression testing tools.” The teams use the same testing tools; however, many test automation tools are utilised to automate the regression test suite.
Depending on the project type, different regression testing tools may be used in combination with the techniques mentioned in the previous section.
APIs are the foundation of modern software development, especially as more and more teams abandon monolithic programmes in favour of a microservices-based strategy.
UI accuracy is unquestionably vital for a successful business because it directly impacts end users.
Even when utilizing the most extraordinary development processes and frontend technology, testing the UI is one of the most significant bottlenecks in a release.
Applitools is a pioneer in AI-powered automated visual regression testing. Their solution allows you to integrate Visual Testing with functional and regression UI automation and in turn get increased test coverage, quick feedback, and seamless scaling by using the Applitools Ultrafast Grid – all while writing less code. You can try out their solutions by signing up for a free account and going through the tutorials available here.
Teams responsible for testing legacy applications often experience the need to explore the application before blindly getting started with the regression test suite.
Utilizing the results from your exploratory testing sessions to populate and validate your impact analysis documents and RTMs proves beneficial in making necessary modifications to the regression test suite.
Exploratory testing tools are incredibly valuable and can assist you in achieving your goal for the session, whether it’s to explore a component of the app, detect flaws, or determine the relationship between features.
Each of the following topics is a specialised field in and of itself, and it is impossible to cover them all in one blog post. This list, on the other hand, will undoubtedly get you thinking in that direction.
A well-thought-out regression testing plan will aid your team in achieving your QA and software development goals, whether the architecture is monolithic or microservices-based, and whether the application is new or old. You can learn about how Applitools can help with functional and visual regression testing here.
Editor’s Note: This post was originally published in January 2022, and has since been updated for completeness and accuracy.
The post What is Regression Testing? Definition, Tutorial & Examples appeared first on Automated Visual Testing | Applitools.
]]>The post What is Functional Testing? Types and Example (Full Guide) appeared first on Automated Visual Testing | Applitools.
]]>Functional testing is a type of software testing where the basic functionalities of an application are tested against a predetermined set of specifications. Using Black Box Testing techniques, functional tests measure whether a given input returns the desired output, regardless of any other details. Results are binary: tests pass or fail.
Functional testing is important because without it, you may not accurately understand whether your application functions as intended. An application may pass non-functional tests and otherwise perform well, but if it doesn’t deliver the key expected outputs to the end-user, the application cannot be considered working.
Functional tests verify whether specified functional requirements are met, whereas non-functional tests cover non-functional aspects like performance, security, scalability or quality of the application. To put it another way, functional testing is concerned with whether key functions are operating, and non-functional testing is more concerned with how the operations take place.
There are many types of functional tests that you may want to complete as you test your application.
A few of the most common include:
Unit testing breaks down the desired outcome into individual units, allowing you to test whether a small number of inputs (sometimes just one) produce the desired output. Unit tests tend to be among the smallest tests to write and execute quickly, as each is designed to cover only a single section of code (a function, method, object, etc.) and verify its functionality.
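For example, a unit test in Jest-style JavaScript might look like this (the add function is invented for illustration):

// One small unit of code, tested in isolation.
function add(a, b) {
  return a + b;
}

test('add returns the sum of its inputs', () => {
  expect(add(2, 3)).toBe(5);  // a single input/output pair
  expect(add(-1, 1)).toBe(0);
});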
Smoke testing is done to verify that the most critical parts of the application work as intended. It’s a first pass through the testing process, and is not intended to be exhaustive. Smoke tests ensure that the application is operational on a basic level. If it’s not, there’s no need to progress to more detailed testing, and the application can go right back to the development team for review.
Sanity testing is in some ways a cousin to smoke testing, as it is also intended to verify basic functionality and potentially avoid detailed testing of broken software. The difference is that sanity tests are done later in the process in order to test whether a new code change has had the desired effect. It is a “sanity check” on a specific change to determine if the new code roughly performs as expected.
Integration testing determines whether combinations of individual software modules function properly together. Individual modules may already have passed independent tests, but when they are dependent on other modules to operate successfully, this kind of testing is necessary to ensure that all parts work together as expected.
Regression testing makes sure that the addition of new code does not break existing functionalities. In other words, did your new code cause the quality of your application to “regress” or go backwards? Regression tests target the changes that were made and ensure the whole application continues to remain stable and function as expected.
Usability testing involves exposing your application to a limited group of real users in a production environment. The feedback from these live users – who have no prior experience with the application and may discover critical bugs that were unknown to internal teams – is used to make further changes to the application before a full launch.
UI/UX testing evaluates the graphical user interface of the application. The performance of UI components such as menus, buttons, text fields and more are verified to ensure that the user experience is ideal for the application’s users. UI/UX testing is also known as visual testing and can be manual or automated.
Other classifications of functional testing include black box testing, white box testing, component testing, API testing, system testing and production testing.
The essence of a functional test involves three steps: provide the application with an input, execute the task under test, and compare the actual output against the expected output.
Essentially, when you executed a task with an input (e.g., entering an email address into a text field and clicking submit), did your application generate the expected output (e.g., the user is subscribed and a thank-you page is displayed)?
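That question maps almost one-to-one onto an automated check, sketched here with Cypress using made-up selectors and copy:

// Input -> task -> expected output, as a functional test.
it('subscribes a user and shows the thank-you page', () => {
  cy.visit('/newsletter');
  cy.get('input[name="email"]').type('user@example.com'); // the input
  cy.get('button[type="submit"]').click();                // the task
  cy.contains('Thank you').should('be.visible');          // the expected output
});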
We can understand this further with a quick example.
Let’s begin with a straightforward application: a calculator.
To create a set of functional tests, you would need to identify the functions the calculator supports, determine the inputs and expected outputs for each, and then write and run tests that exercise them.
For more on how to create a functional test, you can see a full guide on how to write an automated functional test for this example.
There are many functional testing techniques you might use to design a test suite for this.
Other common functional testing techniques include equivalence testing, alternate flow testing, positive testing and negative testing.
Manual functional testing requires a developer or test engineer to design, create and execute every test by hand. It is flexible and can be powerful with the right team. However, as software grows in complexity and release windows get shorter, a purely manual testing strategy will face challenges keeping up a large degree of test coverage.
Automated functional testing automates many parts of the testing process, allowing tests to run continuously without human interaction – and with less chance for human error. Tests must still be designed and have their results evaluated by humans, but recent improvements in AI mean that with the right tool an increasing share of the load can be handled autonomously.
One way to automate your functional tests is by using automated visual testing. Automated visual testing uses Visual AI to view software in the same way a human would, and can automatically highlight any unexpected differences with a high degree of accuracy.
Visual testing allows you to test for visual bugs, which are otherwise extremely challenging to uncover with traditional functional testing tools. For example, if an unrelated change caused a “submit” button to be shifted to the far right of a page and it could no longer be clicked by the user, but it was still technically on the page and using the correct identifier, it would pass a traditional functional test. Visual testing would catch this bug and ensure functionality is not broken by a visual regression.
Here are a few key considerations to keep in mind when choosing an automated testing tool.
Automated testing tools can be paid or open source. Some popular open source tools include Selenium for web testing and Appium for mobile testing.
Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.
You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.
Happy Testing!
Looking to learn more about Functional Testing? Check out the resources below to find out more.
The post What is Functional Testing? Types and Example (Full Guide) appeared first on Automated Visual Testing | Applitools.
]]>The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on Automated Visual Testing | Applitools.
]]>As a product manager at Applitools I am excited to announce an enriched and updated integration with Testim.io! This enhanced integration makes it easier for testers of any technical ability to use Applitools and our AI-powered visual testing platform by using Testim.io to easily create your test scripts.
Testim.io is a cloud platform that allows users to create, execute, and maintain automated tests without using code.
It is a perfect tool for getting started with your first automated tests, if you do not have an existing automated testing framework or if you have not started to run tests yet. Testim.io allows you to integrate your own custom code into their steps so you can implement custom validations if you need to.
The visual validation powered by Applitools Eyes allows you to compare the expected results (the baseline) against the actual results after creating the tests in Testim.io. By using Visual AI to compare snapshots, Applitools Eyes can spot any unexpected changes and highlight them visually. This lets you expand your test coverage to include everything on a given page as well as visually verify your results quickly.
As part of the integration, you can modify test parameters to customize Eyes while working with the Testim UI.
This AI-based visual validation functionality is provided by Applitools and requires simple integration setup in the Eyes application. Learn more.
This up-to-date integration provides access to Applitools’ latest and greatest capabilities, including Ultrafast Test Cloud, enabling ultrafast cross-browser and cross-platform functional and visual testing. Testim users also now have access to Root Cause Analysis and many more powerful Applitools features!
The new integration also greatly improves on the user experience of test creators adding Applitools Eyes checkpoints to their Testim.io tests. Visual validations can be added right inside Testim and the maintenance and analysis of test results is much simpler.
You can perform the following visual validations: element, viewport, and full-page checks.
Whether you select the element, viewport, or full-page visualization option you can always override the visual setting for that test or step.
Several Applitools Eyes settings can be accessed via the Testim.io UI.
In addition to exposing new features in the Testim UI, we have provided better visibility for Testim tests in Applitools Eyes.
Testim.io allows users to quickly create and maintain tests through record and playback. Adding Applitools visual testing with Ultrafast Test Cloud capabilities will make sure your release cycles are short and test analysis and maintenance are easier than ever!
If you want to learn more about how you can integrate your codeless Testim tests with Applitools and benefit from the latest Applitools capabilities, head over to Testim.io documentation.
Contact us if you have any queries about Applitools!
Happy testing!
The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on Automated Visual Testing | Applitools.
]]>The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.
]]>In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.
From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on having an incredible amount of computing power available to them, along with millions of easy-to-use applications.
The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability for computer vision to see images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.
Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.
Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.
Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.
In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.
As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.
Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI.
Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.
One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.
Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant.
Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.
Traditional testing methods for software testing often require a lot of manual testing. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – requiring functional testing, visual testing and cross browser testing – has long been difficult to achieve with automation.
Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use.
At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster, even continuous, releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test. Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.
At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test.
That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.
Additionally, due to the high level of accuracy and the efficient validation of the entire screen, Visual AI opens the door to simplifying and accelerating the challenges of cross-browser and cross-device testing. By rendering snapshots across all the device/browser combinations rather than executing on each one, teams can get test results 18.2x faster with the Applitools Ultrafast Test Cloud than with traditional execution grids or device farms.
As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.
In the world of software testing, we’re excited to use Visual AI to move past simply improving automated testing: we are paving the way towards autonomous testing. For this vision (no pun intended), we have been repeatedly recognized as a leader by the industry and our customers.
What is Visual Testing (blog)
The Path to Autonomous Testing (video)
What is Applitools Visual AI (learn)
Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)
How AI Can Help Address Modern Software Testing (blog)
The Impact of Visual AI on Test Automation (report)
How Visual AI Accelerates Release Velocity (blog)
Modern Functional Test Automation Through Visual AI (free course)
Computer Vision defined (Wikipedia)
The post What is Visual AI? appeared first on Automated Visual Testing | Applitools.
]]>The post Announcing Applitools Eyes 10.13: Enhanced Team Collaboration, Baseline across Browser/OS Versions and More appeared first on Automated Visual Testing | Applitools.
]]>We are excited to announce the latest release of Applitools Eyes, 10.13. A big focus of this release is helping teams work efficiently, collaborate, and receive notifications on visual changes via the communication systems they are using everyday.
Along with the new and expanded integration options, we’ve added some additional improvements that we hope you’ll find useful!
In addition to sharing test results on both Slack and via email, the new Applitools Eyes-Microsoft Teams integration provides you with the option to receive and view your test results via your Microsoft Teams chat. The Applitools Eyes App sends notifications to your Microsoft Teams chat to inform you when batches have finished running and to share a results summary with you. Learn more.
New browser or OS versions often introduce visual differences, so it is important to test across multiple versions to ensure visual perfection across all screens. Applitools Eyes now supports efficient and simple testing of your application on new browser and OS versions. To save you time and effort, Eyes identifies the most relevant baseline to reuse whenever you test on a new version. It also allows you to easily filter and group baselines according to the browser or OS version you would like to explore. This capability is enabled by default for new accounts; existing users, please contact support to turn on this new and important capability for your accounts. Learn more.
Explore this new release to find out more! Existing customers can upgrade today for free, or if you’re new to Applitools feel free to explore the latest features with a free trial below.
The post Announcing Applitools Eyes 10.13: Enhanced Team Collaboration, Baseline across Browser/OS Versions and More appeared first on Automated Visual Testing | Applitools.
]]>The post What’s New In Selenium 4? appeared first on Automated Visual Testing | Applitools.
]]>(Editor’s Note: This post has been recently updated for accuracy and completeness. It was originally published in June 2020 by Manoj Kumar.)
There are a lot of cool and new things that just arrived in Selenium 4. If you haven’t heard, the official Selenium 4 release came out yesterday, and we’re excited by all the latest updates. We’ve got a full review of this long-awaited release ready for you, but first here’s a quick refresher on a few of the most interesting updates for Selenium 4.
After an extensive alpha and beta period to get everything right, Selenium 4 has now been officially released!
Release
Selenium 4.0 is here!
Read all about it in our blog post: https://t.co/E8ntH7OdaB
We hope you enjoy Selenium 4, and we can’t wait to see what you do with it!#selenium4
— Selenium (@SeleniumHQ) October 13, 2021
In the new release, changes have been made to the highly anticipated Relative Locators feature: returned elements are now sorted by proximity to make the results more deterministic. Proximity here means sorting by the distance between the midpoints of each element’s bounding client rect. Also new is the ability to use any selector (not just a tag name) to build relative locators.
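In the JavaScript bindings, for instance, a relative locator reads like this (the page structure is invented for the example):

// Find the input that sits directly below the email label.
const { Builder, By, locateWith } = require('selenium-webdriver');

async function relativeLocatorDemo() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/signup');
    const label = await driver.findElement(By.css('label[for="email"]'));
    // Any locator works here now, not just By.tagName().
    const input = await driver.findElement(locateWith(By.css('input')).below(label));
    await input.sendKeys('user@example.com');
  } finally {
    await driver.quit();
  }
}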
Also in this release, work on NetworkInterceptor has begun. This functionality, once complete, will be part of the new Chrome DevTools support and will allow testers to stub out responses to network requests!
Here are a few links outlining how you can get started with Selenium 4:
Although Selenium 4 is designed as a drop-in replacement for Selenium 3, it has some new tricks to help make your life as a tester easier. These include things like relative locators, new support for intercepting network traffic, changes in how you create a new Selenium instance, and more! Catch Selenium project lead Simon Stewart as he explains how these new features work and demonstrates how to use them. Learn how to take advantage of all that Selenium 4 can offer your tests!
What is your plan to move to Selenium 4.0? If you do not plan to upgrade, why not? What is preventing you from upgrading now that the official release is out?
To recap everything that’s new in the latest version of Selenium, keep reading for a full review of the cool things that have arrived in Selenium 4:
Selenium 4 is now released!
A lot of developments have happened since Selenium 4 was announced during the State of the Union Keynote by Simon Stewart and Manoj Kumar. There has been a significant amount of work done and we’ve released at least six alpha versions and four betas of Selenium 4 for users to try out and report back with any potential bugs so that we can make it right. Now, the official release is here.
It is exciting times for the Selenium community as we have a lot of new features and enhancements that make Selenium WebDriver even more usable and scalable for practical use cases.
Selenium is a suite of tools designed to support different user groups: Selenium WebDriver for browser automation, Selenium IDE for record-and-playback testing, and Selenium Grid for distributed, scaled-out execution.
Let us dive in and take a look at some of the significant features that were released in each of these tools, and share some of the cool new features that are now available in Selenium 4.
One of the main reasons to release WebDriver as a major version (Selenium 4) is the complete adoption of the W3C protocol. The W3C protocol dialect has been available since version 3.8 of Selenium WebDriver, alongside the JSON Wire Protocol. This change in protocol won’t impact users in any way, as all major browser drivers (such as geckodriver and chromedriver) and many third-party projects have already fully adopted the W3C protocol.
However, there are some notable new APIs in WebDriver, as well as the removal of deprecated ones. One example is the new window and tab handling, sketched below.
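Here is what that looks like in the JavaScript bindings (a small, self-contained sketch):

// Open a fresh tab and drive it, without any JavaScript workarounds.
const { Builder } = require('selenium-webdriver');

async function newWindowDemo() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com');
    await driver.switchTo().newWindow('tab'); // also accepts 'window'
    await driver.get('https://www.selenium.dev');
  } finally {
    await driver.quit();
  }
}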
What’s next in WebDriver beyond Selenium 4?
It would be nice to have users extend the locator strategies, like FindByImage or FindByAI (as in Appium); right now we have a hardcoded list of element location strategies. Providing a lightweight way of extending this set, particularly when using Selenium Grid, is on the roadmap.
The original Selenium IDE reached its end of life in August 2017, when Mozilla released Firefox 55, which switched its add-ons from the Mozilla-specific “XPI” format to the standardised “Web Extension” mechanism. This meant that the original Selenium IDE would no longer work in Firefox versions moving forwards.
Thanks to Applitools, Selenium IDE has been revived! It is one of the significant improvements in Selenium 4 and includes some notable changes.
What’s next in Selenium IDE?
A remarkable milestone for Selenium IDE is that it’s going to be available as a standalone app, re-written as an Electron app. Binding tightly to the browser allows the IDE to listen for events from the browser, making test recording more powerful and feature-rich.
One of the essential improvements in Selenium 4 is the ability to use Docker to spin up containers instead of users setting up heavy virtual machines. Selenium Grid has been redesigned so that users can deploy it on Kubernetes for excellent scaling and self-healing capabilities.
Let’s look at some of the significant improvements.
What’s next in Selenium Grid?
As you can see, there have been exciting changes and performance improvements, and a few more are expected to be added.
We’ve also refreshed our branding, documentation, and the website, so check out Selenium.dev!
Selenium is an open-source project, and we do this voluntarily, so there are never definite timelines that can be promised. Thanks for sticking with us; we’re excited that the new release is now here.
Please come and give us a hand if you have the energy and time! Happy hacking!
Thanks to Simon Stewart for helping review this post!
Manoj Kumar is a Principal Consultant at ThoughtWorks. Manoj is an avid open-source enthusiast, a committer to the Selenium and Appium projects, and a member of the Selenium project leadership committee. He has also contributed to various libraries and frameworks in the automated testing ecosystem, such as ngWebDriver, Protractor and Serenity. An avid accessibility practitioner who loves to share knowledge, he is a voluntary member of the W3C ACT-R group. In his free time, he contributes to open-source projects or researches accessibility, and enjoys spending time with his family. He blogs at AssertSelenium.
Cover Photo by Sepp Rutz on Unsplash
The post What’s New In Selenium 4? appeared first on Automated Visual Testing | Applitools.
]]>The post Front-End Test Fest 2021 Recap appeared first on Automated Visual Testing | Applitools.
Last month, Applitools and Cypress hosted the Front-End Test Fest, a free event that brought together leading experts in test automation for a full day of learning and discussion around front-end testing. It was a great opportunity to hear about the latest in the industry, along with some really innovative and interesting stories.
We’ve got all the videos ready for you, so feel free to jump right in below. But first, let’s recap the event.
This talk opened with Amir Rustamzadeh, Director of Developer Experience at Cypress, getting us all familiar with the latest and greatest in the testing tool. We all already know that Cypress is an excellent tool that is highly interactive and visual, but Amir took us on a tour of two new features that look pretty powerful.
These new features were Test Retries and Component Testing. Test Retries allows you to easily retry a test multiple times, helping you to catch and defeat test flake by highlighting how frequently a test passes with some handy analytics in the Cypress Dashboard. A related feature, Test Burn-In, allows you to do the same thing with brand new tests as they’re introduced. As for Component Testing, Amir noted that while this is typically done in a virtual DOM that you can’t debug, Cypress now has a beta where you can use the real browser DOM to mount a component and test it in isolation. Much better!
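Test Retries, for example, is switched on with a small configuration change. Here is what that looks like in a current cypress.config.js; the values are illustrative:

// Retry failing tests up to twice in CI runs, never in interactive mode.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  retries: {
    runMode: 2,  // applies to `cypress run`
    openMode: 0, // applies to `cypress open`
  },
});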
Angie Jones, Senior Director of Developer Relations at Applitools, then helped us understand the dangers of all-too-common visual bugs. Angie walked us through how Applitools Eyes can give your code super powers to find visual bugs, thanks to Visual AI. This talk covered visual component testing, visual testing of dynamic content, accessibility and localization testing, as well as cross-browser/viewport testing using the Ultrafast Grid. Check it out for a great overview of how to improve your visual testing.
Azure DevOps is a powerful tool, and if you’re curious about it, this talk will help get you started with it. Busra Alam, Software Quality Analyst, begins by covering the basics about what Azure, DevOps, and of course Azure DevOps means.
Azure Pipelines, part of Azure DevOps, is a tool to build, test and deploy your apps. By running tests in the pipeline, we can discover bugs early and deliver faster feedback, with quicker time to market and overall better code quality. Busra takes us through a live demo that shows how to create a pipeline, run a test and check the results, all automated through Azure, and quickly. She went on to share some advanced tips for running tests in parallel and utilizing release triggers. Check it out for the whole demo.
EverFi is a socially-conscious educational platform with a large number of courses and individualized learner paths. Greg Sypolt joined them as their VP of Quality Assurance to solve a tricky testing challenge they had – with so many different courses and paths for learners to take, traditional testing just couldn’t cover it all, and it would only get worse as EverFi grew.
Greg’s solution was to launch a multi-pronged approach centered around model-based testing. In this eye-opening talk you’ll see the step-by-step approach Greg used to build his models. Cypress and Applitools are critical components of the process, but there’s a lot more to it. This one is hard to sum up in a couple of sentences but is definitely worth watching to get the full story.
Stacy Kirk, CEO/Founder of QualityWorks Consulting Group, moderated this great panel with a trio of testing experts. Kristin Jackvony, Principal Engineer – Quality at Paylocity, Alfred Lucero, Senior Software Engineer at Twilio and Jeff Benton, Staff Software Engineer in Test at RxSaver share their experiences on a range of issues relevant to test engineers everywhere. Learn about the testing tools they used, tips for incorporating testing into the CI/CD process and how you can secure that crucial teamwide buy-in for testing. I won’t spoil it but the parting words from these experts make it clear that the first step for successful testing is to have the conversation with your team on the value of testing, and then just start – it’s ok if you start small with a quick win to get that buy-in quickly.
How can we avoid trapping ourselves underneath tests that are hard to maintain or, worse, don’t even deliver any value? Ramona Schwering, a developer on the core team at shopware AG, shared her own mistakes here (and yes, her love of Star Wars) to try to make sure you don’t have to repeat them. Ramona has worked as both a developer and in testing, so she knows how to speak to both experiences, and this was a very easy-to-follow, relatable talk. She shared three main pain points (or traps) that tests can fall into: they can be slow, they can be tough to maintain, and they can be “Heisen tests” that are so flaky they don’t tell you anything. Check this one out to hear more about traps and solutions and how you can keep your tests simple.
Colby Fayock, Developer Advocate at Applitools, kicked off his talk with a game of “UI Gone Wrong,” taking us through some cringeworthy examples of UI bugs from major organizations that probably cost them revenue or customers. You all know the kind of bug – it happens to everyone sometimes, but does it need to? With Cypress and Applitools working together, Colby showed us that you can do better. He walked us through a live demonstration of how you can easily add Applitools to an existing Cypress test, enhancing the browser automation provided by Cypress with Visual AI to catch any visual bugs. Take a look and see how you can take your testing to the next level.
As projects get increasingly complex, they get harder to maintain and changes become slower to deploy. That was the issue Hector Coronado and Joseph King were running into as frontend and web application engineers, respectively, at Autodesk. They were working on a React app they had built, the “Universal Help Module,” that provides users several types of support while appearing in multiple locations with varying layouts and UIs. To keep up with the growing complexity, they set out to build a fast and thorough CI/CD pipeline that would include an automated testing strategy.
Hector and Joseph moved away from manual testing and tried many tools for automated functional and visual testing. In the end, Cypress won big as a free all-in-one testing framework that is fast and open source, and they loved Applitools for its blazing speed, simple Cypress SDK, strong cross-browser capabilities and excellent customer support. They put them together to achieve the dream they used to get buy-in: more coverage with less code! Check out their full journey below.
You have limited time in your day – should you write that test or fix that bug? That’s the subhead for this talk by Kent C. Dodds, a JavaScript Engineer and Trainer at Kent C. Dodds Tech LLC. Unlike many of the presentations above, which are filled with awesome code examples and demos, Kent’s talk is intended to be a practical one with relatable examples to get you thinking about one key thing: How do you prioritize?
Kent describes his methodology for understanding what’s truly important to your company and its mission and how you can identify your role in pushing that forward. He also reminds all of us that we’re not simply hired as engineers to write code or tests, but as humans to advance a mission. Watch this video for some really humanizing inspiration and to spark some thoughts about how you can get more out of your day.
We’ve got you covered with another free event. Our next live Future of Testing: Mobile event takes place on August 10th, and registration is officially open. Check it out and reserve your spot today.
You can also check out all the videos from our Future of Testing: Mobile event in June here or get a full recap of our Future of Testing: Mobile event from April right here.
Happy testing!
The post Front-End Test Fest 2021 Recap appeared first on Automated Visual Testing | Applitools.
]]>