
Creating Your First Test With Google Chrome DevTools Recorder

There are many record and playback tools available, such as Selenium IDE, Testim, Katalon, and others.
Google recently launched its own recorder tool embedded directly into Chrome. It's always interesting when Google joins the game, so we decided to check it out.
The new tool is called Chrome DevTools Recorder.
Chrome’s new Recorder tool allows you to record and replay tests from the browser, export them as a JSON file (and more), as well as measure test performance. The DevTools Recorder was released in November 2021, and you can read all about it here.
Right off the bat, we were excited to see that the tool is straightforward and simple. Since it is embedded in the browser we have the convenience of not having to context switch or deal with an additional external tool.
Let’s see what Google has in store for us with the tool and check out just how easily and quickly we can run our first test.
We’ll do so by recording a test on the Coffee cart website and exporting it as a Puppeteer Replay script. To top it off, we will be sprinkling some Applitools magic onto it and see how easy it is to integrate visual testing into the new tool. Let’s go!
First things first, let’s open up our new tool and record a test.
Once the recording is done, we have our first automation script ready to run.
Given the recording, we have a few options before us: we can replay it or measure its performance.
Lastly, we have the option to export the test as a JSON file. This is a great feature as you can share the files with other users.
You can also export it as a Puppeteer Replay script right away. It allows you to customize, extend and replay the tests with the Puppeteer Replay library, which makes the tool even more useful for more experienced users.
One of the main weaknesses of Chrome’s Recorder tool is its very basic validation and standardized flow, with no option in the UI to build on top of it.
The ability to quickly record a stable automated test and export it to make it more customizable is an incredible feature. It can help create tests quickly and efficiently.
Export the recording as a Puppeteer Replay script and save it as main.mjs (we will customize this file to add in Applitools visual testing). Open the main.mjs file we exported just now. This is what the script looks like:
import url from 'url';
import { createRunner } from '@puppeteer/replay';

export const flow = {
  "title": "order-a-coffee",
  "steps": [
    {
      "type": "setViewport",
      "width": 380,
      "height": 604,
      "deviceScaleFactor": 1,
      "isMobile": false,
      "hasTouch": false,
      "isLandscape": false
    },
    ...
  ]
};

export async function run(extension) {
  const runner = await createRunner(flow, extension);
  await runner.run();
}

if (process && import.meta.url === url.pathToFileURL(process.argv[1]).href) {
  await run();
}
After we run npm install to pull in all the dependencies, we can replay the script above with the node main.mjs command.
The Puppeteer Replay library provides us with an API to replay and stringify recordings created using the Chrome DevTools Recorder.
The flow variable holds our recorded test steps. It is a JSON object. You can replace the flow value to read from a JSON file instead. Here is an example:
/* main.mjs */
import url from 'url';
import fs from 'fs';
import { createRunner, parse } from '@puppeteer/replay';
// Puppeteer: read the JSON user flow
const recordingText = fs.readFileSync('./your-exported-file.json', 'utf8');
export const flow = parse(JSON.parse(recordingText));
export async function run(extension) {
  ...
}
...
Run the script again. It returns the same result.
Puppeteer Replay offers a way to customize how a recording is run using the PuppeteerRunnerExtension class, which introduces very powerful and simple hooks such as beforeEachStep and afterAllSteps.
Puppeteer must be installed to customize the tests further. For example, the tests will launch in headless mode by default. In order for us to see how the browser runs the automated test we can turn it off.
Below you can see an example of extending this class and running in headful mode:
/* main.mjs */
...
import puppeteer from 'puppeteer';
import { PuppeteerRunnerExtension } from "@puppeteer/replay";

// Extend runner to log message in the Console
class Extension extends PuppeteerRunnerExtension {
  async beforeAllSteps(flow) {
    await super.beforeAllSteps(flow);
    console.log("starting");
  }

  async afterEachStep(step, flow) {
    await super.afterEachStep(step, flow);
    console.log("after", step);
  }
}

// Puppeteer: launch browser
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

// Puppeteer: read the JSON user flow
..

// Puppeteer: Replay the script
if (process && import.meta.url === url.pathToFileURL(process.argv[1]).href) {
  // add extension
  await run(new Extension(browser, page));
}

// Puppeteer: clean up
await browser.close();
Now that we understand the code it’s time to kick it up a notch by adding Applitools Eyes to the mix to enable visual testing.
Applitools Eyes is powered by Visual AI, the only AI-powered computer vision that replicates the human eyes and brain to quickly spot functional and visual regressions. Tests infused with Visual AI are created 5.8x faster, run 3.8x more stably, and detect 45% more bugs vs traditional functional testing.
Applitools also offers the Ultrafast Grid, which provides massively parallel test automation across all browsers, devices, and viewports. With the Ultrafast Grid, you run your test setup script once on your local machine, then the Applitools code takes a snapshot of the page HTML & CSS, and sends it to the grid for processing. This provides an out-of-the-box solution for cross-browser tests, so you don’t have to set up and maintain an in-house QA lab with multiple machines and devices.
Incorporating Applitools Eyes into Chrome DevTools Recorder only takes a few steps. Here’s an overview of the process, with the full details about each step below:
1. Install the SDK: npm i -D @applitools/eyes-puppeteer
2. Import the SDK: const {Eyes, Target, VisualGridRunner, BrowserType, DeviceName} = require('@applitools/eyes-puppeteer')
3. Add visual validations in the afterEachStep hook
4. Run the test: node <path_to_test.js>
As indicated above, to install the Applitools Puppeteer SDK run the following command:
npm i -D @applitools/eyes-puppeteer
/* main.mjs */
import { Eyes, Target, VisualGridRunner } from '@applitools/eyes-puppeteer';
/* applitools.config.mjs */
import { BrowserType, DeviceName } from '@applitools/eyes-puppeteer';
We define an Eyes instance alongside a Visual Grid runner, which is used with the Applitools Ultrafast Grid. The runner is defined globally because we use it at the end of the test to gather all the test results across all Eyes instances in the test. Eyes is usually defined globally as well, but it may also be defined locally for a specific test case. In Applitools terminology, a test consists of opening Eyes, performing any number of visual validations, and closing Eyes when we’re done. This flow defines a batch in Applitools that will hold our test, meaning we can have multiple tests in a single batch.
/* main.mjs */
// Puppeteer: launch browser
...
// Applitools: launch visual grid runner & eyes
const visualGridRunner = new VisualGridRunner({ testConcurrency: 5 });
const eyes = new Eyes(visualGridRunner);
We then create a function, setupEyes, that applies our configuration to Eyes before starting the test and before opening Eyes.
/* applitools.config.mjs */
import { BrowserType, DeviceName } from '@applitools/eyes-puppeteer';

export async function setupEyes(eyes, batchName, apiKey) {
  eyes.setApiKey(apiKey);
  const configuration = eyes.getConfiguration();
  configuration.setBatch({ name: batchName });
  configuration.setStitchMode('CSS');
  // Add browser
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.CHROME });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.FIREFOX });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.SAFARI });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.EDGE_CHROMIUM });
  configuration.addBrowser({ width: 1200, height: 800, name: BrowserType.IE_11 });
  configuration.addBrowser({ deviceName: DeviceName.Pixel_2 });
  configuration.addBrowser({ deviceName: DeviceName.iPhone_X });
  eyes.setConfiguration(configuration);
}
In this step we open Eyes right after initializing the browser and defining the page. The page is required in order to communicate and interact with the browser.
/* main.mjs */
// Applitools: launch visual grid runner & eyes
...
const apiKey = process.env.APPLITOOLS_API_KEY || 'REPLACE_YOUR_APPLITOOLS_API_KEY';
const name = 'Chrome Recorder Demo';
await setupEyes(eyes, name, apiKey);

await eyes.open(page, {
  appName: 'Order a coffee',
  testName: 'My First Applitools Chrome Recorder test!',
  visualGridOptions: { ieV2: true }
});
This is a good opportunity to explain what a baseline is. A baseline is a set of images that represents the expected result of a specific test that runs on a specific application in a specific environment. A baseline is created the first time you run a test in a specific environment. It is then updated whenever you make changes to any of the pages in your app and accept those changes in the Applitools Eyes Test Manager. Any future run of the same test in the same environment will be compared against the baseline.
By default, creating a test on a specific browser for the first time (e.g. Firefox) will create a new baseline, so running the same test on a different browser (e.g. Chrome) will form another new baseline.
The baseline is a unique combination of parameters such as the app name, test name, operating system, browser, and viewport size.
This means that by default a new baseline will be created for every combination that was not used before.
By calling eyes.check(), we are telling Eyes to perform a visual validation. Using the Fluent API, we can specify which target we would like to capture. Here we perform the visual validation in an afterEachStep hook to validate each step of the replay along the way. The target is specified to capture the window (the viewport) without the fully flag, which would otherwise force a full-page screenshot.
/* main.mjs */
...
// Extend runner to take screenshot after each step
class Extension extends PuppeteerRunnerExtension {
  async afterEachStep(step, flow) {
    await super.afterEachStep(step, flow);
    await eyes.check(`recording step: ${step.type}`, Target.window().fully(false));
    console.log(`after step: ${step.type}`);
  }
}
We must close Eyes at the end of our test, as not closing Eyes will result in an Applitools test running in an endless loop. This is due to the fact that while Eyes is open, you may perform any number of visual validations you desire.
By using the eyes.abortAsync functionality, we essentially tell Eyes to abort the test in case Eyes was not properly closed for some reason.
/* main.mjs */
...
// Puppeteer: clean up
await browser.close();
// Applitools: clean up
await eyes.closeAsync();
await eyes.abortAsync(); // abort if Eyes were not properly closed
Finally, after Eyes and the browser are closed, we may gather the test results using the runner.
/* main.mjs */
...
// Manage tests across multiple Eyes instances
const testResultsSummary = await visualGridRunner.getAllTestResults()
for (const testResultContainer of testResultsSummary) {
  const testResults = testResultContainer.getTestResults();
  console.log(testResults);
}
You can find the full code in this GitHub repository.
After running the test, you’ll see the results populate in the Applitools dashboard. In this case, our baseline and our checkpoint have no visual differences, so everything passed.
As we have already mentioned, the ability to quickly record a stable automated test and export it to make it more customizable is an incredible feature. Advanced users may also customize how a recording is stringified by extending the PuppeteerStringifyExtension class.
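For instance, here’s a minimal sketch of such an extension, based on the hooks documented in the Puppeteer Replay README (the injected comment is purely illustrative):

import { stringify, PuppeteerStringifyExtension } from '@puppeteer/replay';

// Prepend a banner comment to every generated script
class BannerStringifyExtension extends PuppeteerStringifyExtension {
  async beforeAllSteps(out, flow) {
    await super.beforeAllSteps(out, flow);
    out.appendLine('// Generated from a Chrome DevTools Recorder flow');
  }
}

const script = await stringify(flow, {
  extension: new BannerStringifyExtension(),
});
console.log(script);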
For example, I’d like to introduce you to the Cypress Chrome Recorder library, where you can convert the JSON file into a Cypress test script with one simple command. The library is built on top of Puppeteer Replay’s stringified feature.
We can convert our JSON recording file to a Cypress test with the following CLI command:
npm install -g @cypress/chrome-recorder
npx @cypress/chrome-recorder <relative path to target test file>
The output will be written to the cypress/integration folder. If you do not have that folder, you can get it by installing Cypress with npm install -D cypress in your project.
Once the test file is ready, we can simply run the test as we would run a standard Cypress test.
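For a rough idea of the output, a converted test might look something like this – a sketch only, with an assumed URL and illustrative selectors rather than the converter’s exact output:

describe('order-a-coffee', () => {
  it('replays the recorded flow', () => {
    // Viewport captured by the DevTools Recorder
    cy.viewport(380, 604);
    // The coffee cart demo site we recorded against (assumed URL)
    cy.visit('https://coffee-cart.app/');
    // Clicks converted from the recorded steps (selectors are illustrative)
    cy.contains('Espresso').click();
    cy.get('[data-test="checkout"]').click();
  });
});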
Although record and playback testing tools have their drawbacks and challenges, this looks like a very simple and useful tool from Google. Given how easy it is to use, it can be a good solution for creating simple scenarios or quick tests.
What we loved most about the tool was its simplicity. It is plain record and playback at the moment, with no advanced features, which makes it a great stepping stone for beginners in testing or even for non-coders.
As with any record-and-playback tool, one of the challenges is validation. Combined with the ease and speed of adding and running Applitools Eyes, you can start validating your UI in no time – find all the visual regressions and make sure your application is visually perfect.
Applitools Eyes has many advanced features, including AI-powered auto-maintenance, which analyzes differences across all your tests and shows only distinct differences, allowing you to approve or reject changes that automatically apply across all similar changes in your test suite. Learn more about the Applitools platform and sign up for your own free account today.

How to Avoid Split Batches When Running Applitools Tests in Parallel
Parallel testing is a powerful tool you can use to speed up your Applitools tests, but ensuring test batches are grouped together and not split is a common issue. Here’s how to avoid it.
Visual testing with Applitools Eyes is an awesome way to supercharge your automated tests with visual checkpoints that catch more problems than traditional assertions. However, just like with any other kind of UI testing, test execution can be slow. The best way to shorten the total start-to-finish time for any automated test is parallelization. Applitools Ultrafast Grid performs ultrafast visual checkpoints concurrently in the cloud, but the functional tests that initially capture those snapshots can also be optimized with parallel execution. Frameworks like JUnit, SpecFlow, pytest, and Mocha all support parallel testing.
If you parallelize your automated test suite in addition to your visual snapshot analysis, then you might need to inject a custom batch ID to group all test results together. What? What’s a batch, and why does it need a special ID? I hit this problem recently while automating visual tests with Playwright. Let me show you the problem with batches for parallel tests, and then I’ll show you the right way to handle it.
If you haven’t already heard, Playwright is a relatively new web testing framework from Microsoft. I love it because it solves many of the problems with browser automation, like setup, waiting, and network control. Playwright also has implementations in JavaScript/TypeScript, Python, Java, and C#.
Typically, I program Playwright in Python, but today, I tried TypeScript. I wrote a small automated test suite to test the AppliFashion demo web app. You can find my code on GitHub here: https://github.com/AutomationPanda/applitools-holiday-hackathon-2020.
The file tests/hooks.ts contains the Applitools setup:
import { test } from '@playwright/test';
import { Eyes, VisualGridRunner, Configuration, BatchInfo, BrowserType, DeviceName } from '@applitools/eyes-playwright';

export let Runner: VisualGridRunner;
export let Batch: BatchInfo;
export let Config: Configuration;

test.beforeAll(async () => {
  Runner = new VisualGridRunner({ testConcurrency: 5 });
  Batch = new BatchInfo({name: 'AppliFashion Tests'});
  Config = new Configuration();
  Config.setBatch(Batch);
  Config.addBrowser(1200, 800, BrowserType.CHROME);
  Config.addBrowser(1200, 800, BrowserType.FIREFOX);
  Config.addBrowser(1200, 800, BrowserType.EDGE_CHROMIUM);
  Config.addBrowser(1200, 800, BrowserType.SAFARI);
  Config.addDeviceEmulation(DeviceName.iPhone_X);
});
Before all tests start, it sets up a batch named “AppliFashion Tests” to run the tests against five different browser configurations in the Ultrafast Grid. This is a one-time setup.
Among other pieces, this file also contains a function to build the Applitools Eyes object using Runner and Config:
export function buildEyes() {
  return new Eyes(Runner, Config);
}
The file tests/applifashion.spec.ts contains three tests, each with visual checks:
import { test } from '@playwright/test';
import { Eyes, Target } from '@applitools/eyes-playwright';
import { buildEyes, getAppliFashionUrl } from './hooks';

test.describe('AppliFashion', () => {
  let eyes: Eyes;
  let url: string;

  test.beforeAll(async () => {
    url = getAppliFashionUrl();
  });

  test.beforeEach(async ({ page }) => {
    eyes = buildEyes();
    await page.setViewportSize({width: 1600, height: 1200});
    await page.goto(url);
  });

  test('should load the main page', async ({ page }) => {
    await eyes.open(page, 'AppliFashion', '1. Main Page');
    await eyes.check('Main page', Target.window().fully());
    await eyes.close(false);
  });

  test('should filter by color', async ({ page }) => {
    await eyes.open(page, 'AppliFashion', '2. Filtering');
    await page.locator('id=SPAN__checkmark__107').click();
    await page.locator('id=filterBtn').click();
    await eyes.checkRegionBy('#product_grid', 'Filter by color')
    await eyes.close(false);
  });

  test('should show product details', async ({ page }) => {
    await eyes.open(page, 'AppliFashion', '3. Product Page');
    await page.locator('text="Appli Air x Night"').click();
    await page.locator('id=shoe_img').waitFor();
    await eyes.check('Product details', Target.window().fully());
    await eyes.close(false);
  });

  test.afterEach(async () => {
    await eyes.abort();
  });
});
By default, Playwright would run these three tests using one “worker,” meaning they would be run serially. We can run them in parallel by adding the following setting to playwright.config.ts:
import type { PlaywrightTestConfig } from '@playwright/test';
import { devices } from '@playwright/test';

const config: PlaywrightTestConfig = {
  //...
  fullyParallel: true,
  //...
};

export default config;
Now, Playwright will use one worker per processor or core on the machine running tests (unless you explicitly set the number of workers otherwise).
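If you’d rather pin the worker count explicitly, a minimal tweak to the same config file might look like this (the value 3 is only an illustrative cap):

import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  fullyParallel: true,
  workers: 3, // cap Playwright at three parallel worker processes
};

export default config;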
We can run these tests using the command npm test. (Available scripts can be found under package.json.) On my machine, they ran (and passed) with three workers. When we look at the visual checkpoints in the Applitools dashboard, we’d expect to see all results in one batch. However, we see this instead:
What in the world? There are three batches, one for each worker! All the results are there, but split batches will make it difficult to find all results, especially for large test suites. Imagine if this project had 300 or 3000 tests instead of only 3.
The docs on how Playwright Test handles parallel testing make it clear why the batch is split into three parts:
Note that parallel tests are executed in separate worker processes and cannot share any state or global variables.
Each test executes all relevant hooks just for itself, including beforeAll and afterAll.
So, each worker process essentially has its own “copy” of the automation objects. The BatchInfo object is not shared between these tests, which causes there to be three separate batches.
Unfortunately, batch splits are a common problem for parallel testing. I hit this problem with Playwright, but I’m sure it happens with other test frameworks, too.
Thankfully, there’s an easy way to fix this problem: share a unique batch ID between all concurrent tests. Every batch has an ID. According to the docs, there are three ways to set this ID:
1. Set it explicitly in the BatchInfo object.
2. Set the APPLITOOLS_BATCH_ID environment variable.
3. Let Applitools generate an ID automatically.
My original code fell to option 3: I didn’t specify a batch ID, so each worker created its own BatchInfo object with its own automatically generated ID. That’s why my test results were split into three batches.
Option 1 is the easiest solution. We could hardcode a batch ID like this:
Batch = new BatchInfo({name: 'AppliFashion Tests', id: 'applifashion'});
However, hardcoding IDs is not a good solution. This ID would be used for every batch this test suite ever runs. Applitools has features to automatically close batches, but if separate batches run too close together, then they could collide on this common ID and be reported as one batch. Ideally, each batch should have a unique ID. Unfortunately, we cannot generate a unique ID within Playwright code because objects cannot be shared across workers.
Therefore, option 2 is the best solution. We could set the APPLITOOLS_BATCH_ID environment variable to a unique ID before each test run. For example, on macOS or Linux, we could use the uuidgen command to generate UUIDs like this:
APPLITOOLS_BATCH_ID=$(uuidgen) npm test
The ID doesn’t need to be a UUID. It could be any string, like a timestamp. However, UUIDs are recommended because the chances of generating duplicate IDs is near-zero. Timestamps are more likely to have collisions. (If you’re on Windows, then you’ll need to come up with a different command for generating unique IDs than the one shown above.)
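For example, in PowerShell you could generate the ID like this (a sketch; any source of sufficiently unique strings will do):

$env:APPLITOOLS_BATCH_ID = [guid]::NewGuid().ToString()  # fresh UUID for this run
npm test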
Now, when I run my test with this injected batch ID, all visual test results fall under one big batch:
That’s the way it should be! Much better.
I always recommend setting a concise, informative batch name for your visual tests. Setting a batch ID, however, is something you should do only when necessary – such as when tests run concurrently. If you run your tests in parallel and you see split batches, give the APPLITOOLS_BATCH_ID environment variable a try!

How to Visually Test a Remix App with Applitools and Cypress
Is Remix too new to be visually tested? Let’s find out with Applitools and Cypress.
In this blog post, we answer a single question: how to best visually test a Remix-based app?
We walk through Remix and build a demo app to best showcase the framework. Then, we take a deep dive into visual testing with Applitools and Cypress. We close on scaling our test coverage with the Ultrafast Test Cloud to perform cross-browser validation of the app.
So let’s begin our exciting journey of learning how to visually test a Remix-based app.
Web development is an ever-growing space with almost as many ways to build web apps as there are stars in the sky. And it ultimately translates into just how many different User Interface (UI) frameworks and libraries there are. One such library is React, which most people in the web app space have heard about, or even used to build a website or two.
For those unfamiliar with React, it’s a declarative, component-based library that developers can use to build web apps across different platforms. While React is a great way to develop robust and responsive UIs, many moving pieces still happen behind the scenes. Things like data loading, routing, and more complex work like Server-Side Rendering are what a new framework called Remix can handle for React apps.
Remix is a full-stack web framework that optimizes data loading and routing, making pages load faster and improving overall User Experience (UX). The days are long past when our customers would wait minutes while a website reloads, while moving from one page to another, or expecting an update on their feed. Features like Server-Side Rendering, effective routing, and data loading have become the must for getting our users the experience they want and need. The Remix framework is an excellent open-source solution for delivering these features to our audience and improving their UX.
Our end-users shouldn’t care what framework we used to build a website. What matters to our users is that the app works and lets them achieve their goals as fast as possible. In the same way, the testing principles always remain the same, so UI testing shouldn’t be impacted by the frameworks used to create an app. The basics of how we test stay the same although some testing aspects could change. For example, in the case of an Angular app, we might need to adjust how we wait for the site to fully load by using a specialized test framework like Protractor.
Most tests follow a straightforward pattern of Arrange, Act, and Assert. Whether you are writing a unit test, an integration test, or an end-to-end test, everything follows this cycle of setting up the data, running through a set of actions and validating the end state.
When writing these end-to-end tests, we need to put ourselves in the shoes of our users. What matters most in this type of testing is replicating a set of core use-cases that our end-users go through. It could be logging into an app, writing a new post, or navigating to a new page. That’s why UI test automation frameworks like Applitools and Cypress are fantastic for testing – they are largely agnostic of the platform they are testing. With these tools in hand, we can quickly check Remix-based apps the same way we would test any other web application.
The main goal of testing is to confirm the behavior that our users actually see and go through. That is why simply loading UI elements and validating inner text or styling is not enough. Our customers are not interested in HTML or CSS; what they care about is what they can see and interact with on our site, not the code behind it. Element-level assertions alone cannot provide robust coverage of the complex UIs that modern web apps have. We can close this gap with visual testing.
Visual testing allows us to see our app from our customers’ point of view. And that’s where the Applitools Eyes SDK comes in! This visual testing tool can enhance the existing end-to-end test coverage to ensure our app is pixel-perfect.
To put it simply, Applitools allows developers to effectively compare visual elements across various screens to find visible defects. Applitools can record our UI elements in its platform and then monitor any visual regressions that our customers might encounter. More specifically, this testing framework exposes the visible differences between baseline snapshots and future snapshots.
Applitools has integrations with numerous testing platforms like Cypress, WebdriverIO, Selenium, and many others. For this article, we will showcase Applitools with Cypress to add visual test coverage to our Remix app.
We can’t talk about a framework like Remix without seeing it in practice. That’s why we put together a demo app to best showcase Remix and later test it with Applitools and Cypress.
We based this app on the Remix Developer Blog app that highlights the core functionalities of Remix: data loading, actions, redirects, and more. We shared this demo app and all the tests we cover in this article in this repository so that our readers can follow along.
Before diving into writing tests, we must ensure that our Remix demo application is running.
To start, we need to clone a project from this repository:
git clone https://github.com/dmitryvinn/remix-demo-app-applitools
Then, we navigate into the project’s root directory and install all dependencies:
cd remix-demo-app-applitools
npm install
After we install the necessary dependencies, our app is ready to start:
npm run dev
After we launch the app, it should be available at http://localhost:3000/, unless the port is already taken. With our Remix demo app fully functional, we can transition into testing Remix with Applitools and Cypress.
There is this great quote from a famous American economist, Richard Thaler: “If you want people to do something, make it easy.” That’s what Applitools and Cypress did by making testing easy for developers, so people don’t see it as a chore anymore.
To run our visual test automation using Applitools, we first need to set up Cypress, which will play the role of test runner. We can think about Cypress as a car’s body, whereas Applitools is an engine that powers the vehicle and ultimately gets us to our destination: a well-tested Remix web app.
Cypress is an open-source JavaScript end-to-end testing framework developers can use to write fast, reliable, and maintainable tests. But rather than reinventing the wheel and talking about the basics of Cypress, we invite our readers to learn more about using this automation framework on the official site, or from this course at Test Automation University.
To install Cypress, we only need to run a single command:
npm install cypress
Then, we need to initialize the cypress folder to write our tests. The easiest way to do it is by running the following:
npx cypress open
This command will open Cypress Studio, which we will cover later in the article, but for now we can safely close it. We also recommend deleting the sample test suites that Cypress created for us under cypress/integration.
Note: If npx is missing on the local machine, follow these steps on how to update the Node package manager, or run ./node_modules/.bin/cypress open instead.
Installing the Applitools Eyes SDK with Cypress is a very smooth process. In our case, because we already had Cypress installed, we only need to run the following:
npm install @applitools/eyes-cypress --save-dev
To run Applitools tests, we need to get the Applitools API key so our test automation can use the Eyes platform, including recording the UI elements, validating any changes on the screen, and more. This page outlines how to get this APPLITOOLS_API_KEY from the platform.
After getting the API key, we have two options for adding the key to our test suite: using the CLI or an Applitools configuration file. Later in this post, we explore how to scale Applitools tests, and the configuration file will play a significant role in that effort. Hence, we continue by creating applitools.config.js in our root directory.
Our configuration file will begin with the most basic setup: running a single test thread (testConcurrency) for one browser (the browser field). We also need to add our APPLITOOLS_API_KEY under the apiKey field, which will look something like this:
module.exports = {
  testConcurrency: 1,
  apiKey: "DONT_SHARE_OUR_APPLITOOLS_API_KEY",
  browser: [
    // Add browsers with different viewports
    { width: 800, height: 600, name: "chrome" },
  ],
  // set batch name to the configuration
  batchName: "Remix Demo App",
};
Now, we are ready to move onto the next stage of writing our visual tests with Applitools and Cypress.
One of the best things about Applitools is that it integrates nicely with our existing tests through a straightforward API.
For this example, we visually test a simple form on the Actions page of our Remix app.
To begin writing our tests, we need to create a new file named actions-page.spec.js in the cypress/integration folder.
Since we rely on Cypress as our test runner, we will continue using its API for writing the tests. For the basic Actions page test, where we validate that the page renders visually correctly, we start with this code snippet:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
// ...
// Act
// ..
// Assert
// ..
// Cleanup
// ..
});
});
We continue following the same pattern of Arrange-Act-Assert, but now we also want to ensure that we close all the resources we used while performing the visual testing. To begin our test case, we need to visit the Action page:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
// ..
// Assert
// ..
// Cleanup
// ..
});
});
Now, we can begin the visual validation by using the Applitools Eyes framework. We need to “open our eyes,” so to speak, by calling cy.eyesOpen(). It initializes our test runner for Applitools to capture critical visual elements just like we would with our own eyes:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
cy.eyesOpen({
appName: "Remix Demo App",
testName: "Validate Action Form",
});
// Assert
// ..
// Cleanup
// ..
});
});
Note: Technically speaking, cy.eyesOpen() should be part of the Arrange step of writing the test, but for educational purposes we are placing it under the Act portion of the test case.
Now, to move to the validation phase, we need Applitools to take a screenshot and match it against the existing version of the same UI elements. These screenshots are saved in our Applitools account, and unless we are running the test case for the first time, the Applitools framework will match these UI elements against the version that we previously saved:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
cy.eyesOpen({
appName: "Remi Demo App",
testName: "Validate Action Form",
});
// Assert
cy.eyesCheckWindow("Action Page");
// Cleanup
// ..
});
});
Lastly, we need to close our test runner for Applitools by calling cy.eyesClose(). With this step, we now have a complete Applitools test case for our Actions page:
describe("Actions page form", () => {
it("Visually confirms action form renders", () => {
// Arrange
cy.visit("http://localhost:3000/demos/actions");
// Act
cy.eyesOpen({
appName: "Remi Demo App",
testName: "Validate Action Form",
});
// Assert
cy.eyesCheckWindow("Action Page");
// Cleanup
cy.eyesClose();
});
});
Note: Although we added a cleanup stage with cy.eyesClose() in the test case itself, we highly recommend moving this method out of the it() function and into an afterEach() hook that runs for every test, avoiding code duplication, as shown in the sketch below.
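A minimal sketch of that refactoring, reusing the same Eyes calls from above:

describe("Actions page form", () => {
  afterEach(() => {
    // Runs after every test in this block, so Eyes always gets closed exactly once
    cy.eyesClose();
  });

  it("Visually confirms action form renders", () => {
    cy.visit("http://localhost:3000/demos/actions");
    cy.eyesOpen({
      appName: "Remix Demo App",
      testName: "Validate Action Form",
    });
    cy.eyesCheckWindow("Action Page");
  });
});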
After the hard work of planning and then writing our test suite, we can finally start running our tests. And it couldn’t be easier than with Applitools and Cypress!
We have two options of either executing our tests by using Cypress CLI or Cypress Studio.
Cypress Studio is a great option when we first write our tests because we can walk through every case, stop the process at any point, or replay any failures. These reasons are why we should use Cypress Studio to demonstrate best how these tests function.
We begin running our cases by invoking the following from the project’s root directory:
npm run cypress-open
This operation opens Cypress Studio, where we can select what test suite to run:
To validate the result, we need to visit our Applitools dashboard:
To make it interesting, we can cause this test to fail by changing the text on the Actions page. We could change the heading to say “Failed Actions!” instead of the original “Actions!” and re-run our test.
This change will cause our original test case to fail because it will catch a difference in the UI (in our case, it’s because of the intentional renaming of the heading). This error message is what we will see in the Cypress Studio:
To further deal with this failure, we need to visit the Applitools dashboard:
As we can see, the latest test run is shown as Unresolved, and we might need to resolve the failure. To see what the difference in the newest test run is, we only need to click on the image in question:
A great thing about Applitools is that their visual AI algorithm is so advanced that it can test our application on different levels to detect content changes as well as layout or color updates. What’s especially important is that Applitools’ algorithm prevents false positives with built-in functionalities like ignoring content changes for apps with dynamic content.
In our case, the test correctly shows that the heading changed, and it’s now up to us to either accept the new UI or reject it and call this failure a legitimate bug. Applitools makes it easy to choose the correct course of action as we only need to press thumbs up to accept the test result or thumbs down to decline it.
Accepting or Rejecting Test Run in Applitools Dashboard
In our case, the test case failed due to a visual bug that we introduced by “unintentionally” updating the heading.
After finishing our work in the Applitools Dashboard, we can bring the test results back to the developers and file a bug on whoever made the UI change.
But are we done? What about testing our web app on different browsers and devices? Fortunately, Applitools has a solution to quickly scale our test automation and add cross-browser coverage.
Testing an application against one browser is great, but what about all others? We have checked our Remix app on Chrome, but we didn’t see how the app performs on Firefox, Microsoft Edge, and so on. We haven’t even started looking into mobile platforms and our web app on Android or iOS. Introducing this additional test coverage can get out of hand quickly, but not with Applitools and their Ultrafast Test Cloud. It’s just one configuration change away!
With this cloud solution from Applitools, we can test our app across different browsers without any additional code. We only have to update our Applitools configuration file, applitools.config.js.
Below is an example of how to add coverage for desktop browsers like Chrome, Firefox, Safari, and IE11, plus two extra test cases for different models of mobile phones:
module.exports = {
  testConcurrency: 1,
  apiKey: "DONT_SHARE_YOUR_APPLITOOLS_API_KEY",
  browser: [
    // Add browsers with different viewports
    { width: 800, height: 600, name: "chrome" },
    { width: 700, height: 500, name: "firefox" },
    { width: 1600, height: 1200, name: "ie11" },
    { width: 800, height: 600, name: "safari" },
    // Add mobile emulation devices in Portrait or Landscape mode
    { deviceName: "iPhone X", screenOrientation: "landscape" },
    { deviceName: "Pixel 2", screenOrientation: "portrait" },
  ],
  // set batch name to the configuration
  batchName: "Remix Demo App",
};
It’s important to note that when specifying the configuration for different browsers, we need to define their width and height, with an additional screenOrientation property to cover non-desktop devices. These settings are critical for testing responsive apps because many modern websites visually differ depending on the devices our customers use.
After updating the configuration file, we need to re-run our test suite with npm test. Fortunately, with the Applitools Ultrafast Test Cloud, it only takes a few seconds to finish running our tests on all browsers, so we can visit our Applitools dashboard to view the results right away:
As we can see, with only a few lines in the configuration file, we scaled our visual tests across multiple devices and browsers. We save ourselves time and money whenever we can get extra test coverage without explicitly writing new cases. Maintaining test automation that we write is one of the most resource-consuming steps of the Software Development Life Cycle. With solutions like Applitools Ultrafast Test Cloud, we can write fewer tests while increasing our test coverage for the entire app.
Hopefully, this article showed that the answer is yes; we can successfully visually test Remix-based apps with Applitools and Cypress!
Remix is a fantastic framework to take User Experience to the next level, and we invite you to learn more about it during the webinar by Kent C. Dodds “Building Excellent User Experiences with Remix”.
For more information about Applitools, visit their website, blog and YouTube channel. They also provide free courses through Test Automation University that can help take anyone’s testing skills to the next level.

How to Run Cross Browser Tests with Cypress on All Browsers
Learn how you can run cross-browser Cypress tests against any browser, including Safari, IE and mobile browsers.
Ah, Cypress – the darling end-to-end test framework of the JavaScript world. In the past few years, Cypress has surged in popularity due to its excellent developer experience. It runs right in the browser alongside web apps, making it a natural fit for frontend developers. Its API is both concise and powerful. Its interactions automatically handle waiting to avoid any chance of flakiness. Cypress almost seems like a strong contender to dethrone Selenium WebDriver as the king of browser automation tools.
However, Cypress has a critical weakness: it cannot natively run tests against all browser types. At the time of writing this article, Cypress supports only a limited set of browsers: Chrome, Edge, and Firefox. That means no support for Safari or IE. Cypress also doesn’t support mobile web browsers. Ouch! These limitations alone could make you think twice about choosing to automate your tests in Cypress.
Thankfully, there is a way to run Cypress tests against any browser type, including Safari, IE, and mobile browsers: using Applitools Ultrafast Test Grid. With the help of Applitools, you can achieve full cross-browser testing with Cypress, even for large-scale test suites. Let’s see how it’s done. We’ll start with a basic Cypress test, and then we’ll add visual snapshots that can be rendered in any browser in the Applitools cloud.
Let’s define a basic web app login test for the Applitools demo site. The site mimics a basic banking app. The first page is a login screen:
You can enter any username or password to login. Then, the main page appears:
Nothing fancy here. The steps for our test case are straightforward:
Scenario: Successful login
Given the login page is displayed
When the user enters their username and password
And the user clicks the login button
Then the main page is displayed
These steps would be the same for the login behavior of any other application.
Let’s automate our login test using Cypress. Create a JavaScript project and install Cypress. Then, create a new test case spec: cypress/integration/login.spec.js. Add the following test case to the spec file:
describe('Login', () => {
  beforeEach(() => {
    cy.viewport(1600, 1200)
  })

  it('should log into the demo app', () => {
    loadLoginPage()
    verifyLoginPage()
    performLogin()
    verifyMainPage()
  })
})
Cypress uses Mocha as its core test framework. The beforeEach call makes sure the browser viewport is large enough to show all elements in the demo app. The test case itself has a helper function for each test step.
The first function, loadLoginPage, loads the login page:
function loadLoginPage() {
  cy.visit('https://demo.applitools.com')
}
The second function, verifyLoginPage, makes sure that the login page loads correctly:
function verifyLoginPage() {
  cy.get('div.logo-w').should('be.visible')
  cy.get('#username').should('be.visible')
  cy.get('#password').should('be.visible')
  cy.get('#log-in').should('be.visible')
  cy.get('input.form-check-input').should('be.visible')
}
The third function, performLogin, actually does the interaction of logging in:
function performLogin() {
  cy.get('#username').type('andy')
  cy.get('#password').type('i<3pandas')
  cy.get('#log-in').click()
}
The fourth and final function, verifyMainPage, makes sure that the main page loads correctly:
function verifyMainPage() {
  // Check various page elements
  cy.get('div.logo-w').should('be.visible')
  cy.get('div.element-search.autosuggest-search-activator > input').should('be.visible')
  cy.get('div.avatar-w img').should('be.visible')
  cy.get('ul.main-menu').should('be.visible')
  cy.contains('Add Account').should('be.visible')
  cy.contains('Make Payment').should('be.visible')
  cy.contains('View Statement').should('be.visible')
  cy.contains('Request Increase').should('be.visible')
  cy.contains('Pay Now').should('be.visible')

  // Check time message
  cy.get('#time').invoke('text').should('match', /Your nearest branch closes in:( \d+[hms])+/)

  // Check menu element names
  cy.get('ul.main-menu li span').should(items => {
    expect(items[0]).to.contain.text('Card types')
    expect(items[1]).to.contain.text('Credit cards')
    expect(items[2]).to.contain.text('Debit cards')
    expect(items[3]).to.contain.text('Lending')
    expect(items[4]).to.contain.text('Loans')
    expect(items[5]).to.contain.text('Mortgages')
  })

  // Check transaction statuses
  const statuses = ['Complete', 'Pending', 'Declined']
  cy.get('span.status-pill + span').each(($span, index) => {
    expect(statuses).to.include($span.text())
  })
}
The first three functions are fairly concise, but the fourth one is a doozy. The main page has so many things to check, and despite its length, this step doesn’t even check everything!
Run this test locally to make sure it works (npx cypress open). It should pass using any local browser (Chrome, Edge, Electron, or Firefox).
You could run this login test on your local machine or from your Continuous Integration (CI) service, but in its present form, it can’t run on those extra browsers (Safari, IE, mobile). To do that, we need the help of visual testing techniques using Applitools Visual AI and the Ultrafast Test Cloud.
Visual testing is the practice of inspecting visual differences between snapshots of screens in the app you are testing. You start by capturing a “baseline” snapshot of, say, the login page to consider as “right” or “expected.” Then, every time you run the tests, you capture a new snapshot of the same page and compare it to the baseline. By comparing the two snapshots side-by-side, you can detect any visual differences. Did a button go missing? Did the layout shift to the left? Did the colors change? If nothing changes, then the test passes. However, if there are changes, a human tester should review the differences to decide if the change is good or bad.
Manual testers have done visual testing since the dawn of computer screens. Applitools Visual AI simply automates the process. It highlights differences in side-by-side snapshots so you don’t miss them. Furthermore, Visual AI focuses on meaningful changes that human eyes would notice. If an element shifts one pixel to the right, that’s not a problem. Visual AI won’t bother you with that noise.
If a picture is worth a thousand words, then a visual snapshot is worth a thousand assertions. We could update our login test to take visual snapshots using the Applitools Eyes SDK in place of lengthy assertions. Visual snapshots provide stronger coverage than the previous assertions. Remember how verifyMainPage had several checks but still didn’t cover all the elements on the page? A visual snapshot would implicitly capture everything with only one line of code. Visual testing like this enables more effective functional testing than traditional assertions.
But back to the original problem: how does this enable us to run Cypress tests in Safari, IE, and mobile browsers? That’s the magic of snapshots. Notice how I said “snapshot” and not “screenshot.” A screenshot is merely a grid of static pixels. A snapshot, however, captures full page content – HTML, CSS, and JavaScript – that can be re-rendered in any browser configuration. If we update our Cypress test to take visual snapshots of the login page and the main page, then we could run our test one time locally to capture the snapshots, Then, the Applitools Eyes SDK would upload the snapshots to the Applitools Ultrafast Test Cloud to render them in any target browser – including browsers not natively supported by Cypress – and compare them against baselines. All the heavy work for visual checkpoints would be done by the Applitools Ultrafast Test Cloud, not by the local machine. It also works fast, since re-rendering snapshots takes much less time than re-running full Cypress tests.
Let’s turn our login test into a visual test. First, make sure you have an Applitools account. You can register for a free account to get started.
Your account comes with an API key. Visual tests using Applitools Eyes need this API key for uploading results to your account. On your machine, set this key as an environment variable.
On Linux and macOS:
$ export APPLITOOLS_API_KEY=<value>
On Windows:
> set APPLITOOLS_API_KEY=<value>
Time for coding! The test case steps remain the same, but the test case must be wrapped by calls to Applitools Eyes:
describe('Login', () => {
  it('should log into the demo app', () => {
    cy.eyesOpen({
      appName: 'Applitools Demo App',
      testName: 'Login',
    })
    loadLoginPage()
    verifyLoginPage()
    performLogin()
    verifyMainPage()
  })

  afterEach(() => {
    cy.eyesClose()
  })
})
Before the test begins, cy.eyesOpen(...) tells Applitools Eyes to start watching the browser. It also sets names for the app under test and the test case itself. Then, at the conclusion of the test, cy.eyesClose() tells Applitools Eyes to stop watching the browser.
The interaction functions, loadLoginPage and performLogin, do not need any changes. The verification functions do:
function verifyLoginPage() {
  cy.eyesCheckWindow({
    tag: "Login page",
    target: 'window',
    fully: true
  });
}

function verifyMainPage() {
  cy.eyesCheckWindow({
    tag: "Main page",
    target: 'window',
    fully: true,
    matchLevel: 'Layout'
  });
}
All the assertion calls are replaced by one-line snapshots using Applitools Eyes. These snapshots capture the full window for both pages. The main page also sets a match level to “layout” so that differences in text and color are ignored.
The test code changes are complete, but you need to do one more thing: you must specify browser configurations to test in the Applitools Ultrafast Test Cloud. Add a file named applitools.config.js to the root level of the project, and add the following content:
module.exports = {
  testConcurrency: 5,
  apiKey: 'APPLITOOLS_API_KEY',
  browser: [
    // Desktop
    {width: 800, height: 600, name: 'chrome'},
    {width: 700, height: 500, name: 'firefox'},
    {width: 1600, height: 1200, name: 'ie11'},
    {width: 1024, height: 768, name: 'edgechromium'},
    {width: 800, height: 600, name: 'safari'},
    // Mobile
    {deviceName: 'iPhone X', screenOrientation: 'portrait'},
    {deviceName: 'Pixel 2', screenOrientation: 'portrait'},
    {deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
    {deviceName: 'Nexus 10', screenOrientation: 'portrait'},
    {deviceName: 'iPad Pro', screenOrientation: 'landscape'},
  ],
  batchName: 'Modern Cross-Browser Testing Workshop'
}
This config file contains four settings:
- testConcurrency sets the level of parallel execution in the Applitools Ultrafast Test Cloud. (Free accounts are limited to 1 concurrent test.)
- apiKey sets the environment variable name for the Applitools API key.
- browser declares a list of browser configurations to test. This config file provides ten total configs: five desktop and five mobile. Notice that Safari and IE11 are included. Desktop browser configs include viewport sizes, while mobile browser configs include screen orientations.
- batchName sets a name that all results will share in the Applitools dashboard.
Done! Let’s run the updated test.
Run the test locally to make sure it works (npx cypress open). Then, open the Applitools dashboard to view the visual test results:
Notice how this one login test has one result for each target configuration. All results have “New” status because they are establishing baselines. Also, notice how little time it took to run this batch of tests:
Running our test across 10 different browser configurations with 2 visual checkpoints each at a concurrency level of 5 took only 36 seconds to complete. That’s ultra fast! Running that many test iterations locally or in a traditional Cypress parallel environment could take several minutes.
Run the test again. The second run should succeed just like the first. However, the new dashboard results now say “Passed” because Applitools compared the latest snapshots to the baselines and verified that they had not changed:
This time, all variations took 32 seconds to complete – about half a minute.
Passing tests are great, but what happens if a page changes? Consider an alternate version of the login page:
This version has a broken icon and a different login button. Modify the loadLoginPage function to test this version of the site like this:
function loadLoginPage() {
  cy.visit('https://demo.applitools.com/index_v2.html')
}
Now, when you rerun the test, results appear as “Unresolved” in the Applitools dashboard:
When you open each result, the dashboard will display visual comparisons for each snapshot. If you click the snapshot, it opens the comparison window:
The baseline snapshot appears on the left, while the latest checkpoint snapshot appears on the right. Differences will be highlighted in magenta. As the tester, you can choose to either accept the change as a new baseline or reject it as a failure.
Even though Cypress can’t natively run tests against Safari, IE, or mobile browsers, it can cover them through visual testing with the Applitools Ultrafast Test Cloud. You can use Cypress tests to capture snapshots and then render them under any number of different browser configurations to achieve true cross-browser testing with visual and functional test coverage.
Want to see the full code? Check out this GitHub repository: applitools/workshop-cbt-cypress.
Want to try visual testing for yourself? Register for a free Applitools account.
Want to see more examples? Check out other articles here, here, and here.

Why You Should Use Visual AI Locators Instead of Fragile Selectors in Your Tests
In this step-by-step tutorial, learn how to use visual locators to target anything you need to test in your application and how it can help you create tests that are more resilient and robust.
As a test engineer, you’re certainly used to writing selectors to target the specific things you are interacting with in your tests. Lots of them. Selectors are critical to test exactly what you’re aiming for, but there are challenges. Not only do you usually have to write quite a lot of them, but in some cases an obvious selector doesn’t even exist and you’ll need to rig up a creative workaround. You can do it, but that kind of workaround is typically fragile and can easily break as the application develops.
Needless to say this is frustrating when it happens. And that it happens more than we’d all like.
There are some cases where traditional selectors work quite well, of course. But in certain complex or atypical situations, there is another option that works better – visual locators.
If you’re using an automated visual testing platform that offers visual locators, such as Applitools, you can solve this problem immediately. Visual locators can replace these fragile selectors and give us a more robust way of targeting something we need to test.
Essentially, rather than hard-coding one (or many) atypical selector(s) that will be difficult to track and maintain, Applitools visual locators let you select something just by “looking” to see if it is there on the screen. Applitools is able to do this thanks to an industry-leading Visual AI that has been trained on over a billion images to an accuracy level of 99.9999%.
Let’s walk through how you can use Applitools visual locators to make your testing simpler and avoid hacking together non-standard selectors. You can check out our quick video tutorial below, or read on as we explain it.
Note: This walkthrough assumes some basic familiarity with Applitools. If you’re new to Applitools, feel free to grab a free account and explore the docs, or you can always reach out with any questions or schedule a quick demo.
When writing tests, typically we need some selector in order to target exactly what we want to click or interact with, but an obvious selector doesn’t always exist. For example, take a look at this LinkedIn logo in a typical log-in screen.
To select this, you would have to come up with some kind of creative selector, which doesn’t make a lot of sense and is often fragile, breaking easily as the application is developed.
Instead, we can use visual locators in Applitools to actually select the visual area that we want to interact with.
To do this, you first need to create a new test. Let’s start off by going to the page and creating a check for the page that we actually want to interact with.
Once we run the test, we can select and open it. We can now look inside of that check for our login page and see the LinkedIn logo.
Inside the Applitools dashboard, we can find the visual locators tool. If we select that tool, we can then draw a visual locator by making a rectangle right around that LinkedIn logo and give it a name. Let’s just call it LinkedIn logo. Next we click add, and we’re done!
We now have a new visual locator created inside of our dashboard. If you have multiple visual locators in the future, there is also a Manage tab, where we can edit the name, or remove any visual locator we have made. Note that visual locators apply to a specific application, but that also means that you can use these locators across different tests in that application.
Now that we have our visual locator and its associated name, we can target that area when we're running our test. For example, let's say we have a Selenium for Java test where we're currently accessing the login window, but we want to make sure that if somebody clicks that LinkedIn logo, they're actually taken to LinkedIn.
To start off, we want to make sure that we have the visual locator and the region packages actually imported into our project.
import com.applitools.eyes.locators.VisualLocator;
import com.applitools.eyes.Region;
Next, we'll create a new map with a list of regions, telling Eyes to locate our visual locator by the name we chose (linkedIn-logo). Then we narrow that down to the actual regions of the locator. Because we want to deal with a single region rather than a list, we use the get command to grab the zero index of our regions. This region gives us the X and Y coordinates of where the LinkedIn logo is located on the page: we use the getLeft() and getTop() methods, along with the width and the height, to calculate exactly where we want to click. In this instance, we'll click right in the middle of the LinkedIn logo region.
Here’s the code for all of that:
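The original post shows this code as a screenshot. As a hedged reconstruction in Java, assuming Eyes is already open against the login page (variable names are illustrative; eyes.locate, VisualLocator, and Region are the Applitools Java SDK types mentioned above):

```java
import com.applitools.eyes.Region;
import com.applitools.eyes.locators.VisualLocator;

import java.util.List;
import java.util.Map;

// Ask Eyes to find every region matching the visual locator drawn in the dashboard.
Map<String, List<Region>> locators = eyes.locate(VisualLocator.name("linkedIn-logo"));

// We drew a single locator, so grab the zero index of its matched regions.
Region logo = locators.get("linkedIn-logo").get(0);

// Aim for the middle of the LinkedIn logo.
int clickX = logo.getLeft() + (logo.getWidth() / 2);
int clickY = logo.getTop() + (logo.getHeight() / 2);
```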
Next, we need to import the Selenium Actions class.
import org.openqa.selenium.interactions.Actions;
That will enable us to say that as soon as we perform that first check for the login window, we want to create a new action that moves to that click location and actually clicks. Finally, we want to make sure we're on the LinkedIn page, so we'll use Eyes to add another check, tagging it as LinkedIn.
Our final code will look like this:
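The final code is also presented as an image in the original post; a hedged sketch of the flow might look like the following (moveByOffset works from the viewport origin for a fresh W3C pointer, and the check tags are illustrative):

```java
import org.openqa.selenium.interactions.Actions;

// First visual check of the login window.
eyes.checkWindow("Login window");

// Move to the computed point inside the LinkedIn logo and click it.
new Actions(driver)
        .moveByOffset(clickX, clickY)
        .click()
        .perform();

// Verify that clicking the logo took us to LinkedIn.
eyes.checkWindow("LinkedIn");
```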
When we look inside of the Applitools dashboard now, we can see our new test. It will show unresolved because we’re adding a new screen to our UI. If we first open up our login window, we’ll see that it looks exactly the same and it’s still visually perfect. If we head over to the next screen, we can now see that it’s clicking over to LinkedIn, just like we wanted to, by using that visual locator.
Once we click thumbs up inside of that checkpoint or inside of the main dashboard UI, we can go ahead and save our tests and then we’ll be ready to run the next test.
If you want to learn more about how you can take advantage of AI and visual locators in your tests by using Applitools, head over to the Applitools Docs, where not only do we have a description of exactly what you can expect, but you’ll also find code examples for how you can use it inside of your next test.
Happy testing!
Want to try Applitools Eyes for yourself? You can get started today at the link below – the account is free forever.
The post Why You Should Use Visual AI Locators Instead of Fragile Selectors in Your Tests appeared first on Automated Visual Testing | Applitools.
The post Hot off the Press: How to Get Started with the New Applitools / Robot Framework Library! appeared first on Automated Visual Testing | Applitools.
I was really excited to hear that there is a new Applitools EyesLibrary. As a Robot Framework coach and mentor, I had been working with several people on older versions of EyesLibrary for Robot Framework, but with the recent updates to Robot Framework and Applitools, those versions had fallen into a state of disrepair.
Here I am going to take you through my first exploration of the new EyesLibrary and provide a short tutorial you can follow along with, gaining a good introduction to the library.
For those not familiar with Robot Framework, it is a natural-language, keyword-based testing framework. That means instead of reading like a syntactic programming language, it reads like a testing story. My test might read like "Navigate To The Home Page", "Edit The User's Preferences", "Add Applitools To My Skill Set", "Verify Applitools Is In My User Profile", "Perform Visual Check Of User Profile Ignore Newly Added Skills", etc. And being a framework means that it can be applied to many different areas, like visual testing. This is done through libraries, which provide a set of task-specific keywords.
Here I am going to talk about Applitools EyesLibrary, which is a library for performing visual testing using Applitools with Robot Framework.
Coming from the Robot Framework side, the first thing I wanted to do was look at the keyword documentation for the library. I was curious to see what keywords were there and what I could do with them. The first thing I noticed was the large number of keywords, far more than I expected. Luckily I could use tag filtering to narrow the keywords down by category. From the categories I could start to see there were keywords for visual checks, targeting parts of the screen, some configuration, and something called the Ultrafast Grid (which I won't cover in this article).
You can get started with a free Applitools account today and follow along with this tutorial.
Taking a step back, I decided to review the documentation for Applitools. There is an "Overview of visual UI testing" which outlines what visual testing is and the steps in the process. I felt I had these concepts pretty well understood. To answer the question of how to use the EyesLibrary for visual testing, I found the Robot Eyes Library SDK/API documentation to be key.
The Robot Eyes Library documentation under the SDK section really outlines the “how” whereas the keyword documentation gives us the “what” and Overview/The Core Concepts gives us the “why”.
The "how", what we need in our robot scripts, really just breaks down to this: one must open their eyes, perform a visual check on a region (maybe with some special configurations), and then close one's eyes.
Let's start by setting up our environment. You will need Python (version 3.6 or greater; I recommend Python 3.8) installed on your system. The setup involves installing and initializing the library and then setting your API key. To simplify setup, I've created a script, setup_eyes.bat for Windows or setup_eyes.sh for Linux. Run the corresponding batch file or bash script at the command line or terminal prompt. It will ask you for your Applitools API Key, so have that handy.
For my first script, I wanted to do something simple – perform a visual check on the demo page without changing anything. Essentially I wanted a very simple passing test to validate everything was set up properly and to start to explore. Here it is:
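The script itself appears as an image in the original post. As a rough sketch of the shape it takes (open eyes, check fully, close eyes), something like the following; the exact keyword arguments here are my assumption, so treat the EyesLibrary keyword documentation as authoritative:

```robotframework
*** Settings ***
Library    SeleniumLibrary
Library    EyesLibrary

*** Test Cases ***
First Sight
    Open Browser    https://demo.applitools.com    chrome
    Eyes Open    Demo App    First Sight
    Eyes Check Window    Fully
    Eyes Close
    [Teardown]    Close Browser
```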
Go ahead and run this script by typing robot firstsight.robot at the command line or terminal prompt.
You might see a verbose result summary showing matches. You should also see the batch appear within your Applitools Dashboard, where you can set the baseline image. Rerun the script and see it pass each time.
Now execute the firsterror.robot script (robot firsterror.robot), which contains one additional keyword line, Click Link ?diff, just before the visual check. This will change the demo page, causing several visual differences. You should notice an error in the robot output, post-execution, noting the difference and referring you to the Applitools Dashboard URL.
The first thing I noticed with these two scripts is the output, or result summary, from the visual checks: passing or failing, we get information about the status of each visual check. Also, the status of the visual check does not affect the status of the Robot Framework check. That is, the visual check sits outside the context of the robot checks, which opens interesting possibilities for "context switching".
The next observation, which is hidden in the scripts above, is the addition and usage of the Fully keyword/setting. We see from the documentation that this sets the visual check to the whole page. Initially I did not use this, and the size of the visual area I was checking would change even though the content did not, so my tests were failing. Think about that: the content I was seeing was the same, but the area I was checking was not, thus a "difference". This led me to a deeper understanding of what factors into a match against the baseline.
The first factor is viewport size, which is what I saw when I did or did not have Fully in my test. There is also the host environment and version information, which relate to what I call the environment under which I execute. Finally there are test suite/test case related factors, testName and appName, which label the application under test. We can set appName in a few spots.
Although these factors seem straightforward, I feel they are important to mention, so that one can see what factors into matching and how to set these through either the robot script or the environment the script runs on.
Looking beyond the basic Eyes Check on a specific window, region, or frame, we see one could instead use the generic Eyes Check and then target an area. Taking a small step forward, let's run both the specific Eyes Check Window and the equivalent generic Eyes Check with Target Window, targeting the full window, and with a name.
To run with the specific Eyes Check Window keyword, we type robot windowtarget.robot at the command line or prompt. Then re-run, this time typing robot -v useCheckWindow:False windowtarget.robot to execute the generic Eyes Check instead.
Up to now I haven't discussed how a Robot Framework test suite and test case relate to Applitools objects via EyesLibrary. As you have seen in the Applitools Dashboard, each time we execute we get a new batch. And from the examples above, each batch has a test. As we have had only one robot test case per file (test suite), we see only one Applitools test per batch. These tests have a visual representation for each visual check in the test case; they are steps in Applitools vernacular. The name we have used in keywords labels those steps. Within EyesLibrary (as well as elsewhere) there is also a tag, which appears to be the same as name; the two are used interchangeably.
In manipulating my robot test cases one observation was that the visual order of steps relates to the order in which multiple visual checks take place within a test case. This raised in my mind the question: how do I see the history of a visual check? It appears one can see history by grouping results within the Applitools Dashboard. There are also branches which allow you to version your baseline visual checks.
If these batches, tests, steps, branches, tags and names are confusing to you don’t worry. I experienced the same when first looking at Applitools Dashboard. With some experimentation I was able to start mapping the relationship between robot test suites and cases and Applitools.
The design of this EyesLibrary is slightly different than other libraries. Here keywords, especially the check settings and target keywords, act like what would be arguments in other libraries. But given the large amount of configuration, it works. One should also note that the keywords are case sensitive within the library. For example, to check all the contents of the window, if we used FULLY (all caps) we would receive an incorrect keyword argument error. The proper usage, as we have seen, is Fully.
Let's explore this "building block" design of keywords with the script regioncheck.robot. First, as we have done before, we check a region using a single specific keyword, the new Eyes Check Region By Selector, naming the check. Next we build a visual check that covers the full window and, in addition, ignores a region: the random number within the sentence. Finally we start with the full window and then ignore multiple regions. We can run this script by typing robot regioncheck.robot at the command line or terminal prompt.
This example starts to show how we can build complex visual checks using the keywords and the structure of the EyesLibrary.
Asking what additional features could be added, I could see how the EyesLibrary could provide a keyword for getting the coordinates of a region that encompasses several different elements. That is, given the various selectors or elements as arguments, return a coordinate set which encompasses them all.
Maybe also a configuration option to fail within the test, stopping the execution of remaining tests, in addition to the error at the end. It would also be nice to have a way of saying "here is a visual check and we expect an error", failing if no error is found, similar to the Robot Framework keywords Run Keyword And Expect Error and Run Keyword And Ignore Error.
Before I conclude, I want to address a frequently asked question: "Can't I just build my own visual checking tool?" The answer is yes, but the real question is at what cost. There are a lot of factors to weigh when deciding whether to build or buy a solution. Image processing is not a simple task, and a lot has to go right for it to work successfully. One should ask how much effort will go into getting it right and dealing with false positives. Another factor is maintaining the solution: if your developer leaves, will anybody be able to maintain your visual checking? Building is always an option, but there is a real cost to building it yourself. I encourage every organization to perform an in-depth build-versus-buy cost analysis.
I've explored web testing using Selenium. There are other areas that Applitools works in too – mobile, responsive design, even accessibility. How could Robot Framework and EyesLibrary help in testing those areas?
Denali Lumma gave an excellent talk at the 2015 Selenium Conference in Portland outlining what testing should look like in the future. Among her points was the goal of easy context switching, such as adding in visual checking. I would like to see examples of this vision made a reality using Robot Framework and EyesLibrary.
I encourage you to explore the EyesLibrary even further. I look forward to seeing users combine Robot Framework and Applitools using the EyesLibrary.
The post Hot off the Press: How to Get Started with the New Applitools / Robot Framework Library! appeared first on Automated Visual Testing | Applitools.
The post Top 10 Most Popular Free Test Automation Courses of 2021 appeared first on Automated Visual Testing | Applitools.
The days of not being able to find a high quality Test Automation course for free are in the past – in no small part, we're proud to say, due to Test Automation University (TAU). Today you can find free courses on web, mobile, API, and codeless test automation frameworks. Courses cover tools like Selenium, Cypress and Jenkins, and languages like Java, JavaScript, Python, Ruby, Swift and more – with new course releases every month.
TAU offers more than a dozen learning paths to guide you in your journey, and all courses are taught by leading testing experts.
Join the party! Over 100,000 students have joined TAU already, and to celebrate we're throwing a homecoming bash. Come join us on December 1st and 2nd for a two-day virtual conference with expert-led sessions and workshops, plus a live DJ, talent show and more.
Tens of thousands of free courses have been completed at TAU this year by leading test engineers. Here are the top testing courses for 2021:
We last compiled our list of top testing courses from 2020, and there have been a few changes!
As you’re planning your education initiatives for 2022, keep these amazing, freely available resources in mind. In addition to these 10, there are many more courses available and new ones being released every month. To be notified of new course releases, register at Test Automation University!
The post Top 10 Most Popular Free Test Automation Courses of 2021 appeared first on Automated Visual Testing | Applitools.
The post How to Automate Gesture Testing with Appium appeared first on Automated Visual Testing | Applitools.
In the previous blog posts in this Appium series, we walked through step-by-step instructions for creating your own Appium 2.0 plugins, as well as Appium 2.0 driver and plugin usage and installation. This article discusses the evolution of touch gestures and how you can simplify the way you perform automated gesture tests using a new plugin in Appium 2.0.
Gestures are the new clicks. Gesture-driven devices changed the way we think about interaction. The success of mobile applications depends on how well the gestures are implemented into the user experience. The definitive guide for gestures by Luke Wroblewski details a lot of different actions and how they actually work.
Animations, when paired with gestures, make users feel as though they are interacting with tangible objects. Appium handles these gestures using the Actions API defined in the W3C WebDriver Specification. It's great that the API was designed with every interface in mind (touch, pen, mouse, etc.), but that also makes it hard to understand. Since Appium 1.8, Appium has supported the W3C Actions API for building any kind of gesture on mobile devices.
Let’s consider a situation where the application sets specific values based on the slider movements.
Let's see how this situation can be handled using the Actions API.
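The snippet discussed below appears as an image in the original post; here is a hedged reconstruction in Java, assuming Selenium 4's W3C Actions classes, an AppiumDriver instance, and an illustrative slider element:

```java
import java.time.Duration;
import java.util.List;

import io.appium.java_client.MobileBy;
import org.openqa.selenium.Point;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Pause;
import org.openqa.selenium.interactions.PointerInput;
import org.openqa.selenium.interactions.Sequence;

// Get the slider's on-screen location ("slider" is an illustrative accessibility id).
WebElement slider = driver.findElement(MobileBy.AccessibilityId("slider"));
Point start = slider.getLocation();

// A touch pointer with a unique id, "finger".
PointerInput finger = new PointerInput(PointerInput.Kind.TOUCH, "finger");

// Build the sequence: move to the slider, press down, pause, drag, release.
Sequence dragSlider = new Sequence(finger, 1);
dragSlider.addAction(finger.createPointerMove(Duration.ZERO,
        PointerInput.Origin.viewport(), start.getX(), start.getY()));
dragSlider.addAction(finger.createPointerDown(PointerInput.MouseButton.LEFT.asArg()));
dragSlider.addAction(new Pause(finger, Duration.ofMillis(500)));
dragSlider.addAction(finger.createPointerMove(Duration.ofMillis(500),
        PointerInput.Origin.viewport(), start.getX() + 150, start.getY()));
dragSlider.addAction(finger.createPointerUp(PointerInput.MouseButton.LEFT.asArg()));

driver.perform(List.of(dragSlider));
```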
In the above code snippet, there are three things to look at: getting the element location using the location API, creating a sequence of actions (pointerMove, pointerDown, pause, pointerUp), and then performing the sequence. We need to add these actions in the right order to get the gesture to work. Let's break down the above in detail to understand what happens under the hood.
Create a PointerInput object of type TOUCH with a unique id, "finger". All of the pointer actions in the sequence are then bound to this PointerInput object.
Refer to the below image to understand how the element location can be calculated.
This way any complex gestures can be automated using Actions API.
Let’s look at how the gestures plugin from Appium 2.0 simplifies this entire process:
Internally, appium-gesture-plugin finds the given element location and calculates the target location based on the given percentage. It also creates the sequence of actions to perform the gestures on both iOS and Android platforms.
Refer here for a working example of the above swipe gesture using the gestures plugin, and follow the instructions specified here to install the Appium gestures plugin.
For any simple gesture actions (swipe, drag and drop, long press, double-tap, etc.) we can use the gestures plugin. For more custom actions, such as drawing a digital signature in your application, we can still use the Actions API.
Apart from the Actions API, Appium also supports native gesture APIs, which are exposed by the Android and iOS platforms through non-standard endpoints like the ones below:
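The endpoint list itself appears as an image in the original post. As a hedged illustration, these driver-specific endpoints are invoked through executeScript, along the lines of the following sketch (the keys follow the UiAutomator2 driver's documented swipeGesture arguments; the element is illustrative):

```java
import java.util.Map;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.remote.RemoteWebElement;

// Android UiAutomator2 exposes "mobile: swipeGesture"; XCUITest has a similar "mobile: swipe".
((JavascriptExecutor) driver).executeScript("mobile: swipeGesture", Map.of(
        "elementId", ((RemoteWebElement) slider).getId(), // swipe within this element
        "direction", "right",
        "percent", 0.75                                   // how far across the element to swipe
));
```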
For more such APIs, check out Android non-standard gesture APIs and iOS non-standard gesture APIs.
The post How to Automate Gesture Testing with Appium appeared first on Automated Visual Testing | Applitools.
The post Selenium 4 Release Candidate is Here! appeared first on Automated Visual Testing | Applitools.
Update 10/14: Selenium 4 has been officially released! Check out our post covering everything new in the latest release right here.
A release candidate for Selenium 4 is finally here! That means we're getting really close to the official version. This is a really great time to familiarize yourself with the latest Selenium features that are coming in the new release.
I've compiled a list of resources to help you do so. Check it out below.
For a quick summary of the latest Selenium 4 updates, Manoj Kumar & Anand Bagmar have you covered! Check out What’s New in Selenium 4.
If you’d like to see Selenium 4 features in action, I made this video demonstrating real examples. Check out this video of Selenium 4 features.
There were lots of questions from the audience when I recorded that video. Check out this followup blog post to read the answers to the most frequently asked questions about Selenium 4 features.
OK, now that you know what’s new, are you ready to try it out for yourself? Great! Shama Ugale covers how to install Selenium 4 in this tutorial.
Or if you just need to migrate from Selenium 3 to version 4, Shama Ugale details the notable changes and deprecations you should be aware of. Check it out to see how to migrate to Selenium 4 safely.
Good, you're up and running! One of the interesting new features of Selenium 4 that I really want you to try is Relative Locators. Selenium 4 Relative Locators seem pretty straightforward, but I've covered some things to be aware of, and also how they work under the covers.
And the biggest draw for Selenium 4 is arguably its programmatic access to the Chrome DevTools Protocol (CDP)! This is some superpower stuff. Shama Ugale outlines some of the cool things you can now do right from within your tests using Selenium 4 and Chrome DevTools Protocol.
I also have a livestream video where I used Selenium 4’s CDP API to mock a location in the browser! Pretty handy if you need to do any location-based testing.
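As a taste of what that looks like, here is a hedged sketch of mocking geolocation through Selenium 4's DevTools API (the versioned devtools package, v85 here, varies with your Selenium and Chrome versions, and the coordinates are illustrative):

```java
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.emulation.Emulation;

ChromeDriver driver = new ChromeDriver();
DevTools devTools = driver.getDevTools();
devTools.createSession();

// Override the browser's reported geolocation (San Francisco here).
devTools.send(Emulation.setGeolocationOverride(
        Optional.of(37.7749),   // latitude
        Optional.of(-122.4194), // longitude
        Optional.of(1)));       // accuracy in meters
```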
And then if you’re fascinated with all of this and want to really geek out, I talk about the architecture of the Selenium Chrome DevTools Protocol API, how it all works, and which method calls to use when (executeCdpCommand vs send).
Have fun and let me know what you think!
The post Selenium 4 Release Candidate is Here! appeared first on Automated Visual Testing | Applitools.
The post Getting Started with Visual UI Testing for Android Apps with Applitools and Bitrise appeared first on Automated Visual Testing | Applitools.
A hands-on guide to getting started easily with visual UI testing for mobile apps using Applitools and Bitrise.
Mobile apps play a critical role in our lives nowadays, especially during the pandemic period: think e-commerce, grocery, medical, or banking apps. As we know, the variety of mobile apps, technologies, SDKs, and platforms gives mobile testing a lot of challenges, which demand special technical skills from the testing team.
And because most mobile companies aim to release their apps on a weekly or bi-weekly cadence, deploying new features to customers before their competitors do, they rely on a number of techniques. One of them is test automation.
Based on that, in this article, I’ll walk you through the most important aspects of automating the Visual UI testing for mobile apps and running it via CI/CD pipelines using Applitools and Bitrise.
Visual Testing is a testing activity that helps us to verify that the GUI (Graphical User Interface) appears correctly and as expected to customers, by checking that each element on a mobile app appears in the right size and position on different devices and OS versions.
Mobile companies nowadays are applying Mobile DevOps and implementing CI/CD (Continuous Integration and Continuous Delivery/Deployment) pipelines to deploy their mobile apps consistently and frequently.
One of the vital parts of this process is test automation: making sure app functionality works as expected and catching blockers or critical issues in the early stages ("fail fast") by running these tests on every pull request or as a nightly build. And if we add visual UI testing to this pipeline, we can also make sure that there are no issues in the GUI and that all features are displayed correctly.
In this tutorial, I will add the Visual UI Testing to my Android UI tests with Appium and then run my tests on Bitrise as CI/CD pipeline, covering the following steps:
To get started with Appium for Android applications, you should prepare your local machine with the following requirements:
More information about installing Appium can be found in the free Appium Course on Test Automation University or in this blog post.
If you don’t have an Android application, you can fork the Sunflower app from GitHub. Sunflower app is a gardening app illustrating Android development best practices with Android Jetpack.
With Appium, test automation scripts live in a stand-alone project. I created a Maven project (you can create a Gradle one if you'd like) with the following structure:
The full project can be found on GitHub.
And now we can run the test using the following command:
> mvn clean test -Pandroid
And the results will be like the following image:
Now it’s time to add the Visual UI validation using Applitools Eyes SDK for Appium.
Applitools Eyes is powered by Visual AI, the only AI-powered computer vision that replicates the human eyes and brain to quickly spot functional and visual regressions.
If you don’t have an account you can create a free account from here.
// Initialize Eyes and set the API key from your Applitools account.
Eyes eyes = new Eyes();
eyes.setApiKey("YOUR_API_KEY");
eyes.setForceFullPageScreenshot(false);

// Start the visual test: app name "Sunflower", test name "Add My Plant".
eyes.open(driver, "Sunflower", "Add My Plant");

// Visual checkpoints captured as the test navigates through the app.
eyes.checkWindow("Plant list", false);
eyes.checkWindow("My Empty Garden", false);
eyes.checkWindow("Avocado", false);

// End the test, and clean up if it was aborted before closing.
eyes.close();
eyes.abortIfNotClosed();
10. Now we can run our test again with the same Maven command and check the results in the Applitools Test manager with the steps view.
Or the Batch Summary View:
11. Try to run the test again and check if the results are the same or if there is an issue.
In my case, when I ran the tests again, the test failed because Eyes detected differences between the baseline images from the first run/batch and the second run.
So I checked the dashboard to review the results and the differences:
I found a difference in the first image, already highlighted. You can click on the image for more details or compare it with the baseline image to inspect the difference.
In my case, the images sometimes loaded slowly, which caused the difference between the two screenshots.
And also you can check the Batch Summary View for more details.
TIP: If you need to run on different mobile devices and OS versions in parallel, you can use Applitools Ultrafast Test Cloud. More info on that can be found here.
After we successfully run our tests locally, it’s time to integrate our tests with our Android CI using Bitrise.
But first, we need to make a small change to the application path, the Applitools API key, and the batch ID. Locally, we're using the absolute path of the app and hardcoded values, but on Bitrise (or any CI server) these should come from environment variables, as follows:
caps.setCapability("app", System.getenv("BITRISE_APK_PATH"));
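That covers the app path; for the Applitools values, a minimal sketch of the equivalent change might look like this (the batch name is illustrative; BatchInfo and setBatch come from the Applitools Java SDK, and the variable names match the Secrets we add to Bitrise later in this tutorial):

```java
import com.applitools.eyes.BatchInfo;

// Read the API key from the environment instead of hardcoding it.
eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

// Group CI runs under one batch whose id is injected by the pipeline.
BatchInfo batch = new BatchInfo("Sunflower Visual Tests");
batch.setId(System.getenv("APPLITOOLS_BATCH_ID"));
eyes.setBatch(batch);
```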
Now it’s time to integrate our Android app and the automation project with Bitrise.
14. The final step is the webhook setup. We just need to click on the Register a Webhook for me! button, and that will trigger the first Android build for your project.
15. You will be redirected to the build log where you can check the Steps and the build status.
16. In the end, you will find a summary of all the tasks you ran during the build and how much time they took, along with the Step names. You can download logs or check the apps and artifacts here.
17. Now let’s open the Workflow Editor to configure the CI Workflow with the new steps required for Appium and Applitools integration.
You can rename the workflow as you like, for example from primary to appium-ui-tests:
18. Add a Script step to clone the Appium repository by clicking on the + button; this opens the steps screen, where you can search for the Script step and select it.
19. As we know, we are using Appium Desktop as a server and inspector locally, but with CI servers we can’t use it, so we need to install and run Appium Server from the command line. Here I’ll add another Script step to install and run the Appium server in the background.
20. You will notice that I'm passing --log appium.log with the Appium command to export the server log as a text file, for debugging purposes if any tests fail.
21. Add the AVD Manager Step to create an Android Emulator to use with your tests. You can choose any API level. For example, you can change it to 28 but you need to change it in the Appium desired capabilities and commit the change to your GitHub repository as well.
22. Add the Wait for the Android emulator Step and wait for the emulator to be ready before running our tests on it.
23. Add another Script Step to run the UI tests. You need to switch into the automation project directory to be able to run the Maven command.
24. Add the Script Step to copy the Appium log file to the Deploy directory of Bitrise to be able to find it in the artifacts tab.
25. Add a Step to Export the test results to the Bitrise Test Report Add-on and specify the test results search pattern like this: (*/target/surefire-reports/junitreports/*)
26. Add the Applitools API key and the batch ID as environment variables by clicking on the Secrets menu option, clicking the add button, and then adding your APPLITOOLS_API_KEY and APPLITOOLS_BATCH_ID like the following image:
The final Workflow will look like the following:
To run the Workflow, go to the application page and click on the Start/Schedule Build button and select the Workflow then click on the Start Build button.
To check the test results, you can click on the Test Report add-on; for the Appium log and the app.apk file, you can click on the Apps & Artifacts tab.
Finally, we successfully added visual UI testing to our Android app, and we can run the tests on every pull request or as a nightly build to make sure nothing breaks our codebase and to keep our master branch always green.
I hope you found this article useful and thank you for reading it!
The post Getting Started with Visual UI Testing for Android Apps with Applitools and Bitrise appeared first on Automated Visual Testing | Applitools.
The post Top New Features of Cucumber JVM v6 appeared first on Automated Visual Testing | Applitools.
Behavior Driven Development, or BDD, is one of those magical terms that many organizations are chasing today. The BDD methodology has significantly influenced the way the development model works, and its powerful business-driven approach has helped many teams collaborate with different stakeholders to define better requirements.
One of the best-known tools for automating requirements in BDD projects is Cucumber. In this article, you will learn about some of the coolest features available as part of Cucumber 6 (cucumber-jvm) and previous versions, and how you can leverage them in your automation pack and business discussions.
First, the basics!
For people who are new to Cucumber, the Gherkin language consists of simple English (or localised) statements put together to form a sensible requirement, written in the Given-When-Then-But-And format.
The business requirements or acceptance criteria are written as Scenarios containing the Gherkin-format statements. Parameterising the scenarios with various combinations is achieved using Scenario Outlines with Examples. In the Cucumber framework, all these Scenarios are written in a feature file.
A sample for a typical feature file is given below:
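The sample in the original post is an image; here is an illustrative feature file along the same lines (the login domain is hypothetical):

```gherkin
Feature: Login
  As a registered user, I want to log in so that I can access my account

  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user logs in with valid credentials
    Then the home page should be displayed

  Scenario Outline: Login fails with invalid credentials
    Given the user is on the login page
    When the user logs in with "<username>" and "<password>"
    Then an error message should be displayed

    Examples:
      | username | password |
      | admin    | wrong    |
      | unknown  | secret   |
```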
One of the major features released in cucumber-jvm 6.0.0 is the Rule keyword. It is not entirely new to Cucumber, as it first appeared in cucumber-ruby 4.x and Gherkin 6.0. It's an optional keyword, but it can be very powerful in some business cases.
In general, the Rule keyword helps team members think of scenarios as examples of acceptance criteria or business rules: the Examples under a Rule show the different situations in which the requirement has to hold.
This feature will be very useful during the three amigos session, especially for the product owners and to provide better living documentation. The best example for real-time usage can be like this:
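The original example is shown as an image; an illustrative Rule block in the same spirit (the banking numbers are hypothetical) could be:

```gherkin
Feature: Cash withdrawal

  Rule: Withdrawals above the daily limit are rejected

    Example: Withdrawal within the limit
      Given the daily limit is $300
      When the user withdraws $200
      Then the withdrawal is approved

    Example: Withdrawal above the limit
      Given the daily limit is $300
      When the user withdraws $400
      Then the withdrawal is rejected
```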
Apart from the Rule keyword, there are several interchangeable keywords that let business stakeholders define the right set of requirements.
Some of them are:
The keywords are interchangeable as follows:
| Concept | Interchangeable Keywords |
| --- | --- |
| High-Level Requirement | Feature, Ability, Business Need |
| Scenario | Scenario Outline, Scenario Template |
| Combinations / Parameters | Examples, Scenarios |
Though there are new keywords introduced to the feature file, the way of writing the step definitions remains the same; Cucumber automatically detects the scenarios and examples. You can see the step definitions for the above example here.
A sample for Ability, Scenario Template & Scenarios:
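Again the sample is an image in the original post; an illustrative equivalent using the synonym keywords (the fund-transfer domain is hypothetical):

```gherkin
Ability: Fund transfer
  A customer should be able to transfer funds between their own accounts

  Scenario Template: Transfer within the available balance
    Given my account balance is <balance>
    When I transfer <amount> to my savings account
    Then the remaining balance should be <remaining>

    Scenarios:
      | balance | amount | remaining |
      | 100     | 40     | 60        |
      | 500     | 250    | 250       |
```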
This is one of the coolest features available in Cucumber to date. You can now view Cucumber reports online via a link generated after every execution; the reports show the test results and act as living documentation. The online reports are published via the https://reports.cucumber.io site with a unique URL provided in the console output. Each report self-destructs after a day, but you can also store it if you wish.
This feature can be enabled in any of the following ways:

- Creating the file src/test/resources/cucumber.properties and adding: cucumber.publish.enabled=true
- Creating the file src/test/resources/junit-platform.properties and adding: cucumber.publish.enabled=true
- Setting the environment variable: CUCUMBER_PUBLISH_ENABLED=true
- Using the annotation: @CucumberOptions(publish = true) (see the sketch below)
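As a hedged illustration of that last option, a JUnit 4 runner class with the publish flag might look like this (the class name and features path are hypothetical):

```java
import org.junit.runner.RunWith;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Publishes a report link to reports.cucumber.io after each run.
@RunWith(Cucumber.class)
@CucumberOptions(publish = true, features = "src/test/resources/features")
public class RunCucumberTest {
}
```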
The new features in Cucumber 6 will be a great help to business stakeholders who like to define their requirements in a more understandable way. They make it easier to add scenarios for various business needs and to work more collaboratively towards delivering a stable product.
The post Top New Features of Cucumber JVM v6 appeared first on Automated Visual Testing | Applitools.
The post A Guide to Appium – Our Top 10 Appium Tutorials in 2021 appeared first on Automated Visual Testing | Applitools.
In this collection of free Appium tutorials, guides and courses, find our most popular Appium articles so far in 2021 to help you improve your mobile test automation skills.
Are you looking to up your mobile test automation game? Appium is a powerful open-source testing framework that you should be acquainted with. At Applitools, we love all things test automation, and we’ve been thinking and writing about Appium for years. In this guide, we’ve collected all our best free tutorials, comparisons and courses in one place for you.
The pieces are ranked by the traffic they’ve received so far in 2021 – collectively they’ve been viewed by many thousands of you in the last six months or so.
Whether you’re a beginner or an experienced Appium user, you’re sure to find something new and useful here. We hope you enjoy this list, and if there’s something we missed that you wish we’d cover in the second half of 2021, let us know @applitools!
First, a brief introduction.
Appium is one of the most popular open-source test automation frameworks for mobile testing – the testing of native mobile apps, mobile web apps and hybrid apps for Android, iOS and Windows. It is cross-platform and compatible with numerous development languages, allowing you to write tests against multiple platforms in the language of your choice and reuse the code. It was developed in 2011 by Dan Cuellar and Jason Huggins, and today has about 14K stars on GitHub with very regular updates (the last commit was 4 hours ago at time of writing).
You can get an overview of what Appium is all about here in their docs.
In this on-demand webinar (with accompanying slides), Applitools Senior Director of R&D, Daniel Puterman, dives deep into the internals of Appium’s code. He and his team submitted a major pull request when they added a feature to Appium, and Daniel shares his experiences doing so. This webinar took place a few years ago but it’s still a relevant and fascinating look into the structure and architecture of Appium.
Jonathan Lipps, who leads the Appium project (among other things), led this on-demand webinar to help you understand how to easily parallelize visual testing with Appium across all Android devices at once using Genymotion Cloud and Applitools. If you’re wondering how you can perform visual testing for mobile apps at scale, this is a great one to check out.
When I said we’ve been talking about Appium for years, I meant it, and this classic post is now approaching 7 years young. That this post remains among our more popular Appium posts after all that time is a testament to the enduring relevance of the techniques it describes and the topic itself. Take a look at this post and the demo video to get a sense of how wearable devices can be tested with Appium.
Getting started with Appium for the first time can be a little daunting, as there are many dependencies to keep track of and set up. In this post, Anand Bagmar shares a custom script he wrote to automate the process for you so that you don’t have to do each of these individually. Check it out for a simple and easy way to get going with Appium quickly.
A few months ago Applitools hosted a great “Future of Testing: Mobile” event. In this recap, you can read about (and watch) key talks about Appium 2.0, the state of mobile frameworks generally, and much more. You can watch all the videos here (and sidenote: our next “Future of Testing: Mobile” event takes place on August 10th, so register now to catch it live!).
Mobile testing can be a challenge. Android in particular can seem daunting, with its fractured nature yielding numerous devices, form factors and operating system versions that need to be tested to achieve full coverage. In this tutorial, you’ll learn how you can use Genymotion’s cloud-based Android emulation to run Appium tests rapidly on a range of Android devices, and how you can easily combine it with Applitools as well to give you full visual coverage.
In this recap of a popular Test Automation University course, Automated Visual Testing with Appium, which is taught by Jonathan Lipps (who we also talked about in #9), you’ll get a crash course on visual testing with Appium. You’ll learn the basic of Appium testing and explore multiple alternatives for visual testing. This is a great written recap that is easy to follow along with, so check it today to get started with visual testing, and don’t forget to check out the full course if you’re curious for more.
Remember back at #5 when we talked about the challenge of testing on mobile across all devices, form factors and operating system versions? In this popular article, you’ll learn another way to tackle the issue using Amazon’s AWS Device Farm. The solution is cross-platform too, so it goes beyond just Android. This tutorial is on the technical side, and you’ll find a number of helpful and detailed code samples and screenshots to guide you through the process step by step.
Appium 2.0 is coming soon, and the betas have been coming out quickly as the official release draws closer. If you’re looking to get started with Appium 2.0 early, this guide is for you. You’ll find tips for installing the Appium 2.0 server, working with the newly decoupled drivers, incorporating the latest plugins and more. Though things move fast and a beta version or two has been released since it was published in February, it’s still extremely relevant and one of our most popular Appium posts for a reason.
Our most popular post around Appium, this one seeks to answer a question that’s been around since the dawn of mobile apps: What mobile test automation framework should I use for my app? This article compares Appium (cross-platform, open-source) with two of the most widely-used test automation frameworks in Espresso (just Android, developed by Google) and XCUITest (just iOS, developed by Apple). It provides a detailed overview of the pros and cons for each of these frameworks. If you’re looking to understand these frameworks better or just to figure out how to test your own app, this highly-visited post is for you.
Looking to expand your knowledge around Appium but want something more in-depth than an individual article or tutorial? Why don’t you take a free course at the Test Automation University? Here are the top three courses you may want to consider:
Appium is a popular open-source framework for mobile test automation. It’s a powerful and versatile tool, and definitely one that we’re watching closely as it develops. How do you use Appium today, and what are you looking forward to in the next release? Let us know @applitools.
The post A Guide to Appium – Our Top 10 Appium Tutorials in 2021 appeared first on Automated Visual Testing | Applitools.