The post Mobile Testing for the First Time with Android, Appium, and Applitools appeared first on Automated Visual Testing | Applitools.
For some of us, it’s hard to believe how long smartphones have existed. I remember when the first iPhone came out in June 2007. I was working at my first internship at IBM, and I remember hearing in the breakroom that someone on our floor got one. Oooooooh! So special! That was 15 years ago!
In that decade and a half, mobile devices of all shapes and sizes have become indispensable parts of our modern lives: The first thing I do every morning when I wake up is check my phone. My dad likes to play Candy Crush on his tablet. My wife takes countless photos of our French bulldog puppy on her phone. Her mom uses her tablet for her virtual English classes. I’m sure, like us, you would feel lost if you had to go a day without your device.
It’s vital for mobile apps to have high quality. If they crash, freeze, or plain don’t work, then we can’t do the things we need to do. So, being the Automation Panda, I wanted to give mobile testing a try! I had three main goals:
This article covers my journey. Hopefully, it can help you get started with mobile testing, too! Let’s jump in.
The mobile domain is divided into two ecosystems: Android and iOS. That means any app that wants to run on both operating systems must essentially have two implementations. To keep things easier for me, I chose to start with Android because I already knew Java and I actually did a little bit of Android development a number of years ago.
I started by reading a blog series by Gaurav Singh on getting started with Appium. Gaurav’s articles showed me how to set up my workbench and automate a basic test:
Test Automation University also has a set of great mobile testing courses that are more than a quickstart guide:
Next, I needed an Android app to test. Thankfully, Applitools had the perfect app ready: Applifashion, a shoe store demo. The code is available on GitHub at https://github.com/dmitryvinn/applifashion-android-legacy.
To do Android development, you need lots of tools:
I followed Gaurav’s guide to a T for setting these up. I also had to set the ANDROID_HOME environment variable to the SDK path.
Be warned: it might take a long time to download and install these tools. It took me a few hours and occupied about 13 GB of space!
Once my workbench was ready, I opened the Applifashion code in Android Studio, created a Pixel 3a emulator in Device Manager, and ran the app. Here’s what it looked like:
I chose to use an emulator instead of a real device because, well, I don’t own a physical Android phone! Plus, managing a lab full of devices can be a huge hassle. Phone manufacturers release new models all the time, and phones aren’t cheap. If you’re working with a team, you need to swap devices back and forth, keep them protected from theft, and be careful not to break them. As long as your machine is powerful and has enough storage space, you can emulate multiple devices.
It was awesome to see the Applifashion app running through Android Studio. I played around with scrolling and tapping different shoes to open their product pages. However, I really wanted to do some automated testing. I chose to use Appium for automation because its API is very similar to Selenium WebDriver, with which I am very familiar.
Appium adds on its own layer of tools:
Again, I followed Gaurav’s guide for full setup. Even though Appium has bindings for several popular programming languages, it still needs a server for relaying requests between the client (e.g., the test automation) and the app under test. I chose to install the Appium server via the NPM module, and I installed version 1.22.3. Appium Doctor gave me a little bit of trouble, but I was able to resolve all but one of the issues it raised, and the one remaining failure regarding ANDROID_HOME turned out not to be a problem for running tests.
Before jumping into automation code, I wanted to make sure that Appium was working properly. So, I built the Applifashion app into an Android package (.apk file) through Android Studio by doing Build → Build Bundle(s) / APK(s) → Build APK(s). Then, I configured Appium Inspector to run this .apk file on my Pixel 3a emulator. My settings looked like this:
Here were a few things to note:
The remote path had to be /wd/hub.
The appium:automationName capability had to be uiautomator2 – it could not be an arbitrary name.
I won’t lie – I needed a few tries to get all my capabilities right. But once I did, things worked! The app appeared in my emulator, and Appium Inspector mirrored the page from the emulator with the app source. I could click on elements within the inspector to see all their attributes. In this sense, Appium Inspector reminded me of my workflow for finding elements on a web page using Chrome DevTools. Here’s what it looked like:
So far in my journey, I had done lots of setup, but I hadn’t yet automated any tests! Mobile testing certainly required a heftier stack than web app testing, but when I looked at Gaurav’s example test project, I realized that the core concepts were consistent.
I set up my own Java project with JUnit, Gradle, and Appium:
My example code is hosted here: https://github.com/AutomationPanda/applitools-appium-android-webinar.
Warning: The example code I share below won’t perfectly match what’s in the repository. Furthermore, the example code below will omit import statements for brevity. Nevertheless, the code in the repository should be a full, correct, executable example.
My build.gradle file looked like this with the required dependencies:
plugins {
id 'java'
}
group 'com.automationpanda'
version '1.0-SNAPSHOT'
repositories {
mavenCentral()
}
dependencies {
testImplementation 'io.appium:java-client:8.1.1'
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.2'
testImplementation 'org.seleniumhq.selenium:selenium-java:4.2.1'
testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.8.2'
}
test {
useJUnitPlatform()
}
My test case class was located at /src/test/java/com/automationpanda/ApplifashionTest.java. Inside the class, I had two instance variables: the Appium driver for mobile interactions, and a WebDriver waiting object for synchronization:
public class ApplifashionTest {
private AppiumDriver driver;
private WebDriverWait wait;
// …
}
I added a setup method to initialize the Appium driver. Basically, I copied all the capabilities from Appium Inspector:
@BeforeEach
public void setUpAppium(TestInfo testInfo) throws IOException {
// Create Appium capabilities
// Hard-coding these values is typically not a recommended practice
// Instead, they should be read from a resource file (like a properties or JSON file)
// They are set here like this to make this example code simpler
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("platformName", "android");
capabilities.setCapability("appium:automationName", "uiautomator2");
capabilities.setCapability("appium:platformVersion", "12");
capabilities.setCapability("appium:deviceName", "Pixel 3a API 31");
capabilities.setCapability("appium:app", "/Users/automationpanda/Desktop/Applifashion/main-app-debug.apk");
capabilities.setCapability("appium:appPackage", "com.applitools.applifashion.main");
capabilities.setCapability("appium:appActivity", "com.applitools.applifashion.main.activities.MainActivity");
capabilities.setCapability("appium:fullReset", "true");
// Initialize the Appium driver
driver = new AppiumDriver(new URL("http://127.0.0.1:4723/wd/hub"), capabilities);
wait = new WebDriverWait(driver, Duration.ofSeconds(30));
}
I also added a cleanup method to quit the Appium driver after each test:
@AfterEach
public void quitDriver() {
driver.quit();
}
I wrote one test case that performs shoe shopping. It loads the main page and then opens a product page using locators I found with Appium Inspector:
@Test
public void shopForShoes() {
// Tap the first shoe
final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
driver.findElement(shoeMainImageLocator).click();
// Wait for the product page to appear
final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));
}
At this stage, I hadn’t written any assertions yet. I just wanted to see if my test could successfully interact with the app. Indeed, it could, and the test passed when I ran it! As the test ran, I could watch it interact with the app in the emulator.
My next step was to write assertions. I could have picked out elements on each page to check, but there were a lot of shoes and words on those pages. I could’ve spent a whole afternoon poking around for locators through the Appium Inspector and then tweaking my automation code until things ran smoothly. Even then, my assertions wouldn’t capture things like layout, colors, or positioning.
I wanted to use visual assertions to verify app correctness. I could use the Applitools SDK for Appium in Java to take one-line visual snapshots at the end of each test method. However, I wanted more: I wanted to test multiple devices, not just my Pixel 3a emulator. There are countless Android device models on the market, and each has unique aspects like screen size. I wanted to make sure my app would look visually perfect everywhere.
In the past, I would need to set up each target device myself, either as an emulator or as a physical device. I’d also need to run my test suite in full against each target device. Now, I can use Applitools Native Mobile Grid (NMG) instead. NMG works just like Applitools Ultrafast Grid (UFG), except that instead of browsers, it provides emulated Android and iOS devices for visual checkpoints. It’s a great way to scale mobile test execution. In my Java code, I can set up Applitools Eyes to upload results to NMG and run checkpoints against any Android devices I want. I don’t need to set up a bunch of devices locally, and the visual checkpoints will run much faster than any local Appium reruns. Win-win!
To get started, I needed my Applitools account. If you don’t have one, you can register one for free.
Then, I added the Applitools Eyes SDK for Appium to my Gradle dependencies:
testImplementation 'com.applitools:eyes-appium-java5:5.12.0'
I added a “before all” setup method to ApplifashionTest to set up the Applitools configuration for NMG. I put this in a “before all” method instead of a “before each” method because the same configuration applies for all tests in this suite:
private static Configuration config;
private static VisualGridRunner runner;
@BeforeAll
public static void setUpAllTests() {
// Create the runner for the Ultrafast Grid
// Warning: If you have a free account, then concurrency will be limited to 1
runner = new VisualGridRunner(new RunnerOptions().testConcurrency(5));
// Create a configuration for Applitools Eyes
config = new Configuration();
// Set the Applitools API key so test results are uploaded to your account
config.setApiKey("<insert-your-API-key-here>");
// Create a new batch
config.setBatch(new BatchInfo("Applifashion in the NMG"));
// Add mobile devices to test in the Native Mobile Grid
config.addMobileDevices(
new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21),
new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10),
new AndroidDeviceInfo(AndroidDeviceName.Pixel_4));
}
The configuration for NMG was almost identical to a configuration for UFG. I created a runner, and I created a config object with my Applitools API key, a batch name, and all the devices I wanted to target. Here, I chose three different phones: Galaxy S21, Galaxy Note 10, and Pixel 4. Currently, NMG supports 18 different Android devices, and support for more is coming soon.
At the bottom of the “before each” method, I added code to set up the Applitools Eyes object for capturing snapshots:
private Eyes eyes;
@BeforeEach
public void setUpAppium(TestInfo testInfo) throws IOException {
// …
// Initialize Applitools Eyes
eyes = new Eyes(runner);
eyes.setConfiguration(config);
eyes.setIsDisabled(false);
eyes.setForceFullPageScreenshot(true);
// Open Eyes to start visual testing
eyes.open(driver, "Applifashion Mobile App", testInfo.getDisplayName());
}
Likewise, in the “after each” cleanup method, I added code to “close eyes,” indicating the end of a test for Applitools:
@AfterEach
public void quitDriver() {
// …
// Close Eyes to tell the server it should display the results
eyes.closeAsync();
}
Finally, I added code to each test method to capture snapshots using the Eyes object. Each snapshot is a one-line call that captures the full screen:
@Test
public void shopForShoes() {
// Take a visual snapshot
eyes.check("Main Page", Target.window().fully());
// Tap the first shoe
final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
driver.findElement(shoeMainImageLocator).click();
// Wait for the product page to appear
final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));
// Take a visual snapshot
eyes.check("Product Page", Target.window().fully());
}
When I ran the test with these visual assertions, it ran one time locally, and then NMG ran each snapshot against the three target devices I specified. Here’s a look from the Applitools Eyes dashboard at some of the snapshots it captured:
The results are marked “New” because these are the first “baseline” snapshots. All future checkpoints will be compared to these images.
Another cool thing about these snapshots is that they capture the full page. For example, the main page will probably display only 2-3 rows of shoes within its viewport on a device. However, Applitools Eyes effectively scrolls down over the whole page and stitches together the full content as if it were one long image. That way, visual snapshots capture everything on the page – even what the user can’t immediately see!
Capturing baseline images is only the first step with visual testing. Tests should be run regularly, if not continuously, to catch problems as soon as they happen. Visual checkpoints should point out any differences to the tester, and the tester should judge if the change is good or bad.
I wanted to try this change detection with NMG, so I reran tests against a slightly broken “dev” version of the Applifashion app. Can you spot the bug?
The formatting for the product page was too narrow! “Traditional” assertions would probably miss this type of bug because all the content is still on the page, but visual assertions caught it right away. Visual checkpoints worked the same on NMG as they would on UFG or even with the classic (e.g. local machine) Applitools runner.
When I switched back to the “main” version of the app, the tests passed again because the visuals were “fixed”:
While running all these tests, I noticed that mobile test execution is pretty slow. The one test running on my laptop took about 45 seconds to complete. It needed time to load the app in the emulator, make its interactions, take the snapshots, and close everything down. However, I also noticed that the visual assertions in NMG were relatively fast compared to my local runs. Rendering six snapshots took about 30 seconds to complete – three times the coverage in significantly less time. If I had run tests against more devices in parallel, I could probably have seen an even greater coverage-to-time ratio.
My first foray into mobile testing was quite a journey. It required much more tooling than web UI testing, and setup was trickier. Overall, I’d say testing mobile is indeed more difficult than testing web. Thankfully, the principles of good test automation were the same, so I could still develop decent tests. If I were to add more tests, I’d create a class for reading capabilities as inputs from environment variables or resource files, and I’d create another class to handle Applitools setup.
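As an illustration of that idea, a small helper could load Appium capabilities from a properties file instead of hard-coding them in the test class. The sketch below is hypothetical (the class name, keys, and file layout are my own, not from the example repository) and uses only the JDK, so the resulting map could then be fed into DesiredCapabilities in a real test:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical helper: loads Appium capability names and values from a
// properties source so tests don't hard-code them.
public class CapabilityLoader {

    // Reads properties and returns them as an ordered capability map.
    // In a real test, each entry could be passed to
    // DesiredCapabilities.setCapability (adding the "appium:" prefix
    // where the W3C protocol requires it).
    public static Map<String, String> load(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        Map<String, String> caps = new LinkedHashMap<>();
        for (String name : props.stringPropertyNames()) {
            caps.put(name, props.getProperty(name));
        }
        return caps;
    }

    public static void main(String[] args) throws IOException {
        // In a real project this content would come from a resource file
        // such as src/test/resources/capabilities.properties.
        String sample = "automationName=uiautomator2\n"
                + "platformVersion=12\n";
        Map<String, String> caps = load(new StringReader(sample));
        System.out.println(caps.get("automationName")); // prints "uiautomator2"
    }
}
```

Note that the Java properties format treats an unescaped `:` as a key-value delimiter, which is why the sketch uses bare keys and leaves the `appium:` prefix to be applied in code.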
Visual testing with Applitools Native Mobile Grid also made test development much easier. Setting everything up just to start testing was enough of a chore. Coding the test cases felt straightforward because I could focus my mental energy on interactions and take simple snapshots for verifications. Trying to decide all the elements I’d want to check on a page and then fumbling around the Appium Inspector to figure out decent locators would multiply my coding time. NMG also enabled me to run my tests across multiple different devices at the same time without needing to pay hundreds of dollars per device or sucking up a few gigs of storage and memory on my laptop. I’m excited to see NMG grow with support for more devices and more mobile development frameworks in the future.
Despite the prevalence of mobile devices in everyday life, mobile testing still feels far less mature as a practice than web testing. Anecdotally, it seems that there are fewer tools and frameworks for mobile testing, fewer tutorials and guides for learning, and fewer platforms that support mobile environments well. Perhaps this is because mobile test automation is an order of magnitude more difficult and therefore more folks shy away from it. There’s no reason for it to be left behind anymore. Given how much we all rely on mobile apps, the risks of failure are just too great. Technologies like Visual AI and Applitools Native Mobile Grid make it easier for folks like me to embrace mobile testing.
The post Writing Your First Appium Test For iOS Devices appeared first on Automated Visual Testing | Applitools.
This is the third and final post in our Hello World introduction series to Appium, and we’ll discuss how to create your first Appium test for iOS. You can read the first post for an introduction to Appium, or the second to learn how to create your first Appium test for Android.
Congratulations on having made it so far. I hope you are slowly becoming more comfortable with Appium and realizing just how powerful a tool it really is for mobile automation, and that it’s not that difficult to get started with it.
This is the final post in this short series on helping you start with Appium and write your first tests. If you need a refresher on what Appium is and writing your first Android test with it, you can read the earlier parts here:
In this post, we’ll learn how to set up your dev environment and write your first Appium based iOS test.
We’ll need some dependencies to be preinstalled on your dev machine.
Let’s go over them one by one.
Also, remember that it’s completely okay if you don’t understand all the details of these tools in the beginning. Appium abstracts most of those details away, and you can always dig deeper later if you need some very specific capability of these libraries.
To run iOS tests, we need a machine running macOS with Xcode installed.
The command below sets up the command-line tools that we need to run our first test:
xcode-select --install
You can think of Carthage as a dependency manager that lets you add frameworks to your Cocoa applications and build required dependencies:
brew install carthage
The libimobiledevice library allows Appium to talk to iOS devices using native protocols:
brew install libimobiledevice
ios-deploy helps to install and debug iOS apps from the command line:
brew install ios-deploy
ios-webkit-debug-proxy allows Appium to access web views on real iOS devices:
brew install ios-webkit-debug-proxy
idb (iOS Development Bridge) is a set of device utilities made by Facebook; Appium talks to it through a Node.js wrapper:
brew tap facebook/fb
brew install idb-companion
pip3.6 install fb-idb
If you are curious, you can read the reference blogs below that helped me come up with this shortlist of dependencies; they are good reads for more context:
For our first iOS test, we’ll use a sample demo app provided by Appium.
You can download the zip file from here, unzip it, and copy it under the src/test/resources directory in the project, so that we have a TestApp.app file under the test resources folder.
If you are following along by checking out the GitHub repo appium-fast-boilerplate, you’ll see the iOS app path is mentioned in the file ios-caps.json under src/main/resources/.
This file represents Appium capabilities in JSON format and you can change them based on which iOS device you want to run them on.
When we run the test, DriverManager will pick these up and help create the Appium session. You can read part 2 of this blog series to learn more about this flow.
{
"platformName": "iOS",
"automationName": "XCUITest",
"deviceName": "iPhone 13",
"app": "src/test/resources/TestApp.app"
}
Our app has a set of UI controls with one section representing a calculator wherein we could enter two numbers and get their sum (see below snapshot):
We would automate the below flow:
Pretty basic, right?
Below is what a sample test would look like (see the code here):
import constants.TestGroups;
import org.testng.Assert;
import org.testng.annotations.Test;
import pages.testapp.home.HomePage;
public class IOSTest extends BaseTest {
@Test(groups = {TestGroups.IOS})
public void addNumbers() {
String actualSum = new HomePage(this.driver)
.enterTwoNumbersAndCompute("5", "5")
.getSum();
Assert.assertEquals(actualSum, "10");
}
}
Here, we follow the same good patterns that have served us well (like using Fluent, page objects, a base test, and driver manager) in our tests just as we did in our Android test.
You can read about these in detail in this earlier blog.
The beauty of the page object pattern is that it looks very similar regardless of the platform.
Below is the complete page object that implements the desired behavior for this test.
package pages.testapp.home;
import core.page.BasePage;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
public class HomePage extends BasePage {
private final By firstNumber = By.name("IntegerA");
private final By secondNumber = By.name("IntegerB");
private final By computeSumButton = By.name("ComputeSumButton");
private final By answer = By.name("Answer");
public HomePage(AppiumDriver driver) {
super(driver);
}
public HomePage enterTwoNumbersAndCompute(String first, String second) {
typeFirstNumber(first);
typeSecondNumber(second);
compute();
return this;
}
public HomePage typeFirstNumber(String number) {
WebElement firstNoElement = getElement(firstNumber);
type(firstNoElement, number);
return this;
}
public HomePage typeSecondNumber(String number) {
WebElement secondNoElement = getElement(secondNumber);
type(secondNoElement, number);
return this;
}
public HomePage compute() {
WebElement computeBtn = getElement(computeSumButton);
click(computeBtn);
return this;
}
public String getSum() {
waitForElementToBePresent(answer);
return getText(getElement(answer));
}
}
Let’s unpack this and understand its components.
We create a HomePage class that inherits from BasePage, which has wrappers over Appium API methods.
public class HomePage extends BasePage
We define our selectors of type By, using Appium Inspector to discover that name is the unique selector for these elements. In your own projects, depending on ID is probably a safer bet.
private final By firstNumber = By.name("IntegerA");
private final By secondNumber = By.name("IntegerB");
private final By computeSumButton = By.name("ComputeSumButton");
private final By answer = By.name("Answer");
Next, we initialize this class with a driver instance that’s passed in from the test, calling the parent class constructor to ensure we have the appropriate driver instance set:
public HomePage(AppiumDriver driver) {
super(driver);
}
We then create a wrapper function that takes two numbers as strings, types numbers in the two text boxes, and taps on the button.
public HomePage enterTwoNumbersAndCompute(String first, String second) {
typeFirstNumber(first);
typeSecondNumber(second);
compute();
return this;
}
We implement these methods by reusing methods from BasePage while ensuring the correct page object is returned.
Since there is no redirection happening in these tests and it’s a single screen, we just return this (i.e., the current page object in Java syntax). This enables writing tests in the Fluent style that you saw earlier.
public HomePage typeFirstNumber(String number) {
WebElement firstNoElement = getElement(firstNumber);
type(firstNoElement, number);
return this;
}
public HomePage typeSecondNumber(String number) {
WebElement secondNoElement = getElement(secondNumber);
type(secondNoElement, number);
return this;
}
public HomePage compute() {
WebElement computeBtn = getElement(computeSumButton);
click(computeBtn);
return this;
}
Finally, we return the string that has the sum of two numbers in the getSum() method and let the test perform desired assertions:
public String getSum() {
waitForElementToBePresent(answer);
return getText(getElement(answer));
}
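Stripped of the Appium specifics, the Fluent pattern used throughout this page object is simply each action method returning this so that calls chain. The framework-free sketch below illustrates the idea; the class and its in-memory "calculator" are my own invention for demonstration, not part of the repo:

```java
// Illustrative Fluent page object: every action returns `this`,
// so the test reads as one chained sentence, just like HomePage above.
public class FluentCalculatorPage {
    private int first;
    private int second;
    private int sum;

    public FluentCalculatorPage typeFirstNumber(int n) {
        this.first = n;
        return this; // returning the same object enables chaining
    }

    public FluentCalculatorPage typeSecondNumber(int n) {
        this.second = n;
        return this;
    }

    public FluentCalculatorPage compute() {
        this.sum = first + second; // stand-in for tapping the button
        return this;
    }

    public String getSum() {
        return String.valueOf(sum);
    }

    public static void main(String[] args) {
        // Mirrors the shape of the addNumbers test shown earlier.
        String sum = new FluentCalculatorPage()
                .typeFirstNumber(5)
                .typeSecondNumber(5)
                .compute()
                .getSum();
        System.out.println(sum); // prints "10"
    }
}
```

If an action navigated to a different screen, the method would return that screen’s page object instead of this, and the chain would continue on the new page.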
Before running the test, ensure that the Appium server is running in another terminal and that your Appium 2.0 server has the XCUITest driver installed by following the steps below:
# Ensure driver is installed
appium driver install xcuitest
# Start the appium server before running your test
appium
Within the project, you can run the test using the command below, or use the test runner in IntelliJ or an equivalent editor to run the desired test.
gradle wrapper clean build runTests -Dtag="IOS" -Dtarget="IOS"
With this, we come to an end to this short three-part series on getting started with Appium, from a general introduction to Appium to working with Android to this post on iOS. Hopefully, this series makes it a little bit easier for you or your friends to get set up with Appium.
Exploring the remainder of Appium’s API, capabilities, and tooling is left as an exercise to you, my brave and curious reader. I’m sure pretty soon you’ll also be sharing similar posts and hopefully, I’ll learn a thing or two from you as well. Remember, the Appium docs, the community, and Appium Conf are great sources to go deeper into Appium.
So, what are you waiting for? Go for it!
Remember, you can see the entire project on GitHub at appium-fast-boilerplate; clone or fork it and play around with it. Hopefully, this post helps you a little bit in starting iOS automation using Appium. If you found it valuable, do leave a star on the repo, and in case there is any feedback, don’t hesitate to create an issue.
You can also check out https://automationhacks.io for other posts that I’ve written about software engineering and testing, and this page for a talk that I gave on the same topic.
As always, please do share this with your friends or colleagues and if you have thoughts or feedback, I’d be more than happy to chat over on Twitter or in the comments. Until next time. Happy testing and coding.
The post Writing Your First Appium Test For Android Mobile Devices appeared first on Automated Visual Testing | Applitools.
This is the second post in our Hello World introduction series to Appium, and we’ll discuss how to create your first Appium test for Android. You can read the first post where we discussed what Appium is, including its core concepts and how to set up the Appium server. You can also read the next post on setting up your first Appium iOS test.
In this post, we’ll build on top of earlier basics and focus on the below areas:
We have lots to cover, but don’t worry – by the end of this post, you will have run your first Appium-based Android test. Excited? Let’s go.
To run Android tests, we need to set up an Android SDK, ADB (Android debug bridge), and some other utilities.
The easiest way to set these up is to go to the Android site and download Android Studio (an IDE for developing Android apps), which will install all the desired libraries and also give us everything we need to run our first Android test.
Once downloaded and installed, open Android Studio, click on Configure, and then SDK Manager. Using this, you can download any Android API version from SDK Platforms.
You can also install any desired SDK Tools from here.
We’ll go with the defaults for now. This tool can also install any required updates for these tools, which is quite a convenient way of upgrading.
The Appium server needs to know where the Android SDK and other tools like the emulator and platform tools are present to help us run the tests.
We can do so by adding the variables below to the system environment.
On Mac/Linux, add these lines to your shell profile and reload it with source <shell_profile_file_name> (e.g. source ~/.zshrc):
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$ANDROID_HOME/emulator:$PATH
export PATH=$ANDROID_HOME/tools:$PATH
export PATH=$ANDROID_HOME/tools/bin:$PATH
export PATH=$ANDROID_HOME/platform-tools:$PATH
If you are on Windows, you’ll need to add the path to Android SDK in the ANDROID_HOME variable under System environment variables.
Once done, run the adb command on the terminal to verify ADB is set up:
➜ appium-fast-boilerplate git:(main) adb
Android Debug Bridge version 1.0.41
Version 30.0.5-6877874
Installed as /Users/gauravsingh/Library/Android/sdk/platform-tools/adb
These are a lot of tedious steps; if you want to set everything up quickly, you can execute this excellent script written by Anand Bagmar.
Our Android tests will run either on an emulator or a real Android device plugged in. Let’s see how to create an Android emulator image.
Open Device Manager (the AVD Manager) in Android Studio. You’ll be greeted with a blank screen with no virtual devices listed. Tap on Create a virtual device to launch the Virtual Device Configuration flow:
Next, select an Android device like TV, Phone, Tablet, etc., and the desired size and resolution combination.
It’s usually a good idea to set up an emulator with Play Store services available (see the icon under the Play Store column) as certain apps might need the latest Play services to be installed.
We’ll go with Pixel 3a with Play Store available.
Next, we’ll need to select which Android version this emulator should have. You can choose any of the desired versions and download the image. We’ll choose Android Q (10.0 version).
You need to give this image a name. We’ll need to use this later in Appium capabilities so you can give any meaningful name or go with the default. We’ll name it Automation.
Nice! We have created our emulator. You can fire it up by tapping the Play icon under the Actions section.
You should see an emulator boot up on your device similar to a real phone.
While the emulator is an in-memory image of the Android OS that you can quickly spin up and destroy, it does consume physical resources like RAM and CPU. It’s always a good idea to verify your app on a real device.
We’ll see how to set up a real device so that we can run automation on it.
You need to connect your device to your machine via USB. Once done:
Enable Developer options on the device (tap Build number seven times under Settings → About phone).
Turn on USB debugging under Developer options, and accept the authorization prompt when it appears.
And that’s all you need to run our automation on a connected real device.
Appium comes with a nifty inspector desktop app that can inspect your application under test, help you identify element locators (i.e., ways to identify elements on the screen), and even let you play around with the app.
It can connect to any running Appium server and is a really cool utility for identifying element locators and developing Appium scripts.
You can download it by going to the Appium GitHub repo and searching for appium-inspector.
Go to Releases and find the latest .dmg (on Mac) or .exe (on Windows) to download and install.
On Mac, you may get a warning stating: “Appium Inspector” can’t be opened because Apple cannot check it for malicious software.
To mitigate this, just go to System preferences > Security & Privacy > General and say Open Anyway for Appium Inspector. For more details see Issue 1217.
Once you launch Appium Inspector, you’ll be greeted with the home screen below. Notice the JSON Representation area under the Desired Capabilities section.
Think of them as properties that you want your driver’s session to have. For example, you may want to reset your app after your test, or launch the app in a particular orientation. These are all achievable by specifying a Key-Value pair in a JSON that we provide to the Appium server session.
Please see the Appium docs for more details on these capabilities.
Below are some sample capabilities we can give for Android:
{
"platformName": "android",
"automationName": "uiautomator2",
"platformVersion": "10",
"deviceName": "Automation",
"app": "/<absolute_path_to_app>/ApiDemos-debug.apk",
"appPackage": "io.appium.android.apis",
"appActivity": "io.appium.android.apis.ApiDemos"
}
For this post, we’ll use the sample apps provided by Appium. You can see them here; once you’ve downloaded one, keep track of its absolute path and update the app key in the JSON accordingly.
We’ll add this JSON under JSON representation and then tap on the Save button.
It would be a good idea to save this config for the future. You can tap on ‘Save as’ and give it a meaningful name.
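If you prefer building these capabilities in code instead of JSON, the same key-value pairs can be assembled as a plain map. This is a minimal, JDK-only sketch: in a real framework this map would typically be wrapped in Selenium’s DesiredCapabilities (as the boilerplate project does later in this post), and the APK path shown is a placeholder you would replace with your own.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AndroidCaps {
    // Builds the same capabilities as the JSON shown above.
    // The appPath argument is a placeholder for the absolute path
    // to the APK on your machine.
    public static Map<String, Object> build(String appPath) {
        Map<String, Object> caps = new LinkedHashMap<>();
        caps.put("platformName", "android");
        caps.put("automationName", "uiautomator2");
        caps.put("platformVersion", "10");
        caps.put("deviceName", "Automation"); // the AVD name we created earlier
        caps.put("app", appPath);
        caps.put("appPackage", "io.appium.android.apis");
        caps.put("appActivity", "io.appium.android.apis.ApiDemos");
        return caps;
    }

    public static void main(String[] args) {
        System.out.println(build("/path/to/ApiDemos-debug.apk"));
    }
}
```

Keeping capabilities in one place like this makes it easy to swap the device name or APK path per environment.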
To start the inspection session, you need an Appium server running locally. Start it by typing appium
on the command line:
➜ appium-fast-boilerplate git:(main) appium
[Appium] Welcome to Appium v2.0.0-beta.25
[Appium] Attempting to load driver uiautomator2...
[Appium] Appium REST http interface listener started on 0.0.0.0:4723
[Appium] Available drivers:
[Appium] - uiautomator2@2.0.1 (automationName 'UiAutomator2')
[Appium] No plugins have been installed. Use the "appium plugin" command to install the one(s) you want to use.
Let’s make sure our emulator is also up and running. We can start the emulator via the AVD Manager in Android Studio, or, in case you are more command-line savvy, I have written an earlier post on how to do this from the command line as well.
Once done, tap on the Start Session button. This should launch the API Demos app and show the inspector home screen.
Using this, you can tap on any element in the app and see its element hierarchy and properties. This is very useful for authoring Appium scripts, and I encourage you to explore each section of this tool and get familiar with it, since you’ll be using it a lot.
Phew, that seemed like a lot of setup steps, but don’t worry: you only have to do this once. Now we can get down to the real business of writing our first automated test on Android.
You can download the project from GitHub at appium-fast-boilerplate.
We’ll also walk through the fundamental concepts behind writing page objects and tests on top of it. Let’s take a look at the high-level architecture.
Before automating any test, we need to be clear on what the purpose of that test is. I’ve found the Arrange-Act-Assert pattern quite useful for reasoning about this. Read this post by Andrew Knight in case you’d like to know more about it.
Our test would perform the below:
Let’s start by seeing our test.
import constants.TestGroups;
import org.testng.Assert;
import org.testng.annotations.Test;
import pages.apidemos.home.APIDemosHomePage;
public class AndroidTest extends BaseTest {
@Test(groups = {TestGroups.ANDROID})
public void testLogText() {
String logText = new APIDemosHomePage(this.driver)
.openText()
.tapOnLogTextBox()
.tapOnAddButton()
.getLogText();
Assert.assertEquals(logText, "This is a test");
}
}
There are a few things to notice above:
public class AndroidTest extends BaseTest
Our class extends BaseTest. This is useful since we can perform common setup and teardown functions, including creating the driver session and closing it once our script is done.
This ensures that the tests are as simple as possible and do not overload the reader with any more details than they need to see.
String logText = new APIDemosHomePage(this.driver)
.openText()
.tapOnLogTextBox()
.tapOnAddButton()
.getLogText();
Our test reads like plain English, with a series of actions following one another. This is called the Fluent pattern, and we’ll see how it is set up in just a moment.
Let’s see our BaseTest class:
import constants.Target;
import core.driver.DriverManager;
import core.utils.PropertiesReader;
import exceptions.PlatformNotSupportException;
import io.appium.java_client.AppiumDriver;
import org.testng.ITestContext;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import java.io.IOException;
public class BaseTest {
protected AppiumDriver driver;
protected PropertiesReader reader = new PropertiesReader();
@BeforeMethod(alwaysRun = true)
public void setup(ITestContext context) {
context.setAttribute("target", reader.getTarget());
try {
Target target = (Target) context.getAttribute("target");
this.driver = new DriverManager().getInstance(target);
} catch (IOException | PlatformNotSupportException e) {
e.printStackTrace();
}
}
@AfterMethod(alwaysRun = true)
public void teardown() {
driver.quit();
}
}
Let’s unpack this class.
protected AppiumDriver driver;
We set our driver instance as protected so that all test classes will have access to it.
protected PropertiesReader reader = new PropertiesReader();
We create an instance of the PropertiesReader class to read relevant properties. This is useful since we want to be able to switch our driver instances based on different test environments and conditions. If curious, please see its implementation here.
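As a rough, JDK-only sketch of the idea: the actual PropertiesReader in the repo may read a properties file and support more options, but at its core it resolves which target to run against. The `device` property name here mirrors the -Ddevice flag used in the Gradle command later in this post; treat it as an assumption, not the repo’s exact implementation.

```java
public class TargetReader {
    // Reads the target platform from a JVM system property,
    // e.g. -Ddevice=ANDROID, falling back to ANDROID by default.
    public static String getTarget() {
        return System.getProperty("device", "ANDROID").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println("Running against: " + getTarget());
    }
}
```

Resolving the target in one place means the same test code can be pointed at Android or iOS purely via a command-line switch.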
Target target = (Target) context.getAttribute("target");
this.driver = new DriverManager().getInstance(target);
We get the relevant Target and then use it to get an instance of AppiumDriver from a class called DriverManager.
We’ll use this reusable class to:
package core.driver;
import constants.Target;
import exceptions.PlatformNotSupportException;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import static core.utils.CapabilitiesHelper.readAndMakeCapabilities;
public class DriverManager {
private static AppiumDriver driver;
// For Appium < 2.0, append /wd/hub to the APPIUM_SERVER_URL
String APPIUM_SERVER_URL = "http://127.0.0.1:4723";
public AppiumDriver getInstance(Target target) throws IOException, PlatformNotSupportException {
System.out.println("Getting instance of: " + target.name());
switch (target) {
case ANDROID:
return getAndroidDriver();
case IOS:
return getIOSDriver();
default:
throw new PlatformNotSupportException("Please provide supported target");
}
}
private AppiumDriver getAndroidDriver() throws IOException {
HashMap map = readAndMakeCapabilities("android-caps.json");
return getDriver(map);
}
private AppiumDriver getIOSDriver() throws IOException {
HashMap map = readAndMakeCapabilities("ios-caps.json");
return getDriver(map);
}
private AppiumDriver getDriver(HashMap map) {
DesiredCapabilities desiredCapabilities = new DesiredCapabilities(map);
try {
driver = new AppiumDriver(
new URL(APPIUM_SERVER_URL), desiredCapabilities);
} catch (MalformedURLException e) {
e.printStackTrace();
}
return driver;
}
}
You can observe:
Let’s take a look at the example of a page object that enables a Fluent pattern.
package pages.apidemos.home;
import core.page.BasePage;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import pages.apidemos.logtextbox.LogTextBoxPage;
public class APIDemosHomePage extends BasePage {
private final By textButton = By.xpath("//android.widget.TextView[@content-desc=\"Text\"]");
private final By logTextBoxButton = By.xpath("//android.widget.TextView[@content-desc=\"LogTextBox\"]");
public APIDemosHomePage(AppiumDriver driver) {
super(driver);
}
public APIDemosHomePage openText() {
WebElement text = getElement(textButton);
click(text);
return this;
}
public LogTextBoxPage tapOnLogTextBox() {
WebElement logTextBoxButtonElement = getElement(logTextBoxButton);
waitForElementToBeVisible(logTextBoxButtonElement);
click(logTextBoxButtonElement);
return new LogTextBoxPage(driver);
}
}
Notice the following:
The APIDemosHomePage above is an example page object; it extends a shared BasePage:
package core.page;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.util.List;
public class BasePage {
protected AppiumDriver driver;
public BasePage(AppiumDriver driver) {
this.driver = driver;
}
public void click(WebElement elem) {
elem.click();
}
public WebElement getElement(By by) {
return driver.findElement(by);
}
public List<WebElement> getElements(By by) {
return driver.findElements(by);
}
public String getText(WebElement elem) {
return elem.getText();
}
public void waitForElementToBeVisible(WebElement elem) {
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.visibilityOf(elem));
}
public void waitForElementToBePresent(By by) {
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.presenceOfElementLocated(by));
}
public void type(WebElement elem, String text) {
elem.sendKeys(text);
}
}
Every page object inherits from a BasePage that wraps Appium methods.
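To see the fluent-pattern mechanics in isolation, here is a simplified, Appium-free illustration: methods that stay on the same screen return this, while navigation methods return the next page object. All class names below are illustrative stand-ins for the real page objects, and the "driver" is faked with a plain log.

```java
public class FluentDemo {
    // A stand-in for the shared driver that BasePage would hold.
    static class FakeDriver {
        final StringBuilder log = new StringBuilder();
    }

    static class HomePage {
        private final FakeDriver driver;
        HomePage(FakeDriver driver) { this.driver = driver; }

        // Stays on the same page, so it returns this.
        HomePage openText() {
            driver.log.append("openText>");
            return this;
        }

        // Navigates to a new screen, so it returns the next page object.
        LogPage tapOnLogTextBox() {
            driver.log.append("tapOnLogTextBox>");
            return new LogPage(driver);
        }
    }

    static class LogPage {
        private final FakeDriver driver;
        LogPage(FakeDriver driver) { this.driver = driver; }

        String getLogText() {
            driver.log.append("getLogText");
            return driver.log.toString();
        }
    }

    // Runs the same chain shape as the real test and returns the call trace.
    public static String run() {
        FakeDriver driver = new FakeDriver();
        return new HomePage(driver)
                .openText()
                .tapOnLogTextBox()
                .getLogText();
    }

    public static void main(String[] args) {
        System.out.println(run()); // openText>tapOnLogTextBox>getLogText
    }
}
```

The key design choice is that the return type of each method encodes where the user ends up, so an impossible navigation simply won’t compile.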
Congratulations, you’ve written your first Appium Android test! You can run it either via the IDE or via a Gradle command:
./gradlew clean build runTests -Dtag="ANDROID" -Ddevice="ANDROID"
You can see the entire project on GitHub at appium-fast-boilerplate; clone it and play around with it. Hopefully, this post helps you get started with Android automation using Appium.
In the next post, we’ll dive into the world of iOS Automation with Appium and write our first hello world test.
You could also check out https://automationhacks.io for other posts that I’ve written about software engineering and testing.
As always, do share this with your friends or colleagues and if you have thoughts or feedback, I’d be more than happy to chat over at Twitter or elsewhere. Until next time. Happy testing and coding.
The post Writing Your First Appium Test For Android Mobile Devices appeared first on Automated Visual Testing | Applitools.
The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.
Native mobile testing can be slow and error-prone, with questionable ROI. With the Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps with stability, speed, and security – in parallel across dozens of devices. The new offering extends the innovation of the Ultrafast Test Cloud beyond browsers and into mobile applications.
Mobile testing has a long and difficult history. Many industry-standard tools and solutions have struggled with the challenge of testing across an extremely wide range of devices, viewports and operating systems.
The approach in use by much of the industry today is to utilize a lab made up of emulators or simulators, or even large farms of real devices. The tests must then be run on every device independently. The process is not only costly, slow, and insecure, but error-prone as well.
At Applitools, we had already developed technology to solve a similar problem for web testing, and we were determined to solve this issue for mobile testing too.
Today, we are introducing the Ultrafast Test Cloud for Native Mobile. We built on the success of the Ultrafast Test Cloud Platform, which is already being used to boost the performance and quality of responsive web testing by 150 of the world’s top brands. The Ultrafast Test Cloud for Native Mobile allows teams to run automated tests on native mobile apps on a single device, and instantly render it across any desired combination of devices.
“This is the first meaningful evolution of how to test native mobile apps for the software industry in a long time,” said Gil Sever, CEO and co-founder of Applitools. “People are increasingly going to mobile for everything. One major area of improvement needed in delivering better mobile apps faster, is centered around QA and testing. We’re building upon the success of Visual AI and the Ultrafast Test Cloud to make the delivery and execution of tests for native mobile apps more consistent and faster than ever, and at a fraction of the cost.”
Last year we introduced our Ultrafast Test Grid, enabling teams to test for the web and responsive web applications against all combinations of browsers, devices and viewports with blazing speed. We’ve seen how some of the largest companies in the world have used the power of Visual AI and the Ultrafast Test Grid to execute their visual and functional tests more rapidly and reliably on the web.
We’re excited to now offer the same speed, agility, and security for native mobile applications. If you’re familiar with our current Ultrafast Test Grid offering, you’ll find the experience a familiar one.
Mobile usage continues to rise globally, and more and more critical activity – from discovery to research and purchase – is taking place online via mobile devices. Consumers are demanding higher and higher quality mobile experiences, and a poorly functioning site or visual bugs can detract significantly from the user’s experience. There is a growing portion of your audience you can only convert with a five-star quality app experience.
While testing has traditionally been challenging on mobile, the Ultrafast Test Cloud for Native Mobile increases your ability to test quickly, early and often. That means you can develop a superior mobile experience at less cost than the competition, and stand out from the crowd.
With this announcement, we’re also launching our free early access program, with access to be granted on a limited basis at first. Prioritization will be given to those who register early. To learn more, visit the link below.
The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.
The post A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile appeared first on Automated Visual Testing | Applitools.
I have been testing Analytics for the past 10+ years. In the initial days, it was very painful and error-prone, as I was doing this manually. Over the years, as I understood this niche area better, and spent time understanding the reason and impact of data Analytics on any product and business, I started getting smarter about how to test analytics events well.
This post will focus on how to test Analytics for Mobile apps (Android / iOS), and also answer some questions I have gotten from the community regarding the same.
Analytics is the “air your product breathes”. Analytics allows teams to:
Analytics allows the business team and product team to understand how well (or not) features are being used by the users of the system. Without this data, the team would (almost) be shooting in the dark about the ways the product needs to evolve.
The analytics information is critical for understanding where in a feature journey the user “drops off”. Inference on that data provides insights into whether the drop happens because of the way the features have been designed, because the user experience is not adequate, or, of course, because there is a defect in the way the implementation has been done.
For any team to know how their product is used by the users, you need to instrument your product so that it can share with you meaningful (non-private) information about the usage of your product. From this data, the team would try to infer context and usage patterns which would serve as inputs to make the product better.
The instrumentation I refer to above is of different types.
This can be logs sent to your servers – typically these are technical information about the product.
Another form of instrumentation is analytics events. These capture the nature of an interaction and its associated metadata, and send that information to (typically) a separate server / tool. This information is sent asynchronously and does not have any impact on the functioning or performance of the product.
This is typically a 4 step process:
Once you know what information you want to capture and when, implementing Analytics into your product goes through the same process as for your regular product features & functionalities.
The analytics library is typically a very lightweight library, and is added as part of your web pages or your native apps (Android or iOS).
Once the library is embedded in the product, whenever the user does any specific, predetermined actions, the front-end client code would capture all the relevant information regarding the event, and then trigger a call to the analytic tool being used with that information.
Ex: Trigger an analytics event when user “clicks on the search button”
The data in the triggered event can be sent in 2 ways:
An analytics event is a simple HTTPS request sent to the analytics tool(s) your product uses. Yes, your product may be using multiple tools to capture and visualise different types of information.
Below is an example of an analytics event.
Let’s dissect this call to understand what it is doing:
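In case the sample request above does not render in this format, a hypothetical hit can stand in for it. The snippet below dissects such a URL with plain JDK code; the host, path, and every parameter are illustrative (Measurement-Protocol-style), not taken from any real product.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AnalyticsHitDissector {
    // Splits the query string of an analytics hit into key/value pairs.
    public static Map<String, String> dissect(String url) {
        Map<String, String> params = new LinkedHashMap<>();
        int q = url.indexOf('?');
        if (q < 0) return params;
        for (String pair : url.substring(q + 1).split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    public static void main(String[] args) {
        // Hypothetical hit: an "event" of category "search", action "click".
        String hit = "https://www.example-analytics.com/collect"
                + "?v=1&t=event&ec=search&ea=click&el=search_button";
        dissect(hit).forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

Seen this way, an analytics event is just a URL whose query parameters carry the event name and its metadata.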
There are different ways to test analytics events. Let’s go through them.
Well, if testing the end report is too late, then we need to shift-left and test at the source.
Based on requirements, the (front-end) developers would be adding the analytics library to the web pages or native apps. Then they set the trigger points when the event should be captured and sent to the analytics tool.
A good practice is for the analytics event generation and trigger to be implemented as a common function / module, which will be called by any functionality that needs to send an analytics event.
This will allow the developers to write unit tests to ensure:
This approach will ensure that your event triggering and generation logic is well tested. Also, these tests will be able to be run on developer machines as well as your build pipelines / jobs in your CI (Continuous Integration) server. So you get quick feedback in case anything goes wrong.
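As an illustration of what such a unit-testable common function might look like: the builder below is a hypothetical sketch using only the JDK, with made-up parameter names, not code from any specific product. Unit tests can then assert that mandatory fields are always present and values are properly URL-encoded.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EventBuilder {
    // Builds the query string for an analytics event, URL-encoding values.
    // Parameter names (ec/ea/el) are illustrative Measurement-Protocol-style keys.
    public static String build(String category, String action, String label) {
        return "t=event"
                + "&ec=" + URLEncoder.encode(category, StandardCharsets.UTF_8)
                + "&ea=" + URLEncoder.encode(action, StandardCharsets.UTF_8)
                + "&el=" + URLEncoder.encode(label, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(build("search", "click", "search button"));
    }
}
```

Because the logic is a pure function, these checks run on developer machines and in CI without any device or browser.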
While unit testing is critical to ensure all aspects of the code work as expected, the context of dynamic data from real users cannot be understood from unit tests alone. Hence, we also need System Tests / End-2-End tests to understand whether analytics is working well.
Reference: https://devrant.com/rants/754857/when-you-write-2-unit-tests-and-no-integration-tests
Let’s look at the details of how you can test Analytics Events during Testing in any of your internal testing environments:
The details include the name of the event and the data in the query parameters.
This step is very important, and different from what your unit tests are able to validate. With this approach, you would be able to verify:
All of the above can be tested and verified even if you do not have the analytics tool set up or configured as per business requirements.
The advantage of this approach is that it complements the unit testing, and ensures that your product is behaving as expected in all scenarios.
The main challenge / disadvantage of this approach is that it is manual testing. Hence, it is very possible to miss certain scenarios or validation details in any manual test cycle. It is also impossible to scale and repeat this approach.
Hence, we need a better approach. The way unit tests are automated, the above activity of testing should also be automated. The next section talks about a solution for how you can automate testing of Analytics events as part of your System / end-2-end test automation.
This is unfortunately the most common approach teams take to test whether analytics events are being captured correctly, and it often ends up happening in production, when the app has already been released to its users. But you need to test early. Hence the above technique of testing at the source is critical for the team to know whether events are triggered and validated as soon as the implementation is completed.
I would recommend this strategy after you have completed Testing at the Source!
There are pros and cons of this approach.
The biggest disadvantage though of the above approach is that it is too late!
That said, there is still a lot of value in doing this. It indicates that your analytics tool is also configured correctly to accept the data, and that you are actually able to set up meaningful charts and reports that can reveal patterns and allow you to identify and prioritise the next steps to make the product better.
Let’s look at the approach to automate testing of Analytics events as part of your System / end-2-end Test Automation.
We will talk separately about Web & Mobile – as both of them need a slightly different approach.
There are 2 options to accomplish the Analytics event test automation for Web. They are as follows:
I built WAAT – Web Analytics Automation Testing – in Java & Ruby back in 2010. Integrate it into your automation framework using the instructions in the corresponding GitHub pages.
Here is an example of how this test would look using WAAT.
This approach will let you find the correct request and do the appropriate matching of parameters automatically.
With Selenium 4 almost available, you could potentially use the new APIs to query the network requests via the Chrome DevTools Protocol.
With this approach, you will need to write code to query the appropriate Analytics request from the list of requests captured, and compare the actual query parameters with what is expected.
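Whichever capture mechanism you use, the comparison step boils down to finding the right request among everything the page fired and checking its parameters. Here is a minimal, JDK-only sketch of that matching logic; the URLs, endpoint fragment, and parameter names are illustrative, and a real implementation would feed it the URLs captured via WAAT or CDP.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class AnalyticsRequestVerifier {
    // Returns the first captured URL that targets the analytics endpoint
    // and contains every expected key=value pair, or null if none match.
    public static String findMatching(List<String> capturedUrls,
                                      String endpointFragment,
                                      Map<String, String> expectedParams) {
        for (String url : capturedUrls) {
            if (!url.contains(endpointFragment)) continue;
            boolean allPresent = expectedParams.entrySet().stream()
                    .allMatch(e -> url.contains(e.getKey() + "=" + e.getValue()));
            if (allPresent) return url;
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> captured = Arrays.asList(
                "https://cdn.example.com/app.js",
                "https://www.example-analytics.com/collect?v=1&t=event&ec=search&ea=click");
        String match = findMatching(captured, "/collect",
                Map.of("ec", "search", "ea", "click"));
        System.out.println(match != null ? "Analytics event found" : "Missing!");
    }
}
```

Keeping this matcher independent of the capture mechanism means the same assertion code works whether the requests came from a proxy, WAAT, or the DevTools Protocol.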
That said, I will be working on enhancing WAAT to support a Chrome DevTools Protocol based plugin in the near future. Keep an eye out for updates to the WAAT project.
There are 2 options to accomplish the Analytics event test automation for Mobile apps (Android / iOS). They are as follows:
As described for the web, you can integrate WAAT – Web Analytics Automation Testing in your automation framework using the instructions in the corresponding github pages.
On the device where the test is running, you would need to do the following additional setup as described in the Proxy setup for Android device
This approach will let you find the correct request and do the appropriate matching of parameters automatically.
This is a customized implementation, but can work great in some contexts. This is what you can do:
This approach will allow us to validate events as they are being sent as a result of running the System / end-2-end tests.
As you may have noticed in the above sections for Web and Mobile, the actual testing of Analytics events is really the same in either case. The differences arise a little about how to capture the events, and maybe some proxy setup required.
There is another aspect that is different for Analytics testing for Mobile.
The Analytics tool sdk / library that is added to the Mobile app has an optimising feature – batching! This configurable feature (in most tools) allows customizing the number of requests that should be collected together. Once the batch is full, or on trigger of some specific events (like closing the app), all the events in the batch will be sent to the Analytics tool and then cleared / reset.
This feature is important for mobile devices, as the users may be on the move, (or using the apps in Airplane mode) and may not have internet connectivity when using the app. In such cases, if the device does not cache the analytics requests, then that data may be lost. Hence it is important for the app to store the analytics events and then send it at a later point when there is connectivity available.
Also, another reason batching of analytics events helps is to minimize the network traffic generated by the app.
So when automating mobile analytics events, ensure that once the test completes, the batched events are actually flushed from the app; only then will they be seen in the logs or on the proxy server, and only then can validation be done.
While batching can be a problem for Test Automation (since the events will not be generated / seen immediately), you could take one of these 2 approaches to make your tests deterministic:
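One deterministic tactic is to poll your captured-request log until the expected event shows up, or a timeout expires, instead of asserting immediately after the UI action. The sketch below uses only the JDK; the predicate and timings are illustrative, and in practice the supplier would read from your proxy or log capture.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class EventAwaiter {
    // Polls the supplier of captured URLs until one matches, or times out.
    public static boolean await(Supplier<List<String>> capturedUrls,
                                Predicate<String> matcher,
                                long timeoutMillis,
                                long pollIntervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (capturedUrls.get().stream().anyMatch(matcher)) {
                return true;
            }
            Thread.sleep(pollIntervalMillis);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a batch that flushes the event on the third poll.
        int[] polls = {0};
        Supplier<List<String>> source = () -> ++polls[0] >= 3
                ? List.of("https://analytics.example.com/collect?ec=search")
                : List.<String>of();
        boolean seen = await(source, url -> url.contains("ec=search"), 2000, 50);
        System.out.println(seen ? "Event flushed from batch" : "Timed out");
    }
}
```

Polling with a deadline keeps the test deterministic without hard-coding a fixed sleep that is either too short or wastefully long.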
I like my System Tests / end-2-end Test Automation solution to have the following capabilities built in:
See this post on Automating Functional / End-2-End Tests Across Multiple Platforms for implementation details for building a robust, scalable and maintainable cross-platform Test Automation Framework
The post A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile appeared first on Automated Visual Testing | Applitools.
The post What is Mobile Testing? appeared first on Automated Visual Testing | Applitools.
In this guide, you’ll learn the basics of what it means to test mobile applications. We’ll talk about why mobile testing is important, key types of mobile testing, as well as considerations and best practices to keep in mind.
Mobile testing is the process by which applications for modern mobile devices are tested for functionality, usability, performance and much more.
Note: This includes testing for native mobile apps as well as for responsive web or hybrid apps. We’ll talk more about the differences between these types of mobile applications below.
Mobile application testing can be automated or manual, and helps you ensure that the application you’re delivering to users meets all business requirements as well as user expectations.
Mobile internet usage continues to rise even as desktop/laptop internet usage is declining, a trend that has continued unabated for years. As more and more users spend an increasing amount of their time on mobile devices, it’s critical to provide a good experience on your mobile apps.
If you’re not testing the mobile experience your users are receiving, then you can’t know how well your application serves a large and growing portion of your users. Failing to understand this leads to dreaded one-star app reviews and negative feedback on social media.
Mobile app testing ensures your mobile experience is strong, no matter what kind of app you’re using or what platform it is developed for.
As you consider your mobile testing strategy, there are a number of things that are important to keep in mind in order to plan and execute an optimal approach.
There are three general categories of mobile applications that you may need to test today:
There are additional complexities that you need to consider when testing mobile applications, even if you are testing a web app. Mobile users will interact with your app on a large variety of operating systems and devices (Android in particular has numerous operating system versions and devices in wide circulation), with any number of standard resolutions and device-specific functionalities.
Even beyond the unique devices themselves, mobile users find themselves in different situations than desktop/laptop web users that need to be accounted for in testing. This includes signal strength, battery life, even contrast and brightness as the environment frequently changes.
Ensuring broad test coverage across even just the most common scenarios can be a complex challenge.
There are a lot of different and important ways to test your mobile application. Here are some of the most common.
Functional testing is necessary to ensure the basic functions are performing as expected. It provides the appropriate input and verifies the output. It focuses on things like checking standard functionalities and error conditions, along with basic usability.
Usability testing, or user experience testing, goes further than functional testing in evaluating ease of use and intuitiveness. It focuses on trying to simulate the real experience of a customer using the app to find places where they might get stuck or struggle to utilize the application as intended, or just generally have a poor experience.
Compatibility, performance, accessibility and load testing are other common types of mobile tests to consider.
Manual testing is testing done solely by a human, who independently tests the app and methodically searches for issues that a user might encounter and logs them. Automated testing takes certain tasks out of the hands of humans and places them into an automation tool, freeing up human testers for other tasks.
Both types of testing have their advantages. Manual testing can take advantage of human intuitiveness to uncover unexpected errors, but can also be extremely time-consuming. Automated testing saves much of this time and is particularly effective on repetitive tests, but can miss less obvious cases that manual testing might catch.
Whether you use one method or a hybrid approach in your testing will depend on the requirements of your application.
There are a number of popular and open source tools and frameworks for testing your mobile apps. A few of the most common include:
For more, you can see a comparison of Appium vs Espresso vs XCUITest here.
Another type of testing to keep in mind is automated visual testing. Traditional testing approaches rely on validating against code, but this can result in flaky tests in some situations, particularly in complex mobile environments. Visual testing works by comparing visual screenshots instead.
Visual testing can be powerful on mobile applications. While the traditional pixel-to-pixel approach can still be quite flaky and prone to false-positives, advances in visual AI – trained against billions of images – make automated visual testing today increasingly accurate.
You can read more about the benefits of visual testing for mobile apps and see a quick example here.
Mobile testing can be a complex challenge due to the wide variety of hardware and software variations in common usage today. However, as mobile internet use continues to soar, the quality of your mobile applications is more critical than ever. Understanding the types of tests you need to run, and then executing them with the tools that will make you most effective, will ensure you can deliver your mobile apps in less time and with a superior user experience.
Happy testing!
The post What is Mobile Testing? appeared first on Automated Visual Testing | Applitools.
The post Appium vs Espresso vs XCUITest – Understanding how Appium Compares to Espresso & XCUITest appeared first on Automated Visual Testing | Applitools.
In this article we shall look at the Appium, Espresso and XCUITest test automation frameworks. We’ll learn the key differences between them, as well as when and why you should use them in your own testing environment.
Appium is an open source test automation framework which is completely maintained by the community. Appium can automate Native, Hybrid, mWeb, Mac Apps and Windows Apps. Appium follows the Selenium W3C protocol which enables the use of the same test code for both Android and iOS applications.
Under the hood Appium uses Espresso or UIAutomator2 as the mode of communication to Android Apps and XCUI for iOS. In a nutshell, Appium provides a stable webdriver interface on top of automation backends provided by Google and Apple.
Installing Appium was a bit of a hassle for a long time; with Appium 2.0’s new architecture, we can choose to install just the drivers and plugins we want. You can find more details about Appium 2.0 here.
Espresso is an Android test framework developed by Google for UI testing. Espresso automatically synchronizes test actions with the user interface of the mobile app and ensures that activity is started well before the actual test run.
The XCUITest framework from Apple helps users write UI tests straight inside Xcode with a separate UI testing target in the app.
XCUITest uses accessibility identifiers to interact with the main iOS app. XCUITests can be written in Swift or Objective-C.
There isn’t another reliable framework out there that easily supports testing on Apple TV devices; XCUITest is the only way to verify tvOS apps. Since Xcode 7, Apple has shipped XCTest prebuilt into its development kit.
Conclusion
Appium, Espresso, and XCUITest can each fill different needs for UI testing. The way to choose between them is to consider the requirements of your project. If your scope is limited to one platform and you want comprehensive, embedded UI testing, XCUITest or Espresso is a great fit. For cross-platform testing across iOS, Android, and hybrid apps, Appium is your best choice.
The post Appium vs Espresso vs XCUITest – Understanding how Appium Compares to Espresso & XCUITest appeared first on Automated Visual Testing | Applitools.
]]>The post How Do I Test Mobile Apps At Scale With Google Firebase TestLab And Applitools? appeared first on Automated Visual Testing | Applitools.
]]>Google Firebase Test Lab is a cloud-based app-testing infrastructure. With one operation, you can test your Android or iOS app across a wide variety of devices and device configurations, and see the results—including logs, videos, and screenshots—in the Firebase console.
Firebase Test Lab runs Espresso and UI Automator 2.0 tests on Android apps, and XCTest tests on iOS apps. Write tests using one of those frameworks, then run them through the Firebase console or the gcloud command line interface.
Firebase Test Lab lets you run the following types of tests: instrumentation tests, Robo tests, and Game Loop tests.
As with all web and mobile applications, Applitools offers an easy, consistent way to collect visual data from multiple device types running different viewport sizes. In the rest of this article, you will run through a demonstration of using Applitools with Google Firebase Test Lab.
For this demo, I chose a simple “Hello World” app. To get you up and running quickly, we already have an example Espresso instrumentation test; you can find the complete project at https://github.com/applitools/eyes-android-hello-world
Now that you’ve looked through the GitHub repo, let’s get a few more prerequisites installed and make sure they are ready to use before we dive in. Make sure you have installed and/or configured the following:
Installing Android Studio
Now let’s install Android Studio and the Android SDK so that you can run the test script on an emulator or real device. You could install the Android SDK alone, but then you’d have to take additional advanced steps to properly configure the Android environment on your computer. I highly recommend installing Android Studio, as it makes your life easier.
Download the Android Studio executable. Follow the steps below to install locally on your computer:
1. Get the code:
2. Import the project into Android Studio
Let’s look at the Instrumented test ExampleInstrumentedTest under androidTest.
Before we run the test on Firebase, let’s run it on a local emulator.
That’s pretty easy, isn’t it? Applitools will now capture each screen where eyes.checkWindow() is called and create a baseline on the first run.
Once the test completes, you can analyze the test results on the Applitools dashboard.
Now let’s run the test on Firebase devices. To do this, we first need an account, so let’s create one:
Step 1: Navigate to https://firebase.google.com/ and click Sign In.
Step 2: Click Go to Console to open the console dashboard.
Step 3: Create a project. Once you’ve created it, you’re free to explore the dashboard and see all the features available.
Step 4: Add the run configurations in Android Studio to run the tests.
Step 5: Sign in with your Google Firebase account and click OK.
Step 6: Re-open the Edit Configurations dialog.
Now you can see the settings for the Matrix configuration and cloud project.
Select your project and add one or more custom devices from the list of 150+. For now, let’s add two devices running Android 9.x (API level 28, Pie), and set the locale and orientation.
We will use these devices to run our instrumentation test on Firebase.
That’s it; we’re all set to run our test on Firebase.
Click Run Example Instrumented Test. This will execute your tests on the devices you selected on Firebase.
Go back to Test Lab on Firebase, and you can watch your tests running there in parallel, with visual comparison checks done on the Applitools AI platform.
Applitools allows you to test your mobile app by running it on any device lab. Google Firebase provides a streamlined platform for developers (build) and quality engineers (test) to run tests on any device configuration. The integration makes it easier to combine the best of both platforms to ship the best quality applications.
The post How Do I Test Mobile Apps At Scale With Google Firebase TestLab And Applitools? appeared first on Automated Visual Testing | Applitools.
]]>The post Visual Testing with Applitools, Appium, and Amazon AWS Device Farm appeared first on Automated Visual Testing | Applitools.
]]>Visual UI testing is more than just testing your app on Desktop browsers and Mobile emulators. In fact, you can do more with Visual UI testing to run your tests over physical mobile devices.
Visual UI testing compares the visually-rendered output of an application against itself in older iterations. Users call this type of test version checking. Some users apply visual testing for cross-browser tests. They run the same software version across different target devices/operating systems/browsers/viewports. For either purpose, we need a testing solution that has high accuracy, speed, and works with a range of browsers and devices. For these reasons, we chose Applitools.
Running your Visual UI testing across physical devices means having to set up your own local environment to run the tests. Imagine the number of devices, screen resolutions, operating systems, and computers you’d need! It would be frustratingly boring, expensive, and extremely time-consuming.
This is where Amazon’s AWS Device Farm comes into play. This powerful service can build a testing environment. It uses physical mobile devices to run your tests! All you do is upload your tests to Amazon, specify the devices you want, and it will take it from there!
In one of my recent articles, How Visual UI Testing can speed up DevOps flow I showed how you can configure a CD/CI service to run your Visual UI tests. The end result would be the same, whether you are running your tests locally, or via such services. Once the tests run, you can always check the results over the Applitools Test Manager Dashboard.
In this article, I will show you how you can run your Visual UI tests, whether you’ve written them for your mobile or web app, on real physical mobile devices in the cloud. For this, I will be employing Applitools, Appium, and AWS Device Farm.
AWS Device Farm is a mobile app testing platform that helps developers automatically test their apps on hundreds of real devices in minutes.
When it comes to testing your app over mobile devices, the choices are numerous. Amazon helps to build a “Device Farm” on behalf of the developers and testers, hence the name.
Here are some of the major advantages and features for using this service:
AWS Device Farm supports a number of test runners. This includes Appium Java JUnit, Appium Python, Appium Ruby, and Appium Java TestNG. Back in January 2019, Amazon announced support for the Appium Node.js test runner. This means you can build your tests with Selenium Webdriver, for instance, and have it run on top of AWS Device Farm.
Now that you have an idea about AWS Device Farm, let’s move on, and discover the Appium automation testing framework.
Selenium WebDriver is a browser automation framework that allows a developer to write commands, and send them to the browser. It offers a set of clients with a variety of programming languages (Java, JavaScript, Ruby, Python, PHP and others).
Figure 1 below shows the Selenium WebDriver architecture:
Figure 1: Selenium WebDriver Architecture
Selenium WebDriver architecture consists of:
Selenium 4 obsoletes the JSON Wire Protocol (JSONWP) in favor of the new W3C WebDriver standard.
Here’s a quick tutorial on using and learning Selenium WebDriver.
With that brief overview of Selenium WebDriver, let’s move on and explore Appium.
Appium is an open-source tool for automating mobile app testing. It’s cross-platform, supporting test scripts for both major mobile operating systems (Android and iOS). Tests can run on simulators (iOS), emulators (Android), and real devices (iOS and Android).
It’s an HTTP Server written in Node.js that creates and handles WebDriver sessions. When you install Appium, you are actually installing the Appium Server. It follows the same approach as the Selenium WebDriver, which receives HTTP requests from the Client Libraries in JSON format with the help of JSONWP. It then handles those HTTP Requests in different ways. That’s why you can make use of Selenium WebDriver language bindings, client libraries and infrastructure to connect to the Appium Server.
Instead of connecting a Selenium WebDriver to a specific browser WebDriver, you will be connecting it to the Appium Server. Appium uses an extension of the JSONWP called the Mobile JSON Wire Protocol (MJSONWP) to support the automation of testing for native and hybrid mobile apps.
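In practice, this protocol compatibility means your client script stays the same and only the capabilities tell Appium what to drive. Here is a minimal sketch of that idea; the `buildAppiumCaps` helper is my own invention for illustration, though the capability names and values (`platformName`, `automationName`, and so on) are standard Appium ones:

```javascript
// Sketch: the same WebDriver client code can target either a browser driver or
// the Appium Server; only the server URL and the desired capabilities change.
// buildAppiumCaps is a hypothetical helper that picks the automation backend.
function buildAppiumCaps(platform) {
  if (platform === "android") {
    // UiAutomator2 is Appium's default Android backend; Chrome for mobile web.
    return { platformName: "Android", automationName: "UiAutomator2", browserName: "Chrome" };
  }
  if (platform === "ios") {
    // XCUITest is Appium's iOS backend; Safari for mobile web.
    return { platformName: "iOS", automationName: "XCUITest", browserName: "Safari" };
  }
  throw new Error(`Unsupported platform: ${platform}`);
}

console.log(buildAppiumCaps("android").automationName); // UiAutomator2
```

You would then pass the returned object to the same `withCapabilities()` call you already use with Selenium WebDriver.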
It supports the same Selenium WebDriver clients with a variety of multiple programming languages such as Java, JavaScript, Ruby, Python, PHP and others.
Being a Node.js HTTP Server, it works in a client-server architecture. Figure 2 below depicts the Appium Client-Server Architecture model:
Figure 2: Appium Server Architecture
Appium architecture consists of:
The results of the test session are then communicated back to the Appium Server, and back to the Client in the form of logs, using the Mobile JSONWP.
Now that you are well equipped with knowledge for Selenium WebDriver and Appium, let’s go to the demo section of this article.
In this section, we will write a Visual UI test script to test a Web page. We will run the tests over an Android device both locally and on AWS Device Farm.
I will be using both Selenium WebDriver and Appium to write the test script.
Before you can start writing and running the test script, you have to make sure you have the following components installed and ready to be used on your computer:
Assuming you are working on a MacOS computer, you can verify the above installations by running the following bash commands:
echo $JAVA_HOME # this should print the Java SDK path
node -v # this should print the version of Node.js installed
npm -v # this should print the version of the Node Package Manager installed
For this demo we need to install Appium Server, Android Studio / SDK and finally make sure to have a few environment variables properly set.
Let’s start by installing Appium Server. Run the following command to install Appium Server locally on your computer.
npm install -g appium
The command installs the Appium NPM package globally on your computer. To verify the installation, run the command:
appium -v # this should print the Appium version
Now let’s install Android Studio and the Android SDK so that you can run the test script on an emulator or real device. You could install the Android SDK alone, but then you’d have to take additional advanced steps to properly configure the Android environment on your computer. I highly recommend installing Android Studio, as it makes your life easier.
Download the Android Studio executable. Follow the steps below to install locally on your computer:
Notice the location where the Android SDK was installed. It’s /Users/{User Account}/Library/Android/sdk.
Wait until the download and installation is complete. That’s all!
Because I want to run the test script locally over an Android emulator, let’s add one.
Open the Android Studio app:
Click the Configure icon:
Select the AVD Manager menu item.
Click the + Create Virtual Device button.
Locate and click the Pixel XL device then hit Next.
Locate the Q release and click the Download link.
Read and accept the Terms and Conditions then hit Next.
The Android 10, also known as Q release, starts downloading.
Once the installation is complete, click the Next button to continue setting up an Android device emulator.
The installation is complete. Grab the AVD Name as you will use it later on in the test script, and hit Finish.
Finally, we need to make sure the following environment variables are set on your computer. Open the ~/.bash_profile file, and add the following environment variables:
APPLITOOLS_API_KEY={Get the Applitools API Key from Applitools Test Manager}
export APPLITOOLS_API_KEY
ANDROID_HOME=/Users/{Use your account name here}/Library/Android/sdk
export ANDROID_HOME
ANDROID_HOME_TOOLS=$ANDROID_HOME/tools
export ANDROID_HOME_TOOLS
ANDROID_HOME_TOOLS_BIN=$ANDROID_HOME_TOOLS/bin
export ANDROID_HOME_TOOLS_BIN
ANDROID_HOME_PLATFORM=$ANDROID_HOME/platform-tools
export ANDROID_HOME_PLATFORM
APPIUM_ENV="Local"
export APPIUM_ENV
Finally, add the above environment variables to the $PATH as follows:
export PATH=$PATH:$ANDROID_HOME:$ANDROID_HOME_TOOLS:$ANDROID_HOME_TOOLS_BIN:$ANDROID_HOME_PLATFORM
One last major component that you need to download, and have on your computer, is the ChromeDriver. Navigate to the Appium ChromeDriver website, and download the latest workable ChromeDriver release for Appium. Once downloaded, make sure to move the file to the location: /usr/local/bin/chromedriver
That’s it for the installations! Let’s move on and explore the Visual UI test script in depth.
You can find the source code demo of this article on this GitHub repo.
Let’s explore the main test script in this repo.
"use strict";
;(async () => {
const webdriver = require("selenium-webdriver");
const LOCAL_APPIUM = "http://127.0.0.1:4723/wd/hub";
// Initialize the eyes SDK and set your private API key.
const { Eyes, Target, FileLogHandler, BatchInfo, StitchMode } = require("@applitools/eyes-selenium");
const batchInfo = new BatchInfo("AWS Device Farm");
batchInfo.id = process.env.BATCH_ID
batchInfo.setSequenceName('AWS Device Farm Batches');
// Initialize the eyes SDK
let eyes = new Eyes();
eyes.setApiKey(process.env.APPLITOOLS_API_KEY);
eyes.setLogHandler(new FileLogHandler(true));
eyes.setForceFullPageScreenshot(true)
eyes.setStitchMode(StitchMode.CSS)
eyes.setHideScrollbars(true)
eyes.setBatch(batchInfo);
const capabilities = {
platformName: "Android",
deviceName: "Android Emulator",
automationName: "UiAutomator2",
browserName: 'Chrome',
waitforTimeout: 30000,
commandTimeout: 30000,
};
if (process.env.APPIUM_ENV === "Local") {
capabilities["avd"] = 'Pixel_XL_API_29';
}
// Open browser.
let driver = new webdriver
.Builder()
.usingServer(LOCAL_APPIUM)
.withCapabilities(capabilities)
.build();
try {
// Start the test
await eyes.open(driver, 'Vuejs.org Conferences', 'Appium on Android');
await driver.get('https://us.vuejs.org/');
// Visual checkpoint #1.
await eyes.check('Home Page', Target.window());
// display title of the page
await driver.getTitle().then(function (title) {
console.log("Title: ", title);
});
// locate and click the burger button
await driver.wait(webdriver.until.elementLocated(webdriver.By.css('button.navbar__burger')), 2000).click();
// locate and click the hyperlink with href='/#location' inside the second nav element
await driver.wait(webdriver.until.elementLocated(webdriver.By.xpath("//nav[2]/ul/li[3]/a[contains(text(), 'Location')]")), 2000).click();
const h2 = await driver.wait(webdriver.until.elementLocated(webdriver.By.xpath("(//h2[@class='section-title'])[4]")), 2000);
console.log("H2 Text: ", await h2.getText());
// Visual checkpoint #2.
await eyes.check('Home Loans', Target.window());
// Close Eyes
await eyes.close();
} catch (error) {
console.log(error);
} finally {
// Close the browser.
await driver.quit();
// If the test was aborted before eyes.close was called, ends the test as aborted.
await eyes.abort();
}
})();
The test script starts by importing the selenium-webdriver NPM package.
It imports a bunch of objects from the @applitools/eyes-selenium NPM package.
It constructs a BatchInfo object used by Applitools API.
const batchInfo = new BatchInfo("AWS Device Farm"); batchInfo.id = process.env.BATCH_ID batchInfo.setSequenceName('AWS Device Farm Batches');
It then creates the Eyes object that we will use to interact with the Applitools API.
// Initialize the eyes SDK
let eyes = new Eyes();
eyes.setApiKey(process.env.APPLITOOLS_API_KEY);
eyes.setLogHandler(new FileLogHandler(true));
eyes.setForceFullPageScreenshot(true)
eyes.setStitchMode(StitchMode.CSS)
eyes.setHideScrollbars(true)
eyes.setBatch(batchInfo);
It’s important to set the Applitools API key at this stage; otherwise, you won’t be able to run this test. The code above also directs the Applitools API logs to a file named eyes.log at the root of the project.
Next, we define the device capabilities that we are going to send to Appium.
const capabilities = {
platformName: "Android",
deviceName: "Android Emulator",
automationName: "UiAutomator2",
browserName: 'Chrome',
waitforTimeout: 30000,
commandTimeout: 30000,
};
if (process.env.APPIUM_ENV === "Local") {
capabilities["avd"] = 'Pixel_XL_API_29';
}
We are using an Android emulator to run our test script over a Chrome browser with the help of the UIAutomator 2 library.
We need to set the avd capability only when running this test script locally. For this property, grab the AVD ID of the Android Device Emulator we set above.
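The pattern can be sketched as a small pure function: start from the shared capabilities, and add the emulator-only avd capability when the environment flag says we are local. The helper name below is mine, for illustration; the env flag (APPIUM_ENV) and AVD name match the script above:

```javascript
// Sketch: add the emulator-only "avd" capability when running locally.
// On AWS Device Farm, the service picks the device, so "avd" must be absent.
function withEnvironmentCaps(baseCaps, env) {
  const caps = { ...baseCaps }; // don't mutate the shared capabilities
  if (env === "Local") {
    caps.avd = "Pixel_XL_API_29"; // the AVD name from the emulator we created
  }
  return caps;
}

console.log(withEnvironmentCaps({ platformName: "Android" }, "Local").avd); // Pixel_XL_API_29
```

In the real script you would call it as `withEnvironmentCaps(capabilities, process.env.APPIUM_ENV)` before building the driver.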
Now, we create and build a new WebDriver object by specifying the Appium Server local URL and the device capabilities as:
const LOCAL_APPIUM = "http://127.0.0.1:4723/wd/hub";
let driver = new webdriver
.Builder()
.usingServer(LOCAL_APPIUM)
.withCapabilities(capabilities)
.build();
Appium is configured to listen on Port 4723 under the path of /wd/hub.
The rest of the script is usual Applitools business. In brief, the script:
Notice that the script asserts two Eyes SDK Snapshots. The first captures the home page of the website, while the second captures the Location section.
Finally, some important cleanup is happening to close the WebDriver and Eyes SDK sessions.
Open the package.json file, and locate the two scripts there:
"appium": "appium --chromedriver-executable /usr/local/bin/chromedriver --log ./appium.log",
"test": "node appium.js"
The first starts the Appium Server, and the second runs the test script.
Let’s first run the Appium server by issuing this command:
npm run-script appium
Then, once Appium is running, let’s run the test script by issuing this command:
npm run-script test
Login to the Applitools Test Manager located at: https://applitools.com/users/login
You will see the following test results:
The two snapshots have been recorded!
Now that the test runs locally, let’s run it on AWS Device Farm. Start by creating a new account on Amazon Web Service website.
Login to your AWS account on this page: https://console.aws.amazon.com/devicefarm
Create a new project by following the steps below:
Let’s package our app in a zip file in order to upload it at this step.
Switch back to the code editor, open a command window, and run the following:
npm install
This command is essential to make sure all the NPM package dependencies for this app are installed.
npm install -g npm-bundle
The command above installs the npm-bundle NPM package globally on your machine.
Then, run the command to package and bundle your app:
npm-bundle
The command bundles and packages your app’s files and folders, including the node_modules folder.
The output of this step creates the file with the .tgz extension.
The final step before uploading is to compress the file by running the command:
zip -r appium-aws.zip *.tgz
Name the file whatever you wish.
Now you can upload the .zip file to AWS Device Farm.
Once the file uploads, scroll down the page to edit the .yaml file of this test run like so:
Switch back to the Applitools Test Manager, and verify the results of this second run via AWS Device Farm.
As expected, we get exactly the same results as running the test script locally.
Given the massive set of integrations that Applitools offers through its rich SDKs, we saw how easily and quickly we can run our Visual UI tests in the cloud using the AWS Device Farm service. Services like this enrich the visual regression testing ecosystem and make perfect sense for running visual tests at scale.
The post Visual Testing with Applitools, Appium, and Amazon AWS Device Farm appeared first on Automated Visual Testing | Applitools.
]]>The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on Automated Visual Testing | Applitools.
]]>If you want to run mobile applications, you want to run on Android. Android devices dominate the smartphone market. Genymotion allows you to run your Appium tests in parallel on a range of virtual Android devices. Applitools lets you rapidly validate how each device renders each Appium test. Together, Genymotion and Applitools give you coverage with speed for your functional and visual tests.
As a QA automation professional, you know you need to test on Android. Then you look at the market and realize just how fragmented the market is.
Fragmented is an understatement. A study by OpenSignal measured over 24,000 unique models of Android devices in use, running nine different Android OS versions across over three dozen different screen sizes, manufactured by 1,294 distinct device vendors. That is fragmentation. These numbers are mind-boggling, so here’s a chart to explain. Each box represents the usage share of one phone model.
Plenty of other studies confirm this. There are 19 major vendors of Android devices. Leading manufacturers include Samsung, Huawei, OnePlus, Xiaomi, and Google. The leading Android device holds less than 2% of the market, and the top 10 devices together hold only 11%. The most popular Android version accounts for only 31% of the market.
We would all like to think that Android devices behave exactly the same way. But, no one knows for sure without testing. If you check through the Google Issue Tracker, you’ll find a range of issues that end up as platform-specific.
So, if every Android device might behave differently, exactly how should you test your Android apps? One way is to run the test functionally on each platform and measure behavior in code – that’s costly. Another way is to run functionally on one platform and hope the code works on the others. Functionally, this can tell you that the app works – but you are left vulnerable to device-specific behaviors that may not be obvious without testing.
To visualize the challenge of testing against 24,000 unique platforms, imagine your application has just 10 screens. If you placed these ten different screens on 24,000 unique devices end-to-end, they would stretch over 30 miles. That’s longer than the distance of a marathon!
Could you imagine manually checking a marathon’s worth of screens with every release?
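The back-of-the-envelope arithmetic behind that marathon claim holds up. A quick check, assuming (my assumption, not the author's) that each rendered screen is about 8 inches tall:

```javascript
// Rough check of the "marathon of screens" figure above.
const devices = 24000;        // unique Android models from the OpenSignal study
const screensPerApp = 10;     // the hypothetical app's screen count
const screenHeightInches = 8; // assumed height of one screen, roughly a large phone

const totalInches = devices * screensPerApp * screenHeightInches;
const miles = totalInches / 12 / 5280; // inches -> feet -> miles
console.log(`${miles.toFixed(1)} miles of screens`); // about 30.3 miles
```

At roughly 30 miles, that really is longer than a marathon (26.2 miles).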
I can’t run a marathon, much less do one while examining thousands of screens. Thankfully, there’s a better way, which I’ll explain in this post: using Genymotion, Appium, and Applitools.
Genymotion is the industry-leading provider of cloud-based Android emulation and virtual mobile infrastructure solutions. Genymotion frees you from having to build your own Android device farm.
Once you integrate your Appium tests with Genymotion Cloud, you can run them in parallel across many Android devices at once, to detect bugs as soon as possible and spend less time on test runs. That’s powerful.
With Genymotion Cloud, you can choose to test against just the most popular Android device/OS combinations. Or, you can test the combinations for a specific platform vendor in detail. Genymotion gives you the flexibility to run whatever combination of Androids you need.
Genymotion Cloud can run your Android functional tests across multiple platforms. However, functional tests cover only a subset of the device and OS version issues you might encounter with your application. In addition to functional issues, you can run into visual issues that affect how your app looks as well as how it runs. How do you run visual UI tests with Genymotion Cloud? Applitools.
Applitools provides AI-powered visual testing of applications and allows you to test cross-platform easily to identify visual bugs. Visual regressions seem like they might be simply a distraction to your customers. At worst, though, visual errors block your customers from completing transactions. Visual errors have real costs – and without visual testing, they often don’t appear until a user encounters them in the field.
Here’s one example of what I’m talking about. This messed-up layout blocked Instagram from making any money on this ad, and probably led to an upset customer and engineering VP. All the elements are present, so this screen probably passed functional testing.
You can find plenty of other examples of visual regressions by following #GUIGoneWrong on Twitter.
Applitools uses an AI-powered visual testing engine to highlight issues that customers would identify. More importantly, Applitools ignores differences that customers would not notice. If you ever used snapshot testing, you may have stopped because you tracked down too many false positives. Applitools finds the issues that matter and ignores the ones that don’t.
Applitools already works with Appium to provide visual testing for your Android OS applications. Now, you can use Applitools and Genymotion to run your visual tests across numerous Android virtual devices. To sum up:
That’s the overview. To dive into the details, check out this step-by-step tutorial on using Genymotion, Appium, and Applitools.
While it’s pretty complete, here’s some additional information you’ll need:
We’ve put together a series of step-by-step tutorial videos using Genymotion, Appium, and Applitools. Here’s the first one:
https://www.youtube.com/watch?v=qXuMglfNEeo
When you run Appium, Applitools, and Genymotion together, you get a huge boost in test productivity. You get to re-use your existing Appium test scripts. Genymotion lets you run all your functional and visual tests in parallel. And, with the accuracy of Applitools AI-powered visual testing, you track down only issues that matter, without the distraction of false positives.
Read more about how to use our products together from this Genymotion blog post.
Visit Applitools at the Appium Conference 2019 in Bengaluru, India.
Sign up for our upcoming webinar on July 9 with Jonathan Lipps: Easy Distributed Visual Testing for Mobile Apps and Sites.
Find out more about Genymotion Cloud, and sign up for a free account to get started.
Find out more about Applitools. You can request a demo, sign up for a free account, and view our tutorials.
The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on Automated Visual Testing | Applitools.
]]>The post Test Automation for Android Wearable Devices with Appium appeared first on Automated Visual Testing | Applitools.
]]>Have you already hopped on the Android Wear train? It’s riding fast since Google’s recent SDK release. In this post I will help you ensure a safe ride by showing you how to automatically test your new ‘wearable app’.
This post contains advanced techniques for test automation using Appium and is mainly intended for technical readers, but non-technical readers will surely benefit from reading it as well (and watching the demo video).
So, here we go:
Please remember these two words: automation & validation.
We need automation in order to run our tests without any manual intervention; we need validation in order to assure everything looks as we intended. If automation replaces the hands of the manual QA testers – validation replaces their eyes.
After thoroughly researching this topic – and trying it first-hand, I created the following example for automating a wearable device. I recorded all the steps described below for your convenience.
Prerequisites:
I used Android Virtual Devices Manager (AVD). I wanted it to look cool, so I selected a round device. Make sure to select “Android 4.4W – API Level 20” with ARM CPU. Do not select “Use host GPU”, otherwise visual validation won’t work. Use “AndroidWearRound” skin to display round layout (that’s the coolness factor).
After running the new wearable device I created, I connected my Android device to the emulator device. Make sure to forward TCP Port 5601 via ADB, as shown in the video, so the wearable device will display “Connected status” on the Android host device.
Now I was almost ready to go (and so are you, if you’re following my lead on this one…).
Once the devices are all connected, there is one thing left to do before running the automation code, and that’s to start the Appium server. Since I automated two devices in parallel (the wearable device and the hosting Android device), I needed two Appium servers on two different ports. Make sure to specify the port and device UID as shown in the video.
Now I was ready to go.
Let’s create awesome automation code; mine is shown in the video.
And… run it!
To make validation reliable and as comprehensive as possible, I used the Applitools Eyes UI validation tool. It lets you validate the entire screen with a single line of code, so it keeps you agile and fast, and most importantly, as free of UI issues as possible.
And here is the demo video:
If you have any questions, or comments, please feel free to leave them below, or contact me directly: yanir.taflev(at)applitools.com.
Additional reading: Want to radically reduce UI automation code and increase test coverage? Learn how CloudShare achieved this dual goal by simply automating UI testing.
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post Test Automation for Android Wearable Devices with Appium appeared first on Automated Visual Testing | Applitools.
]]>