The post iOS 16 – What’s New for Test Engineers appeared first on Automated Visual Testing | Applitools.
Learn about what’s new in iOS 16, including some new updates test engineers should be looking out for.
It’s an exciting time of the year for anyone who uses Apple devices – and that includes QA engineers charged with mobile testing. Apple has just unveiled iOS 16, and as usual it is filled with new features for iOS users to enjoy.
Many of these new features, of course, affect the look and feel and usability of any application running on iOS. If you’re in QA, that means you’ve now got a lot of new testing to do to make sure your application works as perfectly on iOS 16 as it did on previous versions of the operating system.
For example, Apple has just upgraded their iconic “notch” into a “Dynamic Island.” This is a significant redesign of a small but highly visible component that your users will see every time they look at their phone. If your app doesn’t function appropriately with this new UI change, your users will notice.
If you’re using Native Mobile Grid for your mobile testing, there’s no need to worry – Native Mobile Grid already supports automated testing of iOS 16 on Apple devices.
With this in mind, let’s take a look through some of the most exciting new features of iOS 16, with a focus on how they can affect your life as a test engineer.
The lockscreen on iOS 16 devices can now be customized far more than before, going beyond changing the background image – you can now alter the appearance of the time as well as add new widgets. Another notable change here is that notifications now pop up from the bottom instead of the top.
As a QA engineer, there are a few things to consider here. First, if your app will have a new lockscreen widget, you certainly need to test it carefully. Performing visual regression testing and getting contrast right will be especially important on an uncertain background.
Even if you don’t develop a widget, it’s worth thinking about (and then verifying) whether the user experience could be affected by your notifications moving from the top of the user’s screen to the bottom. Be sure to take a look at how they will appear when stacked as well, to make sure the right information is always visible.
As we mentioned above, the notch is getting redesigned into a “Dynamic Island.” This new version of the cutout required for the front-facing camera can now present contextual information about the app you’re using. It will expand and contract based on the info it’s displaying, so it’s not a fixed size.
That means your app may now be resizing around the new “Dynamic Island” in ways it never did with the old notch. Similarly, your contextual notifications may not look quite the same either. This is definitely something worth testing to make sure the user experience is still exactly the way you meant it to be.
There are a lot of other new features, of course. Some of these may not have as direct an impact on the UI or functionality of your own applications, but it’s worth being familiar with them all. Here are a few of the other biggest changes – check them carefully against your own app and be sure to test accordingly.
Mobile testing is a challenge for many organizations. The number of devices, browsers and screens in play make achieving full coverage extremely time-consuming using traditional mobile testing solutions. At Applitools, we’re focused on making software testing easier and more effective – that’s why we pioneered our industry-leading Visual AI. With the new Native Mobile Grid, you can significantly reduce the time you spend testing mobile apps while ensuring full coverage in a native environment.
Learn more about how you can scale your mobile automation testing with Native Mobile Grid, and sign up for access to get started with Native Mobile Grid today.
The post Writing Your First Appium Test For iOS Devices appeared first on Automated Visual Testing | Applitools.
This is the third and final post in our Hello World introduction series to Appium, and we’ll discuss how to create your first Appium test for iOS. You can read the first post for an introduction to Appium, or the second to learn how to create your first Appium test for Android.
Congratulations on having made it so far. I hope you are slowly becoming more comfortable with Appium and realizing just how powerful a tool it really is for mobile automation, and that it’s not that difficult to get started with it.
This is the final post in this short series on helping you start with Appium and write your first tests. If you need a refresher on what Appium is and writing your first Android test with it, you can read the earlier parts here:
In this post, we’ll learn how to set up your dev environment and write your first Appium based iOS test.
We’ll need some dependencies to be preinstalled on your dev machine.
Let’s go over them one by one.
Also, remember it’s completely okay if you don’t understand all the details of these in the beginning. Appium largely abstracts them away, and you can always dig deeper later if you need some very specific capability of these libraries.
To run iOS tests, we need a machine running macOS with Xcode installed.
The command below sets up the command-line tools that we need in order to run our first test:
xcode-select --install
You can think of Carthage as a dependency manager that builds the frameworks your Cocoa applications require:
brew install carthage
The libimobiledevice library allows Appium to talk to iOS devices using native protocols:
brew install libimobiledevice
ios-deploy helps to install and debug iOS apps from the command line:
brew install ios-deploy
ios-webkit-debug-proxy allows debugging of web views on iOS devices:
brew install ios-webkit-debug-proxy
idb (iOS Development Bridge) is a set of utilities made by Facebook for working with iOS simulators and devices; Appium uses a Node.js wrapper over it:
brew tap facebook/fb
brew install idb-companion
pip3.6 install fb-idb
If you are curious, the reference blogs below helped me come up with this shortlist of dependencies, and they are good reads for more context:
For our first iOS test, we’ll use a sample demo app provided by Appium.
You can download the zip file from here, unzip it, and copy it under the src/test/resources dir in the project, so that we have a TestApp.app file under the test resources folder.
If you are following along by checking out the GitHub repo appium-fast-boilerplate, you’ll see the iOS app path is mentioned in a file ios-caps.json under src/main/resources/.
This file represents Appium capabilities in JSON format and you can change them based on which iOS device you want to run them on.
When we run the test, DriverManager will pick these up and help create the Appium session. You can read part 2 of this blog series to learn more about this flow.
{
    "platformName": "iOS",
    "automationName": "XCUITest",
    "deviceName": "iPhone 13",
    "app": "src/test/resources/TestApp.app"
}
Our app has a set of UI controls, with one section representing a calculator where we can enter two numbers and get their sum (see the snapshot below):
We’ll automate the following flow:
Pretty basic right?
Below is what a sample test looks like (see the code here):
import constants.TestGroups;
import org.testng.Assert;
import org.testng.annotations.Test;
import pages.testapp.home.HomePage;
public class IOSTest extends BaseTest {
    @Test(groups = {TestGroups.IOS})
    public void addNumbers() {
        String actualSum = new HomePage(this.driver)
                .enterTwoNumbersAndCompute("5", "5")
                .getSum();
        Assert.assertEquals(actualSum, "10");
    }
}
Here, we follow the same good patterns that have served us well (like Fluent interfaces, page objects, a base test, and a driver manager), just as we did in our Android test.
You can read about these in detail in this earlier blog.
The beauty of the page object pattern is that it looks very similar regardless of the platform.
Below is the complete page object for the above test, implementing the desired behavior.
package pages.testapp.home;
import core.page.BasePage;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
public class HomePage extends BasePage {
    private final By firstNumber = By.name("IntegerA");
    private final By secondNumber = By.name("IntegerB");
    private final By computeSumButton = By.name("ComputeSumButton");
    private final By answer = By.name("Answer");

    public HomePage(AppiumDriver driver) {
        super(driver);
    }

    public HomePage enterTwoNumbersAndCompute(String first, String second) {
        typeFirstNumber(first);
        typeSecondNumber(second);
        compute();
        return this;
    }

    public HomePage typeFirstNumber(String number) {
        WebElement firstNoElement = getElement(firstNumber);
        type(firstNoElement, number);
        return this;
    }

    public HomePage typeSecondNumber(String number) {
        WebElement secondNoElement = getElement(secondNumber);
        type(secondNoElement, number);
        return this;
    }

    public HomePage compute() {
        WebElement computeBtn = getElement(computeSumButton);
        click(computeBtn);
        return this;
    }

    public String getSum() {
        waitForElementToBePresent(answer);
        return getText(getElement(answer));
    }
}
Let’s unpack this and understand its components.
We create a HomePage class that inherits from BasePage, which has wrappers over Appium API methods.
public class HomePage extends BasePage
We define our selectors of type By, using the Appium inspector to discover that name is the unique selector for these elements. In your own projects, depending on an ID is probably a safer bet.
private final By firstNumber = By.name("IntegerA");
private final By secondNumber = By.name("IntegerB");
private final By computeSumButton = By.name("ComputeSumButton");
private final By answer = By.name("Answer");
Next, we initialize this class with a driver instance that’s passed from the test, and also call its parent class’s constructor to ensure we have the appropriate driver instance set:
public HomePage(AppiumDriver driver) {
    super(driver);
}
We then create a wrapper function that takes two numbers as strings, types them into the two text boxes, and taps the button.
public HomePage enterTwoNumbersAndCompute(String first, String second) {
    typeFirstNumber(first);
    typeSecondNumber(second);
    compute();
    return this;
}
We implement these methods by reusing methods from BasePage while ensuring the correct page object is returned.
Since there is no redirection happening in these tests and it’s a single screen, we just return this (i.e., the current page object in Java syntax). This enables writing tests in the Fluent style you saw earlier.
public HomePage typeFirstNumber(String number) {
    WebElement firstNoElement = getElement(firstNumber);
    type(firstNoElement, number);
    return this;
}

public HomePage typeSecondNumber(String number) {
    WebElement secondNoElement = getElement(secondNumber);
    type(secondNoElement, number);
    return this;
}

public HomePage compute() {
    WebElement computeBtn = getElement(computeSumButton);
    click(computeBtn);
    return this;
}
Finally, we return the string that has the sum of two numbers in the getSum() method and let the test perform desired assertions:
public String getSum() {
    waitForElementToBePresent(answer);
    return getText(getElement(answer));
}
Before running the test, ensure that the Appium server is running in another terminal and that your Appium 2.0 server has the XCUITest driver installed, by following the steps below:
# Ensure driver is installed
appium driver install xcuitest
# Start the appium server before running your test
appium
Within the project, you can run the test using the command below, or use IntelliJ’s (or an equivalent editor’s) test runner to run the desired test.
gradle wrapper clean build runTests -Dtag="IOS" -Dtarget="IOS"
With this, we come to the end of this short three-part series on getting started with Appium, from a general introduction to Appium, to working with Android, to this post on iOS. Hopefully, this series makes it a little bit easier for you or your friends to get set up with Appium.
Exploring the remainder of Appium’s API, capabilities, and tooling is left as an exercise to you, my brave and curious reader. I’m sure pretty soon you’ll also be sharing similar posts and hopefully, I’ll learn a thing or two from you as well. Remember, the Appium docs, the community, and Appium Conf are great sources to go deeper into Appium.
So, what are you waiting for? Go for it!
Remember, you can see the entire project on GitHub at appium-fast-boilerplate; clone or fork it, and play around with it. Hopefully, this post helps you a little bit in starting on iOS automation using Appium. If you found it valuable, do leave a star on the repo, and in case you have any feedback, don’t hesitate to create an issue.
You can also check out https://automationhacks.io for other posts that I’ve written about software engineering and testing, and this page for a talk that I gave on the same topic.
As always, please do share this with your friends or colleagues and if you have thoughts or feedback, I’d be more than happy to chat over on Twitter or in the comments. Until next time. Happy testing and coding.
The post Introducing the Next Generation of Native Mobile Test Automation appeared first on Automated Visual Testing | Applitools.
Native mobile testing can be slow and error-prone with questionable ROI. With Ultrafast Test Cloud for Native Mobile, you can now leverage Applitools Visual AI to test native mobile apps with stability, speed, and security – in parallel across dozens of devices. The new offering extends the innovation of the Ultrafast Cloud beyond browsers and into native mobile applications.
Mobile testing has a long and difficult history. Many industry-standard tools and solutions have struggled with the challenge of testing across an extremely wide range of devices, viewports and operating systems.
The approach currently in use by much of the industry is to utilize a lab made up of emulators, simulators, or even large farms of real devices, and then run the tests on every device independently. The process is not only costly, slow, and insecure, but error-prone as well.
At Applitools, we had already developed technology to solve a similar problem for web testing, and we were determined to solve this issue for mobile testing too.
Today, we are introducing the Ultrafast Test Cloud for Native Mobile. We built on the success of the Ultrafast Test Cloud Platform, which is already being used to boost the performance and quality of responsive web testing by 150 of the world’s top brands. The Ultrafast Test Cloud for Native Mobile allows teams to run automated tests on native mobile apps on a single device, and instantly render it across any desired combination of devices.
“This is the first meaningful evolution of how to test native mobile apps for the software industry in a long time,” said Gil Sever, CEO and co-founder of Applitools. “People are increasingly going to mobile for everything. One major area of improvement needed in delivering better mobile apps faster, is centered around QA and testing. We’re building upon the success of Visual AI and the Ultrafast Test Cloud to make the delivery and execution of tests for native mobile apps more consistent and faster than ever, and at a fraction of the cost.”
Last year we introduced our Ultrafast Test Grid, enabling teams to test for the web and responsive web applications against all combinations of browsers, devices and viewports with blazing speed. We’ve seen how some of the largest companies in the world have used the power of Visual AI and the Ultrafast Test Grid to execute their visual and functional tests more rapidly and reliably on the web.
We’re excited to now be able to offer the same speed, agility, and security for native mobile applications. If you’re familiar with our current Ultrafast Test Grid offering, you’ll find the experience a familiar one.
Mobile usage continues to rise globally, and more and more critical activity – from discovery to research and purchase – is taking place online via mobile devices. Consumers are demanding higher and higher quality mobile experiences, and a poorly functioning site or visual bugs can detract significantly from the user’s experience. There is a growing portion of your audience you can only convert with a five-star quality app experience.
While testing has traditionally been challenging on mobile, the Ultrafast Test Cloud for Native Mobile increases your ability to test quickly, early and often. That means you can develop a superior mobile experience at less cost than the competition, and stand out from the crowd.
With this announcement, we’re also launching our free early access program, with access to be granted on a limited basis at first. Prioritization will be given to those who register early. To learn more, visit the link below.
The post A Comprehensive Guide to Testing and Automating Data Analytics Events on Web & Mobile appeared first on Automated Visual Testing | Applitools.
I have been testing analytics for the past 10+ years. In the initial days, it was very painful and error-prone, as I was doing it manually. Over the years, as I understood this niche area better and spent time understanding the reason for and impact of data analytics on any product and business, I started getting smarter about how to test analytics events well.
This post will focus on how to test Analytics for Mobile apps (Android / iOS), and also answer some questions I have gotten from the community regarding the same.
Analytics is the “air your product breathes”. Analytics allows teams to:
Analytics allows the business team and product team to understand how well (or not) the features are being used by the users of the system. Without this data, the team would (almost) be shooting in the dark for the ways the product needs to evolve.
The analytics information is critical for understanding where in the feature journey users “drop off.” From that, the team can infer whether the drop happens because of how the features have been designed, because the user experience is not adequate, or, of course, because there is a defect in the implementation.
For any team to know how their product is used, you need to instrument your product so that it can share meaningful (non-private) information about its usage. From this data, the team can infer context and usage patterns, which serve as inputs to make the product better.
The instrumentation I refer to above is of different types.
This can be logs sent to your servers – typically these are technical information about the product.
Another form of instrumentation would be analytics events. These capture the nature of an interaction and its associated metadata, and send that information to (typically) a separate server / tool. This information is sent asynchronously and does not have any impact on the functioning or performance of the product.
This is typically a 4 step process:
Once you know what information you want to capture and when, implementing Analytics into your product goes through the same process as for your regular product features & functionalities.
The analytics library is typically very lightweight, and is added as a part of your web pages or your native apps (Android or iOS).
Once the library is embedded in the product, whenever the user does any specific, predetermined actions, the front-end client code would capture all the relevant information regarding the event, and then trigger a call to the analytic tool being used with that information.
Ex: Trigger an analytics event when the user “clicks on the search button.”
The data in the triggered event can be sent in 2 ways:
An analytics event is a simple HTTPS request sent to the analytics tool(s) your product uses. Yes, your product may be using multiple tools to capture and visualise different types of information.
Below is an example of an analytics event.
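(The parameters below are purely illustrative – this request is modeled loosely on the style of the Google Analytics Measurement Protocol, and your tool’s endpoint and parameter names will differ.)

```
https://www.google-analytics.com/collect?v=1&tid=UA-XXXXX-Y&cid=5555&t=event&ec=search&ea=click&el=search_button
```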
Let’s dissect this call to understand what it is doing:
There are different ways to test Analytics events. Let’s understand the same.
Well, if testing the end report is too late, then we need to shift left and test at the source.
Based on requirements, the (front-end) developers would be adding the analytics library to the web pages or native apps. Then they set the trigger points when the event should be captured and sent to the analytics tool.
A good practice is for the analytics event generation and trigger to be implemented as a common function / module, which will be called by any functionality that needs to send an analytics event.
This will allow the developers to write unit tests to ensure:
This approach ensures that your event triggering and generation logic is well tested. These tests can run on developer machines as well as in your build pipelines / jobs on your CI (Continuous Integration) server, so you get quick feedback in case anything goes wrong.
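As a sketch of what such a common, unit-testable module might look like (the class and method names here are hypothetical, not from any specific analytics SDK), a single function can own the payload-building logic for every feature:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical common module: every feature calls buildEvent() instead of
// assembling analytics query strings by hand, so the logic is unit-testable.
public class AnalyticsEventBuilder {

    // Builds the query-parameter portion of an analytics request.
    public static String buildEvent(String category, String action, Map<String, String> extras) {
        if (category == null || category.isEmpty() || action == null || action.isEmpty()) {
            throw new IllegalArgumentException("category and action are mandatory");
        }
        Map<String, String> params = new LinkedHashMap<>();
        params.put("ec", category);  // event category
        params.put("ea", action);    // event action
        params.putAll(extras);       // any feature-specific metadata
        return params.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        String payload = buildEvent("search", "click", Map.of("el", "home_page"));
        System.out.println(payload);  // ec=search&ea=click&el=home_page
    }
}
```

Unit tests can then assert on mandatory parameters, on validation of missing fields, and on the exact payload produced for each feature – all without a device or a network.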
While unit testing is critical to ensure all aspects of the code work as expected, the context of dynamic data from real users cannot be understood from unit tests alone. Hence, we also need System Tests / End-2-End tests to understand whether analytics is working well.
Reference: https://devrant.com/rants/754857/when-you-write-2-unit-tests-and-no-integration-tests
Let’s look at the details of how you can test Analytics Events during Testing in any of your internal testing environments:
The details include the name of the event, and the details in the query parameters.
This step is very important, and different from what your unit tests are able to validate. With this approach, you would be able to verify:
All of the above can be tested and verified even if you do not have the analytics tool set up or configured as per business requirements.
The advantage of this approach is that it complements the unit testing, and ensures that your product is behaving as expected in all scenarios.
The only challenge / disadvantage of this approach is that it is manual testing. Hence, it is very possible to miss certain scenarios or details to be validated in every manual test cycle. It is also impossible to scale and repeat this approach.
Hence, we need a better approach. Just as unit tests are automated, the above testing activity should be automated too. The next section describes a solution for automating the testing of analytics events as part of your System / end-2-end test automation.
This is unfortunately the most common approach teams take to check whether analytics events are being captured correctly, and even that may end up happening only in production, after the app is released to its users. But you need to test early. Hence, the above technique of testing at the source is critical for the team to know whether the events are being triggered and validated as soon as the implementation is complete.
I would recommend this strategy after you have completed Testing at the Source!
There are pros and cons of this approach.
The biggest disadvantage of the above approach, though, is that it is too late!
That said, there is still a lot of value in doing this. It indicates that your analytics tool is also configured correctly to accept the data, and that you are actually able to set up meaningful charts and reports that can reveal patterns and allow you to identify and prioritise next steps to make the product better.
Let’s look at the approach to automate testing of Analytics events as part of your System / end-2-end Test Automation.
We will talk separately about Web & Mobile – as both of them need a slightly different approach.
There are 2 options to accomplish the Analytics event test automation for Web. They are as follows:
I built WAAT – Web Analytics Automation Testing – in Java & Ruby back in 2010. Integrate it into your automation framework using the instructions on the corresponding GitHub pages.
Here is an example of how this test would look using WAAT.
This approach will let you find the correct request and do the appropriate matching of parameters automatically.
With Selenium 4 almost available, you can use its new APIs to query network requests via the Chrome DevTools Protocol.
With this approach, you will need to write code to query the appropriate analytics request from the list of captured requests, and compare the actual query parameters with what is expected.
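The comparison step itself is plain string work, independent of how the request was captured. A rough, hypothetical sketch of that validation (the helper names are mine, not a library API): parse the captured request URL into a parameter map, then check the expected key/value pairs against it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper for validating a captured analytics request:
// parse the query string into a map, then check expected key/value pairs.
public class AnalyticsRequestValidator {

    // Extracts query parameters from a captured request URL.
    public static Map<String, String> queryParams(String url) {
        Map<String, String> params = new LinkedHashMap<>();
        int q = url.indexOf('?');
        if (q < 0) {
            return params;  // no query string at all
        }
        for (String pair : url.substring(q + 1).split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    // True only if every expected parameter is present with the expected value.
    public static boolean matches(String url, Map<String, String> expected) {
        Map<String, String> actual = queryParams(url);
        return expected.entrySet().stream()
                .allMatch(e -> e.getValue().equals(actual.get(e.getKey())));
    }

    public static void main(String[] args) {
        String captured = "https://analytics.example.com/collect?v=1&ec=search&ea=click";
        System.out.println(matches(captured, Map.of("ec", "search", "ea", "click")));  // true
    }
}
```

In a real test, `captured` would come from your network-capture hook, and the expected map would come from your test data.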
That said, I will be working on enhancing WAAT to support a Chrome DevTools Protocol based plugin. Keep an eye out for updates to the WAAT project in the near future.
There are 2 options to accomplish the Analytics event test automation for Mobile apps (Android / iOS). They are as follows:
As described for the web, you can integrate WAAT – Web Analytics Automation Testing in your automation framework using the instructions in the corresponding github pages.
On the device where the test is running, you will also need to do the additional setup described in the proxy setup for an Android device.
This approach will let you find the correct request and do the appropriate matching of parameters automatically.
This is a customized implementation, but can work great in some contexts. This is what you can do:
This approach will allow us to validate events as they are being sent as a result of running the System / end-2-end tests.
As you may have noticed in the sections above for Web and Mobile, the actual testing of analytics events is really the same in either case. The differences are in how you capture the events, and perhaps some required proxy setup.
There is another aspect that is different for Analytics testing for Mobile.
The analytics tool SDK / library that is added to the mobile app has an optimising feature: batching! This configurable feature (in most tools) allows customising the number of requests that should be collected together. Once the batch is full, or on the trigger of some specific event (like closing the app), all the events in the batch are sent to the analytics tool and then cleared / reset.
This feature is important for mobile devices, as users may be on the move (or using the app in Airplane mode) and may not have internet connectivity while using the app. In such cases, if the device does not cache the analytics requests, that data may be lost. Hence, it is important for the app to store analytics events and send them later, when connectivity is available.
Another reason batching of analytics events helps is that it minimises the network traffic generated by the app.
So when automating mobile analytics events, ensure after the test completes that the batched events have actually been triggered (i.e., flushed) from the app; only then will they be seen in the logs or proxy server, and only then can validation be done.
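In automation code, that usually means polling until the flushed events appear, rather than asserting immediately. A minimal, framework-agnostic sketch of such a wait (the helper name and the fake event source are hypothetical – in a real test the supplier would read from your proxy or device log):

```java
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical polling helper: waits until the captured analytics events
// (e.g. read from a proxy or device log) contain the expected event name.
public class BatchedEventWaiter {

    public static boolean waitForEvent(Supplier<List<String>> capturedEvents,
                                       String expectedEvent,
                                       long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (capturedEvents.get().contains(expectedEvent)) {
                return true;  // batch was flushed and our event arrived
            }
            TimeUnit.MILLISECONDS.sleep(100);  // poll interval
        }
        return false;  // timed out: batch never flushed, or event missing
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated capture that already contains the flushed batch.
        Supplier<List<String>> capture = () -> List.of("app_open", "search_click");
        System.out.println(waitForEvent(capture, "search_click", 1000));  // true
    }
}
```

A bounded wait like this keeps the test deterministic: it passes as soon as the batch arrives, and fails with a clear timeout when it never does.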
While batching can be a problem for test automation (since the events will not be generated / seen immediately), you can take one of these two approaches to make your tests deterministic:
I like to have my System Tests / end-2-end Test Automation solution to have the following capabilities built in:
See this post on Automating Functional / End-2-End Tests Across Multiple Platforms for implementation details on building a robust, scalable, and maintainable cross-platform test automation framework.
The post What is Mobile Testing? appeared first on Automated Visual Testing | Applitools.
In this guide, you’ll learn the basics of what it means to test mobile applications. We’ll talk about why mobile testing is important, key types of mobile testing, as well as considerations and best practices to keep in mind.
Mobile testing is the process by which applications for modern mobile devices are tested for functionality, usability, performance and much more.
Note: This includes testing for native mobile apps as well as for responsive web or hybrid apps. We’ll talk more about the differences between these types of mobile applications below.
Mobile application testing can be automated or manual, and helps you ensure that the application you’re delivering to users meets all business requirements as well as user expectations.
Mobile internet usage continues to rise even as desktop/laptop internet usage declines, a trend that has continued unabated for years. As more and more users spend an increasing amount of their time on mobile devices, it’s critical to provide a good experience in your mobile apps.
If you’re not testing the mobile experience your users are receiving, then you can’t know how well your application serves a large and growing portion of your users. Failing to understand this leads to dreaded one-star app reviews and negative feedback on social media.
Mobile app testing ensures your mobile experience is strong, no matter what kind of app you’re using or what platform it is developed for.
As you consider your mobile testing strategy, there are a number of things that are important to keep in mind in order to plan and execute an optimal approach.
There are three general categories of mobile applications that you may need to test today:
There are additional complexities that you need to consider when testing mobile applications, even if you are testing a web app. Mobile users will interact with your app on a large variety of operating systems and devices (Android in particular has numerous operating system versions and devices in wide circulation), with any number of standard resolutions and device-specific functionalities.
Even beyond the unique devices themselves, mobile users find themselves in different situations than desktop/laptop web users, and these need to be accounted for in testing. They include signal strength, battery life, and even contrast and brightness as the environment frequently changes.
Ensuring broad test coverage across even just the most common scenarios can be a complex challenge.
There are a lot of different and important ways to test your mobile application. Here are some of the most common.
Functional testing is necessary to ensure the basic functions are performing as expected. It provides the appropriate input and verifies the output. It focuses on things like checking standard functionalities and error conditions, along with basic usability.
Usability testing, or user experience testing, goes further than functional testing in evaluating ease of use and intuitiveness. It focuses on trying to simulate the real experience of a customer using the app to find places where they might get stuck or struggle to utilize the application as intended, or just generally have a poor experience.
Compatibility, performance, accessibility and load testing are other common types of mobile tests to consider.
Manual testing is testing done solely by a human, who independently tests the app and methodically searches for issues that a user might encounter and logs them. Automated testing takes certain tasks out of the hands of humans and places them into an automation tool, freeing up human testers for other tasks.
Both types of testing have their advantages. Manual testing can take advantage of human intuitiveness to uncover unexpected errors, but can also be extremely time-consuming. Automated testing saves much of this time and is particularly effective on repetitive tests, but can miss less obvious cases that manual testing might catch.
Whether you use one method or a hybrid approach in your testing will depend on the requirements of your application.
There are a number of popular open source tools and frameworks for testing your mobile apps. A few of the most common include Appium, Espresso, and XCUITest. For more, you can see a comparison of Appium vs Espresso vs XCUITest here.
Another type of testing to keep in mind is automated visual testing. Traditional tests rely on validating against code, but this can result in flaky tests in some situations, particularly in complex mobile environments. Visual testing works by comparing visual screenshots instead.
Visual testing can be powerful for mobile applications. While the traditional pixel-to-pixel approach can be quite flaky and prone to false positives, advances in visual AI – trained against billions of images – make automated visual testing today increasingly accurate.
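To see why naive pixel-to-pixel comparison is flaky, consider this simplified sketch. Images are modeled here as flat lists of grayscale values rather than real bitmaps, and the tolerance approach shown is only a stand-in for what visual AI does – but it shows how a one-unit rendering difference fails an exact comparison while a tolerant one passes:

```python
# Simplified model: an "image" is a flat list of grayscale pixel values (0-255).
baseline = [255, 255, 128, 0, 0, 64]
checkpoint = [255, 254, 128, 0, 1, 64]  # tiny anti-aliasing/rendering noise

def exact_match(a, b):
    """Naive pixel-to-pixel comparison: any single differing pixel fails."""
    return a == b

def match_with_tolerance(a, b, tolerance=2):
    """Allow small per-pixel deviations, reducing false positives."""
    return len(a) == len(b) and all(abs(x - y) <= tolerance for x, y in zip(a, b))

assert not exact_match(baseline, checkpoint)       # "fails" on invisible noise
assert match_with_tolerance(baseline, checkpoint)  # passes despite the noise
```

Real visual AI goes well beyond a per-pixel tolerance – it reasons about layout, text, and elements – but the core motivation is the same: ignore differences a human would never notice, and flag the ones they would.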
You can read more about the benefits of visual testing for mobile apps and see a quick example here.
Mobile testing can be a complex challenge due to the wide variety of hardware and software variations in common usage today. However, as mobile internet use continues to soar, the quality of your mobile applications is more critical than ever. Understanding the types of tests you need to run, and then executing them with the tools that will make you most effective, will ensure you can deliver your mobile apps in less time and with a superior user experience.
Happy testing!
The post What is Mobile Testing? appeared first on Automated Visual Testing | Applitools.
In this article we'll look at the Appium, Espresso, and XCUITest test automation frameworks. We'll learn the key differences between them, as well as when and why you should use each in your own testing environment.
Appium is an open source test automation framework maintained entirely by the community. Appium can automate native, hybrid, and mobile web apps, as well as Mac and Windows desktop apps. Appium follows the W3C WebDriver protocol used by Selenium, which enables the use of the same test code for both Android and iOS applications.
Under the hood, Appium uses Espresso or UIAutomator2 to communicate with Android apps, and XCUITest for iOS. In a nutshell, Appium provides a stable WebDriver interface on top of the automation backends provided by Google and Apple.
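Because Appium speaks the same WebDriver protocol on both platforms, the main per-platform difference is the set of capabilities you pass when starting a session. Here is a sketch of such capability sets – the device names and app paths are placeholders, and the `appium:` prefix is how Appium 2.x namespaces its non-standard W3C capabilities:

```python
# Illustrative W3C capability sets for Appium sessions; values are placeholders.
android_caps = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",  # or "Espresso"
    "appium:deviceName": "Pixel_6_Emulator",
    "appium:app": "/path/to/app.apk",
}

ios_caps = {
    "platformName": "iOS",
    "appium:automationName": "XCUITest",
    "appium:deviceName": "iPhone 14 Simulator",
    "appium:app": "/path/to/app.ipa",
}

# The test logic itself stays identical across platforms;
# only the capabilities (and hence the backend driver) differ.
def backend_for(caps):
    return caps["appium:automationName"]

assert backend_for(android_caps) == "UiAutomator2"
assert backend_for(ios_caps) == "XCUITest"
```

In a real test you would pass one of these dictionaries to your Appium client when creating the driver session; everything after that point can be shared code.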
Installing Appium was a hassle for a long time; with Appium 2.0's new architecture, you can choose to install only the drivers and plugins you need. You can find more details about Appium 2.0 here.
Espresso is an Android test framework developed by Google for UI testing. Espresso automatically synchronizes test actions with the user interface of the mobile app and ensures that the activity under test is started before the actual test runs.
The XCUITest framework from Apple lets users write UI tests directly inside Xcode, using a separate UI testing target in the app.
XCUITest uses accessibility identifiers to interact with the main iOS app. XCUITests can be written in Swift or Objective-C.
There isn’t a reliable third-party framework that easily supports testing on Apple TV devices; XCUITest is the only way to verify tvOS apps. Since Xcode 7, Apple has shipped XCTest prebuilt into its development kit.
Conclusion
Appium, Espresso, and XCUITest can each fill different needs for UI testing. The way to choose between them is to consider the requirements of your project. If your scope is limited to just one platform and you want comprehensive, embedded UI testing, XCUITest or Espresso is a great fit. For cross-platform testing across iOS, Android, and hybrid apps, Appium is your best choice.
The post Appium vs Espresso vs XCUITest – Understanding how Appium Compares to Espresso & XCUITest appeared first on Automated Visual Testing | Applitools.
Google Firebase Test Lab is a cloud-based app-testing infrastructure. With one operation, you can test your Android or iOS app across a wide variety of devices and device configurations, and see the results—including logs, videos, and screenshots—in the Firebase console.
Firebase Test Lab runs Espresso and UI Automator 2.0 tests on Android apps, and XCTest tests on iOS apps. Write tests using one of those frameworks, then run them through the Firebase console or the gcloud command line interface.
Firebase Test Lab lets you run the following types of tests:
As with all web and mobile applications, Applitools offers an easy, consistent way to collect visual data from multiple device types running different viewport sizes. In the rest of this article, you will run through a demonstration of using Applitools with Google Firebase Test Lab.
For this demo I have chosen a simple “Hello World” app, and to get you up and running we already have an example Espresso instrumentation test – you can find the complete project here: https://github.com/applitools/eyes-android-hello-world
Now that you have looked into the GitHub repo, let's get a few more prerequisites installed and make sure they are ready before we dive in. Make sure you have installed and/or configured the following:
Installing Android Studio
Now let’s install Android Studio and the SDK so that you can run the test script on an emulator or real device. You could install the Android SDK alone, but then you would need to perform additional advanced steps to properly configure the Android environment on your computer. I highly recommend installing Android Studio, as it makes your life easier.
Download the Android Studio executable. Follow the steps below to install locally on your computer:
1. Get the code:
2. Import the project into Android Studio
Let’s look at the instrumented test ExampleInstrumentedTest under androidTest.
Before we run the test on Firebase, let’s run it on a local emulator.
That’s pretty easy, isn’t it? Applitools will now capture each screen where eyes.checkWindow() is called and create a baseline on the first run.
Once the test completes, you can analyze the test results on the Applitools dashboard.
Now let’s run the test on Firebase devices. To do this, we first need an account, so let’s create one.
Step 1: Navigate to https://firebase.google.com/ and click Sign In.
Step 2: Click Go to Console to open the console dashboard.
Step 3: Create a project. Once you create a project, you're free to explore the dashboard and look through all the features available.
Step 4: Add the run configuration in Android Studio to run the tests.
Step 5: Sign in with your Google Firebase account and click OK.
Step 6: Re-open the Edit Configurations dialog.
Now you can see the configuration settings for the matrix configuration and cloud project.
Select your project and add one or more custom devices from the list of 150. For now, let’s add two devices: platform Android 9.x, API level 28 (Pie), plus a locale and orientation.
We will use these devices to run our Instrumentation test on Firebase.
That’s it! We’re all set to run our test on Firebase.
Click Run Example Instrumented Test. This will execute your tests on the devices you selected on Firebase.
Go back to Test Lab on Firebase and you can see your tests running there in parallel, with visual comparison checks done on the Applitools AI platform.
Applitools allows you to test your mobile app by running it on any device lab. Google Firebase provides a streamlined platform for developers (build) and quality engineers (test) to run tests on any device configuration. The integration makes it easier to use the best of both platforms for the best quality applications.
The post How Do I Test Mobile Apps At Scale With Google Firebase TestLab And Applitools? appeared first on Automated Visual Testing | Applitools.