Next week we are joining test automation, Selenium and Sauce Labs power-users at the Hyatt Regency Austin for SauceCon 2019!
What is SauceCon? SauceCon (@saucecon, #SauceCon) is an annual conference hosted by Sauce Labs that brings together the software testing and test automation community to share, network and explore the latest trends and technologies in this space.
SauceCon 2019 takes place in Austin, Texas, April 23-25 and features speakers from companies like Charles Schwab, Fannie Mae, and the New York Times. We are honored to have Applitools test automation experts take the stage to share their insights on testing techniques and tools. Join us at the following sessions:
The first session takes place on Wednesday, April 24 @ 11:40 a.m. CST and features Dave Haeffner, Software Developer (and Full-time Maintainer of Selenium IDE) at Applitools, presenting “How I Learned to Stop Worrying and Love Record and Playback.”
Dave will share examples of how Selenium IDE can reliably augment and level up a testing practice, regardless of a team’s test automation maturity or level of technical experience. He’ll also share how he went from a record-and-playback naysayer to a Selenium IDE maintainer.
Dave’s talk comes on the heels of our contribution to Selenium IDE, which is the easiest way to get your team to automate their functional and visual UI testing.
Taking the stage on Thursday, April 25 @ 10:50 CST, Applitools’ Senior Developer Advocate and Industry Thought Leader Angie Jones will present “What’s That Smell? Tidying Up Our Test Code”. Angie will walk through bad coding practices and demonstrate a cleaner approach by refactoring the test code live on stage.
As Platinum Sponsor, we’re thrilled to support SauceCon 2019 and hope to see you at our booth in the expo hall during exhibit hours. We’re excited to share the latest developments around end-to-end visual testing and monitoring techniques that support automation and continuous quality.
Want to kick back after a long conference day with some delicious Texas barbeque? Join us on Wednesday, April 24 @ 6 p.m. CST in the hotel pool area for the “Saustin Street Fair & BBQ.” Connect with friends over excellent food and drinks, sip our “Visually Perfect” signature cocktail, and win a levitating Bluetooth speaker by playing our “Spot the Difference” game.
If you don’t get a chance to connect with us at SauceCon 2019, reach out or sign up for a free Applitools account.
If you want to learn more from Angie and other software testing experts after SauceCon 2019, we’ve got your back! Head over to Test Automation University and check out this amazing, community-driven collection of educational training courses. These courses will help improve your test automation skill set. Best of all, all courses on the website are free and available anytime at your convenience!
The post SauceCon 2019 in Austin, TX – See You There! appeared first on Automated Visual Testing | Applitools.
Watch Automation Expert Greg Sypolt, Sr. Engineer at Gannett | USA Today, as he takes a deep dive into implementing automated visual testing.
In the age of continuous delivery, teams must explore and deploy new testing approaches to increase test coverage and, most importantly, take the pressure off manual testing so they can move faster and more accurately.
The objective of visual testing is to catch unintended visual bugs before they are pushed to production and affect the user experience. Without visual inspection, existing automated functional end-to-end test suites have no way of knowing that a UI component has broken when developers commit UI changes.
Watch the recording of this live session, in which automation expert Greg Sypolt shares his step-by-step guide to leveraging visual testing to increase coverage and reduce maintenance while speeding up release cycles.
Key talking points include:
Watch the Step by Step Guide to Flawless UI Delivery with Cloud-Based Visual Testing here:
And find Greg’s slide deck below:
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post Step-by-Step Guide to Flawless UI Delivery – with Cloud-based Visual Testing appeared first on Automated Visual Testing | Applitools.
Watch this amazing Test Automation webinar on-demand, right here!
Watch Brian Jordan from Code.org as he takes you through the ins and outs of developing test automation for the world’s premier computer science education platform.
In this webinar, Brian – a software engineer at Code.org – presented Code.org’s automated testing suite, including its architecture, frameworks, tools, and best practices, and explained how those are designed to address complex QA challenges such as visual testing, functional testing, cross-browser testing, cross-device testing, and localization testing for over 40 supported languages.
Watch the full on-demand recording of the Continuous Testing Done Right: Test Automation at the World’s Leading Non-profit webinar below:
Slides can be found here:
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post Continuous Testing Done Right: Test Automation at the World’s Leading Non-profit appeared first on Automated Visual Testing | Applitools.
Thanks to everyone who joined us for our special webinar with Dan Cuellar, creator of Appium: the leading open-source test automation framework for mobile app testing.
Dan shared 10 secrets about Appium that you may have never heard before, talked about what’s new in Appium 1.5, and covered what’s on the Appium roadmap for the rest of 2016.
Watch the on-demand recording of the 10 Things You Didn’t Know about Appium + What’s New in Appium 1.5 webinar with Appium creator Dan Cuellar now, right here:
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post 10 Things You Didn’t Know about Appium – with Appium Creator Dan Cuellar appeared first on Automated Visual Testing | Applitools.
Must-see webinar for test automation professionals, presented by Automation expert Dave Haeffner – now available online, on-demand!
Learn how to build simple and powerful automated cross-browser tests, covering visual testing and functional regressions, and configured to run automatically through the use of a Continuous Integration (CI) server.
Watch this step-by-step webinar, presented by Test Automation expert Dave Haeffner (author of Elemental Selenium and The Selenium Guidebook), and learn how to:
Watch it now:
Dave’s slides can be found here:
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post Awesome Test Automation Made Simple – presented by Dave Haeffner appeared first on Automated Visual Testing | Applitools.
You’ll probably get a lot of mileage out of your automated tests if you run things from your computer, look at the results, and tell people when there are issues in the application. But that only helps you solve part of the problem.
The real goal in test automation is to find issues reliably, quickly, and automatically – and ideally, in sync with the development workflow you’re a part of.
To do that we need to use a Continuous Integration server.
A Continuous Integration Server Primer
A Continuous Integration server (a.k.a. CI) is responsible for merging code that is actively being developed into a central place (e.g., “trunk” or “master”) frequently (e.g., several times a day, or on every code commit, etc.) to find issues early so they can be addressed quickly — all for the sake of releasing working software in a timely fashion.
With CI, we can automate our test runs so they can happen as part of the development workflow. The lion’s share of tests that are typically run on a CI Server are unit (and potentially integration) tests. But we can very easily add in our recently written Selenium tests.
There are numerous CI Servers available for use today, most notably:
Let’s step through an example of using Jenkins on CloudBees.
Jenkins is a fully functional, widely adopted, open-source CI and CD (Continuous Delivery) server. It’s a great candidate for us to step through. And DEV@cloud is an enterprise-grade hosted Jenkins service offered by CloudBees, the enterprise Jenkins company. It takes the infrastructure overhead out of the equation for us.
We’ll first need to create a free trial account, which we can do here.
Once logged in, we can click on Get Started with Builds from the account page. This will take us to our Jenkins server. We can also get to the server by visiting http://your-username.ci.cloudbees.com. Give it a minute to provision. When it’s done, you’ll be presented with a welcome screen.
NOTE: Before moving on, click the ENABLE AUTO-REFRESH link at the top right-hand side of the page. Otherwise you’ll need to manually refresh the page to see results (e.g., when running a job and waiting for results to appear).
Now that Jenkins is loaded, let’s create a Job and configure it to run our tests.
1. Click New Item from the top-left of the Dashboard
2. Give the job a name (e.g., Login Tests IE8)
3. Select Freestyle project
4. Click OK
This will load a configuration screen for the Jenkins job.
Ideally your test will live in a version control system (like Git). There are many benefits to doing this, but the immediate one is that you can configure your job (under Source Code Management) to pull in the test code from the version control repository and run it.
1. Scroll down to the Source Code Management section
2. Select the Git option
3. Provide the repository URL (e.g., https://github.com/tourdedave/getting-started-blog-series.git)
Now we’re ready to tell the Jenkins job how to run our tests.
1. Scroll down to the Build section
2. Click Add Build Step and select Execute Shell
3. In the Command input box, add the following commands:
export SAUCE_USERNAME="your-sauce-username"
export SAUCE_ACCESS_KEY="your-sauce-access-key"
export APPLITOOLS_API_KEY="your-applitools-api-key"
gem install bundler
bundle install
bundle exec rspec
Since our tests have never run on this server, we need to include the installation and running of the bundler gem (gem install bundler and bundle install) to download and install the libraries (a.k.a. gems) used in our test suite. And we also need to specify our credentials for Sauce Labs and Applitools Eyes (unless you decided to hard-code these values in your test already – if so, then you don’t need to specify them here).
Now we’re ready to save, run our tests, and view the job result.
1. Click Save
2. Click Build Now from the left-hand side of the screen
When the build completes, the result will be listed on the job’s home screen under Build History.
You can drill into the job to see what was happening behind the scenes. To do that, click on the build from Build History and select Console Output (from the left navigation). This output will be your best bet in tracking down an unexpected result.
In this case, we can see that there was a failure. If we follow the URLs provided, we can see a video replay of the test in Sauce Labs (link) and a diff image in Applitools Eyes.
The culprit for the failure here wasn’t a failure of functionality, but a visual defect with the image on the Login button.
Before we can call our setup complete, we’ll want a better failure report for our test job. That way when there’s a failure we won’t have to sift through the console output for info. Instead we should get it all in a formatted report. For that, we’ll turn to JUnit XML (a standard format that CI servers support).
This functionality doesn’t come built into RSpec, but it’s simple enough to add through the use of another gem. There are plenty to choose from with RSpec, but we’ll go with rspec_junit_formatter.
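Since the build commands above install dependencies with Bundler, one way to add the gem is through the Gemfile followed by a bundle install. The exact Gemfile for this series isn’t shown here, so the entries below (other than the new last line) are assumptions for illustration:
# filename: Gemfile
source 'https://rubygems.org'

gem 'rspec'                   # test runner used in this series
gem 'selenium-webdriver'      # Selenium bindings
gem 'eyes_selenium'           # Applitools Eyes Ruby SDK (assumed to already be present)
gem 'rspec_junit_formatter'   # new: JUnit XML output for the CI server
With the gem in place, the bundle install step in the Jenkins job will pick it up automatically on the next build.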
After we install the gem, we need to specify some extra command-line arguments when running our tests: a formatter type (e.g., --format RspecJunitFormatter) and an output file for the XML (e.g., --out results.xml). And since this type of output is really only useful when running on our CI server, we’ll want an easy way to turn it on and off.
# filename: .rspec
<% if ENV['ci'] == 'on' %>
--format RspecJunitFormatter
--out tmp/result.xml
<% end %>
With RSpec comes the ability to specify frequently used command-line arguments in a file (e.g., .rspec) that lives in the root of the test directory. In it we specify the new arguments we want to use and wrap them in a conditional that checks an environment variable denoting whether or not the tests are being run on a CI server (e.g., if ENV['ci'] == 'on').
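To sanity-check the conditional locally before wiring it into Jenkins, you could run the suite once with the variable set (assuming a Unix-like shell); the JUnit XML should then appear at tmp/result.xml:
ci=on bundle exec rspec
Run it without the variable and the XML output is skipped, so day-to-day local runs stay unchanged.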
Now it’s a small matter of updating our Jenkins job to consume this new JUnit XML output file by adding a post-build action to publish it as a report.
Then we need to tell the Jenkins job where the XML file is. Since it ends up in the tmp directory of the test suite (per the .rspec file above), we can specify the path with a wildcard (e.g., tmp/*.xml).
Lastly, we need to update the shell commands for the build to set the ci environment variable to on.
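A sketch of what the updated Execute Shell block could look like – the same commands as before with the ci variable added (credential values are placeholders):
export SAUCE_USERNAME="your-sauce-username"
export SAUCE_ACCESS_KEY="your-sauce-access-key"
export APPLITOOLS_API_KEY="your-applitools-api-key"
export ci="on"
gem install bundler
bundle install
bundle exec rspec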
Now when we run our test, we’ll get a test report which states which test failed. And when we drill into it, we get the URLs for the jobs in Sauce Labs and Applitools Eyes.
In order to maximize your CI effectiveness, you’ll want to send out notifications to alert your team members when there’s a failure.
There are numerous ways to go about this (e.g., e-mail, chat, text, co-located visual cues, etc.). And thankfully there are numerous, freely available plugins that can help facilitate whichever method you want. You can find out more about Jenkins’ plugins here.
For instance, if you wanted to use chat notifications and you use a service like HipChat or Slack, you would do a plugin search and find one of the following plugins:
After installing the plugin for your chat service, you will need to provide the necessary information to configure it (e.g., an authorization token, the channel/chat room where you want notifications to go, what kinds of notifications you want sent, etc.) and then add it as a Post-build Action to your job (or jobs).
Now when your CI job runs and fails, a notification will be sent to the chat room you configured.
If you’ve been following along through this whole series, then you should now have a test that leverages Selenium fundamentals, that performs visual checks (thanks to Applitools Eyes), which is running on whatever browser/operating system combinations you care about (thanks to Sauce Labs), and running on a CI server with notifications being sent to you and your team (thanks to CloudBees).
This is a powerful combination that will help you find unexpected bugs (thanks to the automated visual checks) and act as a means of collaboration for you and your team.
And by using a CI Server you’re able to put your tests to work by using computers for what they’re good at – automation. This frees you up to focus on more important things. But keep in mind that there are numerous ways to configure your CI server. Be sure to tune it to what works best for you and your team. It’s well worth the effort.
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post Automating Your Test Runs with Continuous Integration — CI Series by Dave Haeffner: Part 3/3 appeared first on Automated Visual Testing | Applitools.
It’s easy enough to get started doing visual UI testing with a single browser, and with Selenium you can quickly expand your efforts into different browsers. But it’s difficult to verify that your web application looks right on all of them (e.g., making sure the elements are not missing/misaligned/hidden, etc.).
This is especially true when there are small rendering inconsistencies between browsers (things like how an image gets rendered) that can easily cause your visual tests to fail. This poses a real challenge, since you can’t use traditional visual testing techniques like pixel comparison, and standard workarounds like modifying the match tolerance won’t work either.
By leveraging a solution like Applitools Eyes we can reliably run the same visual tests against multiple browsers (which we can gain access to through the use of Sauce Labs).
Normally, you would use a strict visual match for visual regression testing. This is good for verifying the layout of an application in the same execution environment (e.g., same browser, screen size, device, etc.), and with exactly the same (or very similar) content. And if you want to cover multiple execution environments (e.g., multiple browsers, various screen sizes, etc.) you need to maintain several baseline images.
But with layout matching (a feature that Applitools Eyes offers) we can use a single baseline for validating multiple execution environments and/or sites with extremely dynamic content.
Let’s dig in with an example.
NOTE: This example builds on the test code from this previous write-up which covers how to add visual testing to existing Selenium tests using Applitools Eyes and Sauce Labs. You can see the full code example from it here. In order to follow along with this example, you’ll want to familiarize yourself with the sample code. Also, if you want to play along at home, you’ll need to have an account for both Applitools Eyes and Sauce Labs (they have free trial options).
To prepare ourselves for cross-browser testing, we’ll need to modify our setup() method. This is where we’ll focus all of our efforts for this post. Here is where we left off with the setup() method from the previous post.
// filename: Login.java
// ...
@Before
public void setup() throws Exception {
DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
capabilities.setCapability("platform", Platform.XP);
capabilities.setCapability("version", "8");
capabilities.setCapability("name", testName);
String sauceUrl = String.format(
"http://%s:%s@ondemand.saucelabs.com:80/wd/hub",
"YOUR_SAUCE_USERNAME",
"YOUR_SAUCE_ACCESS_KEY");
WebDriver browser = new RemoteWebDriver(new URL(sauceUrl), capabilities);
sessionId = ((RemoteWebDriver) browser).getSessionId().toString();
eyes = new Eyes();
eyes.setApiKey("YOUR_APPLITOOLS_API_KEY");
driver = eyes.open(browser, "the-internet", testName);
}
// ...
First, let’s modify the DesiredCapabilities instantiation so that we can more flexibly specify the browser name.
@Before
public void setup() throws Exception {
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("browserName", "firefox");
capabilities.setCapability("platform", Platform.XP);
capabilities.setCapability("version", "36");
capabilities.setCapability("name", testName);
Next, we’ll want to specify the name of the baseline so we can reuse it. If one doesn’t exist, it will be created. And if one exists, it will be used for comparison. This enables us to run our test in one browser, capture a baseline, and then rerun the same test against another browser, and compare the results between the browsers. And if we don’t do it, a separate baseline will automatically be created and used for each browser or screen size.
eyes.setBaselineName(testName);
driver = eyes.open(browser, "the-internet", testName);
}
Now let’s save the file and run the test to capture the baseline (e.g., mvn clean test -Dtest=Login.java from the command line).
Once the test completes, Applitools will provide the URL to the job in the test output. You can review it to make sure it is what you expect. If it is, Accept and Save it. Alternatively, you can assume the baseline image is correct and proceed without checking it. It will automatically be used as the baseline on future test runs.
Now that we have a baseline, let’s change our browserName and version capabilities so our test will run against a different browser.
@Before
public void setup() throws Exception {
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("browserName", "internet explorer");
capabilities.setCapability("platform", Platform.XP);
capabilities.setCapability("version", "8");
// ...
When we save and run it (e.g., mvn clean test -Dtest=Login.java from the command line), the test fails.
When we view the results we can see that there are no visual bugs between Firefox and Internet Explorer. Instead, there are differences in how the page was rendered (e.g., different screen sizes, different placement of text, etc.).
The test was considered a failure because the Applitools Eyes match level defaults to Strict mode, which performs a match that is very exact. For effective cross-browser visual testing, we’ll need to use a different match level called Layout2.
Let’s update our test code to use this match level instead.
import com.applitools.eyes.MatchLevel;
// ...
eyes.setBaselineName(testName);
eyes.setMatchLevel(MatchLevel.LAYOUT2);
driver = eyes.open(browser, "the-internet", testName);
}
Now when we save our test and run it again (e.g., mvn clean test -Dtest=Login.java from the command line), it will pass.
The test worked this time, but for consistent results we’ll want to specify the viewport size in our Applitools setup. This will help ensure that the viewport size of each browser is consistent regardless of the browser used and the system’s screen resolution.
If you don’t know what size to specify, go with a generic value like 1000x600.
import com.applitools.eyes.RectangleSize;
// ...
eyes.setBaselineName(testName);
eyes.setMatchLevel(MatchLevel.LAYOUT2);
driver = eyes.open(browser, "the-internet", testName, new RectangleSize(1000, 600));
}
If the application you’re testing is responsive, you can verify its layout by changing the viewport size to force the page layout to change. But regardless of the viewport size specified, Applitools Eyes will scroll through the page and stitch together a full page screenshot for validation on each test run.
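For instance, to exercise a narrower breakpoint you could open Eyes with a smaller viewport – the 768x1024 value below is just an illustrative choice, not something prescribed by this series:
// e.g., validate a tablet-sized breakpoint of a responsive layout
driver = eyes.open(browser, "the-internet", testName, new RectangleSize(768, 1024));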
Rather than constantly modifying our capabilities by hand to change the browser, version, and platform, let’s update the test setup to retrieve runtime properties specified on the command line instead.
@Before
public void setup() throws Exception {
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("browserName", System.getProperty("browser", "firefox"));
capabilities.setCapability("platform", System.getProperty("platform", "Windows XP"));
capabilities.setCapability("version", System.getProperty("browserVersion", "36"));
// ...
With this approach we’re also able to set sensible defaults, which we’ve done. So now if we don’t specify anything, Firefox 36 will run on Windows XP.
To specify values, we need to use the -D flag when running our tests on the command line. Here are some examples of it in use. Notice the use of double quotes for values with spaces.
mvn clean test -Dtest=Login.java -Dbrowser="internet explorer" -DbrowserVersion=8
mvn clean test -Dtest=Login.java -Dbrowser="internet explorer" -DbrowserVersion=10 -Dplatform="Windows 8"
mvn clean test -Dtest=Login.java -Dbrowser=firefox -DbrowserVersion=26 -Dplatform="Windows 7"
mvn clean test -Dtest=Login.java -Dbrowser=safari -DbrowserVersion=8 -Dplatform="OS X 10.10"
mvn clean test -Dtest=Login.java -Dbrowser=chrome -DbrowserVersion=40 -Dplatform="OS X 10.8"
For a full list of available browser and operating system combinations, check out Sauce Labs’ platform list.
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post How To Do Cross-browser Visual Testing with Selenium appeared first on Automated Visual Testing | Applitools.
In previous write-ups published here on the Applitools blog, Selenium expert Dave Haeffner covered the basics of automated visual UI testing and how to execute it. Following those posts, many of our readers requested more in-depth information on how automated visual testing fits into existing automated testing practices.
Some of the questions received were: Do you need to write and maintain a separate set of tests? What about your existing Selenium tests? What do you do if there isn’t a sufficient library for the programming language you’re currently using?
In response, Dave Haeffner wrote a post that will put your mind at ease: you can build automated visual testing checks into your existing Selenium tests.
In his in-depth post, “Adding Automated Visual Testing to Existing Selenium Tests”, Dave demonstrates that, by leveraging a third-party platform such as Applitools Eyes, this is a simple feat. And when coupled with a cross-browser testing platform such as Sauce Labs, you can quickly add coverage for those hard-to-reach browser, device, and platform combinations.
You can read Dave’s full post: Adding Automated Visual Testing to Existing Selenium Tests, on the Sauce Labs blog.
In addition, if you’re interested in learning more on how Automated Visual Testing can boost your cross-browser coverage and reduce maintenance, we invite you to watch this hands-on webinar we hosted with our friends at Sauce Labs.
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post How To Add Visual Testing To Your Existing Selenium Tests appeared first on Automated Visual Testing | Applitools.
Last week, Applitools & Sauce Labs hosted a special webinar with Selenium expert Dave Haeffner, author of The Selenium Guidebook, where he shared Selenium Test Automation tips & tricks from his popular weekly Selenium newsletter.
In this webinar, Dave covered the following topics:
Here’s the full webinar recording:
Dave’s full slidedeck is available here:
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post Practical Tips and Tricks for Selenium Test Automation appeared first on Automated Visual Testing | Applitools.
In our last webinar, hosted by Sauce Labs, Adam Carmi from Applitools & Chris Broesamle from Sauce Labs showed how to enhance cross-browser coverage with cloud-based Automated Visual Testing.
In this webinar, Adam and Chris explained how to avoid visual regressions and front-end bugs by adding scalable automated visual testing to existing Selenium and Appium tests and running them on the Sauce Labs cloud.
They ran a live cross-browser visual test with Sauce Labs and Applitools, where they showed how to increase coverage – while reducing maintenance efforts – by leveraging visual testing, and revealed expert tips on how to successfully perform large-scale automated visual testing.
If you missed it live, a free on-demand recording and slide deck can be found right here, on the Sauce Labs blog.
To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
The post Automated Visual Testing in The Cloud: Enhance Your Cross-browser Coverage appeared first on Automated Visual Testing | Applitools.