Python Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/python/

Applitools delivers the next generation of test automation, powered by AI-assisted computer vision technology known as Visual AI.

The Top 10 Test Automation University Courses
https://applitools.com/blog/the-top-10-test-automation-university-courses/
Thu, 10 Nov 2022


Test Automation University (also called “TAU”) is one of the best online platforms for learning testing and automation skills. TAU offers dozens of courses from the world’s leading instructors, and everything is available for free. The platform is proudly powered by Applitools. As of November 2022, nearly 140,000 students have signed up! TAU has become an invaluable part of the testing community at large. Personally, I know many software teams who use TAU courses as part of their internal onboarding and mentorship programs.

So, which TAU courses are currently the most popular? In this list, we’ll count down the top 10 most popular courses, ranked by the total number of course completions over the past year. Let’s go!

#10: Selenium WebDriver with Java

Angie Jones

Starting off the list at #10 is Selenium WebDriver with Java by none other than Angie Jones. Even with the rise of alternatives like Cypress and Playwright, Selenium WebDriver continues to be one of the most popular tools for browser automation, and Java continues to be one of its most popular programming languages. Selenium WebDriver with Java could almost be considered the “default” choice for Web UI test automation.

In this course, Angie digs deep into the WebDriver API, teaching everything from the basics to advanced techniques. It’s a great course for building a firm foundation in automation with Selenium WebDriver.

#9: Python Programming

Jess Ingrassellino

#9 on our list is one of our programming courses: Python Programming by Jess Ingrassellino. Python is hot right now. On whatever ranking, index, or article you find these days for the “most popular programming languages,” Python is right at the top of the list – often vying for the top spot with JavaScript. Python is also quite a popular language for test automation, with excellent frameworks like pytest, libraries like requests, and bindings for browser automation tools like Selenium WebDriver and Playwright.

In this course, Dr. Jess teaches programming in Python. This isn’t a test automation course – it’s a coding course that anyone could take. She covers both structured programming and object-oriented principles from the ground up. After two hours, you’ll be ready to start coding your own projects!
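For a taste of the two styles the course covers, here is a minimal sketch contrasting structured and object-oriented code (the names are illustrative, not taken from the course):

```python
# Structured style: a standalone function that maps input to output.
def average(scores):
    return sum(scores) / len(scores) if scores else 0.0

# Object-oriented style: data and behavior bundled in a class.
class Student:
    def __init__(self, name):
        self.name = name
        self.scores = []

    def record(self, score):
        self.scores.append(score)

    def grade(self):
        return average(self.scores)
```

For example, `Student("Ada")` can record scores of 70 and 90 and report a grade of 80.0 — the class delegates its computation to the plain function.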

#8: API Test Automation with Postman

Beth Marshall

The #8 spot belongs to API Test Automation with Postman by Beth Marshall. In recent years, Postman has become the go-to tool for building and testing APIs. You could almost think of it as an IDE for APIs. Many test teams use Postman to automate their API test suites.

Beth walks through everything you need to know about automating API tests with Postman in this course. She covers basic features, mocks, monitors, workspaces, and more. Definitely take this course if you want to take your API testing skills to the next level!

#7: Introduction to Cypress

Gil Tayar

Lucky #7 is Introduction to Cypress by Gil Tayar. Cypress is one of the most popular web testing frameworks these days, even rivaling Selenium WebDriver. With its concise syntax, rich debugging features, and JavaScript-native approach, it’s become the darling end-to-end test framework for frontend developers.

It’s no surprise that Gil’s Cypress course would be in the top ten. In this course, Gil teaches how to set up and run tests in Cypress from scratch. He covers both the Cypress app and the CLI, and he even covers how to do visual testing with Cypress.

#6: Exploring Service APIs through Test Automation

Amber Race

The sixth most popular TAU course is Exploring Service APIs through Test Automation by Amber Race. API testing is just as important as UI testing, and this course is a great way to start learning what it’s all about. In fact, this is a great course to take before API Test Automation with Postman.

This course was actually the second course we launched on TAU. It’s almost as old as TAU itself! In it, Amber shows how to explore APIs first and then test them using the POISED strategy.

#5: IntelliJ for Test Automation Engineers

Corina Pip

Coming in at #5 is IntelliJ for Test Automation Engineers by Corina Pip. Java is one of the most popular languages for test automation, and IntelliJ is arguably the best and most popular Java IDE on the market today. Whether you build frontend apps, backend services, or test automation, you need proper development tools to get the job done.

Corina is a Java pro. In this course, she teaches how to maximize the value you get out of IntelliJ – and specifically for test automation. She walks through all those complicated menus and options you may have ignored otherwise to help you become a highly efficient engineer.

#4: Java Programming

Angie Jones

Our list is winding down! At #4, we have Java Programming by Angie Jones. For the third time, a Java-based course appears on this list. That’s no surprise, as we’ve said before that Java remains a dominant programming language for test automation.

Like the Python Programming course at spot #9, Angie’s course is a programming course: it teaches the fundamentals of the Java language. Angie covers everything from “Hello World” to exceptions, polymorphism, and the Collections Framework. Clocking in at just under six hours, this is also one of the most comprehensive courses in the TAU catalog. Angie is also an official Java Champion, so you know this course is top-notch.

#3: Introduction to JavaScript

Mark Thompson

It’s time for the top three! The bronze medal goes to Introduction to JavaScript by Mark Thompson. JavaScript is the language of the Web, so it should be no surprise that it is also a top language for test automation. Popular test frameworks like Cypress, Playwright, and Jest all use JavaScript.

This is the third programming course TAU offers, and also the top one in this ranking! In this course, Mark provides a very accessible onramp to start programming in JavaScript. He covers the rock-solid basics: variables, conditionals, loops, functions, and classes. These concepts apply to all other programming languages, too, so it’s a great course for anyone who is new to coding.

#2: Web Element Locator Strategies

Andrew Knight

I’m partial to the course in second place – Web Element Locator Strategies by me, Andrew Knight! This was the first course I developed for TAU, long before I ever joined Applitools.

In whatever test framework or language you use for UI-based test automation, you need to use locators to find elements on the page. Locators can use IDs, CSS selectors, or XPaths to uniquely identify elements. This course teaches all the tips and tricks to write locators for any page, including the tricky stuff!
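To make the idea concrete, here is a toy sketch using only Python's standard library; a real UI test would use Selenium or Playwright APIs, and CSS selectors would need a browser driver, so only ID and XPath-style lookups are shown:

```python
import xml.etree.ElementTree as ET

# A tiny HTML-like fragment standing in for a real page.
page = ET.fromstring("""
<form>
  <input id="email" type="text" />
  <button id="submit" class="primary">Sign in</button>
</form>
""")

# Locate by unique id attribute – usually the most robust choice.
button = page.find(".//button[@id='submit']")

# Locate by position in the structure, XPath-style – brittle if the layout shifts.
first_input = page.find("./input[1]")
```

Here `button.text` comes back as "Sign in" and `first_input.get("id")` as "email"; the same trade-off between unique attributes and positional paths carries over to real locator strategies.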

#1: Setting a Foundation for Successful Test Automation

Angie Jones

It should come as no surprise that the #1 course on TAU in terms of course completions is Setting a Foundation for Successful Test Automation by Angie Jones. This course was the very first course published to TAU, and it is the first course in almost all the Learning Paths.

Before starting any test automation project, you must set clear goals with a robust strategy that meets your business objectives. Testing strategies must be comprehensive – they include culture, tooling, scaling, and longevity. While test tools and frameworks will come and go, common-sense planning will always be needed. Angie’s course is a timeless classic for teams striving for success with test automation.

What can we learn from these trends?

A few things are apparent from this list of the most popular TAU courses:

  1. Test automation is clearly software development. All three of TAU’s programming language courses – Java, JavaScript, and Python – are in the top ten for course completions. A course on using IntelliJ, a Java IDE, also made the top ten. These rankings show how vital good development skills are to successful test automation.
  2. API testing is just as important as UI testing. Two of the courses in the top ten focused on API testing.
  3. Principles are more important than tools or frameworks. Courses on strategy, technique, and programming rank higher than courses on specific tools and frameworks.

What other courses are popular?

Test Automation University is now 75,000 students strong
https://applitools.com/blog/tau-contributors/
Tue, 23 Feb 2021


What does it take to make a difference in the lives of 75,000 people?

Applitools has reached 75,000 students enrolled in Test Automation University, a global online platform led by Angie Jones that provides free courses on all things test automation. Today, more engineers understand how to create, manage, and maintain automated tests.

What Engineers Have Learned on TAU

Engineers have learned how to automate UI, mobile, and API tests. They have learned to write tests in specific languages, including Java, JavaScript, Python, Ruby, and C#. They have applied tests through a range of frameworks including Selenium, Cypress, WebdriverIO, TestCafe, Appium, and Jest.

75,000 engineers would outnumber the populations of some 19,000 cities and towns in the United States. They work at large, established companies and growing startups. They work on every continent, with the possible exception of Antarctica.

What makes Test Automation University possible? Contributors, who create all the coursework.

Thank You, Instructors

As of this writing, Test Automation University consists of 54 courses taught by 39 different instructors. Each instructor has contributed knowledge and expertise. You can find the list of authors on the Test Automation University home page.

Here are the instructors of the most recently added courses to TAU.

  • Corina Pip – JUnit 5: Learn to execute and verify your automated tests with JUnit 5 (17 chapters)
  • Matt Chiang – WinAppDriver: Learn how to automate Windows desktop testing with WinAppDriver (10 chapters)
  • Marie Drake – Test Automation for Accessibility: Learn the fundamentals of automated accessibility testing (8 chapters)
  • Lewis Prescott – API Testing In JavaScript: Learn how to mock and test APIs in JavaScript (5 chapters)
  • Andrew Knight – Introduction to pytest: Learn how to automate tests using pytest (10 chapters)
  • Moataz Nabil – E2E Web Testing with TestCafe: Learn how to automate end-to-end testing with TestCafe (15 chapters)
  • Aparna Gopalakrishnan – Continuous Integration with Jenkins: Learn how to use Jenkins for Continuous Integration (5 chapters)
  • Moataz Nabil – Android Test Automation with Espresso: Learn how to automate Android tests with Espresso (11 chapters)
  • Mark Thompson – Introduction to JavaScript: Learn how to program in JavaScript (6 chapters)
  • Dmitri Harding – Introduction to NightwatchJS: Learn to automate web UI tests with NightwatchJS (8 chapters)
  • Rafaela Azevedo – Contract Tests with Pact: Learn how to implement contract tests using Pact (8 chapters)
  • Simon Berner – Source Control for Test Automation with Git: Learn the basics of source control using Git (8 chapters)
  • Paul Merrill – Robot Framework: Learn to use Robot Framework for robotic process automation (RPA) (7 chapters)
  • Brendan Connolly – Introduction to NUnit: Learn to execute and verify your automated tests with NUnit (8 chapters)
  • Gaurav Singh – Automated Visual Testing with Python: Learn how to automate visual testing in Python with Applitools (11 chapters)

Thank You, Students

As engineers and thinkers, the students continue to expand their knowledge through TAU coursework.

Each course contains quizzes of several questions per chapter. Each student who completes a course gets credit for questions answered correctly. Students who have completed the most courses and answered the most questions successfully make up the TAU 100.
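As a rough illustration only, the ranking described above could be modeled like this; the actual TAU 100 scoring formula is Applitools’ own and is not specified here:

```python
def rank_students(students):
    """students: list of (name, courses_completed, correct_answers) tuples."""
    # Hypothetical ordering: correct answers first, courses completed as tie-break.
    return sorted(students, key=lambda s: (s[2], s[1]), reverse=True)
```

Under this sketch, two students tied on correct answers would be separated by how many courses each completed.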

Some of the students who lead on the TAU 100 include:

  • Osanda Nimalarathna, Founder @MaxSoft, Ambalangoda, Sri Lanka – 44,300 credits (Griffin rank)
  • Patrick Döring, Sr. QA Engineer @Pro7, Munich, Germany – 44,300 credits (Griffin rank)
  • Darshit Shah, Sr. QA Engineer, Ahmedabad, India – 40,250 credits (Griffin rank)
  • Adha Hrustic, QA Engineer @Klika, Bosnia and Herzegovina – 39,575 credits (Griffin rank)
  • Ho Sang, Principal Technical Test Engineer, Kuala Lumpur, Malaysia – 38,325 credits (Griffin rank)
  • Gopi Srinivasan, Senior SDET Lead @Trimble Inc, Chennai, India – 38,075 credits (Griffin rank)
  • Ivo Dimitrov, Sr. QA Engineer @IPD, Sofia, Bulgaria – 37,875 credits (Griffin rank)
  • Malith Karunaratne, Technical Specialist – QE @Pearson Lanka, Sri Lanka – 36,400 credits (Griffin rank)
  • Stéphane Colson, Freelancer @Testing IT, Lyon, France – 35,325 credits (Griffin rank)
  • Tania Pilichou, Sr. QA Engineer @Workable, Athens, Greece – 35,025 credits (Griffin rank)

Join the 75K!

Get inspired by the engineers around the world who are learning new test automation skills through Test Automation University.

Through the courses on TAU, you’ll not only learn how to automate tests, but more importantly, you’ll learn to eliminate redundant tests, add automation into your continuous integration processes, and make your testing an integral part of your build and delivery processes.

Learn a new language. Pick up a new testing framework. Know how to automate tests for each part of your development process – from unit and API tests through user interface, on-device, and end-to-end tests.

No matter what you learn, you will become more valuable to your team and company with your skills on how to improve quality through automation.

Thunderhead Speeds Quality Delivery with Applitools
https://applitools.com/blog/thunderhead-speeds-quality-delivery-with-applitools/
Tue, 16 Feb 2021


Thunderhead is the recognised global leader in the Customer Journey Orchestration and Analytics market. The ONE Engagement Hub helps global brands build customer engagement in the era of digital transformation.  

Thunderhead provides its users with great insights into customer behavior. To continue to improve user experience with their highly-visual web application, Thunderhead develops continuously. How does Thunderhead keep this visual user experience working well? A key component is Applitools.

Before – Using Traditional Output Locators

Prior to using Applitools, Thunderhead drove its UI-driven tests with Selenium for browser automation and Python as the primary test language. They used traditional web element locators both for setting test conditions and for measuring the page responses.

Element locators have been the state of the art for measuring page responses because of their precision. Locators are generated programmatically, and test developers can find any visual structure on the page as an element.

Depending on page complexity, a given page can have dozens, or even hundreds, of locators. Because test developers can inspect individual locators, they can choose which elements they want to check. But, locators limit inspection. If a change takes place outside the selected locators, the test cannot find the change.

These output locators must be maintained as the application changes. Stale locators cause false errors when an element’s locator changes but the test has not been updated. Conversely, a locator may stay the same while the element’s behavior changes in ways the test does not catch.
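A toy model shows the blind spot: a check pinned to selected locators passes even though an unwatched element regressed (the page dictionaries and keys here are invented for illustration):

```python
# Each "page" maps locator names to the text the element shows.
old_page = {"title": "Dashboard", "login": "Sign in", "banner": "Welcome"}
new_page = {"title": "Dashboard", "login": "Sign in", "banner": "ERROR 500"}

def element_checks_pass(page, watched=("title", "login")):
    # Only the selected locators are inspected, as in a locator-based test.
    expected = {"title": "Dashboard", "login": "Sign in"}
    return all(page[key] == expected[key] for key in watched)
```

Both pages pass the check, yet the banner visibly broke; that is exactly the gap visual validation is meant to close.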

Thunderhead engineers knew about pixel diff tools for visual validation. They also had experience with those tools; they had concluded that pixel diff tools would be unusable for test automation because of the frequency of false positives.

Introducing Applitools at Thunderhead

When Thunderhead started looking to improve their test throughput, they came across Applitools. Thunderhead had not considered a visual validation tool, but Applitools made some interesting claims. The engineers thought that AI might be marketing buzz, but they were intrigued by a tool that could abstract pixels into visible elements.

As they began using Applitools, Thunderhead engineers realized that Applitools gave them the ability to inspect an entire page.  Not only that, Applitools would capture visual differences without yielding bogus errors. Soon they realized that Applitools offered more coverage than their existing web locator tests, with less overall maintenance because of reduced code.

The net benefits included:

  • Coverage – Thunderhead could write tests for each visible on-page element on every page
  • Maintainability – By measuring the responses visually, Thunderhead did not have to maintain all the web element locator code for the responses – reducing the effort needed to maintain tests
  • Visual Validation – Applitools helped Thunderhead engineers see the visual differences between builds under test, highlighting problems and aiding problem-solving.
  • Faster operation – Visual validation ran faster than traditional web element locator checks.

Moving Visual Testing Into Development

After using Applitools in end-to-end testing, Thunderhead realized that Applitools could help in several areas.

First, Applitools could help with development. Often, when developers made changes to the user interface, unintended consequences could show up at check-in time. However, by waiting for end-to-end tests to expose these issues, developers often had to stop existing work and shift context to repair older code. By moving visual validation to check-in, Thunderhead could make developers more effective.

Second, developers often waited until the final build to run their full suite of element locator tests. These tests ran against multiple platforms, browsers, and viewports, and the full run would take several hours. The equivalent test using Applitools took five minutes, so Thunderhead could run these tests with every build.

For Thunderhead, the net result was greater coverage, with tests run at the right time for developer productivity.

Adding Visual Testing to Component Tests

Most recently, Thunderhead has seen the value of using a component library in their application development. By standardizing on the library, Thunderhead looks to improve development productivity over time. Components ensure that applications provide consistency across different development teams and use cases.

To ensure component behavior, Thunderhead uses Applitools to validate the individual components in the library. Thunderhead also tests the components in mocks that demonstrate them in typical deployment use cases.

By adding visual validation to components, Thunderhead expects to see visual consistency validated much earlier in the application development cycle.

Other Benefits From Applitools

Beyond the benefits listed above, Thunderhead has seen the virtual elimination of visual defects found through end-to-end testing. The check-in and build tests have exposed the vast majority of visual behavior issues during the development cycle. They have also made developers more productive by eliminating the context switches previously needed if bugs were discovered during end-to-end testing. As a result, Thunderhead has gained greater predictability in the development process.

In turn, Thunderhead engineers have gained greater agility. They can try new code and behaviors and know they will visually catch all unexpected behaviors. As a result, they are learning previously-unexplored dependencies in their code base. As they expose these dependencies, Thunderhead engineers gain greater control of their application delivery process.

With predictability and control comes confidence. Using Applitools has given Thunderhead increased confidence in the effectiveness of their design processes and product delivery. With Applitools, Thunderhead knows how customers will experience the ONE platform and how that experience changes over time.

Featured photo by Andreas Steger on Unsplash

2020’s Most Popular Programming Languages for UI Test Automation
https://applitools.com/blog/2020-most-popular-programming-languages-for-ui-test-automation/
Wed, 16 Dec 2020

See the top 5 programming languages used for web and mobile UI automation!


I often get questions from people starting new test automation projects about which programming language they should choose. I never have a cut-and-dried answer, because it depends on a few factors: which language the product features are being developed in, who will be writing the tests and which language they are most comfortable in, and how much support and tooling is available for a given language.

In this post, I’ll share which programming languages are most used for test automation as it gives some insight into industry usage. However, do not take this to mean “best programming language”, as the best language is the one that is best for your context.

The Data

You may be wondering where the data is from. Good question! More than half of the top companies in software, financial services, and healthcare verticals use Applitools for their web and mobile test automation needs. From the millions of tests that run in our cloud every week, I’ve analyzed which languages the tests were written in and aggregated the results at the team level (not the test level).
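The team-level aggregation described might be sketched like this (a hypothetical helper, not Applitools’ actual pipeline):

```python
from collections import Counter

def language_share(teams):
    """teams: mapping of team name -> primary language of its tests."""
    counts = Counter(teams.values())          # each team counted once
    total = sum(counts.values())
    return {lang: round(100 * n / total, 1)   # percentage per language
            for lang, n in counts.items()}
```

Counting teams rather than individual test runs keeps one enormous suite from skewing the percentages.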

The Results

  1. Java
    Java remains the most popular programming language for test automation. Java held its lead with 43% of our users opting to write their tests in this language. In last year’s review, Java was used by 44% of our customers – a slight decline, but the language nevertheless kept the crown in 2020.
  2. JavaScript
    Coming in as the 2nd most popular programming language for test automation is JavaScript with 35% of our users writing their tests in this language. This is a huge increase from last year where only 15% of our users tested in JS! According to StackOverflow, JavaScript is the most popular technology used by professional developers, so I expect to see an increased usage of JS for testing in the years to come.
  3. C#
    With Java and JavaScript accounting for 78% of usage combined, there’s not much market share left for the other languages. So, we see quite the jump with the 3rd place language, C#, being used by 8.8% of our users. This is rather interesting because last year’s results showed 13% of our customers using C#, which means almost a third of these users have likely opted for a different language this year.
  4. Python
    Right behind C# is Python, with 8% of our customers using Python as their language of choice for test automation. This is exactly the same percentage of usage we saw last year. What’s most surprising about this stat is that Python is gaining popularity year after year with professional developers and has become the fastest-growing major programming language – even edging out Java for the first time this year! Perhaps we’ll eventually see this trend in software testing as well.
  5. Ruby
    Only 4.2% of our customers use Ruby for test automation. This is a stunning 40% decrease from Ruby test automation usage last year. StackOverflow shows Ruby’s popularity down to 8.9% with professional developers and it appears Ruby is even less popular in the testing space.

Remember…

While the data here doesn’t necessarily indicate which is the best programming language for test automation, it does highlight which ones are most used for testing amongst the hundreds of companies and open source projects surveyed.

Where To Learn Test Programming – July 2020 Edition
https://applitools.com/blog/learn-test-programming/
Fri, 10 Jul 2020


What do you do when you have lots of free time on your hands? Why not learn test programming strategies and approaches?

When you’re looking for places to learn test programming, Test Automation University has you covered. From API testing through visual validation, you can hone your skills and learn new approaches on TAU.

We introduced five new TAU courses from April through June, and each of them can help you expand your knowledge, learn a new approach, and improve your craft as a test automation engineer. They are:

  • Mobile Automation with Appium in JavaScript (1 hr 22 min)
  • Selenium WebDriver with Python (1 hr 13 min)
  • Automated Visual Testing with Python (58 min)
  • Introduction to NUnit (1 hr 19 min)
  • Robot Framework (1 hr 1 min)

These courses add to the other three courses we introduced in January through March 2020:

  • IntelliJ for Test Automation Engineers (3 hrs 41 min)
  • Cucumber with JavaScript (1 hr 22 min)
  • Python Programming (2 hrs)

Each of these courses can give you a new set of skills.

Let’s look at each in a little detail.

Mobile Automation with Appium in JavaScript 

Orane Findley teaches Mobile Automation with Appium in JavaScript. Orane walks through all the basics of Appium, starting with what it is and where it runs. 

“Appium is an open-source tool for automating native, web, and hybrid applications on different platforms.”

In the introduction, Orane describes the course parts:

  • Setup and Dependencies – installing Appium and setting up your first project
  • Working with elements by finding them, sending values, clicking, and submitting
  • Creating sessions, changing screen orientations, and taking screenshots
  • Timing, including TimeOuts and Implicit Waits
  • Collecting attributes and data from an element
  • Selecting and using element states
  • Reviewing everything to make it all make sense

The first chapter, broken into five parts, gets your system ready for the rest of the course. You’ll download and install a Java Development Kit, a stable version of Node.js, Android Studio and its emulator (for mobile device emulation), Visual Studio Code for an IDE, Appium Server, and a sample Appium Android Package Kit. If you get into trouble, you can use the Test Automation University Slack channel to get help from Orane. Each subchapter contains the links to get to the proper software. Finally, Orane has you customize your configuration for the course project.

Chapter 2 deals with element and screen interactions for your app. You can find elements on the page, interact with those elements, and scroll the page to make other elements visible.  Orane breaks the chapter into three distinct subchapters so you can become competent with each part of finding, scrolling, and interacting with the app. The quiz comes at the end of the third subchapter.

The remaining chapters each deal with specific bullets listed above: sessions and screen capture, timing, element attributes, and using element states. The final summary chapter ensures you have internalized the key takeaways from the course. Each of these chapters includes its own quiz. 

When you complete this course successfully, you will have both a certificate of completion and the code infrastructure available on your system to start testing mobile apps using Appium.

Selenium WebDriver with Python

Andrew Knight, who blogs as The Automation Panda, teaches the course on Selenium WebDriver with Python. As Andrew points out, Python has become a popular language for test automation. If you don’t know Python at all, he points you to Jess Ingrassellino’s great course, Python Programming, also on Test Automation University.

In the first chapter, Andrew has you write your first test. Not in Python, but in Gherkin. If you have never used Gherkin syntax, it helps you structure your tests in a pseudocode that you can translate into any language of your choice. Andrew points out that it’s important to write your test steps before you write test code – and Gherkin makes this process straightforward.

The second chapter goes through setting up pytest, the test framework Andrew uses. He assumes you already have Python 3.8 installed. Depending on your machine, you may need to do some work (Macs come with Python 2.7.16 installed, which is old and won’t work). Andrew also goes through the pip package manager to install pipenv. He gives you a GitHub link to his test code for the project. And, finally, he creates a test using the Gherkin steps as comments to show you how a test actually runs in pytest.
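The Gherkin-as-comments pattern looks roughly like this; the scenario and the page model are invented for illustration, not taken from Andrew’s course:

```python
def test_search_phrase():
    # Given the search page is displayed
    page = {"query": "", "results": []}
    # When the user searches for "panda"
    page["query"] = "panda"
    page["results"] = ["Giant panda - Wikipedia"]
    # Then results for "panda" are shown
    assert page["query"] == "panda"
    assert page["results"]
```

pytest discovers any function named test_* like this one; the comments keep the original Gherkin steps right next to the code that implements them.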

In the third chapter, you set up Selenium WebDriver to work with specific browsers, then create your test fixture in pytest. Andrew reminds you to download the appropriate browser driver for the browser you want to test – for example, chromedriver to drive Chrome and geckodriver to drive Firefox. Once you use pipenv to install Selenium, you begin your test fixture. One thing to remember is to call an explicit quit for your WebDriver after a test.
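The setup/teardown shape of such a fixture can be sketched without a real browser; FakeDriver below is a made-up stand-in for a real WebDriver, and in pytest the browser function would carry the @pytest.fixture decorator:

```python
class FakeDriver:
    """Stand-in for a real WebDriver so the pattern runs anywhere."""
    def __init__(self):
        self.quit_called = False

    def quit(self):
        self.quit_called = True

def browser():
    driver = FakeDriver()
    yield driver      # the test body runs while the fixture is suspended here
    driver.quit()     # teardown: the explicit quit after the test
```

pytest drives the generator for you: it advances to the yield before the test and resumes it afterward, so quit() always runs.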

Chapter 4 goes through page objects, and how you abstract page object details to simplify your test structure. Chapter 5 goes through element locator structures and how to use these in Python. And, in Chapter 6, Andrew goes through some common webdriver calls and how to use them in your tests. These first six chapters cover the basics of testing with Python and Selenium.

Now that you have the basics down, the final three chapters review some advanced ideas: testing with multiple browsers, handling race conditions, and running your tests in parallel. This course gives you specific skills around Python and Selenium on top of what you can get from the Python for Test Programming course.

Automated Visual Testing with Python

The next course we introduced was Gaurav Singh's Automated Visual Testing with Python. In this course, Gaurav goes through the details of using Applitools – either alongside or in place of coded assertions on web elements – for application testing. If you have never used Applitools and you love Python, you will learn a lot about visual validation from this course. An Applitools user will not learn Python testing from Gaurav's course – the Python for Test Programming course serves that purpose much better – but this course helps you with the Applitools syntax in Python. I posted a detailed review of Gaurav's course in a separate blog post.

Introduction to NUnit 

Brendan Connolly teaches Introduction to NUnit. NUnit provides a unit test framework for the .NET universe. It started off as a .NET port of the JUnit framework for Java. NUnit allows you to write tests, execute them, and report results. If you're coding in C#, F#, Visual Basic, or even C++, NUnit can help you write tests effectively.

I already went through the first chapter, which describes NUnit. In the second chapter, you install the .NET SDK. I'm on a Mac and figured I was stuck – but Brendan wrote his course from a Mac, and you can use Homebrew to install the SDK. Once you have the SDK, you make sure you have a compatible IDE: the community edition of Visual Studio, JetBrains Rider, or VS Code. Lastly, you set up your test environment.

In Chapter 3, you write your first test. I’m not a .NET person, but the code examples seemed quite similar to Java, so it was easy to follow along.  

Brendan broke Chapter 4 into three parts: basic assertions, the constraint model, and advanced options. Basic assertions, or classic-model assertions, provide separate methods that either evaluate whether a single value, array, string, or collection possesses some property (Assert.IsEmpty, Assert.IsNotEmpty, etc.) or compare two or more values (Assert.AreEqual, Assert.Greater, etc.). The constraint model uses a single method, Assert.That, which receives the constraint to test as one of its parameters. The constraint model has become the standard in NUnit because of its flexibility. Advanced options let you report on problematic conditions while allowing the test to continue running – so, instead of terminating on the first failure, the test keeps going and reports everything it found.

Chapters 5 and 6 go through structure and organization of your tests. Chapter 7 focuses on data-driven tests. Finally, Chapter 8 dives into test execution and reporting. 

Overall, this course provides a great introduction to testing within your .NET environment.

Robot Framework

Paul Merrill teaches the course on Robot Framework, a test automation framework written in Python. Robot Framework boasts that it lets people with little or no programming experience create test automation and robotic process automation (RPA). Most importantly, its developers made Robot Framework free and easily extensible. Whether you work in Python, JavaScript, Java, .NET, Perl, or PHP, you can implement Robot Framework keywords in your test code.
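As a sketch of that extensibility – with class and keyword names invented for illustration, not taken from the course – a plain Python class becomes a Robot Framework keyword library: each public method turns into a keyword a `.robot` file can call (`add_item` becomes `Add Item`):

```python
class CartKeywords:
    """Hypothetical keyword library. A .robot file would load it with
    `Library  CartKeywords.py`; each public method becomes a keyword."""

    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        # Robot Framework passes arguments as strings, hence float(price).
        self.items.append((name, float(price)))

    def cart_total(self):
        return sum(price for _, price in self.items)
```

In a `.robot` file, the test would read `Add Item    book    9.50` followed by `Cart Total` – no Python visible to the test author.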

For this course, knowing a programming language can help speed learning along. Students need to know how to use the command line to find and run files, as well as how to open, edit, and save text files.

The course will teach you how to:

  • Recognize and create scripts
  • Install Robot Framework and supporting tools
  • Run tests
  • Recognize and use keywords
  • Drive web browsers with Selenium
  • Create test cases

Because Robot Framework depends on scripting, you’ll spend a bit of time learning how to:

  • Read and write steps
  • Understand how the script runs
  • Configure scripts, plus setup and teardown
  • Create and use variables
  • Read log files and reports

This course launched on June 26 on Test Automation University, and I look forward to taking it to learn Robot Framework.

IntelliJ for Test Automation Engineers

Corina Pip teaches this course on IntelliJ for Test Automation Engineers. This course makes sense for anyone asking,

“How does my approach and toolset compare with what other people do?”

Corina explains the details of using IntelliJ IDEA for test automation. The 12-chapter course takes 3 hours and 41 minutes to complete and covers the full use of IntelliJ IDEA. Her first five chapters involve the setup and use of IntelliJ. These are:

  • Installation – you can use the paid version, or the free community version
  • Create and import projects
  • Menus
  • Screens
  • Settings

At the end of these chapters, you have a good idea of where to find things in IntelliJ.

Then, Corina jumps into details about using IntelliJ for test and test automation:

  • Create and edit tests
  • Running tests
  • Debugging tests
  • Code analysis
  • Version Control System integration
  • Additional tips
  • Updating and plugins

The course covers lots of detail. I think you will appreciate test contents in Chapters 6 and 7. Chapter 6 – create and edit tests, contains lots of critical testing tasks:

  1. Creating a package and test class
  2. Building the test methods
  3. Creating fields and variables
  4. Calling methods and jumping to source
  5. Auto-import and class reformat
  6. Renaming methods and variables

Chapter 7 also covers critical testing tasks, including:

  1. Running a package from the project screen
  2. Rerunning tests
  3. Running tests from the editor and configurations
  4. Pinning, fixing, and rerunning tests

By the time you get through Chapter 10, you'll know how to link your tests back to your version control system (Corina shows examples with Git, but you can use your favorite). The last chapters help you see how to use IntelliJ on an ongoing basis, as packages, add-ons, and the entire IDE receive updates periodically.

If you’re not using automation today, this course provides a great framework. If you are using another approach, Corina will give you some way to compare your results with what she does.

Cucumber with JavaScript

Photo by PhotoMIX-Company

Do you want to learn test programming for behavior-driven development (BDD)?

Gavin Samuels teaches Cucumber with JavaScript. He focuses this course on BDD, Gherkin, Cucumber, and JavaScript.

Gavin’s first chapter covers BDD in detail. He covers the value of BDD to your organization:

  • Improved collaboration
  • A common language for the product owner, tester and developer
  • Silos break down as team members understand each other’s roles and responsibilities
  • The common language builds a shared team understanding of the requirements
  • Examples used in design become artifacts used in development and test

In his second chapter, Gavin covers Gherkin – a syntax for describing a specific behavior. Each entry provides details for an example or scenario involving a specific feature and condition, followed by a Given/When/Then set of inputs and results (Given a state, When an input happens, Then a specific result occurs). This chapter contains the guts of the thought process for BDD. Spend time going through his examples, because the more richly you think things through and specify scenarios in Gherkin, the more likely you are to create both usable and reusable code.
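A minimal scenario in that Given/When/Then shape might look like this (the feature and steps are invented for illustration, not taken from the course):

```gherkin
Feature: Shopping cart

  Scenario: Adding an item updates the total
    Given an empty shopping cart
    When the user adds a book priced at $10
    Then the cart total is $10
```

Each step maps to a step definition in code, which is what Cucumber executes in the later chapters.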

The third chapter covers Cucumber. Cucumber supports BDD and can execute Gherkin statements. Gavin shows how you can use BDD and Cucumber – or misuse them – in your environment. He lists the skills you need for the rest of the course:

  • Java (he shows you where to get Oracle Java)
  • JavaScript (he has you install node.js and npm)
  • WebdriverIO – you need some working knowledge
  • Knowledge of regular expressions
  • A text editor

In the last three chapters, you actually use Cucumber to build tests. Chapter 4 shows you how to set up all the code you have installed. Chapter 5 runs through actual test scenarios. And, finally, Chapter 6 shows you how to add visual validation with Applitools.

Python for Test Automation

Photo by Christina Morillo

Yes, you could get a book. Or, you could take a class. And why take a generic language class when you can learn Python for a specific use – like, say, test programming?

Jess Ingrassellino teaches the Python for Test Automation programming class.

If you want to learn Python for test automation, take this course. By the end, you can read, understand, and review Python code. What's more, you can read, write, and understand unit tests; write scripts for different types of testing; create and modify Python/Selenium UI tests; and understand security and accessibility scans in Python.

Her course looks like a traditional programming course. In fact, if you read her chapter titles, you would wonder how it differs from any other language course – especially the last chapter, “Inheritance, Multiple Inheritance, and Polymorphism”, which sounds like pure object-oriented detail. But, in fact, her focus on the language helps you understand how you can use Python, or work with others who use Python, in your everyday testing.

Each of the chapters incorporates the idea that you might be testing some parts of the code. She incorporates examples as part of the course, so you can see how to create tests yourself – or how to read tests written by others.

While compact, this course covers key ideas in using Python – starting from installing Python and the Pycharm IDE to creating tests to account for inheritance, multiple inheritance, and polymorphism.

Conclusion

As I find myself with more time on my hands, I expect to take these courses really soon. And, I want to learn test programming skills.

I like the fact that I can learn Python for testing – rather than taking a general language course. I take my programming languages like foreign languages – I’d rather learn French so I can find the lavatory now, and explicate the poetry of Guy de Maupassant sometime in the future.

As a recovering product manager, I am looking forward to seeing how BDD helps developers take my product requirements and turn them into development and test code. So, I’ll make time for Gavin’s course soon.

And, finally, I love seeing how other people make use of the tools they prefer for testing, so I can’t wait to take Corina’s course.

Are you looking for other courses? We have over 35 free courses on Java, JavaScript, C#, Cypress, Selenium IDE, Mocha, Chai… find all of them at Test Automation University.


The post Where To Learn Test Programming – July 2020 Edition appeared first on Automated Visual Testing | Applitools.

]]>
How Do I Test Visually With Python? https://applitools.com/blog/test-visually-with-python-tau/ Tue, 30 Jun 2020 01:52:35 +0000 https://applitools.com/?p=19973 How Do I Test Visually With Python? Python has gained popularity as a language for writing tests because of its scalability, structural flexibility, and rich library for testing. Because of...

The post How Do I Test Visually With Python? appeared first on Automated Visual Testing | Applitools.

]]>

How Do I Test Visually With Python?

Python has gained popularity as a language for writing tests because of its scalability, structural flexibility, and rich library for testing. Because of its flexibility, Python offers a broad ability to automate testing from unit tests up to end-to-end testing. And, Applitools integrates with Python to add visual testing easily into Python-based end-to-end tests.

I finished taking Gaurav Singh’s course, Automated Visual Testing with Python, on Test Automation University.  And, I passed. But, before I show you my certificate of completion, I want to give you a quick review of Gaurav’s course.

About The Instructor

Gaurav Singh

Gaurav Singh advocates for test automation as part of his job. Gaurav serves as a test automation lead at Gojek, a logistics company based in Southeast Asia. In addition to Python (the subject of this course), Gaurav programs in Kotlin and Java.

He writes extensively about test automation.

Why Python?


Python provides huge value for testing, according to the Automation Panda blogger, Andrew Knight. He writes:

“Python is object-oriented and functional. It lets programmers decide if functions or classes are better for the needs at hand. This is a major boon for test automation because (a) stateless functions avoid side effects and (b) simple syntax for those functions make them readable. pytest itself uses functions for test cases instead of shoehorning them into classes (à la JUnit).”

Some people think that interpreted code suffers performance issues, but Andrew points out that the performance suits testing – which can wait milliseconds to seconds for tests to complete. And, the flexibility you get from choosing Python provides huge advantages to test development.

Prerequisites

You need to know Python to take Gaurav's course, because he does not teach the language or teach you how to use it for test purposes. If you know Python but have never used it for testing, Gaurav points you to Jess Ingrassellino's excellent Test Automation University course, Python for Test Automation.

You will need Python 3. By default, Macs come with Python 2 installed, so you will need to install Python 3. Gaurav gives you instructions for doing this.

You also need an IDE. Gaurav demonstrates using PyCharm – you can download it pretty easily, and again, the instructions are good. You can also choose another IDE – Visual Studio Code, for example – but the examples all work with PyCharm.

Course Structure

This course has an introduction and 11 chapters. Each chapter includes a quiz, and you must pass the quizzes to finish the course and earn the certificate.

The 11 chapters include:

  • 1 – Introduction to Visual Testing
  • 2 – Setting Up Dependencies
  • 3 – Getting the API Key
  • 4 – Setup PyCharm and Initializing Applitools Eyes
  • 5 – Running Your First Test
  • 6 – Understanding Match Levels
  • 7 – Checkpoints
  • 8 – Organizing Tests
  • 9 – Visual Validation of PDF Files
  • 10 – Analyzing Test Results
  • 11 – Integrations

Getting Going

The first three chapters get you organized. In Chapter 1, Gaurav brings up some concepts around visual testing and why it's important. He shows some examples of pages that cause some percentage of users to abandon an app – for instance, the button that drives customers to buy your product gets covered with text. The button may still function, but what percentage of your customers pause when they see this kind of error? And the bummer is that your existing test automation cannot catch this kind of error: the button may still be findable and clickable for your automation, so your tests will continue to pass.

In Chapter 2, Gaurav gives you instructions for installing all the Python test software. If you're experienced with Python, you will find these instructions straightforward. If not, you might encounter differences between your environment and his. Gaurav points to a couple of sites to help you install Python 3. I followed them but ran into some issues with my environment on my Mac, so I gave up and moved to Linux.


Also in Chapter 2, Gaurav has you install the Eyes Selenium software for Python. This lets you make Applitools Eyes calls in your Python code. Gaurav leaves the explanation of this code to later chapters.

Next, he makes sure you can run Applitools Eyes by having you sign up for a free account in Chapter 3. To run Applitools, you need an API key, which you can get once you have an account.

Finally, he makes sure your PyCharm IDE can work with Applitools Eyes by giving you some initial coding steps to initialize Applitools and ensure the Applitools API Key gets loaded.
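That initialization step amounts to reading the key from the environment rather than hard-coding it. A minimal sketch – `APPLITOOLS_API_KEY` is the environment-variable name the Eyes SDK conventionally uses, and the helper function name is my own:

```python
import os

def load_api_key():
    # The Eyes SDK can often pick this variable up on its own; reading it
    # explicitly makes a missing key fail fast with a clear message.
    key = os.environ.get("APPLITOOLS_API_KEY")
    if not key:
        raise RuntimeError("Set APPLITOOLS_API_KEY before running visual tests")
    return key
```

Keeping the key out of source code also keeps it out of version control, which matters once the tests run in CI.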

Running Visual Tests with Python

Next, Gaurav runs through some tests demonstrating calls to Applitools Eyes once you have navigated to a given page.

In Chapter 5, Gaurav shows how Applitools Eyes can validate the output of a given functional task. He uses a bookstore app that Angie Jones developed to demonstrate the limits of code-based assertions.

Gaurav shows how to build a test for the app in Python. He shows how to exercise the app with Python, and how to call Applitools to check the page. By using a search function that presents a single result, he captures the resulting page image as a baseline.
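The open → check → close flow he builds looks roughly like this with the Python Eyes SDK (`pip install eyes-selenium`). Treat this as a hedged sketch: the app and test names are illustrative, the method names reflect the SDK as best I recall (check the Applitools docs for your version), and the import is deferred so the sketch loads even without the SDK installed:

```python
def run_visual_check(driver, api_key):
    # Deferred import: requires the eyes-selenium package at call time.
    from applitools.selenium import Eyes, Target

    eyes = Eyes()
    eyes.api_key = api_key
    try:
        eyes.open(driver, "Bookstore App", "search shows one result")
        eyes.check("results page", Target.window())  # capture vs. baseline
        eyes.close()   # raises if Eyes found unreviewed differences
    finally:
        eyes.abort()   # safe cleanup; no-op if close() already succeeded
```

The first run records the captured page as the baseline; subsequent runs compare against it, which is exactly the behavior Gaurav demonstrates.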


Gaurav then introduces a visual difference: he changes the color of the pricing box in the app's CSS. The text is correct, but the visual image differs. He shows that Applitools catches the difference – one that a text assertion would catch only if a tester had specifically coded for that CSS change.

In Chapter 6, Gaurav digs into match levels in Applitools:

  • Exact – pixel comparison
  • Strict (the default) – using AI to inspect for visually-meaningful differences
  • Content – checking for content and ignoring color differences
  • Layout – handling dynamic content (like news) and ensuring that the page structure and layout remain consistent

Gaurav explains why each match level matters. Basically, no one uses Exact in production – it just demonstrates how many errors a typical pixel checker uncovers that don't matter to a user. Layout and Content have specific purposes. Most of the time, use Strict.

In Chapter 7, Gaurav goes through some of the special capabilities in Applitools. For instance, if an app page extends beyond the viewport, you can configure Applitools to capture an entire page. Applitools will scroll through and capture the entire page to its horizontal and vertical maximum size for comparison purposes. You can also capture specific regions, or regions-by-element, and run different comparisons on those regions. You can even capture a region inside an iFrame.

Becoming Productive with Visual Validation in Python

The rest of the course focuses on how you can become productive using Applitools for visual testing with Python.

Chapter 8 focuses on how to organize tests into batches for test suites. You can group multiple tests that exercise a single page and see how the behavior compares for that page. When users have multiple functions to exercise on a single page, it's really powerful to group all the tests for that page together.
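Grouping works by assigning the same batch to each Eyes instance before `open()`. A hedged sketch – the batch name is illustrative, the import path reflects the eyes-selenium SDK as best I recall, and the import is deferred so the sketch loads without the SDK installed:

```python
def make_batched_eyes(api_key, batch_name="Bookstore visual suite"):
    # Deferred import: requires the eyes-selenium package at call time.
    from applitools.selenium import Eyes, BatchInfo

    eyes = Eyes()
    eyes.api_key = api_key
    # Tests that share this BatchInfo appear grouped in the dashboard.
    eyes.batch = BatchInfo(batch_name)
    return eyes
```

Every test created through this helper then lands in the same batch, so all the checks for one page review as a unit.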

Chapter 9 explains how you can use the Applitools Image Tester application to validate PDF files. How do you test the PDF output of your applications? Applitools gives you a stand-alone app to make those comparisons. Image Tester can compare two image files – including PDF files – and report the differences. While Image Tester is a .jar and runs in Java, you can invoke it from inside your Python tests. You can use the Applitools controls to run strict, content, or layout comparisons of your PDFs – and you can choose to ignore regions you know will differ (like, say, a timestamp).

Chapter 10 dives into the ways you can analyze test results.  Chapter 11 focuses on integrations. Both chapters help you understand how to get the most out of using Applitools with your development and test infrastructure.

As always, I earned my certificate of completion.

Conclusion

Gaurav’s course focuses squarely on using Applitools with Python. While he covers a bit of good test-writing style, his goal is to show you how to use Applitools. If you already know Applitools, this course provides a great refresher on the basic capabilities – and you will get some idea of the power of adding Applitools to your existing Python tests.

Next Steps

I’d recommend taking Raja Rao’s course on Modernizing Functional Test Automation Through Visual AI. While the course focuses on Java, its principles help explain how to cut down on coded assertions in your end-to-end tests and use the Visual AI in Applitools to validate your application output.

Sign up for a free Applitools account.

Request an Applitools demo.

The post How Do I Test Visually With Python? appeared first on Automated Visual Testing | Applitools.

]]>
Announcing the $50,000 Ultrafast Cross Browser Hackathon https://applitools.com/blog/ultrafast-cross-browser-hackathon/ Tue, 09 Jun 2020 02:48:22 +0000 https://applitools.com/?p=19616 Ready For a Challenge? If you are up for a challenge and enjoy learning new skills, join the world’s best quality engineers between now until June 30th to compete in the...

The post Announcing the $50,000 Ultrafast Cross Browser Hackathon appeared first on Automated Visual Testing | Applitools.

]]>

Ready For a Challenge?

If you are up for a challenge and enjoy learning new skills, join the world’s best quality engineers between now and June 30th to compete in the industry’s first next-generation cross browser, cross device, and cross operating system hackathon. Focused on the use of Visual AI and the Ultrafast Grid, this virtual event seeks to educate and upskill developers and test automation engineers all over the world. Test at incredible speeds, deliver higher quality software faster than ever, and earn a chance to win the $5,000 Diamond Prize.


Over 500 Winners. $50,000 in Cash Prizes.

So long as you are among the first 5,000 to qualify, you are eligible to win one of 500 prizes. That’s at least a 10% chance to win! Since this hackathon is about testing at incredible speeds, the first 500 to make a qualifying submission also earn a $25 ultrafast submission prize. Even better, you become eligible for one of the 100 cash prizes listed below if our panel of expert judges determines your test suites did the best job of providing efficient coverage and catching all the bugs.

As of June 8, almost 2,000 people have signed up, and we have been receiving initial submissions. If you want to qualify for an ultrafast submission prize, you still have time.

Pricing table UFG for hackathon prizes

How Does the Hackathon Work?

Software developers, quality engineers, and QA professionals will compete for $50,000 in cash prizes. For those who qualify, you will be challenged to author cross browser and cross device tests against a real-world app using both your preferred legacy cloud testing solution and Applitools Ultrafast Grid powered by Visual AI. Contestants are free to use any major test framework, such as Cypress, Selenium, WebdriverIO, or TestCafe, and do so in their preferred language, including Java, JavaScript, Python, Ruby, or C#.

Here is what you need to do:

  1. Apply here for access. Once you qualify, you will get access to the hackathon application, instructions on how to complete the challenge, and full access to Applitools Visual AI and Ultrafast Grid.
  2. Submit when you’re ready. The instructions will guide you. We expect most submissions to take 4 to 6 hours to complete. There is plenty of help to get it done if you need it!
  3. Your submission will be judged by a panel of experts. Those submissions that do the best job of catching all the bugs and doing so in the most efficient way possible will win.
  4. All submissions are due by June 30th, 2020 at 11:59pm PT. No exceptions!
  5. Winners will be announced no later than August 1st, 2020.

That’s it! So why wait? Get started today.

The Next Generation of Cross Browser Testing is Ultrafast.

Our hackathons were created to make a point: there is a better way to automate your testing. Browsers do not suffer from the same execution bugs that plagued them five, 10, or 20 years ago. What does create problems – lots of problems – is the rendering of the application across various viewports and screens. This reality means a major shift in how you need to test, and by competing you will learn and see for yourself what we mean.

In the Ultrafast Cross Browser Testing Hackathon, even more valuable than the prizes you might win is the learning you will gain from competing. If you take on this challenge, you will learn how next-generation cross browser testing works. If you want a quick summary, read this blog post on next generation cross browser testing.

We’ve Done This Before. The Visual AI Rockstar Hackathon.

In November 2019, the Visual AI Rockstar Hackathon was a huge success. Almost 3,000 quality engineers participated and the response was overwhelmingly positive. Here is what some of our winners had to say about their experience:

VisualAI Impact Three Quotes from previous hackathon winners

We expect this one to be even bigger, so what’s stopping you?

Take The Challenge

Why participate in the Applitools Cross Browser Testing Hackathon?

First, you will learn new skills. You get hands-on experience running tests once and evaluating behavior across all the browsers that matter to your customers.

Second, you experience a new way of running application validation. If you have your own multi-browser lab today, or if you use a third-party service that requires multiple tests run on multiple setups in parallel, you can see the difference when running the Applitools Ultrafast Grid in comparison. And, if you have not considered running tests across multiple browsers – due to cost or complexity – you can reevaluate your decision.

Finally, you can win prizes and bragging rights as a hackathon winner. To show the world, we will proudly display your name on our website. Your success will demonstrate your engineering acumen to your peers and anyone else that matters to you.

Your opportunity to learn something new and stand out in a crowd awaits. Sign up now.

The post Announcing the $50,000 Ultrafast Cross Browser Hackathon appeared first on Automated Visual Testing | Applitools.

]]>
How Can 2 Hours Now Save You 1,000 Hours In The Next Year? https://applitools.com/blog/two-hours-learn-visual-ai/ Fri, 29 May 2020 03:50:52 +0000 https://applitools.com/?p=19427 As a quality engineer, what do you think about when you consider the return on investing an hour of your time? Or two hours? What if it could save you...

The post How Can 2 Hours Now Save You 1,000 Hours In The Next Year? appeared first on Automated Visual Testing | Applitools.

]]>

As a quality engineer, what do you think about when you consider the return on investing an hour of your time? Or two hours? What if it could save you up to half the work hours you spent over the past year?

If you spend a lot of time writing or maintaining your end-to-end tests, our data shows that you’ll find lots of savings. What would you do with that time?

I have written previously about our Impact of Visual AI on Test Automation report. And, I have written about the benefits that many of your peers discovered by using Applitools Visual AI. They reduced the number of lines of code they had to write. Their code became more stable and easier to maintain. They caught more bugs with less code. And they learned how to do all this in about one to two hours.

So, what can an hour or two of your time give you?

ROI For A Quality Engineer Hour

Business people like me often talk about return on investment (ROI). If we invest this money today, what does the future payoff look like, and when does it come?

ROI offers a good model for thinking for everyone. Including a decision about how to spend your time to learn a new skill. If you spend an hour learning something now, what is the future benefit to you, and when do you get it?

The ROI of a quality engineer hour might be measured by:

  • Increasing the number of test cases you can run
  • Increasing the number of bugs you can uncover
  • Decreasing the amount of low-value code you write
  • Decreasing your code maintenance effort

So if you’re going to invest one or two hours into learning a new technology, like, say, Visual AI, you would like to see many hours worth of return on your time.

Visual AI frees you from writing inspection code in your end-to-end tests. Inspection code has the highest chance of containing errors, gets written selectively to provide limited coverage, and can still require a high degree of coding skill.

You might think that your value as a software engineer comes from your coding skills – a belief that often devolves into the inane practice of measuring a software engineer’s value by counting lines of code written. Truthfully, not all code has the same value. In fact, code in and of itself has zero value; what matters is what that code does to help you test more conditions.

High-Value Code And Low-Value Code

Your end-to-end tests contain both high-value code and low-value code.  

The high-value code exercises your application. It sets the test cases. It runs execution variations. The high-value code correlates to source code and UI test coverage.

Photo by Ilya Pavlov on Unsplash

The low-value code inspects your results. And, if you’re using state-of-the-art coding, you’re inspecting the DOM for output web elements. Some you find with an ID. Others you find with a CSS selector. Some you find with a relative XPath expression. Sometimes this code involves incredible complexity – for example, writing all the assertions for reordering a table can demand elegant coding skills. And, yet…

If you review your end-to-end tests, much of your code determines whether the DOM contains the proper elements to let you conclude that the app passed. And, you have become good at triage. You cannot cover the state of every element in the DOM, so you become great at selecting which elements to inspect to tell you whether your test passed or failed. Still, you write a lot of inspection code compared to execution code – and you only inspect a selected subset of the DOM.
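To make "inspection code" concrete, a typical hand-written results check looks like the sketch below – the locators are hypothetical – and it still examines only three of the page's many elements:

```python
def assert_results_page(driver):
    # Spot-checks a few hand-picked elements; everything else on the
    # page goes unverified.
    assert driver.find_element("id", "page-title").text == "Search Results"
    assert driver.find_element("css selector", ".price").is_displayed()
    assert len(driver.find_elements("css selector", ".result-row")) >= 1
```

Every additional element you want verified means another hand-written assertion – which is exactly the maintenance burden the post describes.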

But, the time you take writing this low-value code detracts from some of the high-value activity you can add to your end-to-end tests. For instance – you can write the preconditions for each test to ensure that they can be run independently – and, thus, in parallel. You can review the test conditions evaluated by each test to eliminate redundancy and improve automation accuracy.

What about maintenance? We didn’t even include code maintenance efforts you undertake between releases. That is yet more end-to-end coding effort you need – to validate and update existing tests, add new tests, and resolve failures – every time you modify the application. And, yes, some of that code provides new or modified test conditions. As well, some of that code needs to modify your inspection code.

Visual AI Replaces Low-Value Code

When we developed Visual AI, we recognized that a quality engineer makes trade-offs. One engineer can only write a finite number of lines of code, and an incomplete test has no value – so every line needed to execute a test and validate the results goes toward making a complete test.

We also recognized the limitations of DOM inspection for test results validation. Inspected thoroughly, the DOM represents the elements of the page to be rendered by the browser. And, one can write a detailed DOM inspection – and even account for variations in the DOM structure between releases. However, that depth of inspection involves complexity that rarely pays off for the coder. So, most coders spot-check their DOM – and can miss unexpected changes between releases unless validating the application through manual testing.


Visual AI uses one line of code to capture every visual element on your web page – and the state of the DOM that created the page. Once captured, Visual AI compares your captured version to your baseline. Then, Visual AI highlights for you the real user-noticeable differences between the baseline and the new capture. From there, you choose to accept the changes as expected new features or reject them as bugs. And, you can link those differences directly to the DOM code used to generate those differences.

Since inspection statements make up the bulk of your end-to-end test code, by adding Visual AI, you can eliminate the bulk of your inspection statements – letting you write code faster, making your code more stable, and allowing you more time to work on high-value test automation tasks.

How Long Does It Take To Learn Visual AI?

When we started the Applitools Visual AI Rockstar Hackathon, we directed participants to two courses on Test Automation University (TAU). TAU, powered by Applitools, offers classes on a range of technologies, including:

We pointed participants to a course by Raja Rao describing how to modernize your functional tests with Visual AI; Raja walks through the Hackathon test cases in about an hour. We also pointed them to a course by Angie Jones on how to add Visual AI to your test automation. Each course takes approximately an hour.

Hackathon participants got pretty amazing results. After an hour or two of classes, they applied their knowledge of Visual AI and found:

  • Average test coverage jumped from 65% with coded inspection to 95% with Visual AI
  • Average test writing time dropped from a little over 7 hours to a little over 1 hour
  • The amount of code they wrote dropped significantly

So, for one or two hours of learning, Hackathon participants got faster test writing, more coverage, and less code.

Testing Visual AI Yourself

In the end, testing is a scientific activity. We run a series of tests and make observations – pass or fail.

In this blog, you’re reading a bunch of claims about Visual AI. These are based on data that we shared in our report about the Hackathon results.

What do these claims mean to you?

I recommend that you test these claims yourself. The claim – one or two hours of learning can help you write tests:

  • With comprehensive inspection
  • That require less code
  • That are more easily maintained
  • That let you focus your time on high-value test activity

If true, would that be worth the investment?

It is up to you to find out.

Learn More

James Lamberti is CMO at Applitools

The post How Can 2 Hours Now Save You 1,000 Hours In The Next Year? appeared first on Automated Visual Testing | Applitools.

]]>
Five Data-Driven Reasons To Add Visual AI To Your End-To-End Tests https://applitools.com/blog/add-visual-ai/ Thu, 14 May 2020 00:04:59 +0000 https://applitools.com/?p=18411 Do you believe in learning from the experiences of others? If others found themselves more productive adding Visual AI to their functional tests, would you give it a try? In...

The post Five Data-Driven Reasons To Add Visual AI To Your End-To-End Tests appeared first on Automated Visual Testing | Applitools.

]]>

Do you believe in learning from the experiences of others? If others found themselves more productive adding Visual AI to their functional tests, would you give it a try?

In November 2019, over 3,000 engineers signed up to participate in the Applitools Visual AI Rockstar Hackathon. 288 completed the challenge and submitted tests – comparing their use of coded test validation against the same tests using Visual AI. They found themselves with better coverage, faster test development, more stable test code, and easier test code maintenance.

On April 23, James Lamberti, CMO at Applitools, and Raja Rao DV, Director of Growth Marketing at Applitools, discussed the findings from the Applitools Hackathon submissions. The 288 engineers who submitted their test code for evaluation by the Hackathon team spent an average of 11 hours per submission. That's over 3,000 person-hours – the equivalent of 1½ years of engineering work.

Over 3000 participants signed up. They came from around the world.


They used a variety of testing tools and a range of programming languages.


In the end, they showed some pretty amazing results from adding Applitools Visual AI to their existing test workflow.

Describing the Hackathon Tests

Raja described the tests that made up the Hackathon.


Each test involved a side-by-side comparison of two versions of a web app. In one version, the baseline, the page rendered correctly. In the other version, the new candidate, the page rendered with errors. This would simulate the real-world issues of dealing with test maintenance as apps develop new functionality.

Hackathon participants had to write code that did the following:

  • Ensure the page rendered as expected on the baseline
  • Capture all mistakes in the page rendering on the new candidate
  • Report all the differences between the baseline and the new candidate

Also, Hackathon participants needed to realize that finding a single error on a page was necessary – but not sufficient. A single test that captures all the problems at once resolves faster than multiple bug capture/fix loops. Test engineers needed to write tests that covered all the test conditions and properly reported every failure.

Hackathon participants would code their test using a conventional test runner plus assertions of results in the output DOM. Then, they used the same test runner code but replaced all their assertions with Applitools Visual AI comparisons.

To show these test results, Raja used the GitHub repository of Corina Zaharia, one of the platinum Hackathon winners.

At this point, Raja walked through each of the test cases.

CASE 1 – Missing Elements

Raja presented two web pages. One was complete. The other had missing elements. Hackathon participants had to find those elements and report them in a single test.


To begin coding tests, Corina started with the baseline. She identified each of the HTML elements and ensured that their text identifiers existed. She wrote assertions for every element on the page.


In evaluating submissions, judges ensured that the following differences got captured:

  1. The title changed
  2. The Username icon was missing
  3. The Password icon was missing
  4. The username placeholder changed
  5. The password label was wrong
  6. The password placeholder changed
  7. There was extra space next to the check box
  8. The Twitter icon had moved
  9. The Facebook icon had moved
  10. The LinkedIn icon was missing

Capturing this page required identifying element locators and validating locator values.

In comparison, adding Visual AI required only three instructions:

  • Open a capture session
  • Capture the page with an eyes.checkWindow() command
  • Close the capture session

No identifiers needed – Applitools captured the visual differences.

With much less coding, Applitools captured all the visual differences. And, test maintenance takes place in Applitools.
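The contrast can be sketched in plain Python. The page dictionaries, element names, and the `check_window` helper below are illustrative stand-ins for the real Selenium assertions and the `eyes.checkWindow()` call, not actual API code:

```python
# Conventional approach: one hand-written assertion per element locator.
baseline_page = {"title": "Login Form", "username_icon": True,
                 "password_icon": True, "linkedin_icon": True}
broken_page   = {"title": "Logout Form", "username_icon": False,
                 "password_icon": True, "linkedin_icon": False}

def assert_elements(page):
    """Spot-check style: every check is a line someone must write and maintain."""
    failures = []
    if page["title"] != "Login Form":
        failures.append("title changed")
    if not page["username_icon"]:
        failures.append("username icon missing")
    if not page["linkedin_icon"]:
        failures.append("LinkedIn icon missing")
    return failures

# Visual AI approach: a single whole-page comparison replaces all of the above.
def check_window(candidate, baseline):
    """Illustrative stand-in for eyes.checkWindow(): diff everything at once."""
    return [key for key in baseline if baseline[key] != candidate[key]]

coded_failures  = assert_elements(broken_page)           # only what was explicitly asserted
visual_failures = check_window(broken_page, baseline_page)  # every difference, one call site
```

Note that the coded version only reports the differences someone thought to assert; anything not asserted slips through, while the single-call comparison flags every changed element.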

CASE 2 – Data-Driven Testing

In Case 2, Hackathon participants needed to validate how a login page behaved when applying different inputs. The test table looked like this:

  • No username, no password
  • Username, no password
  • Password, no username
  • Username and password combination invalid
  • Valid username and password

Each condition resulted in a different response page.
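A data-driven version of that table can be sketched as a parametrized loop. The `login_response` validator, the credentials, and the messages below are invented for illustration; the Hackathon app's actual responses differed:

```python
# A fake login validator standing in for the application under test.
def login_response(username, password):
    if not username:
        return "Username must be present"
    if not password:
        return "Password must be present"
    if (username, password) != ("admin", "s3cret"):
        return "Incorrect username or password"
    return "Welcome"

# The five input combinations from the test table, with expected responses.
cases = [
    ("",      "",       "Username must be present"),
    ("admin", "",       "Password must be present"),
    ("",      "s3cret", "Username must be present"),
    ("admin", "wrong",  "Incorrect username or password"),
    ("admin", "s3cret", "Welcome"),
]

for username, password, expected in cases:
    assert login_response(username, password) == expected
```

In a conventional test, each expected response above still requires its own set of element assertions; with Visual AI, each case ends in a single page capture instead.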

Hackathon participants found a page identical to the one in Case 1 – but they were responsible for handling the different responses to each of the test conditions.

Again, the conventional test required entering the test conditions via the test runner and asserting all the elements on each response page, including the error messages.

The question was also left open: what should testers validate for the valid username and password condition? The simplest answer is to make sure the app reaches the correct post-login page. But more advanced testers wanted to make sure the target page rendered as expected.

So, again, the comparison between coded assertions and Visual AI makes clear how much more easily Visual AI captures baselines and then compares the new candidate against them.

CASE 3 – Testing Table Sort

The next case – testing table capabilities – covers functionality found on many web apps that offer multiple selections. Many consumer apps, such as retailers, reviewers, and banks, provide tables for their customers, and some business apps provide similar selectors – in retail, financial, and medical applications. In many use cases, users expect tables with advanced capabilities, such as sorting and filtering.


Tables can provide some challenges for testers. Tables can contain lots of elements. Many table functions can require complex test coding – for example, sorting and filtering.

To test table sorting with conventional assertion code, Hackathon participants had to write code that captured all the data in the table, performed the appropriate sort of that data, and compared the internally sorted table against the sorted table on the web page. Great test coders took pains to do this well and to handle the various sorting options. The winners took the time to ensure that their code covered the table behavior; not all participants caught this complex behavior, even with a decent amount of effort.
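The capture/sort/compare pattern described above can be sketched as follows. The helper and row data are hypothetical; a real test would scrape the rows from the DOM first:

```python
# Verify a table sort the conventional way:
# sort a copy of the captured rows in the test, then compare to the page order.
def assert_sorted_by(rows, column, descending=False):
    expected = sorted(rows, key=lambda row: row[column], reverse=descending)
    assert rows == expected, f"table is not sorted by {column!r}"

# Rows as a test might capture them from the rendered table (values illustrative).
rows = [
    {"name": "Alice", "amount": 120},
    {"name": "Bob",   "amount": 250},
    {"name": "Carol", "amount": 310},
]

assert_sorted_by(rows, "amount")  # passes: rows are ascending by amount
```

With Visual AI, this whole helper disappears: capture the page, execute the sort, capture the result, and validate the two snapshots in Applitools.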

In contrast, all the participants understood how to test the table sort with Visual AI. Capture the page, execute the sort, capture the result, and validate inside Applitools.

Case 4 – Non-Textual Plug-ins

The fourth case involved a table rendered graphically in an HTML canvas element. How do you test that?


Without normal web element locators, a lot of participants got lost. They weren't sure how to find the graphing elements or how to build a comparison between the baseline behavior and the new candidate.

Winning Hackathon participants dug into the rendering code to find the JavaScript calls for the graph and the source data for the table elements. This let them extract the values that should be rendered and compare them between the baseline and the new candidate. And while the winners wrote fairly elegant code, diving into the JavaScript took time – and a fair amount of coding effort.

As with the table sorting Case 3, all the participants understood how to test the graph with Visual AI. Capture the page, and then compare the new candidate with the baseline in Applitools.

Case 5 – Dynamic Data

The final case required the participants to test a page with floating advertisements that can change.  In fact, as long as content gets rendered in the advertising box, and the rest of the candidate remains unchanged, the test passes.


The winning participants coded conditional tests to ensure that code existed in the advertising boxes, though they did not have the ability to see how that code got rendered.

With Visual AI, participants had to use different visual comparison modes in Applitools. The standard mode – Strict Mode – searches for visual elements that have moved or rendered in unexpected ways. With dynamic data, Strict Mode comparisons fail.

For these situations, Applitools offers Layout Mode instead. When using Layout Mode, the text and graphical elements need to share order and orientation, but their actual visual representation can differ. For example, two different dog photos, each with a caption below the image, are considered identical in Layout Mode.


However, a pair where the caption sits below the image in one capture and above the image in the other has a different layout, and Layout Mode flags the difference.


Applitools users can hard-code their check mode for different regions into their page capture. Alternatively, they can use Strict Mode for the entire page and handle the region as a Layout Mode exception in the Applitools UI.
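Conceptually, Layout Mode compares the structure of a region rather than its exact content. The sketch below illustrates the idea with a "layout signature"; it is a conceptual illustration, not how Applitools actually implements the comparison:

```python
def layout_signature(elements):
    """Reduce a capture to the kind and order of its elements, ignoring content."""
    return [(kind, position) for kind, position, _content in elements]

# Two ads: different image and caption, but image-above-text in both.
ad_a = [("image", "top", "dog_photo_1.png"), ("text", "bottom", "This Is A Dog")]
ad_b = [("image", "top", "dog_photo_2.png"), ("text", "bottom", "Not A Dog")]

# A third ad flips the order: text above the image.
ad_c = [("text", "top", "Not A Dog"), ("image", "bottom", "dog_photo_2.png")]

strict_equal = ad_a == ad_b                                      # content differs
layout_equal = layout_signature(ad_a) == layout_signature(ad_b)  # same structure
layout_diff  = layout_signature(ad_a) == layout_signature(ad_c)  # order flipped
```

A strict comparison fails on the dynamic content, while the layout comparison passes for the first pair and still catches the flipped ordering in the third.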

All the Hackathon participants had little difficulty here – whether they coded Layout Mode for the region in their tests or applied Layout Mode to the selected area in Applitools after the baseline had been captured.

Learning From Hackathon Participants

At this point, James began describing what we had learned from the 1.5 person-years of coding work done on the Hackathon. We learned what gave people difficulty, where common problems occurred, and how testing with Visual AI compared with conventional assertions of values in the DOM.

Faster Test Creation

I alluded to it in the test description, but test authors wrote their tests much more quickly using Visual AI. On average, coders spent 7 person-hours writing coded assertion-based tests for the Hackathon test cases. In contrast, they spent a mere 1.2 hours writing tests using Visual AI for the same test cases.


Interestingly, the prize-winning submitters spent, on average, 10.2 hours writing their winning submissions. They wrote more thorough conventional tests, which yielded more accurate coverage when failures occurred. Even so, that coverage did not match the complete-page coverage they got from Visual AI. And their prize-winning Visual AI tests took, on average, only six minutes longer to write than the overall participant average.

More Efficient Coding

The next takeaway came from calculating coding efficiency. For conventional tests, the average participant wrote about 350 lines of code. The prize winners, whose code had greater coverage, wrote a little more than 450 lines of code, on average. This correlates with the 7 hours and 10 hours of time spent writing tests.  It’s not a perfect measure, but participants writing conventional tests wrote about 50 lines of code per hour over 7 hours, and the top winners wrote about 45 lines of code per hour over 10 hours.


In contrast, with Visual AI, the average coder needed only 60 lines of code, and the top coders only 58. That still works out to about 50 lines of code per hour for the average participant and 45 lines per hour for the winning participants – but because far fewer lines are needed, they are much more efficient.
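The arithmetic behind those rates, using the figures from this post (the 1.2-hour Visual AI writing time comes from the test-creation data above):

```python
# Lines-of-code-per-hour rates reported in the post.
avg_conventional = 350 / 7    # average participant, code-based: ~50 LOC/hour
win_conventional = 450 / 10   # prize winners, code-based: ~45 LOC/hour
avg_visual_ai    = 60 / 1.2   # average participant, Visual AI: same ~50 LOC/hour rate

# The efficiency gain comes from needing far less code overall.
code_reduction = 350 / 60     # conventional tests needed ~5.8x more code
```

The coding *rate* is roughly constant across approaches; the productivity win is entirely in how little code Visual AI requires.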

More Stable Code

End-to-end tests depend on element locators in the DOM to determine how to apply test conditions, such as by allowing test runners to enter data and click buttons. Conventional tests also depend on locators for asserting content in the response to the applied test conditions.


Most software engineers realize that labels and other element locators get created by software developers – who may change them intentionally or introduce unanticipated differences. An XPath locator can suddenly resolve to the wrong element after an enhancement changes the DOM structure. The same is true for labels, which can change between releases – even when there is no visible difference in user behavior.

No one wants testing to overconstrain development. No one wants development to remain ignorant of testing needs. And yet, because mistakes sometimes happen, or changes are sometimes necessary, locators and labels change – resulting in test code that no longer works properly.

Interestingly, when evaluating conventional tests, the average Hackathon participant used 34 labels and locators, while the Hackathon prize winners used 47 labels and locators.

Meanwhile, for the Visual AI tests, the average participant used 9 labels and locators, while the winning submissions used only 8. By a conservative measure, Visual AI reduces the dependency of test code on external factors – we calculate the code as 3.8x more stable.
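The 3.8x stability figure follows directly from those locator counts:

```python
# Fewer locators means fewer external dependencies that can break the test.
avg_locators_conventional = 34   # average participant, code-based tests
avg_locators_visual_ai    = 9    # average participant, Visual AI tests

stability_ratio = avg_locators_conventional / avg_locators_visual_ai  # ~3.8x
```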

Catching Bugs Early

Visual AI can catch bugs early in coding cycles. Because Visual AI depends on the rendered representation, not on the code that produces the rendering, it will catch visual differences that existing test code can miss. For instance, think of an assertion on the contents of a text box. In a new release, the test passes because the box contains the same text. However, the box width has been cut in half, causing the text to extend outside the box boundary and be obscured. The test passes, but the page is broken – the test assumed a condition that is no longer true.
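That scenario can be sketched in a few lines; the button text and pixel widths are hypothetical:

```python
# A text assertion passes even though the rendered element has visibly broken.
baseline_button  = {"text": "Proceed to Checkout", "width_px": 240}
candidate_button = {"text": "Proceed to Checkout", "width_px": 120}  # width halved

text_assertion_passes = baseline_button["text"] == candidate_button["text"]
visually_identical    = baseline_button == candidate_button
```

The text-only check reports success, while a comparison of the rendered state catches the shrunken box.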


Visual AI catches these differences. It will catch changes that result in different functional behavior requiring new code. It will catch changes – like the one described above – that produce visual differences impacting users. And it will avoid flagging changes that alter the DOM but not the view or behavior from the user's perspective.

Easier to Learn than Code-Based Testing

The last thing James shared involved the learning curve for users. In general, we assumed that test coverage and score on the Hackathon evaluation correlated with participant coding skill. The average score achieved by all testers using conventional code-based assertions was 79%. After taking a 90-minute online course on Visual AI through Test Automation University, the average score for Visual AI testers was 88%.


Because people don’t use visual capture every day, testers need to learn how to think about applying visual testing. But, once the participants had just a little training, they wrote more comprehensive and more accurate tests, and they learned how to run those test evaluations in Applitools.

What This Means For You

James and Raja reiterated the benefits they outlined in their webinar: faster test creation, more coverage, code efficiency, code stability, early bug catching and ease of learning. Then they asked: what does this mean for you?

If you use text-based assertions for your end-to-end tests, you might find clear, tangible benefits from using Visual AI in your product release flow. It integrates easily into your CI/CD and other development processes, and it can augment existing tests – no rip and replace required. Real, tangible benefits come to many companies that deploy Visual AI. What is stopping you?


Often, learning comes first. Fortunately, Applitools makes it really easy for you to learn Visual AI. Just take a class on Test Automation University. There is Raja’s course: Modern Functional Test Automation through Visual AI. There is Angie Jones’s course: Automated Visual Testing: A Fast Path To Test Automation Success.  And, there are others.

You can sign up for a free Applitools account. Using Applitools is the best way to understand what you can do with it.

Finally, you can request a demonstration from a sales engineer.  

At Applitools, we let users make the case for the value of our Visual AI solution. We hope you will get a lot out of trying it yourself.

For More Information

The post Five Data-Driven Reasons To Add Visual AI To Your End-To-End Tests appeared first on Automated Visual Testing | Applitools.

]]>
Upskill and Create New Tests Much Faster – The Impact of Visual AI https://applitools.com/blog/create-new-tests-much-faster/ Wed, 22 Apr 2020 18:12:38 +0000 https://applitools.com/?p=17883 The obvious question -- why is it so much faster to author new tests using Visual AI? It’s because Visual AI uses just a single line of code to take a screenshot of the entire page.

The post Upskill and Create New Tests Much Faster – The Impact of Visual AI appeared first on Automated Visual Testing | Applitools.

]]>

If you’ve been following the quality engineering community over the past couple of years, you’re probably familiar with Test Automation University (a.k.a. TAU). New technologies require new skills — it’s a constant for all of us in today’s world. This is why TAU now boasts over 40 free online courses on emerging quality engineering techniques, including an Introduction to Cypress by Gil Tayar and Scaling Tests with Docker by Carlos Kidman. Most relevant to this blog are the courses on Visual AI: Automated Visual Testing by Angie Jones and Modern Functional Test Automation by Raja Rao DV.


Upskill Yourself — Modernize Your Test Suite

Our main goal in creating a free, open Test Automation University was to help the global test engineering community upskill routinely and have fun doing it. We also recognized the need to help that community understand how and where Applitools Visual AI would fit in. Too often, emerging technologies ask you to rip and replace everything. Not only is that hard to do, but it’s also not realistic. Your team has invested a lot of time and money in its quality management process to date. You’re not going to just throw that all away every one or two years for the latest shiny new tech or trend. Before you make a change, you need to be confident that it will evolve your team from where you are today, yet integrate easily with what is already there.

That’s why we ran the Visual AI Rockstar Hackathon recently. We’re Applitools! Of course we’re going to talk about the wonders of Visual AI. It’s much better, and far more credible, if 288 of your peers in quality engineering do a side-by-side comparison of Visual AI and Selenium, Cypress, or WebdriverIO, then tell you about the wonder of Visual AI themselves. Over the next seven weeks, we will drill into what we learned in a series of blog posts starting this week with test creation time.

Or – if you just can’t wait that long – go ahead and grab the full report here. No better time than the present to learn and upskill for the future.

How Does Visual AI Impact Test Creation Time?

The obvious question — why is it so much faster to author new tests using Visual AI? It’s because Visual AI uses just a single line of code to take a screenshot of the entire page. You’re still automating the browser to click and navigate, but replacing a huge number of code-based assertions with just a simple line of Visual AI code to test for UI elements, form fill functionality, dynamic content, even tables and graphs. With this modern technique, you’re now authoring tests much faster than before and can use the time you save to both test more and test faster. 

What Differences Did We See Among Quality Engineers? 

To help us understand Visual AI’s impact on test automation in more depth, we looked at three groups of quality engineers including:

  • All 288 Submitters – This includes every quality engineer who successfully completed the hackathon project. While over 3,000 quality engineers signed up to participate, this group of 288 people is the foundation for the report and amounted to 3,168 hours – 80 weeks, or 1.5 years – of quality engineering data.
  • Top 100 Winners – To gather the data and engage the community, we created the Visual AI Rockstar Hackathon. The top 100 quality engineers who secured the highest point total for their ability to provide test coverage on all use cases and successfully catch potential bugs won over $40,000 in prizes.
  • Grand Prize Winners – This group of 10 quality engineers scored the highest representing the gold standard of test automation effort.

By comparing and contrasting these different groups in the report, we learn more about the impact of Visual AI on test creation time.

Looking at the Data

The main point behind this data is subtle but important. The average test-writer took 7 hours to write conventional code-based assertions that covered, on average, only 65% of potential bugs. How likely is that coverage to result in rework?

World-class tests cover 90% or more of potential failure modes. With code-based assertions, the Grand Prize winning submissions achieved, on average, 94% coverage. To produce a Grand Prize winning submission, quality engineers had to increase their test creation time from 7 hours to 10.2 hours. That’s another 3.2 hours of effort to achieve acceptable test coverage!

Contrast code-based coverage with Visual AI results. Using Visual AI, all the testers achieved 95% or more coverage in, on average, 72 minutes of coding. It took the Grand Prize winners only an additional 6 minutes to achieve 100% coverage using Visual AI. This trend continues when you compare the data across all 3 groups like we did here:

This table shows that using Visual AI speeds up all test code development. 
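The 7.8x figure below follows from comparing winner-level coverage across the two approaches, using the times reported in this post:

```python
# Time to reach winner-level coverage, from the post's figures.
conventional_minutes = 10.2 * 60   # 612 minutes for ~94% coverage, code-based
visual_ai_minutes    = 72 + 6      # 78 minutes for 100% coverage with Visual AI

speedup = conventional_minutes / visual_ai_minutes   # ~7.8x faster
```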

In Conclusion

Conventional wisdom says you should spend the time needed to be sufficiently thorough. So, should you be writing code-based assertions to validate your functional tests? In a real-world setting, our data shows that, compared with Visual AI, code-based assertions slow down releases because quality engineers must spend far more time to provide sufficient coverage. As a practical matter, your test team suffers: the quality-time trade-off makes them either a bottleneck or a source of quality concerns.

By including Visual AI in their approach, quality engineers obtain the same amount of coverage 7.8x faster. Even better, they can use this found time to better manage quality.

Ready to Try It Yourself?

The data we collected from Hackathon participants makes one clear point: no matter your level of coding expertise, using visual AI makes you faster. 

Each of the testing use cases in the hackathon – from handling dynamic data to testing table sorts – requires you to use your best coding skills. We found that even the most diligent engineers encountered challenges as they developed test code for the use cases.

For example, while your peers can easily apply test conditions to filter a search or sort a table, many of them labor to grab data from web element locators, calculate expected responses, and report results accurately when tests fail. 

We encourage you to read The Impact of Visual AI on Test Automation. We also encourage you to sign up for a free Applitools account and try out these tests for yourself.  But, if you do nothing else, just check out the five cases we researched and ask yourself how frequently you encounter these cases in your own test development. We think you will conclude – just as our test data shows – that Visual AI can help you do your job more easily, more comprehensively, and faster.

Cover Photo by Shahadat Rahman on Unsplash

The post Upskill and Create New Tests Much Faster – The Impact of Visual AI appeared first on Automated Visual Testing | Applitools.

]]>
Ask 288 Of Your Peers About Visual AI https://applitools.com/blog/ask-your-peers-about-visual-ai/ Tue, 14 Apr 2020 14:39:18 +0000 https://applitools.com/?p=17345 How do you find out about what works? Ask your peers. So, why not ask your peers about Visual AI? It’s a difficult time. We all know why, so I...

The post Ask 288 Of Your Peers About Visual AI appeared first on Automated Visual Testing | Applitools.

]]>

How do you find out about what works? Ask your peers. So, why not ask your peers about Visual AI?

It’s a difficult time. We all know why, so I won’t dwell on it other than to wish you and yours health and safety above all else. What I will dwell on is the human need to retreat and replenish. Trapped at home, I’ve found myself learning to cook with Thomas Keller, exploring the universe with Neil deGrasse Tyson, or entertaining like Usher through Masterclass.com. My kids are coding their own games and learning about the history of art at Khan Academy. These entertaining explorations not only give us a much-needed break, but they also give us an opportunity to learn and grow even as we struggle with the realities around us. It’s a welcome and much-needed distraction.

With that sentiment in mind – here’s an idea for you. Why not learn about Visual AI (Artificial Intelligence) from 288 of your fellow quality engineers?


Each one of them spent 11 hours on average comparing their current test framework of either Cypress, Selenium, or WebdriverIO to that same framework modernized through Visual AI. You can get a summary of what they learned here. Even better, you can take the same free Test Automation University course on Modern Test Automation Through Visual AI and do it all yourself through video tutorials and hands-on learning. Either way, you will find yourself blissfully distracted while learning a cutting-edge approach to test automation.


288 Testers. 11 Hours Each. That’s 1.5 Years of Quality Engineering Effort!

Yes – we were blown away by the enthusiasm for learning Visual AI among the testing community. It says a lot about this group of individuals, who recognize the need to keep pushing themselves. In the end, they created the industry’s largest, highest-quality, freely available data set for understanding the impact of Visual AI on test automation – and ultimately its impact on quality management and release velocity for modern applications. It’s an amazing amount of learning, highly representative of the world of test automation.

We had representation from major test frameworks:


Representation from major languages:

Representation from 101 countries around the world


Why Should You Learn Visual AI? Ask your peers.

I get it. Quality engineers always seem to be on a treadmill to learn everything. You have new application development frameworks, new coding structures, new test frameworks, and new tools rumbling your way daily. If you plan to learn one more thing, you need a return on your time.

But, let’s face it – testing needs to keep up with the pace of the business. Survey data tells us that the majority of software teams are struggling with their quality engineering efforts. In a recent survey, 68% of teams cited quality management as a key blocker to more agile releases and ultimately CI/CD.

Why? For every test with a handful of conditions and an action, test writers need to write dozens to hundreds of code-based assertions to validate a single response. Traditional frameworks simply don’t have the technical ability to provide front-end functional and visual test automation coverage with the speed and efficiency you need. You end up writing and maintaining too much test code, only to see bugs still escape. It’s maddening and, even worse, it prevents us from doing our core job of managing app quality.

Isn’t AI Just Smoke and Mirrors?

The answer depends on your application. AI promises to solve many modern technical problems, including testing and quality management problems, but it’s hard to separate hype from reality in what actually works. Many AI experiments in testing have failed, or the AI approach requires you to “rip and replace” your existing tech stack – a dreaded approach that is unrealistic for most teams.

Rather than asking you to simply trust that Visual AI is different, we decided to prove it, objectively, using real-world examples, in partnership with real quality engineers at real companies dealing with test automation every day.


Gathering Learning – The Visual AI Rockstar Hackathon

To generate all this learning, we built an application involving five common but complex use cases. In November 2019, we issued a challenge to testers all over the world to compete, and learn, by comparing test approaches side-by-side. The competitors created test suites for each of the five use cases using their preferred code-based approach, including Selenium, Cypress, and WebdriverIO. These same quality engineers then repeated the process for the exact same five use cases using Visual AI from Applitools.

To make it fun and push people to do their absolute best, testers competed for 100 prizes worth a total of $42,000. We judged their submissions on their ability to:

  • Provide test coverage on all use cases
  • Successfully run these tests, and
  • Most importantly catch all potential bugs

using both testing approaches.

You can learn about the 100 winners here.


Your Takeaways

The data we collected from Hackathon participants makes one clear point: using Visual AI makes you more efficient. You gain this efficiency no matter your level of experience.

Each Hackathon use case – from handling dynamic data to testing table sorts – requires you to apply your best coding skills. We found that even the most diligent engineers encountered challenges as they developed test code for the use cases.

For example, many of your peers can easily apply test conditions to filter a search or sort a table. Many can grab data from web element locators. However, many of them struggle to calculate expected responses consistently. And, many have challenges creating accurate reports when tests fail.

We encourage you to review the Hackathon report and results. We also encourage you to sign up for a free Applitools account and try out these tests for yourself.  But, if you do nothing else, just check out the five cases. Ask yourself how frequently you encounter these cases in your own test development. We think you will conclude – just as our test data shows – that Visual AI can help you do your job more easily, more comprehensively, and faster.

To Read, Learn, and Do

James Lamberti is CMO at Applitools.

The post Ask 288 Of Your Peers About Visual AI appeared first on Automated Visual Testing | Applitools.

]]>
How To Ace High-Performance Test for CI/CD https://applitools.com/blog/how-to-ace-high-performance-test-for-ci-cd/ Thu, 26 Mar 2020 15:14:00 +0000 https://applitools.com/?p=17388 If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters. Priyanka Halder presented her approach...

The post How To Ace High-Performance Test for CI/CD appeared first on Automated Visual Testing | Applitools.

]]>

If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters.

Priyanka Halder presented her approach to achieving success in a hyper-growth company through her webinar for Applitools in January 2020. The title of her speech sums up her experience at GoodRx:

“High-Performance Testing: Acing Automation In Hyper-Growth Environments.”

Hyper-growth environments focus on speed and agility. Priyanka focuses on the approach that lets GoodRx not only develop but also test features and releases while growing at an exponential rate.

About Priyanka

Priyanka Halder is head of quality at GoodRx, a startup focused on finding all the providers of a given medication for a patient – including non-brand substitutes – and helping over 10 million Americans find the best prices for those medications.  Priyanka joined in 2018 as head of quality engineering – with a staff of just one quality engineer. She has since grown the team 1200% and grown her team’s capabilities to deliver test speed, test coverage, and product reliability. As she explains, past experience drives current success.

Priyanka’s career includes over a dozen years of test experience at companies ranging from startups to billion-dollar companies. She has extensive QA experience in managing large teams and deploying innovative technologies and processes, such as visual validation, test stabilization pipelines, and CICD. Priyanka also speaks regularly at testing and technology conferences. She accepted invitations to give variations of this particular talk eight times in 2019.

One interesting note: she says she would like to prove to the world that 100% bug-free software does not exist.

Start With The Right Foundation

Three Little Pigs

Priyanka, as a mother, knows the value of stories. She sees the story of the Three Little Pigs as instructive for anyone trying to build a successful test solution in a hyper-growth environment. Everyone knows the story: three pigs each build their own home to protect themselves from a wolf. The first little pig builds a straw house in a couple of hours. The second little pig builds a home from wood in a day. The third little pig builds a solid infrastructure of brick and mortar – and that took a number of days. When the wolf comes to eat the pigs, he can blow down the straw house and the wood house, but the solid house saves the pigs inside.

Priyanka shares from her own experience: she has encountered many wolves in hyper-growth environments, and the only safeguard is a strong foundation. In her talk, she describes what a hyper-growth environment looks like, how high-performance testing works, the technology and team it requires, and what she delivered (and continues to deliver) at GoodRx.

Define High-Performance Testing

So, what is high-performance testing?

Fundamentally, high-performance testing maximizes quality in a hyper-growth startup. To succeed, she says, you must embrace the ever-changing startup mentality, be one step ahead, and constantly provide high-quality output without being burned out.

Agile startups share many common characteristics:

  • Chaotic – you need to be comfortable with change
  • Less time – all hands on deck all the time for all the issues
  • Fewer resources – you have to build a team where veterans are mentors and not enemies
  • Market pressure – teams need to understand and assess risk
  • Reward – do it right and get some clear benefits and perks

If you do it right, it can lead to satisfaction. If you do it wrong, it leads to burnout. So – how do you do it right?

Why High-Performance Testing?

Leveraging data collected by another company, Priyanka showed how the technology for app businesses has changed drastically over the past decade. These differences include:

  • Scope – instead of running a dedicated app, or on a single browser, today’s apps run on multiple platforms (web app and mobile)
  • Frequency – we release apps on demand (not annually, quarterly, monthly or daily)
  • Process – we have gone from waterfall to continuous delivery
  • Framework – we used to use single-stack, on-premise software; today we use open-source, best-of-breed, cloud-based solutions for developing and delivering.

The assumptions of “test last” that may have worked a decade back can’t work anymore. So, we need a new paradigm.

How To Achieve High-Performance Testing

Priyanka talked about her own experience. Among other things, teams need to know that they will fail early as they try to meet the demands of a hyper-growth environment. Her approach, based on her own experiences, is to ask questions:

  • Does the team appreciate that failures can happen?
  • Does the team have inconsistencies? Do they have unclear requirements? Set impossible deadlines? Use waterfall while claiming to be agile? Note those down.

Once you know the existing situation, you can start to resolve contradictions and issues. For example, you can use a mind map to visualize the situation. You can divide issues and focus on short term work (feature team for testing) vs. long term work (framework team). Another important goal – figure out how to find bugs early (aka Shift Left). Understand which tools are in place and which you might need. Know where you stand today vis-a-vis industry standards for release throughput and quality. Lastly, know the strength of your team today for building an automation framework, and get AI and ML support to gain efficiencies.

Building a Team

Next, Priyanka spoke about what you need to build a team for high-performance testing.

In the past, we used to have a service team. They were the QA team and had their own identity. Today, we have true agile teams, with integrated pods where quality engineers are the resource for their group and integrate into the entire development and delivery process.

So, in part you need skills. You need engineers who know test approaches that can help their team create high-quality products. Some need to be familiar with behavior-driven design or test-driven design. Some need to know the automation tools you have chosen to use. And, some need to be thinking about design-for-testability.

One huge part of test automation involves framework. You need a skill set familiar with building code that self-identifies element locators, builds hooks for automation controls, and ensures consistency between builds for automation repeatability.
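
To make that concrete, here is a minimal sketch of the declarative-locator idea – element locators defined once, next to the page they belong to, so tests never embed raw XPath strings. The class and locator names are hypothetical, not GoodRx's code; in a real framework the driver would be Selenium's WebDriver.

```python
class Locator:
    """Descriptor that resolves a web element lazily, on first access."""
    def __init__(self, by, value):
        self.by, self.value = by, value

    def __get__(self, page, owner):
        if page is None:
            return self
        # Delegate the lookup to whatever driver the page carries.
        return page.driver.find_element(self.by, self.value)


class BasePage:
    def __init__(self, driver):
        self.driver = driver


class LoginPage(BasePage):
    # Locators are declared in one place, so a markup change
    # means editing one line rather than hunting through tests.
    username = Locator("id", "username")
    submit = Locator("xpath", "//button[@type='submit']")
```

A test then reads `LoginPage(driver).submit.click()`, and the locator strings stay out of the test body entirely.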

Beyond skills, you need individuals with confidence and flexibility. They need to meld well with the other teams. In a truly agile group, team members distribute themselves through the product teams as test resources. While they may connect to the main quality engineering team, they still must be able to function as part of their own pod.

Test Automation

Priyanka asserts that good automation makes high-performance testing possible.

In days gone by, you might have bought tools from a single vendor. Today, open source solutions provide a rich source for automation solutions. Open source generally has lower maintenance costs, generally lets you ship faster, and expands more easily.

Open source tools come with communities of users who document best practices for using those tools. You might even learn best-practice processes for integrating with other tools. The communities give you valuable lessons so you can learn without having to fail (or learn from the failures of others).

Priyanka describes aspects of software deployment processes that you can automate.  Among the features and capabilities you can automate:

  • Assertions on Action
  • Initialization and Cleanup
  • Data Modeling/Mocking
  • Configuration
  • Safe Modeling Abstractions
  • Wrappers and Helpers
  • API Usage
  • Future-ready Features
  • Local and Cloud Setups
  • Speed
  • Debugging Features
  • Cross Browser
  • Simulators/Emulators/Real Devices
  • Built-in reporting or easy to plug in
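
Several of these capabilities – initialization and cleanup in particular – map directly onto fixtures in a standard test runner. A minimal Python sketch using the standard library's `unittest` (the session dict here is a stand-in for a real browser session and seeded test data):

```python
import unittest


class CheckoutTests(unittest.TestCase):
    """Illustrative only: setUp/tearDown centralize initialization
    and cleanup so each test stays focused on its assertions."""

    def setUp(self):
        # In a real suite this would start a browser and seed data.
        self.session = {"cart": []}

    def tearDown(self):
        # Cleanup runs even when a test fails, keeping runs independent.
        self.session.clear()

    def test_add_to_cart(self):
        self.session["cart"].append("sku-123")
        self.assertEqual(len(self.session["cart"]), 1)
```

Because setup and teardown are automated, every test starts from a known state – one of the cheapest wins on the list above.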

Industry Standards

You can measure all sorts of values from testing. Quality, of course. But what else? What are the standards these days? What are typical run times for automated tests?

Priyanka shares data from Sauce Labs about standards. Sauce surveyed a number of companies and discussed benchmark settings for four categories: test quality, test run time, test platform coverage, and test concurrency. The technical leaders at these companies set benchmarks they thought aligned with best-in-class industry standards.

In detail:

  • Quality – pass at least 90% of all tests run
  • Run Time – average of all tests run two minutes or less
  • Platform Coverage – tests cover five critical platforms on average
  • Concurrency – at peak usage, tests utilize at least 75% of available capacity

Next, Priyanka shared the data Sauce collected from the same companies about how they fared against the average benchmarks discussed.

  • Quality – 18% of the companies achieved 90% pass rate
  • Run time – 36% achieved the 2 minute or less average
  • Platform coverage – 63% reached the five-platform coverage mark
  • Concurrency – 71% achieved the 75% utilization mark
  • However, only 6.2% of the companies achieved the mark on all four.
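
A quick back-of-the-envelope check on those numbers (assuming all four rates were measured over the same pool of companies): if the four capabilities were statistically independent, far fewer than 6.2% would hit every mark.

```python
# Share of companies meeting each individual benchmark, per Sauce Labs.
quality, run_time, platforms, concurrency = 0.18, 0.36, 0.63, 0.71

# Under independence, the share hitting all four would be the product:
independent_estimate = quality * run_time * platforms * concurrency
print(round(independent_estimate * 100, 1))  # prints 2.9

# The observed 6.2% is roughly double that, which suggests the
# capabilities cluster: teams strong on one benchmark tend to be
# strong on the others too.
observed = 0.062
```

Either way, the headline holds: only a small fraction of companies hit all four benchmarks at once.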

Test speed became a noticeable issue. While 36% ran on average in two minutes or faster, a large number of companies exceeded five minutes – more than double.

Investigating Benchmarks

These benchmarks are fascinating – especially run time – because test speed is key to faster overall delivery. The longer you have to wait for testing to finish, the slower your dev release cycle times.

Sadly, lots of companies think they’re acing automation, but so few are meeting key benchmarks. Just having automation doesn’t help. It’s important to use automation that helps meet these key benchmarks.

Another area worth investigating involves platform coverage. While Chrome remains everyone’s favorite browser, not everyone is on Chrome.  Perhaps 2/3 of users run Chrome, but Firefox, Safari, Edge and others still command attention. More importantly, lots of companies want to run mobile, but only 8.1% of company tests run on mobile. Almost 92% of companies run desktop tests and then resize their windows for the mobile device.  Of the mobile tests, only 8.9% run iOS native apps and 13.2% run Android native apps. There’s a gap at a lot of companies.

GoodRx Strategies

Priyanka dove into the capabilities that allow GoodRx to solve its high-performance testing challenges.

Test In Production

The first capabilities GoodRx uses a Shift Right approach that moves testing into the realm of production.

Production testing? Yup – but it’s not spray-and-pray. GoodRx’s approach includes the following:

  • Feature Flag – Test in production. Ship fast, test with real data.
  • Traffic Allocation – gradually introduce new features and empower targeted users with data. Hugely important for finding corner cases without impacting the entire customer base.
  • Dogfooding – use a CDN like Fastly to deploy new features and route internal users to them.
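
Traffic allocation like this is commonly implemented by hashing each user into a stable bucket, so the same user always sees the same variant while the rollout percentage ramps up. A minimal sketch (the function and names are hypothetical, not GoodRx's system):

```python
import hashlib


def in_rollout(user_id, feature, percent):
    """Deterministically decide whether a user is in a gradual rollout.

    Hashing (feature, user) yields a stable bucket in [0, 100), so
    widening the rollout from 10% to 50% never reshuffles users who
    were already in.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent


# Ramp a hypothetical feature to 10% of users, then later to 50%,
# without kicking anyone out of the earlier cohort.
early_cohort = [u for u in ("u1", "u2", "u3") if in_rollout(u, "new-checkout", 10)]
```

The deterministic bucket is what makes "find corner cases without impacting the entire customer base" workable: the affected population is small, stable, and reproducible.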

The net result: this approach reduces overhead, lets the app be tested with real data, and identifies issues without impacting the entire customer base. So the big release becomes a set of small releases on a common code base, tested by different people, ensuring the bulk of your customer base doesn’t get a rude awakening.

AI/ML

Next, Priyanka talked about how GoodRx uses AI/ML tools to augment her team. These tools make her team more productive – allowing her to meet the quality needs of the high-performance environment.

First, Priyanka discussed automated visual regression – using AI/ML to automate the validation of rendered pages. Here, she talked about using Applitools – as she says, the acknowledged leader in the field. Priyanka talked about how GoodRx uses Applitools.

At GoodRx, there may be one page used for a transaction. But, GoodRx supports hundreds of drugs in detail, and a user can dive into those pages that describe the indications and cautions about individual medications.  To ensure that those pages remain consistent, GoodRx validates these pages using Applitools. Trying to validate these pages manually would take six hours. Applitools validates these pages in minutes and allows GoodRx to release multiple times a day.

To show this, Priyanka used an example of visual differences: a kids’ cartoon with deliberate differences between two versions. Then she showed what happens if you do a normal, pixel-based image comparison.

A bit-wise comparison will fail too frequently. Using the Applitools AI system, the team can compare new renders against images that have already been approved and quickly validate the pages being tested.
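
The brittleness is easy to demonstrate with a toy grayscale "image" as a list of pixel rows. This tolerance check is only a stand-in to show the failure mode; Visual AI does far more than thresholding.

```python
baseline = [
    [0, 0, 255, 0],
    [0, 0, 255, 0],
]
# The same page re-rendered: anti-aliasing nudged one pixel's intensity.
rendered = [
    [0, 0, 254, 0],
    [0, 0, 255, 0],
]


def pixels_identical(a, b):
    """Bit-wise comparison: any single-pixel change fails the test."""
    return a == b


def roughly_equal(a, b, tolerance=2):
    """Tolerant comparison: ignore differences the eye can't see."""
    return all(
        abs(pa - pb) <= tolerance
        for row_a, row_b in zip(a, b)
        for pa, pb in zip(row_a, row_b)
    )


print(pixels_identical(baseline, rendered))  # False – fails on invisible noise
print(roughly_equal(baseline, rendered))     # True – difference is invisible
```

The pixel-exact check flags a change no human would ever see, which is exactly why pixel-based suites drown teams in false positives.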

Applitools completes a full visual regression of 350 test cases – about 2,500 checks – in under 12 minutes. Manually, the same validation takes six hours.

Priyanka showed the kinds of real-world bugs that Applitools uncovered: one screenshot from her own site GoodRx, a second from amazon.com, and a third from macys.com. Each showed a corrupted display that Selenium alone could not catch.

ReportPortal.io

Next, Priyanka moved on to ReportPortal.io. As she says, when you ace automation, you need to know where you stand, and you build trust in your automation platform by showing how it behaves. ReportPortal.io aggregates all your data – test times, bugs discovered, and more – and shows how tests are running at different times of the day. Another display surfaces the flakiest and longest-running tests to help the team release seamlessly and improve their statistics.

Any failed test case links its test results log directly into the ReportPortal.io user interface.

GoodRx uses behavior-driven design (BDD), and their BDD approach lets them describe the behavior they want for a given feature – how it should behave in good and bad cases – and ensure that those cases get covered.

High-Performance Testing – The Burnout

Priyanka made it clear that high-performance environments take a toll on people. Everywhere.

She showed a slide referencing a blog by Atlassian talking about work burnout symptoms – and prevention. From her perspective, the symptoms of workplace stress include:

  • Being cynical or critical at work
  • Dragging yourself to work and having trouble getting started
  • Feeling irritable or impatient, lacking energy, finding it hard to concentrate, getting headaches
  • Lacking satisfaction from achievements
  • Using food, drugs, or alcohol to feel better, or simply to not feel

So, what should a good team lead do when she notices signs of burnout? Remind people to take steps to prevent burnout. These include:

  • Avoid unachievable deadlines. Don’t take on too much work. Estimate, add buffer, add resource.
  • Do what gives you energy – avoid what drains you
  • Manage digital distraction – the grass will always be greener on the other side
  • Do something outside your work – Engage in activities that bring you joy
  • Say no to too many projects – gauge your bandwidth and communicate
  • Make self-care a priority – meditation/yoga/massage
  • Have a strong support system – talk to your family and friends, and seek help
  • Unplug for short periods – it helps immensely

The point here is that hyper-growth environments can take a toll on everyone – employees, managers. Unrealistic demands can permeate the organization. Use care to make sure that this doesn’t happen to you or your team.

GoodRx Case Study

Why not look at Priyanka’s direct experience at GoodRx? Her employer, GoodRx, provides price transparency for drugs. GoodRx lets individuals search for drugs they might need or use for various conditions. Once an individual selects a drug, GoodRx shows the prices for that drug in various locations, so the individual can find the best price.

The main customers are people who don’t have insurance or have high-deductible insurance. In some cases, GoodRx offers coupons to keep prices low. GoodRx also provides GoodRx Care – a telemedicine consultation system – to help answer patient questions about drugs. Rather than requiring a doctor’s office visit, GoodRx Care charges between $5 and $20 for a consultation.

Because the GoodRx web application provides high value for its customers, often with high demand, the app must maintain proper function, high performance, and high availability.

Set Goals

The QA goals Priyanka designed needed to meet the demands of this application. Her goals included:

  • A distributed QA team providing 24/7 QA support
  • A dedicated SDET team that specializes in testing
  • A robust framework that makes any POC super simple (plug and play)
  • A test stabilization pipeline using Travis
  • 100% automation support to reduce regression time by 90%

Build a Team

As a result, Priyanka needed to hire a team that could address these goals. She showed the profile she developed on LinkedIn to find people who met her criteria – dev-literate, test-literate engineers who could work together as a team and function successfully. Emphasis on test automation and coding ability rose to the top.

Build a Tech Stack

Next, Priyanka and her team invested in a tech stack:

  • Python and Selenium WebDriver
  • Behave for BDD
  • Browserstack for a cloud runner
  • Applitools for visual regression
  • Jenkins/Travis and Google Drone for CI
  • Jira, TestRail for documentation

Their CI/CD success criteria came down to four requirements:

  • Speed and parallelization
  • BDD for easy debug and read
  • Cross-browser cross-device coverage in CICD
  • Visual validation

Set QA expectations for CI/CD testing

Finally, Priyanka and her team had to set expectations for testing.  How often would they test? How often would they build?

QA for CI/CD means that test and build become asynchronous. Regardless of the build state:

  • Hourly: QA runs 73 tests against the latest build to sanity check the site.
  • On build: any new build runs 6 cross-browser tests to make sure all critical business paths are covered.
  • Nightly: 300 regression tests run on top of the other tests.
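
Expressed as data, that cadence can drive suite selection in a CI pipeline. The suite counts come from the talk; the selection helper and its names are a hypothetical sketch, not GoodRx's actual configuration.

```python
SUITES = {
    "hourly":   {"tests": 73,  "purpose": "sanity check the latest build"},
    "on_build": {"tests": 6,   "purpose": "cross-browser critical paths"},
    "nightly":  {"tests": 300, "purpose": "full regression"},
}


def suites_for(trigger):
    """Return the suites a given pipeline trigger should run."""
    if trigger == "nightly":
        # Nightly regression runs on top of the other suites.
        return ["hourly", "on_build", "nightly"]
    return [trigger]
```

Keeping the cadence in one data structure means the hourly, on-build, and nightly triggers can't drift apart as the suites grow.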

Some of these were starting points, but most got refined over time.

Priyanka’s GoodRx Quality Timeline

Next, Priyanka talked about how her team grew from the time she joined until now.

She started in June 2018. At that point, GoodRx had one QA engineer.

  • In her first quarter, she added a QA Manager, QA Analyst, and a Senior SDET. They added offshore resources to support releases.
  • By October 2018 they had fully automated P0/P1 tests. Her team had added Spinnaker pipeline integration. They were running cross-browser testing with real mobile device tests.
  • By December 2018 she added two more QA Analysts and one more SDET. Her team’s tests fully covered regression and edge cases.
  • And, she pressed on. In early 2019, they had built automation-driven releases. They had added Auth0 support – her team was hyper-productive.
  • Then, she discovered her team had started to burn out. Two of her engineers quit. This was an eye-opening time for Priyanka. Her lessons about burnout came from this period, and she learned how to manage her team through it.

By August 2019 she had the team back on an even keel and had hired three QA engineers and one more SDET.

And, in November 2019 they achieved 100% mobile app automation support.

GoodRx Framework for High-Performance Testing

Finally, Priyanka gave a peek into the GoodRx framework, which helps her team build and maintain test automation.

The browser base class provides access for test automation. Using the browser base class eliminates the need to embed raw Selenium calls, like click, directly in test code.

The page class simplifies web element location: it assigns a unique XPath to each web element, so automation benefits from clean, consistent locators.

The element wrapper class handles behaviors like lazy loading. Instead of programming exception handling into the test code, the element wrapper standardizes interaction between the browser under test and the test infrastructure.
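
A sketch of what such a wrapper might look like – one retry loop absorbing lazy-loading flakiness instead of try/except scattered through tests. The class name, retry policy, and `find` callable are illustrative, not GoodRx's implementation.

```python
import time


class ElementWrapper:
    """Wrap element interaction in a single, standard retry loop.

    `find` is any callable that returns the live element or raises
    (for example, a stale-element or not-yet-rendered error).
    """

    def __init__(self, find, attempts=3, delay=0.0):
        self.find, self.attempts, self.delay = find, attempts, delay

    def click(self):
        last_error = None
        for _ in range(self.attempts):
            try:
                # Re-locate on every attempt so a stale handle heals itself.
                return self.find().click()
            except Exception as exc:
                last_error = exc
                time.sleep(self.delay)
        raise last_error
```

Because the retry policy lives in one class, tightening or loosening it is a one-line change rather than a sweep through every test.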

Finally, for every third-party application or tool that integrates via an SDK, like Applitools, GoodRx deploys an SDK wrapper. As one of her SDETs figured out, the wrapper ensures that an SDK change from a third party can’t break your test behavior. Using a wrapper is good practice for handling situations where a service you depend on encounters something unexpected.
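
The pattern is easy to sketch: the suite talks only to the wrapper, so a breaking change or outage in the vendor SDK is contained in one place. `VendorEyes` below is a stand-in for a real third-party SDK; all names here are hypothetical, not GoodRx's or Applitools' actual API.

```python
class VendorEyes:
    """Stand-in for a third-party SDK the test suite depends on."""

    def check_window(self, tag):
        return f"checked:{tag}"


class VisualCheck:
    """Wrapper: the only place the suite touches the vendor SDK."""

    def __init__(self, sdk=None):
        self.sdk = sdk or VendorEyes()

    def check(self, tag):
        try:
            return self.sdk.check_window(tag)
        except Exception:
            # Degrade gracefully: report the failure rather than letting
            # a vendor-side change or outage fail every functional test.
            return None
```

If the vendor renames `check_window` tomorrow, only `VisualCheck.check` needs updating – not every test that performs a visual check.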

The framework results in a more stable test infrastructure that can rapidly change to meet the growth and change demands of GoodRx.

Conclusions

Hyper-growth companies put demands on their quality engineers to achieve quickly. Test speed matters, but it cannot be achieved consistently without investment. Just as Priyanka started with the story of the Three Little Pigs, she made clear that success requires investment in automation, people, AI/ML, and framework.

To watch the entire webinar:

For More Information

The post How To Ace High-Performance Test for CI/CD appeared first on Automated Visual Testing | Applitools.

]]>