C# Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/c/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

Announcing the $50,000 Ultrafast Cross Browser Hackathon
https://applitools.com/blog/ultrafast-cross-browser-hackathon/
Tue, 09 Jun 2020


Ready For a Challenge?

If you are up for a challenge and enjoy learning new skills, join the world's best quality engineers between now and June 30th to compete in the industry's first next-generation cross-browser, cross-device, and cross-operating-system hackathon. Focused on the use of Visual AI and the Ultrafast Grid, this virtual event seeks to educate and upskill developers and test automation engineers all over the world. Test at incredible speeds, deliver higher quality software faster than ever, and earn a chance to win the $5,000 Diamond Prize.


Over 500 Winners. $50,000 in Cash Prizes.

As long as you are among the first 5,000 to qualify, you are eligible to win one of 500 prizes worth up to $5,000 – that's at least a 10% chance to win! Since this hackathon is about testing at incredible speeds, the first 500 to make a qualifying submission also earn a $25 ultrafast submission prize. Even better, you become eligible for one of the 100 cash prizes listed below if our panel of expert judges determines your test suites did the best job of providing efficient coverage and catching all the bugs.

As of June 8, almost 2,000 people have signed up, and we have been receiving initial submissions. If you want to qualify for an ultrafast submission prize, you still have time.

(Image: Ultrafast Grid hackathon prize table)

How Does the Hackathon Work?

Software developers, quality engineers, and QA professionals will compete for $50,000 in cash prizes. If you qualify, you will be challenged to author cross-browser and cross-device tests against a real-world app using both your preferred legacy cloud testing solution and Applitools Ultrafast Grid powered by Visual AI. Contestants are free to use any major test framework, such as Cypress, Selenium, WebdriverIO, or TestCafe, and to do so in their preferred language, including Java, JavaScript, Python, Ruby, or C#.

Here is what you need to do:

  1. Apply here for access. Once you qualify, you will get access to the hackathon application, instructions on how to complete the challenge, and full access to Applitools Visual AI and Ultrafast Grid.
  2. Submit when you’re ready. The instructions will guide you. We expect most submissions to take 4 to 6 hours to complete. There is plenty of help to get it done if you need it!
  3. Your submission will be judged by a panel of experts. Those submissions that do the best job of catching all the bugs and doing so in the most efficient way possible will win.
  4. All submissions are due by June 30th, 2020 at 11:59pm PT. No exceptions!
  5. Winners will be announced no later than August 1st, 2020.

That’s it! So why wait? Get started today.

The Next Generation of Cross Browser Testing is Ultrafast.

Our hackathons were created to make a point: there is a better way to automate your testing. Browsers do not suffer from the same executional bugs that plagued them five, 10, or 20 years ago. What does create problems – lots of problems – is the rendering of the application across various viewports and screens. This reality means a major shift in how you need to test, and by competing you will see for yourself what we mean.

In the Ultrafast Cross Browser Testing Hackathon, even more valuable than the prizes you might win is the learning you will gain from competing. If you take on this challenge, you will learn how next-generation cross-browser testing works. If you want a quick summary, read this blog post on next-generation cross-browser testing.

We’ve Done This Before. The Visual AI Rockstar Hackathon.

In November 2019, the Visual AI Rockstar Hackathon was a huge success. Almost 3,000 quality engineers participated and the response was overwhelmingly positive. Here is what some of our winners had to say about their experience:

(Image: three quotes from previous hackathon winners)

We expect this one to be even bigger, so what’s stopping you?

Take The Challenge

Why participate in the Applitools Cross Browser Testing Hackathon?

First, you will learn new skills. You get hands-on experience seeing how easily you can run tests once and evaluating behavior across the browsers that matter to your customers.

Second, you experience a new way of running application validation. If you have your own multi-browser lab today, or if you use a third-party service that runs multiple tests on multiple setups in parallel, you can see the difference when running the Applitools Ultrafast Grid in comparison. And if you have not considered running tests across multiple browsers – due to cost or complexity – you can reevaluate your decision.

Finally, you can win prizes and bragging rights as a hackathon winner. We will proudly display your name on our website to show the world. Your success will demonstrate your engineering acumen to your peers and anyone else who matters to you.

Your opportunity to learn something new and stand out in a crowd awaits. Sign up now.

Five Data-Driven Reasons To Add Visual AI To Your End-To-End Tests
https://applitools.com/blog/add-visual-ai/
Thu, 14 May 2020


Do you believe in learning from the experiences of others? If others found themselves more productive adding Visual AI to their functional tests, would you give it a try?

In November 2019, over 3,000 engineers signed up to participate in the Applitools Visual AI Rockstar Hackathon. 288 completed the challenge and submitted tests – comparing their use of coded test validation with the same tests using Visual AI. They found themselves with better coverage, faster test development, more stable test code, and easier test code maintenance.

On April 23, James Lamberti, CMO at Applitools, and Raja Rao DV, Director of Growth Marketing at Applitools, discussed the findings from the Applitools Hackathon submissions. The 288 engineers who submitted their test code for evaluation by the Hackathon team spent an average of 11 hours per submission. That's over 3,000 person-hours – the equivalent of 1½ years of engineering work.

Over 3,000 participants signed up. They came from around the world.


They used a variety of testing tools and a range of programming languages.


In the end, they showed some pretty amazing results from adding Applitools Visual AI to their existing test workflow.

Describing the Hackathon Tests

Raja described the tests that made up the Hackathon.


Each test involved a side-by-side comparison of two versions of a web app. In one version, the baseline, the page rendered correctly. In the other version, the new candidate, the page rendered with errors. This would simulate the real-world issues of dealing with test maintenance as apps develop new functionality.

Hackathon participants had to write code that did the following:

  • Ensure the page rendered as expected on the baseline.
  • Capture all mistakes in the page rendering on the new candidate.
  • Report on all the differences between the baseline and the new candidate.

Also, Hackathon participants needed to realize that finding a single error on a page met a necessary – but not sufficient – condition for testing. A single test that captures all the problems at once resolves faster than multiple bug capture/fix loops. Test engineers needed to write tests that captured all the test conditions, as well as properly reported all the failures.

Hackathon participants would code their test using a conventional test runner plus assertions of results in the output DOM. Then, they used the same test runner code but replaced all their assertions with Applitools Visual AI comparisons.

To show these test results, he used the GitHub repository of Corina Zaharia, one of the platinum Hackathon winners.

At this point, Raja walked through each of the test cases.

CASE 1 – Missing Elements

Raja presented two web pages. One was complete. The other had missing elements. Hackathon participants had to find those elements and report them in a single test.


To begin coding tests, Corina started with the baseline. She identified each of the HTML elements and ensured that their text identifiers existed. She wrote assertions for every element on the page.
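
For a sense of what that looks like, here is a C# sketch of the assertion-per-element style – the URL, locators, and expected strings are illustrative rather than Corina's actual code:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class LoginPageElementTests
    {
        [Test]
        public void AllLoginElementsRender()
        {
            using IWebDriver driver = new ChromeDriver();
            driver.Navigate().GoToUrl("https://demo.applitools.com/hackathon.html");

            // One assertion per element on the page...
            Assert.AreEqual("Login Form", driver.FindElement(By.TagName("h4")).Text);
            Assert.IsTrue(driver.FindElement(By.Id("username")).Displayed);
            Assert.AreEqual("Enter your username",
                driver.FindElement(By.Id("username")).GetAttribute("placeholder"));
            Assert.IsTrue(driver.FindElement(By.Id("password")).Displayed);
            Assert.AreEqual("Enter your password",
                driver.FindElement(By.Id("password")).GetAttribute("placeholder"));
            Assert.IsTrue(driver.FindElement(By.Id("log-in")).Displayed);
            // ...and more of the same for the icons, labels, checkbox, and social links.
        }
    }

Every element needs its own locator, and every locator is something a developer can break.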


In evaluating submissions, judges ensured that the following differences got captured:

  1. The title changed.
  2. The username icon was missing.
  3. The password icon was missing.
  4. The username placeholder changed.
  5. The password label was wrong.
  6. The password placeholder changed.
  7. There was extra space next to the check box.
  8. The Twitter icon had moved.
  9. The Facebook icon had moved.
  10. The LinkedIn icon was missing.

Capturing this page required identifying element locators and validating locator values.

In comparison, adding Visual AI required only three instructions:

  • Open a capture session
  • Capture the page with an eyes.checkWindow() command
  • Close the capture session

No identifiers needed – Applitools captured the visual differences.

With much less coding, Applitools captured all the visual differences. And, test maintenance takes place in Applitools.
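
To make that concrete, here is a minimal C# sketch of the three-step flow using the classic Applitools Eyes Selenium SDK – the app name, test name, and URL are placeholders:

    using System;
    using System.Drawing;
    using Applitools.Selenium;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    IWebDriver driver = new ChromeDriver();
    var eyes = new Eyes { ApiKey = Environment.GetEnvironmentVariable("APPLITOOLS_API_KEY") };
    try
    {
        // 1. Open a capture session.
        eyes.Open(driver, "Hackathon App", "Login Page Elements", new Size(1200, 800));
        driver.Navigate().GoToUrl("https://demo.applitools.com/hackathon.html");

        // 2. Capture the page - no element locators needed.
        eyes.CheckWindow("Login Page");

        // 3. Close the capture session and report the comparison result.
        eyes.Close();
    }
    finally
    {
        eyes.AbortIfNotClosed();
        driver.Quit();
    }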

CASE 2 – Data-Driven Testing

In Case 2, Hackathon participants needed to validate how a login page behaved when applying different inputs. The test table looked like this:

  • No username, no password
  • Username, no password
  • Password, no username
  • Username and password combination invalid
  • Valid username and password

Each condition resulted in a different response page.

Hackathon participants found a page identical to the one in Case 1 – but they were responsible for handling the different responses to each of the test conditions.

Again, the coding for the conventional test required entering the test conditions via the test runner and asserting all the elements on the page, including the error messages.
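
With NUnit in C#, that data-driven shape often looks something like this – the locators and expected error messages are invented for illustration:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    public class LoginDataDrivenTests
    {
        // One row per condition in the test table above; the valid
        // username/password case instead asserts the post-login page.
        [TestCase("", "", "Both Username and Password must be present")]
        [TestCase("jdoe", "", "Password must be present")]
        [TestCase("", "hunter2", "Username must be present")]
        [TestCase("jdoe", "wrongpass", "Incorrect username or password")]
        public void InvalidLoginShowsError(string user, string pass, string expectedError)
        {
            using IWebDriver driver = new ChromeDriver();
            driver.Navigate().GoToUrl("https://demo.applitools.com/hackathon.html");
            driver.FindElement(By.Id("username")).SendKeys(user);
            driver.FindElement(By.Id("password")).SendKeys(pass);
            driver.FindElement(By.Id("log-in")).Click();
            Assert.AreEqual(expectedError, driver.FindElement(By.Id("alertMessage")).Text);
        }
    }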

Also, a question was left open for testers: what should they test for the valid username and password condition? The simplest answer – just make sure the app visits the correct target post-login page. But more advanced testers wanted to make sure that the target page rendered as expected.

So, again, the comparison between coded assertions and Visual AI makes clear how much more easily Visual AI captures baselines and then compares the new candidate against them.

CASE 3 – Testing Table Sort

The next case – testing table capabilities – covers functionality found on many web apps that provide multiple selections. Many consumer apps, such as retailers, reviewers, and banks, provide tables for their customers. Some business apps provide similar kinds of selectors – in retail, financial, and medical applications. In many use cases, users expect tables with advanced capabilities, such as sorting and filtering.


Tables can provide some challenges for testers. Tables can contain lots of elements. Many table functions can require complex test coding – for example, sorting and filtering.

To test table sorting with conventional assertion code, Hackathon participants had to write code that captured all the data in the table, performed the appropriate sort of that data, and compared the internally-sorted table in the test code with the sorted table on the web page. Great test coders took pains to ensure that they had done this well and could handle various sorting options. The winners took time to ensure that their code covered the table behavior. Not all participants caught this complex behavior, even with a decent amount of effort.

In contrast, all the participants understood how to test the table sort with Visual AI. Capture the page, execute the sort, capture the result, and validate inside Applitools.
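
In code, that flow is only a few lines. A rough C# sketch, assuming an Eyes session is already open and using a hypothetical column-header locator:

    using Applitools.Selenium;
    using OpenQA.Selenium;

    public static class TableSortChecks
    {
        // Assumes eyes.Open() has been called and driver is on the table page.
        public static void CheckAmountSort(Eyes eyes, IWebDriver driver)
        {
            eyes.CheckWindow("Transactions - default order");    // capture the page
            driver.FindElement(By.Id("amount")).Click();         // execute the sort
            eyes.CheckWindow("Transactions - sorted by amount"); // capture the result
        }
    }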

Case 4 – Non-Textual Plug-ins

The fourth case involved a graphical rendering of table data in a canvas element. How do you test that?


Without normal web element locators, a lot of participants got lost. They weren't sure how to find the graphing elements or how to build a comparison between the baseline behavior and the new candidate.

Winning Hackathon participants dug into the rendering code to find the JavaScript calls for the graph and the source data for the table elements. This allowed them to extract the values that should be rendered and compare them between the baseline and the new candidate. And while the winners wrote fairly elegant code, this approach took time to dive into the JavaScript – and a fair amount of coding effort.

As with the table sorting Case 3, all the participants understood how to test the graph with Visual AI. Capture the page, and then compare the new candidate with the baseline in Applitools.

Case 5 – Dynamic Data

The final case required the participants to test a page with floating advertisements that can change. In fact, as long as content gets rendered in the advertising box, and the rest of the candidate remains unchanged, the test passes.


The winning participants coded conditional tests to ensure that code existed in the advertising boxes, though they did not have the ability to see how that code got rendered.

With Visual AI, participants had to use different visual comparison modes in Applitools. The standard mode – Strict Mode – searches for visual elements that have moved or rendered in unexpected ways. With dynamic data, Strict Mode comparisons fail.

For these situations, Applitools offers Layout Mode instead. When using Layout Mode, the text and graphical elements need to share order and orientation, but their actual visual representation can differ. In Layout Mode, the pair below is considered identical – in both, the image sits above the text.

(Image: two renderings, each with a picture above its caption)

However, the pair below has a different layout: on the left, the text sits below the image, while on the right the text sits above the image.

(Image: two renderings with opposite image/text order)

Applitools users can hard-code their check mode for different regions into their page capture. Alternatively, they can use Strict Mode for the entire page and handle the region as a Layout Mode exception in the Applitools UI.

The Hackathon participants, whether they hard-coded Layout Mode for the region in their tests or applied Layout Mode to the selected area in Applitools once the baseline had been captured, had little difficulty coding their tests.
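
In C#, the hard-coded variant can be sketched with the fluent check API – this assumes the C# SDK mirrors the Java fluent API's region-level match levels, and the ad-region IDs are illustrative:

    using Applitools.Selenium;
    using OpenQA.Selenium;

    public static class DynamicAdChecks
    {
        // Strict comparison for the page as a whole, Layout comparison
        // for the two dynamic ad regions.
        public static void CheckHomePage(Eyes eyes)
        {
            eyes.Check(Target.Window()
                .Fully()
                .WithName("Home page with dynamic ads")
                .Layout(By.Id("flashSale"), By.Id("flashSale2")));
        }
    }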

Learning From Hackathon Participants

At this point, James began describing what we had learned from the 1.5 person-years of coding work done on the Hackathon. We learned what gave people difficulty, where common problems occurred, and how testing with Visual AI compared with conventional assertions of values in the DOM.

Faster Test Creation

I alluded to it in the test description, but test authors wrote their tests much more quickly using Visual AI. On average, coders spent 7 person-hours writing coded assertion-based tests for the Hackathon test cases. In contrast, they spent a mere 1.2 hours writing tests using Visual AI for the same test cases.


Interestingly, the prize-winning submitters spent, on average, 10.2 hours writing their winning submissions. They wrote more thorough conventional tests, which would yield accurate coverage when failures did occur. On the other hand, their coverage did not match the complete-page coverage they got from Visual AI. And their prize-winning Visual AI tests required, on average, only six minutes more to write than the average across all test engineers.

More Efficient Coding

The next takeaway came from calculating coding efficiency. For conventional tests, the average participant wrote about 350 lines of code. The prize winners, whose code had greater coverage, wrote a little more than 450 lines of code, on average. This correlates with the 7 hours and 10 hours spent writing tests. It's not a perfect measure, but participants writing conventional tests wrote about 50 lines of code per hour over 7 hours, and the top winners wrote about 45 lines of code per hour over 10 hours.


In contrast, with Visual AI, the average coder needed 60 lines of code, and the top coders only 58. That still works out to roughly 50 lines of code per hour for the average participant and 45 for the winning participants – but they achieved the same coverage with far less code.

More Stable Code

End-to-end tests depend on element locators in the DOM to determine how to apply test conditions, such as by allowing test runners to enter data and click buttons. Conventional tests also depend on locators for asserting content in the response to the applied test conditions.


Most software engineers realize that labels and other element locators get created by software developers – who can change them through intentional change or unanticipated difference. An element locator using XPath can suddenly resolve to the wrong element because of an enhancement. The same is true for labels, which can change between releases – even when there is no visible difference in user behavior.

No one wants testing to overconstrain development. No one wants development to remain ignorant of testing needs. And yet, because mistakes sometimes happen, or changes are sometimes necessary, locators and labels change – resulting in test code that no longer works properly.
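
A tiny illustration of that fragility – the markup and locators here are hypothetical:

    using OpenQA.Selenium;

    public static class LocatorExamples
    {
        public static IWebElement FindSubmitButton(IWebDriver driver)
        {
            // Brittle: this XPath encodes the page structure. If a developer
            // wraps the form in one more <div>, it silently resolves to the
            // wrong element - or to nothing at all.
            // return driver.FindElement(By.XPath("//div[2]/form/button[1]"));

            // Less brittle, but still depends on the id staying stable
            // from release to release.
            return driver.FindElement(By.Id("log-in"));
        }
    }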

Interestingly, when evaluating conventional tests, the average Hackathon participant used 34 labels and locators, while the Hackathon prize winners used 47 labels and locators.

Meanwhile, for the Visual AI tests, the average participant used 9 labels and locators, while the winning submissions used only 8. By a conservative measure, Visual AI reduces the dependency of test code on external factors – we calculate it at 3.8× more stable.

Catching Bugs Early

Visual AI can catch bugs early in coding cycles. Because Visual AI depends on the rendered representation and not on the code to be rendered, it will catch visual differences that might be missed by existing test code. For instance, think of an assertion for the contents of a text box. In a new release, the test passes because the box has the same text. However, the box width has been cut in half, causing the text to extend outside the box boundary and be obscured. The test passes, but in reality it fails – it assumed a condition that is no longer true.


Visual AI catches these differences. It will catch changes that result in different functional behavior that requires new coding. It will catch changes – like the one described above – that result in visual differences that impact users. And it will avoid flagging changes that alter the DOM but not the view or behavior from the user's perspective.

Easier to Learn than Code-Based Testing

The last thing James shared involved the learning curve for users. In general, we assumed that test coverage and score on the Hackathon evaluation correlated with participant coding skill. The average score achieved by all testers using conventional code-based assertions was 79%. After taking a 90-minute online course on Visual AI through Test Automation University, the average score for Visual AI testers was 88%.


Because people don’t use visual capture every day, testers need to learn how to think about applying visual testing. But, once the participants had just a little training, they wrote more comprehensive and more accurate tests, and they learned how to run those test evaluations in Applitools.

What This Means For You

James and Raja reiterated the benefits they outlined in their webinar: faster test creation, more coverage, code efficiency, code stability, early bug catching and ease of learning. Then they asked: what does this mean for you?

If you use text-based assertions for your end-to-end tests, you might find clear, tangible benefits from using Visual AI in your product release flow. It integrates easily into your CI/CD or other development processes. It can augment existing tests without requiring any kind of rip and replace. Real, tangible benefits come to many companies that deploy Visual AI. What is stopping you?


Often, learning comes first. Fortunately, Applitools makes it really easy for you to learn Visual AI. Just take a class on Test Automation University. There is Raja’s course: Modern Functional Test Automation through Visual AI. There is Angie Jones’s course: Automated Visual Testing: A Fast Path To Test Automation Success.  And, there are others.

You can sign up for a free Applitools account. Using Applitools helps you understand what you can do with Applitools.

Finally, you can request a demonstration from a sales engineer.  

At Applitools, we let users make the case for the value of our Visual AI solution. We hope you will get a lot out of trying it yourself.

For More Information

Ask 288 Of Your Peers About Visual AI
https://applitools.com/blog/ask-your-peers-about-visual-ai/
Tue, 14 Apr 2020


How do you find out about what works? Ask your peers. So, why not ask your peers about Visual AI?

It’s a difficult time. We all know why, so I won’t dwell on it other than to wish you and yours health and safety above all else. What I will dwell on is the human need to retreat and replenish. Trapped at home, I’ve found myself learning to cook with Thomas Keller, exploring the universe with Neil deGrasse Tyson, or entertaining like Usher through Masterclass.com. My kids are coding their own games and learning about the history of art at Khan Academy. These entertaining explorations not only give us a much-needed break, but they also give us an opportunity to learn and grow even as we struggle with the realities around us. It’s a welcome and much-needed distraction.

With that sentiment in mind – here’s an idea for you. Why not learn about Visual AI (Artificial Intelligence) from 288 of your fellow quality engineers?

(Image: Applitools Empirical Evidence Visual AI Report 2020)

Each one of them spent 11 hours on average comparing their current test framework of either Cypress, Selenium, or WebdriverIO to that same framework modernized through Visual AI. You can get a summary of what they learned here. Even better, you can take the same free Test Automation University course on Modern Test Automation Through Visual AI and do it all yourself through video tutorials and hands-on learning. Either way, you will find yourself blissfully distracted while learning a cutting-edge approach to test automation.

(Image: three quotes from hackathon participants)

288 Testers. 11 Hours Each. That’s 1.5 Years of Quality Engineering Effort!

Yes — we were blown away by the enthusiasm to learn Visual AI among the testing community. It says a lot about this group of individuals, who recognize the need to keep pushing themselves. In the end, they created the industry's largest, highest quality, and freely available data set for understanding the impact of Visual AI on test automation – and ultimately on quality management and release velocity for modern applications. It's an amazing amount of learning, highly representative of the world of test automation.

We had representation from major test frameworks:

(Image: submissions by test framework)

Representation from major languages:

Representation from 101 countries around the world:

(Image: participant country flags)

Why Should You Learn Visual AI? Ask your peers.

I get it. Quality engineers always seem to be on a treadmill to learn everything. You have new application development frameworks, new coding structures, new test frameworks, and new tools rumbling your way daily. If you plan to learn one more thing, you need a return on your time.

But, let’s face it – testing needs to keep up with the pace of the business. Survey data tells us that the majority of software teams are struggling with their quality engineering efforts. In a recent survey, 68% of teams cited quality management as a key blocker to more agile releases and ultimately CI/CD.

Why? For every test with a handful of conditions and an action, test writers need to write dozens to hundreds of code-based assertions to validate a single response. Traditional frameworks simply don’t have the technical ability to provide front-end functional and visual test automation coverage with the speed and efficiency you need. You end up writing and maintaining too much test code, only to see bugs still escape. It’s maddening and, even worse, it prevents us from doing our core job of managing app quality.

Isn’t AI Just Smoke and Mirrors?

The answer depends on your application. AI promises to solve many modern technical problems, including testing and quality management problems, but it's hard to separate hype from reality in what really works. Many experiments using AI in testing have failed, or the AI approaches require you to “rip and replace” your existing tech stack – a dreaded approach that is unrealistic for most teams.

Rather than asking you to simply trust that Visual AI is different, we decided to prove it, objectively, using real-world examples, in partnership with real quality engineers at real companies dealing with test automation every day.


Gathering Learning – The Visual AI Rockstar Hackathon

To generate all this learning, we built an application involving five common but complex use cases. In November 2019, we issued a challenge to testers all over the world to compete, and learn, by comparing test approaches side-by-side. The competitors created test suites for each of the five use cases using their preferred code-based approach, including Selenium, Cypress, and WebdriverIO. These same quality engineers then repeated the process for the exact same five use cases using Visual AI from Applitools.

To make it fun and push people to do their absolute best, testers competed for 100 prizes worth a total of $42,000. We judged their submissions on their ability to:

  • Provide test coverage on all use cases,
  • Successfully run these tests, and
  • Most importantly, catch all potential bugs

using both testing approaches.

You can learn about the 100 winners here.


Your Takeaways

The data we collected from Hackathon participants makes one clear point: using Visual AI makes you more efficient. You gain this efficiency no matter what your level of experience.

Each Hackathon use case – from handling dynamic data to testing table sorts – requires you to apply your best coding skills. We found that even the most diligent engineers encountered challenges as they developed test code for the use cases.

For example, many of your peers can easily apply test conditions to filter a search or sort a table. Many can grab data from web element locators. However, many of them struggle to calculate expected responses consistently. And, many have challenges creating accurate reports when tests fail.

We encourage you to review the Hackathon report and results. We also encourage you to sign up for a free Applitools account and try out these tests for yourself.  But, if you do nothing else, just check out the five cases. Ask yourself how frequently you encounter these cases in your own test development. We think you will conclude – just as our test data shows – that Visual AI can help you do your job more easily, more comprehensively, and faster.

To Read, Learn, and Do

James Lamberti is CMO at Applitools.

New Learning Paths through Test Automation University
https://applitools.com/blog/tau-learning-paths/
Fri, 13 Sep 2019
With the new learning paths in Test Automation University, you can quickly access the courses you need to do your job effectively.


We created Test Automation University (TAU) to help you learn best practices for using test automation in your app development process. As we added classes to TAU, it became apparent that we needed to make our site more productive for users with different interests. So we added learning paths.


Learning paths give you a way to find the courses right for you. Are you writing test automation for APIs in Java? We can send you down the right path. How about testing mobile devices with Swift? Check. What about web tests with Ruby? Yep.

No matter what kind of tests you are trying to automate, you can find a learning path to help you.  For each track, you start with the basics. Once you finish a path, you have learned core concepts and practiced some of the critical skills needed to automate your testing.

With the new learning paths, TAU makes it easy to develop the skill set you seek.

Choosing a learning path is pretty straightforward. On the main page showing all the Test Automation University courses, select the Learning Paths tab:


Unfiltered, this page shows all paths for all languages. The selector lets you filter the learning paths by path type or by language.

As one of the early users of Test Automation University, I appreciate the new learning paths. They have helped me understand the skills I need to develop in software testing – and I love the hands-on nature of the courses.

Let’s go through the paths to give you a sense of what each provides you.

Learning Path Overview

Learning paths through TAU are broken into general functional test areas:

  • Web UI
  • API Testing
  • Mobile
  • Codeless

Codeless is, as you would expect, a path that needs no direct coding skill.  Otherwise, the paths let you select one of six languages on which to focus your testing skills:

  • Java
  • JavaScript
  • Python
  • C#
  • Ruby
  • Swift

We continue to develop these paths, so expect changes to both this blog post and to TestAutomationU going forward.

How Do I Do Web UI Testing With Java?

To answer this question, let’s select the Web UI Path, and then select Java.


Let’s take a look at the courses today in the Web UI Java Path.

These ten classes involve a little over 17 hours of instruction. Each course dives into different elements.

Two of these courses apply to the Java language specifically: programming in Java and the Java automation engine. Two more are common among the Web UI test paths: finding web elements on a page, and how to test visually. The others are part of the common core courses shared among all the testing paths.

Common Core

Six courses are generally common across almost all the learning paths. These courses answer key questions for understanding and deploying automated tests. These courses and their associated questions are:

The questions these courses answer are easily transferred to any combination of languages and test cases.

How Do I Do API Testing With Java?

Let’s take a look at the Java API course.


The Java API Testing path looks like this:

This path delivers two API-specific test courses.  We include Amber Race’s course on Exploring Service APIs through Test Automation to introduce API test automation concepts. We share Bas Dijkstra’s course on using REST Assured for REST API test automation to provide alternatives for API testing. The Java course teaches Java. All the other courses are part of the core.

Once you know the core, learning a new language and technology is simply a matter of taking the relevant language and technology courses.

Future Courses

As we continue to add learning paths to TAU, our core goal is to answer these questions:

  • How Do I Do UI Testing?
  • How Do I Do API Testing?
  • How Do I Do Mobile Testing?
  • How Do I Do Codeless Testing?

What is on the horizon for Test Automation University?

I am writing this post early in September 2019, and I expect we will update this post regularly, as we continue to add courses to Test Automation University.

In the immediate term, we just launched the course Introduction to Cypress, which shows how developers and testers can use Cypress to test their web applications. We have added this course to the Web UI JavaScript path as an alternative to using webdriver.io.

Keep checking out Test Automation University for the latest offerings.

Find Out More about Applitools

As the sponsors of Test Automation University, we’re excited to help you get even more efficient in your UI testing.  It’s all up to you to find out more.

You can request a demo of Applitools.

You can sign up for a free Applitools account.

You can check out our tutorials.

 

Selenium Functional Testing with Applitools
https://applitools.com/blog/selenium/
Fri, 23 Aug 2019


We have written a lot about Selenium, and about using Selenium and Applitools.

We want you to have one place to show you all things Selenium. We know this is a tall order; you could be a Selenium expert, or you could be getting started with Selenium. Perhaps you would like to:

  • Learn some advanced Selenium skills and capabilities
  • Compare Selenium with other test automation tools
  • Learn about user experiences with Selenium and Applitools
  • Understand what makes the combination of Applitools and Selenium so valuable

No matter where you are on this spectrum, from novice to expert, here are all our Selenium-related articles, webinars, and tutorials in one place.

Getting Started – Learn about Selenium and Applitools

If you are interested in learning how to get started with Selenium, or how to use Selenium with Applitools, we have a lot of content to help you get started. These include:

Test Automation University

We have courses that teach you Selenium in detail for specific languages.

We have a course to teach you how to use Selenium IDE.

We also have courses that utilize Selenium as part of learning test automation, including:

Tutorials

Applitools has posted the following tutorials for using Applitools with Selenium.

Blog Posts

If you are looking to read about Selenium, we have you covered. Over several years we have written a number of blog posts to help readers get up to speed on test automation with Selenium. Here are some of those key posts.


Webinar Recordings

If you would prefer to watch a webinar instead of reading a blog post, we have also hosted webinars on a range of topics about Selenium. The recordings can be accessed from our blog. These webinar recordings include:


Advanced Topics with Selenium and Applitools

Once you have proceeded past the basics, it’s time to dig deeply into test topics that will help you master using Selenium to achieve your larger test goals.

Blog Posts

Webinar Recordings

Comparing Selenium Versus…

Once you know something about Selenium, it becomes important to understand how other test automation tools compare with it. Do they all use WebDriver to apply tests? Do they work on all browsers that support WebDriver? Do you get the wealth of programming languages you find with Selenium? Here are some blog posts and webinars that compare and contrast Selenium with other test technologies – including how they link with Applitools.

Blog Posts

Webinars

Customer Stories – Selenium and Applitools

Finally, you’d like to know how companies use Selenium and Applitools together for greater engineering efficiency, greater test quality, and faster time to market. Here are a few stories to review.

 

Cross-Platform Testing in a Dash
https://applitools.com/blog/cross-platform-ui-testing/
Tue, 02 Apr 2019


One of the perks of my job is that we have lunch brought in a couple of times a week. On those days, we’re all eagerly awaiting an email from our office manager with the link to a Doordash order form. We have less than an hour to get our orders in, so everyone stops whatever they are doing to make their selections before it closes. And of course, we’re all doing different things.

I’m a late bird, so I’m usually in transit to the office during this time. So, I need to make the order from my Android cell phone. And on some days, I’m watching videos when the pleasant interruption comes through, so my phone is in landscape mode.

Another coworker may be at their desk – plugged into a huge desktop monitor.

Someone else may be in a meeting and need to use their laptop to place their order using Chrome. There will be others in that same meeting who may be using Firefox or Safari. Then of course, there will be someone in the meeting who didn’t bring their laptop, so they’ll need to order from their iPhone.

Being a tester, I got to thinking “wow, that sure is a lot of configurations Doordash has to test”. All of us are accessing the same form from different browsers, devices, and viewports, and yet we all have the expectation of a flawless experience. Our lunch depends on it!

How do I test for this?

As I enjoyed the chicken chili that eventually arrived from The Cheesecake Factory, I pondered this some more: as an automation engineer, what would be the most efficient way to verify this?

I could write my test once and have it executed across all of these configurations. That would certainly work, but I dread writing cross-platform test automation. I have to account for responsive changes of the app within my code.

For example, in this web version of the site, notice the header has the delivery address in the left corner next to the menu icon, the word “DOORDASH”, a search field, and the quantity specified on the cart.

However, on this mobile version, the header is different. The delivery address has been moved under a new horizontal rule, the name of the app is now gone, the search field has been replaced with a simple icon, and the quantity count has been removed from the shopping cart.


From an automation standpoint, this is a pain. I either have to write separate tests for each of the various configurations, or account for these variations within my test code by using conditional statements, as in the sketch below.
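
Here is a rough C# sketch of that conditional style – the viewport threshold and selectors are invented for illustration:

    using NUnit.Framework;
    using OpenQA.Selenium;

    public static class HeaderChecks
    {
        public static void VerifyHeader(IWebDriver driver)
        {
            bool isMobile = driver.Manage().Window.Size.Width < 768;
            if (isMobile)
            {
                // Mobile layout: search collapses to an icon, wordmark disappears.
                Assert.IsTrue(driver.FindElement(By.CssSelector(".search-icon")).Displayed);
                Assert.IsEmpty(driver.FindElements(By.CssSelector(".brand-name")));
            }
            else
            {
                // Desktop layout: full search field and "DOORDASH" wordmark.
                Assert.IsTrue(driver.FindElement(By.CssSelector("input.search-field")).Displayed);
                Assert.AreEqual("DOORDASH", driver.FindElement(By.CssSelector(".brand-name")).Text);
            }
        }
    }

Every new viewport adds another branch like this to maintain.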


Is there a better way?

When the mobile space exploded with popularity, it became clear that we needed to test on every single device and configuration. My Doordash scenario illustrates how prevalent mobile technology is in our everyday lives. However, as I’ve shown above, it’s really a pain to test all of these configurations, especially for some applications where even your Android and iOS apps differ in appearance. Not only is it a pain to write these automated tests, it’s also a pain to run them. Managing your own device lab is no easy feat. Fortunately, there are cloud providers to help manage this but many companies are starting to wonder if cross-platform testing is even worth it.

Most cross-platform bugs that are found are not functional bugs. In this day and age, browsers have gotten to a point where it's rare to find a feature that works on, say, Chrome but not Safari. Today, most cross-platform bugs lie in changes to the viewport size.

Here’s a browser-view of an airline’s website on a mobile device. Imagine the annoyance of trying to check on a flight while heading to the airport and receiving this.

Here’s an app asking a user to activate their account. We all know how annoying it is to have extra steps when signing up for an account, especially when having to do this on a mobile view, but it is multitudes more annoying when there are also bugs like the one here where you can’t even see what you’re typing.


The interesting thing about these examples is that our functional tests would never catch such errors. In the airline example, our test code would verify that the expected text exists. Well, the text does exist, but so does other text as well. And it’s all overlapping and making it unreadable for the user. Yet, the test would pass.

In the account activation example, the functional test would make sure the user can type in the activation code. Well, sure, they can, but there’s clearly an issue here which could lead to sign-up abandonments. Yet, the test would pass.

These are not functional bugs; they are visual bugs. So even if your test automation runs cross-platform, you'd still miss these types of bugs if you don't have visual validation.

Visual validation certainly solves the problem of catching the real cross-platform bugs and providing a return on your automation investment. However, is this, alone, the most efficient approach?

Run cross-platform, but visually

If the bugs that we’re finding are visual ones, does it really make sense to automate all of these conditional cross-platform routes and then execute functional steps across all of these viewports?

Probably not. UI tests tend to be fragile by nature, and running them multiple times across various browsers and viewports multiplies the chances of instability. For example, if infrastructure issues cause your tests to fail 2% of the time on average, then running those tests across 10 configurations raises the odds of at least one flaky failure to 1 − 0.98^10 ≈ 18% – nearly tenfold.

Fortunately, there’s a new, more efficient way to do this…via Applitools Ultrafast Grid!

The Ultrafast Grid runs visual checks across all of the configurations that you’d like. Write your test for one configuration, specify all of your other configurations, then insert visual checks at the points where you’d like cross-platform checks executed.

The Ultrafast Grid will then extract the DOM, CSS, and all other pertinent assets and render them across all of the configurations specified. The visual checks are all done in parallel, making this lightning fast!
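
Here is a minimal C# sketch of that flow, assuming the Applitools Eyes Selenium SDK and its Ultrafast Grid runner – the app name, test name, and URL are placeholders:

    using Applitools.Selenium;
    using Applitools.VisualGrid;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    var runner = new VisualGridRunner(10);  // render up to 10 configurations in parallel
    var eyes = new Eyes(runner);

    var config = new Applitools.Selenium.Configuration();
    config.AddBrowser(1200, 800, BrowserType.CHROME);
    config.AddBrowser(1200, 800, BrowserType.FIREFOX);
    config.AddBrowser(1200, 800, BrowserType.SAFARI);
    config.AddDeviceEmulation(DeviceName.iPhone_X);
    config.AddDeviceEmulation(DeviceName.Pixel_2);
    eyes.SetConfiguration(config);

    IWebDriver driver = new ChromeDriver();
    try
    {
        eyes.Open(driver, "Lunch Order App", "Add item to order");
        driver.Navigate().GoToUrl("https://example.com/order-form");  // placeholder URL
        // ...drive the test once, on a single local browser...
        eyes.Check(Target.Window().Fully().WithName("Item added to cart"));
        eyes.CloseAsync();
    }
    finally
    {
        driver.Quit();
        runner.GetAllTestResults();  // wait for all grid renders to finish
    }

Each eyes.Check call becomes one visual checkpoint per configuration, rendered in parallel in the grid.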

To gain a better understanding, let’s consider this in terms of the Doordash scenario.

As opposed to running the following steps across every configuration:

  • opening the link
  • specifying my name
  • choosing items to add to my order
  • finalizing my order

I’d, instead, write and execute this test for one configuration and specify an array of configurations that I’d like Applitools to run against. Within my scenario, I’d make calls to Applitools at any point where I need something visually validated.


Let's say one of the key steps is adding an item to the order. I call Applitools and say, hey, verify this across all my configurations. With the Ultrafast Grid, there's no need to execute the steps before this one, or even this step itself, on every configuration. The Ultrafast Grid will instead capture the current state of the app and render that state across all of the other specified configurations – essentially showing what the app looks like when something is added to the cart, and validating that everything is displayed the way it's intended for every configuration specified!

And because we are able to skip the irrelevant functional steps that are only executed to get the app in a given state, the parallel tests run in seconds rather than several minutes or even hours – making it perfect for continuous integration/deployment where fast feedback is key.

What are the drawbacks?

Again, these are visual checks, not functional ones. So, if your app’s functionality varies across different configurations, then it’s still wise to actually execute that functionality across said configs.

Also, the Ultrafast Grid runs across emulators, not real devices. This makes sense when you're interested in how something looks on a given viewport. However, if you're testing something specific to the device itself, such as native gestures (pinching, swiping, etc.), then an actual device is more suitable. That said, the Ultrafast Grid runs on the exact same browsers that are used on the real devices, so your tests will be executed with the correct size, user agent, and pixel density.

Get Started

The Ultrafast Grid proves that test automation doesn’t have to be a slow bottleneck for our deployment initiatives. The ability to run UI tests across a plurality of configurations within seconds is a game changer! In fact, I was able to run hundreds of UI tests across ten configurations before I could even get the lid off of the cheesecake I ordered with my lunch. So much for dessert…but that’s ok, the Ultrafast Grid is sweeter than my Dulce de Leche could ever be.

Ready to check out the latest innovation in cross-platform testing? Check out Applitools Ultrafast Grid!

 

For more information

Which programming language is most popular for UI test automation in 2019?
https://applitools.com/blog/language-software-test-automation/
Sat, 26 Jan 2019


There is always a lot of uncertainty around which programming language to use when starting a new test automation project.  Should you go with the same language that the development team is using? Or should you choose a language that has an abundance of community support so that you can easily get help when stuck? These are critical points to consider.

What our data is saying

More than half of the top ten companies in software, financial services, and healthcare verticals have enhanced their UI test automation suites with Applitools’ visual validation. With millions of tests running in our cloud every week, we’ve observed interesting trends on how these top companies are succeeding with their test automation initiatives. We’ll share these insights in a series of blog posts, starting with this one.

Here’s what our data says!

1. Java
Java is the most common programming language used for test automation. A whopping 44% of our customers are using Java for their automated checks. There is a wealth of readily available frameworks, plugins, and educational resources that support Java for test automation – which supports the idea that community support is a driving factor when choosing a programming language for test automation. But also, according to the 2018 Stack Overflow Developer Survey, usage of Java by professional developers grew 18% in the last year. If teams are aligning their test automation tools with their product development stack, this could also explain Java's popularity for UI checks.

2. JavaScript
The second most common programming language is JavaScript, accounting for 15% of our customers. Given its overwhelming popularity with developers, it's no surprise that JavaScript is the second most used language for test automation – and also the fastest growing. With JavaScript being used by 72% of professional developers, many teams choose this language for their test automation projects as well. This can be due to the adoption of shift-left methodologies, where developers are also responsible for writing test code, or due to testers' desire to “speak the same language” as their developers. Some find this alignment helps with collaboration efforts around test automation.

3. C#
C# comes in third with 13% of our customers using this programming language to develop their automated tests. C# has seen a decline in professional development usage for the last five years, which could explain its dwindling usage for test automation projects as well.

4. Python
While many argue that Python is a great first language for those who are new to coding, and therefore is great for manual testers who are venturing into test automation, our data shows that only 8% of our customers are developing their automated UI checks in this language.

5. Ruby
Seven percent (7%) of our customers use Ruby to write their UI test automation. With only a tenth of professional developers using Ruby, it follows that this language would not be a popular choice for test automation projects.

While this data does not indicate what’s the best programming language to use for test automation, it does highlight which ones are most popular amongst hundreds of companies that use Applitools’ visual validation for their test automation needs.

Stay tuned for more articles in this series!

What language do you prefer for software test automation, and why? Let us know in the comments.

