Root Cause Analysis Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/root-cause-analysis/
Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI.

Transforming Software Development through a Modern Feedback Loop
https://applitools.com/blog/rally-automated-bug-tracking-integration/ – Mon, 10 May 2021


Applitools finds functional & visual bugs for the world’s largest organizations; Rally® is leveraged by the largest enterprises to manage their agile development at scale, including quality management and the tracking of bugs. We collaborated to design a new integration that streamlines this process and provides developers with the in-depth details they need to reproduce and triage bugs faster than ever before.

Rally & Applitools

Rally Software is an enterprise-class platform that is purpose built for scaling agile development practices. Rally serves as a central hub for teams to collaboratively plan, prioritize and track work, from strategy all the way down to stories and bugs through the entire SDLC.

Applitools is building the next generation of test automation platform for cross-browser and cross-device testing, powered by AI-assisted computer vision technology known as Visual AI. Visual AI helps developers, test automation engineers and QA professionals release high-quality web and mobile apps within a CI/CD workflow.

Better Together – A Transformative Workflow

Applitools & Rally share enterprise customers. Like all modern businesses, our shared customers are looking for ways to gain an edge over their competition – and oftentimes that edge is gained by improved workflows and automation that saves time and allows teams to release better-quality software faster. It’s a never-ending battle to improve time-to-market, and for our shared customers we have just added an integration that does exactly that.

The integration allows users to log a bug in Rally without ever leaving the Applitools Eyes user interface and workflow. This means no context switching, no copy/paste, no additional logins or lost browser tabs. It’s designed both for workflow efficiency and to ensure that every detail gets logged, so developers have everything they need to triage the bug and move on. And it’s not only developers and testers who benefit: any stakeholder in the entire app development process, from product managers to UX designers, can automatically pinpoint the exact cause of issues using the Applitools Root Cause Analysis capabilities, send the results instantly to Rally, and have everything needed to reproduce and fix defects on the spot.

Setting up the Rally Integration

Your Eyes admin can set up the Rally integration via the Admin screen → Integrations tab. The process takes less than a minute. First you’ll enter your Rally server & API key, then run through a quick authentication process requiring a Client ID & Client Secret.

Once authenticated, all Rally projects will be available to choose from, so simply choose the first project you will link to Applitools along with the default work item type for issues created from within Eyes (most likely it will be ‘defects’). You can optionally add one or more defined fields by simply choosing the field name and the default value – repeat as needed. For detailed setup instructions, please visit our Rally integration documentation.

Automating the Defect Feedback Loop

The beauty of this integration is in its simplicity. When a bug is found by Applitools, users can gather all relevant information, including a screenshot and full steps to reproduce, with a single region-drag and click using the bug region feature. The detailed information is instantly sent to Rally so teams can immediately begin triage.

Applitools Integration Ecosystem

The Rally integration is a continuation of our commitment to extending the Applitools integration ecosystem and fitting seamlessly into customers’ existing workflows and tools. Applitools customers now benefit from over 60 SDKs and integrations to choose from. This includes:

  • Testing frameworks and languages such as Selenium, Cypress, Appium, and more
  • Source control solutions like GitHub, GitLab and BitBucket
  • CI/CD platforms like Jenkins, TeamCity and Travis
  • Collaboration tools like Slack
  • Agile planning and defect tracking tools like Jira

Now, with the Eyes 10.11 release, we’re excited to also include Rally on this list.

The end of Smoke, Sanity and Regression
https://applitools.com/blog/end-smoke-sanity-regression/ – Tue, 30 Jun 2020


Test Categories

When it comes to Test Strategy for any reasonably sized / complex product, one aspect is always there – the categories of tests that will be created and (hopefully) automated – e.g. Smoke, Sanity, Regression, etc.

Have you ever wondered why these categories are required? Well, the answer is quite simple. We want to get quick feedback from our tests (re-executed by humans, or better yet – by machines in the form of automated tests). These categories offer a stepping-stone approach to getting feedback quickly.

But have you wondered why your test feedback is slow for functional automation? In most cases, the number of unit tests is a decent multiple (10-1000x) of the number of automated functional tests. Yet I have never seen categories like smoke, sanity and regression being created for the unit tests. Why? Again, the answer is very simple – the unit tests run extremely fast and provide feedback almost immediately.

The next question is obvious – why are the automated functional tests slow to run and provide feedback? There could be various reasons for this:

  • The functional tests have to launch the browser / native application before running the test
  • The test runs against a fully or partially integrated system (as opposed to a stubbed / mocked unit test)

However, there are also other factors that contribute to the slow functional tests:

  • Non-optimal tool sets used for automation
  • Skills / capabilities on the team do not match what is required to automate effectively
  • Test automation strategy / criteria is not well defined.
  • The Test Automation framework is not designed & architected correctly, making it inefficient, slow, and prone to inconsistent results
  • Repeating the execution of the same set of Functional Automated Tests on a variety of browsers and viewport sizes

Getting faster feedback from Automated Functional Tests

There are various techniques & practices that can be used (appropriately, and relevant to the context of product-under-test) to get faster, reliable, consistent feedback from your Functional Tests.

Some of these are:

  • Evolve your test strategy based on the concept of Test Automation Pyramid 
  • Design & architect your Functional Test Automation Framework correctly. This post on “Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework.
  • Include the Visual AI solution from Applitools, which speeds up test implementation, reduces flakiness in your Functional Automation execution, includes AI-based Visual Testing, and eliminates the need for Cross-browser testing – i.e. removes the need to execute the same functional tests in multiple browsers

Optimizing Functional Tests using Visual AI

Let’s say I want to write tests to log in to GitHub.

github login screen

The expected functionality is that if I click the “Sign In” button without entering any credentials, I see the error message shown below:

github login screen with errors

To implement a Selenium-based test for such validation, I would write it as below:

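A minimal sketch of such a test is shown below. It is written here with Selenium’s JavaScript bindings (the post itself uses the Selenium-Java SDK), and the selectors and expected strings are illustrative rather than taken from the original code:

const { Builder, By, until } = require('selenium-webdriver');
const assert = require('assert');

(async function loginValidationTest() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://github.com/login');

    // Click "Sign In" without entering any credentials.
    await driver.findElement(By.css('input[type="submit"]')).click();
    await driver.wait(until.elementLocated(By.css('.flash-error')), 5000);

    // Hard-coded assertions on the page title and the error message
    // (the expected values here are illustrative).
    assert.strictEqual(await driver.getTitle(), 'SignIn');
    const error = await driver.findElement(By.css('.flash-error')).getText();
    assert.ok(error.includes('Incorrect username or password.'));
  } finally {
    await driver.quit();
  }
})();

Note that every expected value is pinned inside the test itself, which is exactly what makes it brittle when the page evolves.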

Now, when I run this test against a new build that may have added functionality, as shown below, let’s see the status of this test result.


Sure enough, the test failed. If we investigate why, we see that the functionality has evolved.

github login with errors and labels

However, our implemented test failed on the 1st error it found, i.e.:

The title of the page has changed from “SignIn” to “Sign in to GitHub”

I am sure you have also experienced these types of test results.

The challenges I see with the above are:

  • The product-under-test is always going to evolve. That means your test is always going to report incorrect details
  • In this case, the test reported only the 1st failure it came across, i.e. the 1st assertion failure. The remaining checks that could have caught other issues were never even executed.
  • The test would not have been able to capture the color changes
  • The new functionality was not exercised by the automated test at all

Is there a better way to do this?

YES – there is! Use Applitools Visual AI!

After signing up for a free Applitools account, I integrated the Applitools Selenium-Java SDK into my test using the tutorial.

My test now looks like this:

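A rough equivalent of that Eyes-based test, sketched here with the JavaScript Eyes Selenium SDK (@applitools/eyes-selenium) rather than the Java SDK the post uses – the app name, test name, viewport and the single remaining selector are illustrative:

const { Builder, By } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');

(async function loginValidationWithEyes() {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  eyes.setApiKey(process.env.APPLITOOLS_API_KEY);
  try {
    await eyes.open(driver, 'GitHub', 'Login validation', { width: 1200, height: 800 });

    await driver.get('https://github.com/login');
    await eyes.check('Before login', Target.window().fully());              // Screen 1

    // Click "Sign In" without entering any credentials.
    await driver.findElement(By.css('input[type="submit"]')).click();
    await eyes.check('Login with no credentials', Target.window().fully()); // Screen 2

    await eyes.close(); // fails the test if Visual AI finds unreviewed differences
  } finally {
    await eyes.abortIfNotClosed();
    await driver.quit();
  }
})();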

As you can see, my test code has the following changes:

  • The assertions I had before are gone
  • Hence, there are fewer locators I need to worry about
  • As a result, the test code is more stable, faster, cleaner & simpler

The test still fails in this case as well, because of the new build. However, the reason for the failures is very interesting.

When I look at the Applitools dashboard for these mismatches reported, I am able to see the details of what went wrong, functionally & visually!


Here are the details of the errors

Screen 1 – before login


Screen 2 – after login with no credentials provided


From this point, I can report the failures in functionality / user experience as defects using the Jira integration, and accept the new functionality and update the baseline appropriately, with simple clicks in the dashboard.

Scaling the test execution

A typical way to scale test execution is to set up your own infrastructure with different browser versions, or to use a cloud provider that manages the infrastructure & browsers. Either approach has a lot of disadvantages – from a cost, maintenance, security & compliance perspective.

To use any of the above solutions, you first need to ensure that your tests can run successfully and deterministically against all the supported browsers. That is a substantial added effort.

This approach of scaling seems flawed to me.

If you think about it, where are the actual bugs coming from? In my opinion, the bugs are related to:

  • Server bugs which are device / browser independent

Ex: A broken DB query, logical error, backend performance issues, etc.

  • Functional bugs – 99% of which are device / browser agnostic. This is because:
    • Modern browsers conform to the W3C standard
    • Logical bugs occur in all client environments.

Examples: not validating an input field, reversed sorting order, etc.

That said, though the browsers are W3C standard compliant, they still have their own rendering engine implementations – which means the real value of running tests in different browsers and at varying viewport sizes is in finding visual bugs!

By using Applitools, I get access to another awesome feature that lets me avoid running the same set of tests on multiple browsers & viewport sizes, saving test implementation, execution and maintenance time: the Applitools Ultrafast Grid.


See this quick video about the Applitools Ultrafast Grid.

To enable the Applitools Ultrafast Grid, I just needed to add a few configuration details when instantiating Eyes. In my case, I added the below to my Eyes configuration.

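A representative sketch of that configuration, again using the JavaScript Eyes Selenium SDK – the concurrency, browsers, viewports and device emulation below are illustrative, not the post’s exact values:

const {
  Eyes, VisualGridRunner, Configuration,
  BrowserType, DeviceName, ScreenOrientation,
} = require('@applitools/eyes-selenium');

// Route the visual checks through the Ultrafast Grid, rendering up to 10 environments concurrently.
const runner = new VisualGridRunner(10);
const eyes = new Eyes(runner);

const config = new Configuration();
config.setApiKey(process.env.APPLITOOLS_API_KEY);
config.setAppName('GitHub');
config.setTestName('Login validation');

// Desktop browser / viewport combinations to render on the grid.
config.addBrowser(1200, 800, BrowserType.CHROME);
config.addBrowser(1200, 800, BrowserType.FIREFOX);
config.addBrowser(1024, 768, BrowserType.SAFARI);

// Mobile device emulation.
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);

eyes.setConfiguration(config);

The test logic itself does not change; the same visual checkpoints are simply rendered and compared on every configuration listed here.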

When I ran my test again on my laptop, and checked the results in the Applitools dashboard, I saw the awesome power of the Ultrafast Grid.

NOTE: The test ran just once on my laptop; however, the Ultrafast Grid captured the relevant page’s DOM & CSS and rendered the same screenshots in each of the browser & viewport combinations I provided above, then did a visual comparison. As a result, in little more than the regular test execution time, I got functional & visual validation done for ALL my supported / provided configurations. Isn’t that neat?


Do we need Smoke, Sanity, Regression suites?

To summarize:

  • Do not blindly start with classifying your tests in different categories. Challenge yourself to do better!
  • Have a Test Automation strategy and know your test automation framework objective & criteria (“Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework)
  • Choose the toolset wisely
  • After all the right (subjective) approaches have been taken, if your test execution (in a single browser) still takes more than, say, 10 minutes, then run your tests in parallel, and only then split the test suite into smaller suites that can give you a progressive indication of quality
  • Applitools, with its AI-powered algorithms, can make your functional tests lean, simple and robust, and adds UI / UX validation
  • The Applitools Ultrafast Grid removes the need for Cross-Browser testing: with a single test execution run, it validates functionality & UI / Visual rendering for all supported Browsers & Viewports

Creating a Flawless User Experience, End-to-End, Functional to Visual – Practical Hands-on Session
https://applitools.com/blog/cypress-applitools-end-to-end-testing/ – Thu, 02 May 2019

Cypress-Applitools webinar - Gleb Bahmutov and Gil Tayar

Creating and maintaining a flawless and smooth user experience is no small feat.

Not only do you need to ensure that the backend and front-end are functioning and appearing as expected, but also you must verify that this is the case across hundreds (if not thousands) of possible combos of screen-size/browser/operating systems.

And if that wasn’t enough – you are deploying and releasing continuously, in a rapidly changing ecosystem of devices, competitors, and technologies.

So how do you keep track of all those moving parts, in real time, in order to prevent functional and UI fails?

In this hands-on session, Gleb Bahmutov (VP Engineering @ Cypress.io) and Gil Tayar (Sr. Architect @ Applitools) demonstrated how you can safeguard your app’s functionality and UI across all digital platforms, with end-to-end tests. They presented — step-by-step — how to write functional tests, which ensure that the application performs user actions correctly, as well as how to write visual tests that guarantee that the application does not suffer embarrassing UI bugs, glitches and regressions.

Watch this practical, hands-on session, and learn how to:

  • Write functional end-to-end tests, while consistently capturing application screenshots for image comparison
  • Add visual regression tests to ensure that the application still appears as expected
  • Analyze visual diffs to determine the root cause of visual bugs
Gil Tayar’s and Gleb Bahmutov’s slide decks can be found here.

Gil’s and Gleb’s GitHub repo can be found here.

— HAPPY TESTING —

Applitools Named to CB Insights’ AI 100 List for the Second Year in a Row
https://applitools.com/blog/applitools-top-100-ai-company-2019/ – Thu, 07 Feb 2019


We’re excited to share that Applitools has once again been named to the CB Insights 2019 AI 100!

This list represents the 100 most promising privately-held artificial intelligence (AI) companies in the world. These are companies using AI to solve big challenges, and we’re honored to be recognized alongside so many well-respected and innovative companies.

How did we make the list?

Through an evidence-based approach, the CB Insights research team selected the AI 100 from over 3,000 companies based on several factors: patent activity, investor quality, news sentiment analysis, their proprietary Mosaic scores, market potential, partnerships, competitive landscape, team strength, and tech novelty.


Want to give Applitools Eyes a try? Enjoy a free trial of our easy-to-use visual testing SaaS solution

Why Applitools?

Today, if your company has a digital presence, it has to be compelling. It has to be visually flawless. Any less, and you’ll lose the trust of your customers.

After all, you wouldn’t shop in a store with a broken sign. “If they can’t fix such an obvious problem, what else is wrong?” you’d think to yourself.


It’s the same with your digital presence. Visual glitches erode trust in your web or mobile app, which makes customers less likely to buy from you.

But it’s not easy to fix all visual bugs. Software development teams focus on constantly, rapidly delivering a drumbeat of new features. So, if you’re in QA, it’s hard to keep up and check every new feature. Especially given the wide range of phones and web browsers your customers might use.

Traditional functional testing tools aren’t well-suited for catching visual bugs — there are simply too many visual properties to check. So, many QA teams revert to manual testing, which is far too time-consuming to check everything in time.

So visual bugs escape into your release, are seen by your customers, and your business suffers. Like what happened to Amazon during its Prime Day sale last summer.


So, we here at Applitools want to help. We built a visual AI to help you automatically test and monitor your mobile and web apps. This way, you can be sure they appear correctly across all the devices, operating systems, browsers, and screen sizes your customers use.

We’ve been busy

Since receiving this honor last year, we’ve been busy building out our platform, making life easier for developers and QA teams. Back in April, we released the world’s first UI Version Control system that lets you view the history of an application UI over time, and see how it has changed, what has been changed, and by whom.


We also created an ultrafast visual testing grid that lets you test for visual bugs in dozens of combinations of browsers, screen sizes, and mobile device orientations — in parallel, in seconds.


And we released a new root cause analysis feature, letting you pinpoint the cause of bugs in application code within minutes, not hours.


Our goal with all of these releases? To eliminate hours from traditional bug diagnosis practices, and help you keep your software projects and digital transformation initiatives on schedule and looking great.

Helping QA professionals grow

However, it’s not just about new features. Applitools recently launched Test Automation University to provide educational training courses that help improve test automation skill sets for all. Provided free of charge, these online courses help address the lack of access to the training and educational resources needed to fill a growing skills gap in IT, and are vetted by some of the leading test automation experts in the world.

Make sure to stay tuned in the coming year for some more exciting announcements, updates to our solutions and the latest trends in test automation. And, thanks again to the CB Insights team for the honor of being added to the AI 100 list!

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

How are you looking to use AI to improve your product development and test automation? 

How to troubleshoot and fix React bugs fast [step-by-step tutorial]
https://applitools.com/blog/troubleshoot-fix-react-bugs/ – Tue, 22 Jan 2019


I’ve been playing around with Applitools for quite some time now, and I’d like to share what I’ve found to help you quickly troubleshoot React bugs and fix them fast.

My most recent articles in this series showed you how to integrate Applitools with Angular and Storybook. In case you missed these, you can find them here:

If you’re not already familiar, Applitools is an automated visual regression testing framework. It focuses on the visual aspects of your app — as opposed to functional testing — and plays a major role in exposing the visual differences between baseline snapshots and both current and future snapshots.

Applitools integrates with dozens of testing frameworks, such as Cypress.io, Storybook, and Selenium. It provides SDKs for use in your projects to seamlessly communicate and interact with Applitools, to send both baseline and test screenshots.

This article delves into a great new feature offered by Applitools: Automated Root Cause Analysis (RCA). The demo section will use a ReactJS app, together with Cypress.io, to demonstrate the RCA feature and its application.

Disclaimer: The main goal of this article is simply to introduce you to the Applitools RCA feature. If you are looking for an in-depth tutorial on the topics mentioned in this article, I recommend the following links:

Automated Root Cause Analysis

The concept of Root Cause Analysis stems from the world of management, where it’s defined as a method of problem-solving used for identifying the root causes of faults or problems.

So far, Applitools has been focusing on visual testing by comparing the baseline and the current test run snapshot through its AI engine. It finds the visual differences in order to map them graphically. There was previously no way to search your codebase for a reason behind any visual testing bug. As you may know, searching through code is nightmarish, time-consuming, and tedious!

This is where Applitools steps in with RCA, showing you the exact DOM and CSS changes between the baseline snapshot and the current test run snapshot.

Why is this so important for both developers and QA testers? Let’s take a look.

Demo

I’ll demonstrate the RCA feature using the Calculator project, previously published as part of the example projects on the React JS website.

Next, we’ll follow these steps to run the Calculator and add a few Cypress test cases.

Step 1: Clone the repository locally by issuing the following git command:

git clone git@github.com:ahfarmer/calculator.git

Step 2: Install all the npm dependencies by issuing the following command:

npm install

Step 3: Run the app:

npm run start

And you should see something like this:

calculator app

Voila. It works!

Step 4: Add Cypress package to the project:

npm install --save-dev cypress

The Cypress npm package adds a set of test files to help you with writing your own automated tests.

Step 5: Run the Cypress tests available now in the project by issuing the npx Cypress CLI command:

npx cypress run

running cypress

Now that Cypress is running properly, let’s add the Applitools Eyes SDK for Cypress package.

Step 6: Add the Applitools Eyes Cypress SDK package to the project:

npm install @applitools/eyes.cypress --save-dev

The Applitools Eyes Cypress SDK is a simple Cypress plugin. Once installed, it adds a few commands to the main cy object.

More specifically, it adds three main methods: cy.eyesOpen to start the test, cy.eyesCheckWindow to take screenshots (for each test step), and cy.eyesClose to close the test.

Let’s write our first Cypress test case to simulate adding two numbers and validating the result of the addition operation.

Step 7: Inside the cypress\integration folder, create the addition.spec.js file and paste the following:

View the code on Gist.
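A minimal sketch of what such an addition spec might look like is below; the button-panel selector paths and the display class are illustrative (copied via Chrome DevTools, as described next), so the actual Gist may differ:

// Hypothetical helper: clicks the calculator button at the given row/column
// of the button panel, using a DevTools-copied selector pattern.
const press = (row, col) =>
  cy.get(`div.component-button-panel > div:nth-child(${row}) > div:nth-child(${col}) > button`).click();

describe('Calculator', () => {
  it('Add two numbers', () => {
    cy.visit('http://localhost:3000');

    press(2, 2); // 8
    press(4, 4); // +
    press(2, 1); // 7
    press(5, 3); // =

    // 8 + 7 should display 15.
    cy.get('div.component-display').should('contain', '15');
  });
});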

The spec file is self-explanatory. However, there’s one thing to note here about the selectors used inside the cy.get() methods: the Calculator app never uses any IDs or classes to distinguish its buttons.

I used a nice feature offered by Google Chrome DevTools to copy the selector of the buttons I am interested in.

To do so yourself, simply follow these steps:

  1. Right-click on the button.
  2. Select “Inspect.” Chrome’s DevTools opens on the Elements tab with the element highlighted in blue.
  3. Right-click the highlighted element.
  4. Select “Copy” > “Copy selector.”

devtools

Next, let’s run the spec file to make sure our test runs successfully.

Step 8: Run the spec file:

npx cypress run --spec="cypress\integration\operations\addition.spec.js"

Now that our spec file runs successfully, let’s make use of the Applitools Eyes Cypress SDK commands to capture a few snapshots.

Step 9: Add the Applitools Eyes Cypress SDK commands to the addition.spec.js file:

View the code on Gist.

The code is self-documented.

Now, to integrate Applitools Eyes Cypress SDK into a spec file, you follow the Workflow below:

Start a new test

cy.eyesOpen({
    appName: '...',
    testName: '...',
    browser: { ... },
});

Take a snapshot (You may repeat this step wherever you want to take a snapshot)

cy.eyesCheckWindow('...');

End the test

cy.eyesClose();
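Putting the three commands together with the addition spec sketched earlier, the augmented test might look roughly like this (app name, browser viewport and selectors are illustrative; the checkpoint tags match the snapshot labels discussed below):

// Same hypothetical row/column helper as in the earlier sketch.
const press = (row, col) =>
  cy.get(`div.component-button-panel > div:nth-child(${row}) > div:nth-child(${col}) > button`).click();

describe('Calculator', () => {
  it('Add two numbers', () => {
    cy.eyesOpen({
      appName: 'Calculator',
      testName: 'Add two numbers',
      browser: { width: 1024, height: 768 },
    });

    cy.visit('http://localhost:3000');

    press(2, 2);                               // 8
    cy.eyesCheckWindow('Number 8 clicked');

    press(4, 4);                               // +
    press(2, 1);                               // 7
    cy.eyesCheckWindow('Number 7 clicked');

    press(5, 3);                               // =
    cy.eyesCheckWindow('Display value of 15');

    cy.eyesClose();
  });
});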

Step 10: Run the spec file:

npx cypress run --spec="cypress\integration\operations\addition.spec.js"

Step 11: Check the test run in Applitools Test Manager.

first run

Notice the Add two numbers test run on the grid. Clicking this test run opens the three snapshots that were taken when we ran the Cypress spec file.

The first snapshot is labeled “Number 8 clicked.” The image shows the number 8 in the display of the calculator.

The second snapshot is labeled “Number 7 clicked.” This one shows the number 7 in the display.

Finally, the third snapshot is labeled “Display value of 15.” The image shows the number 15 in the display of the calculator.

Since this is the first test run, Applitools will save these as the baseline images.

Next, we will simulate a visual difference by changing a CSS selector and letting Applitools detect this change. Then, we’ll run the test again and see the results.

Step 12: Let’s assume that a CSS change altered the background of the Number 8 button. Add the following CSS rule to the Button.css file:

div.component-button-panel > div:nth-child(2) > div:nth-child(2) > button {
  background-color: #eee;
}

Ready? Let’s run the spec file once again.

Step 13: Run the spec file. The test case fails, as expected, and Applitools detects a background color change for the Number 8 button.

npx cypress run --spec="cypress\integration\operations\addition.spec.js"

second run

Notice how the Applitools Test Manager recorded the second run of the test case and highlighted the visual difference in the three snapshots.

Step 14: Click on the first snapshot and compare to the baseline.

second run first snapshot

Applitools Test Manager displays both the baseline snapshot and the current test case run. Also, the number 8 button is highlighted to signal a visual difference.

Step 15: Click on the RCA tool icon (highlighted in the red rectangle) to get a full analysis of the source of this visual bug.

rca

Next, you will be asked to select the visual difference you want the RCA to assess.

rca diff

The RCA tool runs and shows the root cause of this visual difference, which is a change in the CSS Rules.

The RCA tool can also detect DOM changes. Simply use the RCA Pointer Tool to select two buttons: 7 on the baseline (left side) and 5 on the current test run snapshot (right side). The results are in the image below.

rca diff dom

In the Text content category, RCA shows the difference between the two buttons. It also gives details about each button’s bounding box and position.

Let’s change other DOM attributes and run the test again.

Step 16: Next, locate the src/component/Button.js file and add a class=”btn” to the HTML button:

<div className={className.join(" ").trim()}>
   <button class="btn" onClick={this.handleClick}>{this.props.name}
   </button>
</div>

Now, let’s run the test and check the results.

Step 17: Run the spec file. The test case fails, and Applitools detects a change in attributes for the buttons used to form the Calculator.

npx cypress run --spec="cypress\integration\operations\addition.spec.js"

rca diff att

Another category RCA detects is Attributes. It shows that in the current test run, each button has a new class attribute set.

Applitools RCA is both powerful and convenient: it pinpoints the exact DOM/CSS change. Then, all you have to do is locate the change in your codebase and do whatever is required to make your tests pass.

Remember that you can always accept the current test run and update the baseline, or reject the test run and keep the existing baseline snapshot.

RCA supports a set of DOM/CSS categories of change, including:

  • Textual changes
  • CSS Property changes
  • Attributes changes
  • Bounding Box changes
  • Tag changes (When an entire Tag gets changed from one test run to another. For instance, when a button is replaced by a hyperlink.)

Conclusion

The advent of Applitools RCA is a game-changer! Not only can you use Applitools for validating visual differences in your web app, but you can also make use of RCA to go in-depth and find the exact source of a visual difference bug with ease and minimal effort.

Happy Testing!

How much time will you save with Root Cause Analysis? Let us know in the comments.

 

Applitools Root Cause Analysis: Found a Bug? We’ll Help You Fix It!
https://applitools.com/blog/applitools-root-cause-analysis-found-a-bug-well-help-you-fix-it/ – Wed, 05 Dec 2018


I’m pleased to announce that Applitools has released Root Cause Analysis, or RCA for short. This new offering allows you to instantly pinpoint the root cause of visual bugs in your front-end code. I’d like to explain why RCA matters to you, and how it’ll help you in your work.


https://dilbert.com/strip/2015-04-24

Well, maybe RCA doesn’t find THE root cause. After all, all software bugs are created by people, as the Dilbert cartoon above points out.

But when you’re fixing visual bugs in your web apps, you need a bit more information than what Dilbert is presenting above.

The myriad challenges of front-end debugging

What we’ve seen in our experience is that, when you find a bug in your front-end UI, you need to answer the question: what has changed?

More specifically: what are the differences in your application’s Document Object Model (or DOM for short) and Cascading Style Sheet (CSS) rules that underpin the visual differences in your app?

This isn’t always easy to determine.

Getting the DOM and CSS rules for the current version of your app is trivial. They’re right there in the app you’re testing.

But getting the baseline DOM and CSS rules can be hard. You need access to your source code management system. Then you need to fire up the baseline version of your app. This might involve running some build process, which might take a while.

Once your app builds, you then need to get it into exactly the right state, which might be challenging.

Only then can you grab your baseline DOM and CSS rules, so you can run your diffs.

But doing a simple diff of DOM and CSS rules will turn up many differences, many of which have nothing to do with your visual bug. So you’ll chase dead-end leads.

That’s a tedious, time-consuming process.

Meanwhile, if you release multiple times per week or per day, you have less time and more pressure to fix the bug before the next release.

This is pretty darn stressful.

And this is where Applitools RCA comes to the rescue!

AI-assisted bug diagnosis

With Applitools RCA, we’ve updated our SDKs to grab not just UI screenshots — as we always have — but also DOM and CSS rules. We send this entire payload to our test cloud, where we now perform an additional step.

First, our AI finds significant visual differences between screenshots, as it always has, while ignoring minor differences that your users won’t care about (also called false positives).

Then — and this is the new step with RCA — we find what DOM and CSS rules underpin those visual differences. Rather than digging through line after line of DOM and CSS rules, you’ll now only be shown the lines responsible for the difference in question.

We display those visual differences to you in Applitools Eyes Test Manager. You click on a visual difference highlighted in pink and instantly see what DOM and CSS rules are related to that change.

This diagram explains this entire process:


Even better, we give you a link to the exact view you’re looking at — sort of like a blog post’s permalink, which you can add to your Jira bug report, Slack, or email. That way your teammates can instantly see what you’re looking at. Everyone gets on the same page, and bugs get fixed faster.

Here’s a summary of life before and after RCA:

Without Applitools Root Cause Analysis:

  • QA finds a bug
  • QA files bug report with ONLY current DOM and CSS rules
  • Dev builds baseline version of app
  • Dev navigates app to replicate state
  • Dev gets baseline DOM and CSS rules
  • Dev compares baseline and current DOM and CSS rules
  • Dev digs through large set of diffs to find the ones that matter
  • Dev updates the code and fixes the bug

With Applitools RCA:

  • QA finds a bug
  • QA files bug report showing exactly the DOM and CSS rule diffs that matter
  • Dev updates the code and fixes the bug

How much would RCA speed up your debugging process?

Making Shift Left Suck Less

If you’re in an organization that is implementing Shift Left, you know that it’s all about giving developers the responsibility of testing their own code. Test earlier, and test more often, on a codebase you’re familiar with and can quickly fix.

And yes, there’s something to be said for that. But let’s face it: if you’re a developer doing Shift Left, what this means is you have a bunch of QA-related tasks added to your already overflowing plate. You need to build tests, run tests, maintain tests.

We can’t make the pain of testing go away. But with Applitools RCA, we can save you a lot of time and help you focus on writing code!

We intentionally designed RCA to look like the developer tools you use every day. Our DOM diffs look like your Google Chrome Dev Tools, and our CSS diffs look like your GitHub diffs.

All this means you have more time to build features, which is probably the part of your job you like to focus on.

ROI, Multiplied for R&D

This section is for the engineering managers, directors, and VPs.

Applitools RCA lets your team spend more time on building new features. It helps your R&D team be more productive, efficient, and happy!

It’s application features that move modern businesses forward. And RCA helps your team get bug fixing out of the way so they can focus on adding value to your company, and get kudos for adding more features to delight your customers.

So, RCA is good for your developers, for your business, but also for your CFO! Here’s a quick back-of-the-envelope you can share:

Let’s say you have 100 developers on your engineering team. How much money would you save if RCA could accelerate your development by 10%? Assuming a fully loaded cost of roughly $200K per developer, the quick calculation (100 × $200K × 10%) shows maybe $2m per year. Maybe more? That’s tons of money!


UI Version Control, Evolved

Applitools RCA helps your product managers too!

With RCA, our user interface version control now includes the DOM and CSS associated with each screenshot.

This means that not only can you see how the visual appearance of your web application has evolved over time, but also how its underlying DOM and CSS have changed. This makes it easier for you to roll back new features that turned out to be a bad idea since they hurt the user experience or decreased the revenue.

You Win Big

Applitools Root Cause Analysis is a major step in the evolution of test automation because, for the first time, a testing product isn’t just finding bugs; it’s telling you how to fix the bugs.

The evolution of software monitoring tools demonstrates a similar pattern. Early monitoring tools would find an outage, but wouldn’t point you in any direction of fixing the underlying problem behind the outage.

Modern monitoring tools like New Relic or AppDynamics, on the other hand, would point you to the piece of code causing the outage: the root cause. The market spoke, and it chose monitoring tools that pointed users to the root cause.

In test automation, we’re where monitoring was ten years ago. Existing tools like Selenium, StormRunner, Cypress, and SmartBear are good at finding problems, but they don’t help you discover and fix the root cause.

Applitools RCA, like New Relic and AppDynamics, helps you instantly find the root cause of a bug. But unlike those tools, Applitools RCA doesn’t force you to rip-and-replace your existing test automation tools. It integrates with Selenium, Cypress, WebdriverIO, and Storybook, allowing you to make your existing testing much more powerful by adding root cause analysis.

integration logos

See for yourself

To see Applitools RCA in action, please watch this short demo video:

Start Using Applitools Root Cause Analysis Today!

If you’re not yet using Applitools Eyes, sign up for a free account.

If you’re an existing Applitools customer, a Free Trial of Applitools Root Cause Analysis is already provisioned in your account. To learn more about how to use it, see this documentation page.

A free trial of Applitools RCA is available until the end of February 2019. After that, it will be available for an additional fee.
