The post My Holiday Hackathon 2020 – Kerry McKeever appeared first on Automated Visual Testing | Applitools.
As a test automation architect, I am always on the lookout for the best solutions that can help reduce the feedback loop between development and QA. As development cycles speed up, having the ability to release features at breakneck speed has become crucial to the success of many organizations. With software infrastructure becoming ever more complicated, and multiple teams contributing to a single application to accommodate these release cycles, the need for effective solutions to catch bugs as fast as possible has never been greater. Enter Applitools.
Applitools has been making waves across the test community for years now by disrupting the traditional means by which test engineers automate their solutions with more coverage, fewer lines of code, and less time spent toward test maintenance. In a stroke of genius, they have invited engineers worldwide to participate in hackathons leveraging the Applitools Ultrafast Grid to show them firsthand how powerful the tool can be.
…standing up an automated test solution can easily take hours to lay the groundwork before you write a single test. The best part of leveraging Cypress and Applitools is that it took roughly one hour from start to finish including the development of my tests, which is almost unheard of.
Kerry McKeever
During my first Applitools hackathon, I set up my framework how I traditionally would for any company requiring strong cross-browser compatibility. To do this, I leveraged Selenium and C#, since that was the stack I had been working with at multiple employers up to that point. My challenge to myself for that hackathon was to see how few lines of code I needed to build comparable tests with Applitools, compared to my core Selenium tests.
Once I completed the solution, I ran an analysis on both projects to see where I stood. These comparisons do not account for the shared framework code for the Driver classes, enums to handle driver types and viewport sizes, page object models, and helper classes used between both solutions – just the test code itself. In the end, my core Selenium test project contained 845 lines of just test code.
Conversely, to execute the same tests with the same, or greater, level of confidence in the solution, the number of lines written for the Applitools test code was only 174!
Though we SDETs do all we can to ensure that we have a robust test framework that is as resilient to change as possible, UI test code, by nature, is flaky. Having to maintain and refactor test code isn’t a possibility, it’s a promise. Keeping your test code lean = less time writing and maintaining test code = more time focusing on new application features and implementing other test solutions!
When I saw the opportunity to join Applitools’ Holiday Shopping Hackathon, I jumped on it. This time, I wanted to challenge myself to create both my solution and test framework in not only as few lines as possible, but also as fast as I possibly could.
Traditionally, building out a test framework takes considerable time and effort. If you are working in Selenium, you can spend hours just building out a proper driver manager and populating your page object models.
Instead, I opted to go another route and use Cypress, with my reasoning being two-pronged: 1) Cypress allows for fast ramp-up of testing, where you can easily have your test solution up and running against your application in as little as 5 minutes. Seriously. 2) Cypress is used as the test framework at my current place of employment, and it allowed me to build a functioning POC to showcase the power of Applitools to my colleagues. Win-win!
But there is a caveat… While Cypress allows for cross-browser automation against modern browsers, it does not support Safari or IE11. For many companies with a large client base using one or both of these browsers, this can be a non-starter. But that’s where Applitools comes in.
While Cypress has a browser limitation, Applitools allowed me to leverage the easy, quick setup of test automation in Cypress without sacrificing the ability to run these tests on Safari and IE11. Essentially, my test runs once with Cypress in a single browser, then Applitools re-renders the DOM from my test execution and runs the test against all browser combinations I specify. No browser limitations, no duplication of tests, in as few lines as possible.
How few? Let me share with you my simple solution that covered all necessary scenarios:
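The full spec was embedded in the original post as a gist and is not reproduced here. As a rough sketch of what a Cypress spec backed by the Applitools Eyes SDK (@applitools/eyes-cypress) tends to look like – the test names and selectors below are illustrative assumptions, and the demo URL is the one cited elsewhere in this collection – consider:

// holiday-shopping.spec.js – an illustrative sketch, not the author's actual 84-line file
describe('Holiday shopping demo', () => {
  const demoUrl = 'https://demo.applitools.com/tlcHackathonMasterV1.html'; // hackathon demo app

  it('renders the landing page', () => {
    cy.eyesOpen({ appName: 'Holiday Shopping Demo', testName: 'Landing page' });
    cy.visit(demoUrl);
    // One visual checkpoint covers the whole page on every browser/viewport configured for the Ultrafast Grid
    cy.eyesCheckWindow({ tag: 'Landing page', fully: true });
    cy.eyesClose();
  });

  it('shows product details', () => {
    cy.eyesOpen({ appName: 'Holiday Shopping Demo', testName: 'Product details' });
    cy.visit(demoUrl);
    cy.get('#product_1 img').click(); // hypothetical selector for the first product tile
    cy.eyesCheckWindow({ tag: 'Product details', fully: true });
    cy.eyesClose();
  });
});

The browsers and viewports themselves live in the SDK’s configuration file rather than in the spec, which is much of what keeps the test code this short.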
That’s it! A whopping 84 lines of very lean code that executes all test scenarios across all supported browsers and viewports.
As mentioned previously, standing up an automated test solution can easily take hours to lay the groundwork before you write a single test. The best part of leveraging Cypress and Applitools is that it took roughly 1 hour from start to finish including the development of my tests, which is almost unheard of.
Beyond just the speed and ease of using Applitools, having such a powerful and succinct dashboard where you can collaboratively work with the entire development team on bugs is where they really shine. Using AI to intelligently identify discrepancies between a base image and the test run, capturing the DOM for easy debugging, controlling A/B testing for variants of your application, and integrating with CI/CD pipelines and Jira…I could go on.
All these capabilities provide great visibility into the health of a product, and allow product owners, managers, developers and QA engineers to work harmoniously together in management of test flows, bugs, and regressions from a single dashboard. In any organization adopting a shift-left mentality, you really can’t ask for much more.
Lead image by James Osborne from Pixabay
The post More than a Hackathon to me – Ivana Dilparic appeared first on Automated Visual Testing | Applitools.
I remember the feeling when I submitted my entry for the Holiday Shopping Hackathon. Sure, there is always a bit of relief once you wrap something up. But mostly I was just proud that I managed to handle each task from the Hackathon instructions.
I wasn’t eyeing any of the prizes, nor did I expect to ever hear back from the judges. I simply saw the Applitools Holiday Shopping Hackathon as a learning opportunity and went for it. This sense of pride came from having my learning mission accomplished.
I see a lot of potential in this kind of testing. I recognize the benefit for the current project my team is working on.
– Ivana Dilparic
But the Hackathon ended up being much more for me. Besides getting fine JavaScript and Cypress practice and being introduced to this amazing visual testing tool, I now also have lifetime bragging rights and a bit of a self-esteem boost to keep up with my new tech goals.
I have been in managerial and leadership roles in the IT industry for over 12 years. Even though I hold a Master’s degree in Computer Science and my first role after graduation was as a Software Engineer, 12 years is a lot of time to not be actively developing software.
All this time I have been making constant efforts to build and enhance an array of soft skills, to accumulate industry-specific knowledge (for at least 5 industries), and to be able to actively participate in tech discussions. It turned out that this was not enough, at least not for the tech part, as I started getting feedback that I was “behind with the tech side”.
One thing was clear: I needed to craft a plan that would turn things around.
I know by now that the best way to learn something is to start practicing it actively and to combine theory with practice. My work does not leave me much room for things like getting hands-on experience with cool new frameworks, so all the learning and practicing had to happen in the evenings and over the weekends.
I subscribed to several podcasts and blogs and handpicked some development courses which seemed related to technologies currently used by my team. I was investing a lot of time and was absolutely sure that there was no significant improvement. The courses I chose were either focused on very basic examples or were too demanding in terms of mandatory coursework. Even if I managed to stretch my time and cover the self-assignments, whatever I learned there would fade away shortly because I was not actively using it.
The hackathon just sounded like a good idea. The instructions were very specific; it was very clear what was expected from participants. The timeframe for submission was very generous – from the time I learned about the Hackathon, I had several weeks to complete my submission, so I didn’t need to put the rest of my life on pause and get behind on sleep (something I had been associating with hackathons until now).
For the Cypress part, I relied on the Introduction to Cypress course from Test Automation University. Mr. Gil Tayar did a great job!
I admit that I ignored the manual and relied on exploring Applitools myself. Overall, I find the app to be intuitive and easy to use. All information about test runs is very well structured and easy to navigate through.
Multi-browser testing worked like a charm. It took me no time to set this up, and the speed of multi-browser testing was more than I hoped for.
For one of the Hackathon tasks, I figured out how bugs work. That was straightforward. Potential issues were very obviously highlighted – they scream for action.
Another task was related to root causes. I didn’t figure this one out on the first attempt, but I obviously excelled on the second try.
I recall scenarios where the QA team on my projects was using Selenium to automate tests. The idea was to automate UI tests as well.
There were too many visual issues which the tests were not detecting. Even issues important to the end user were going undetected by these tests. QA Engineers explained the causes for this and came up with workarounds to increase test coverage with limited time investment. This didn’t sit that well with the client.
What can I say, this experience has turned me into an advocate for Applitools. I see a lot of potential in this kind of testing. I recognize the benefit for the current project my team is working on. And looking back, I see there were many cases over the years where it would have helped the QA Engineers I have worked with. It shortens the time to set up UI tests and it probably shortens running time. Plus, it provides better coverage.
Also, I find Test Automation University to be one of the best things that has happened in the testing community lately. Thank you for doing this, Applitools!
As for my personal development, the Hackathon was a great boost for me. It helped me carry on with my learning trajectory. And I expect more hackathons in my future.
Lead image by Antonis Kousoulas from Pixabay
The post Stability In Cross Browser Test Code appeared first on Automated Visual Testing | Applitools.
If you read my previous blog, Fast Testing Across Multiple Browsers, you know that participants in the Applitools Ultrafast Cross Browser Hackathon learned the following:
Today, we’re going to talk about another benefit of using Applitools Visual AI and Ultrafast Grid: test code stability.
Test code stability is the property of test code continuing to give consistent and appropriate results over time. With stable test code, tests that pass continue to pass correctly, and tests that fail continue to fail correctly. Stable tests do not generate false positives (report a failure in error) or generate false negatives (missing a failure).
Stable test code produces consistent results. Unstable test code requires maintenance to address the sources of instability. So, what causes test code instability?
Anand Bagmar did a great review of the sources of flaky tests. Some of the key sources of instability:
When you develop tests for an evolving application, code changes introduce the most instability in your tests. UI tests, whether testing the UI or complete end-to-end behavior, depend on the underlying UI code. You use your knowledge of the app code to build the test interfaces. Locator changes – whether changes to coded identifiers or CSS or XPath locators – can cause your tests to break.
When test code depends on the app code, each app release will require test maintenance. Otherwise, no engineer can be sure that a “passing” test did not omit an actual failure, or that a “failing” test indicates a real failure rather than a locator change.
Considering the instability sources, a tester like you takes on a huge challenge with cross browser tests. You need to ensure that your cross browser test infrastructure addresses these sources of instability so that your cross browser behavior matches expected results.
If you use a legacy approach to cross browser testing, you need to ensure that your physical infrastructure does not introduce network or other infrastructure sources of test flakiness. Part of your maintenance ensures that your test infrastructure does not become a source of false positives or false negatives.
Another check you make relates to responsive app design. How do you ensure responsive app behavior? How do you validate page location based on viewport size?
If you use legacy approaches, you spend a lot of time ensuring that your infrastructure, your tests, and your results all match expected app user behavior. In contrast, the Applitools approach does not require debugging and maintenance of multiple test infrastructures, since the purpose of the test involves ensuring proper rendering of server response.
Finally, you have to account for the impact of every new app coding change on your tests. How do you update your locators? How do you ensure that your test results match your expected user behavior?
One thing we have observed over time: code changes drive test code maintenance. We demonstrated this dependency relationship in the Applitools Visual AI Rockstar Hackathon, and again in the Applitools Ultrafast Cross Browser Hackathon.
The legacy approach uses locators to both apply test conditions and measure application behavior. As locators can change from release to release, test authors must consider appropriate actions.
Many teams have tried to address the locator dependency in test code.
Some test developers sit inside the development team. They create their tests as they develop their application, and they build the dependencies into the app development process. This approach can ensure that locators remain current. On the flip side, they provide little information on how the application behavior changes over time.
Some developers provide a known set of identifiers in the development process. They work to ensure that the UI tests use a consistent set of identifiers. These tests can run the risk of myopic inspection. By depending on supplied identifiers – especially to measure application behavior – these tests run the risk of false negatives. While the identifiers do not change, they may no longer reflect the actual behavior of the application.
The modern approach limits identifier use to applying test conditions; Applitools Visual AI measures the application response in the UI. This approach still depends on identifier consistency – but on far fewer identifiers. In both hackathons, participants cut their dependence on identifiers by 75% to 90%. Their code ran more consistently and required less maintenance.
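To make the contrast concrete, here is a small Cypress-style sketch (not code from either hackathon; the app, selectors, and expected values are invented for illustration). The legacy test leans on locators for every assertion, while the Visual AI test keeps a locator only to apply the test condition and replaces the assertion block with a single checkpoint:

describe('Overview page after login', () => {
  // Legacy style: locators both drive the action and measure the result
  it('asserts on individual elements', () => {
    cy.visit('/');                                // hypothetical app and selectors throughout
    cy.get('#log-in').click();
    cy.get('.element-header').should('contain.text', 'Financial Overview');
    cy.get('#transactionsTable tbody tr').should('have.length', 6);
    cy.get('.balance-value').should('be.visible');
  });

  // Modern style: one locator applies the test condition,
  // and a single Visual AI checkpoint measures the rendered result
  it('checks the rendered page visually', () => {
    cy.eyesOpen({ appName: 'Demo App', testName: 'Overview after login' });
    cy.visit('/');
    cy.get('#log-in').click();
    cy.eyesCheckWindow('Overview page after login');
    cy.eyesClose();
  });
});

If the markup changes, only the locator used for the click needs attention; the visual checkpoint itself needs no rewrite.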
Applitools Ultrafast Grid overcomes many of the hurdles that testers experience running legacy cross browser test approaches. Beyond the pure speed gains, Applitools offers improved stability and reduced test maintenance.
Modern cross browser testing reduces dependency on locators. By using Visual AI instead of locators to measure application response, Applitools Ultrafast Grid can show when an application behavior has changed – even if the locators remain the same. Or, alternatively, Ultrafast Grid can show when the behavior remains stable even though locators have changed. By reducing dependency on locators, Applitools ensures a higher degree of stability in test results.
Also, Applitools Ultrafast Grid reduces infrastructure setup and maintenance for cross browser tests. In the legacy setup, each unique browser requires its own setup and connection to the server. Each setup can have physical or other failure modes that must be identified and isolated independent of the application behavior. By capturing the response from a server once and validating the DOM across other target browsers, operating systems, and viewport sizes, Applitools reduces the infrastructure debug and maintenance efforts.
Participant feedback from the Hackathon provided us with consistent views on cross browser testing. From their perspective, participants viewed legacy cross browser tests as:
In contrast, they saw Applitools Ultrafast Grid as:
You can read the entire report here.
What holds companies back from cross browser testing? Bad experiences getting results. But, what if they could get good test results and have a good experience at the same time? We asked participants about their experience in the Applitools Cross Browser Hackathon.
The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.
If you think like the smartest people in software, you conclude that testing time detracts from software productivity. Investments in parallel test platforms pay off by shortening the time to validate builds and releases. But, you wonder about the limits of parallel testing. If you invest in infrastructure for fast testing across multiple browsers, do you capture failures that justify such an investment?
Back in the day, browsers used different code bases. In the 2000s and early 2010s, most application developers struggled to ensure cross browser behavior. There were known behavior differences among Chrome, Firefox, Safari, and Internet Explorer.
Annoyingly, each major version of Internet Explorer had its own idiosyncrasies. When do you abandon users who still run IE 6 beyond its end of support date? How do you handle the IE 6 through IE 10 behavioral differences?
While Internet Explorer differences could be tied to major versions of operating systems, Firefox and Chrome released updates multiple times per year. Behaviors could change slightly between releases. How do you maintain your product behavior with browsers in the hands of your customers that you might not have developed with or tested against?
Cross browser testing proved itself a necessary evil to catch potential behavior differences. In the beginning, app developers needed to build their own cross browser infrastructure. Eventually, companies arose to provide cross browser (and then cross device) testing as a service.
In the 2020s, speed can provide a core differentiator for app providers. An app that delivers features more quickly can dominate a market. Quality issues can derail that app, so coverage matters. But, how do app developers ensure that they get a quality product without sacrificing speed of releases?
In this environment, some companies invest in cross browser test infrastructure or test services. They invest in the large parallel infrastructure needed to create and maintain cross browser tests. And, the bulk of uncovered errors end up being rendering and visual differences. So, these tests require some kind of visual validation. But, do you really need to repeatedly run each test?
Applitools concluded that repeating tests required costly infrastructure as well as costly test maintenance. App developers intend that one server response work for all browsers. With its Ultrafast Grid, Applitools can capture the DOM state on one browser and then repeat it across the Applitools Ultrafast Test Cloud. Testers can choose among browsers, devices, viewport sizes and multiple operating systems. How much faster can this be?
In the Applitools Ultrafast Cross Browser Hackathon, participants used the traditional legacy method of running tests across multiple browsers to compare behavior results. Participants then compared their results with the more modern approach using the Applitools Ultrafast Grid. Read here about one participant’s experiences.
The time that matters is the time from the start of a test run until a developer knows the details of a discovered error. For the legacy approach, coders wrote tests for each platform of interest, including validating and debugging the function of each app test on each platform. Once the legacy test had been coded, the tests were run, analyzed, and reports were generated.
For the Ultrafast approach, coders wrote their tests using Applitools to validate the application behavior. These tests used fewer lines of code and fewer locators. Then, the coders called the Applitools Ultrafast Grid and specified the browsers, viewports, and operating systems of interest to match the legacy test infrastructure.
The report included this graphic showing the total test cycle time for the average Hackathon submission of legacy versus Ultrafast:
Here is a breakdown of the average participant time used for legacy versus Ultrafast across the Hackathon:
Activity | Legacy | Ultrafast |
---|---|---|
Actual Run Time | 9 minutes | 2 minutes |
Analysis Time | 270 minutes | 10 minutes |
Report Time | 245 minutes | 15 minutes |
Test Coding Time | 1062 minutes | 59 minutes |
Code Maintenance Time | 120 minutes | 5 minutes |
The first three activities, test run, analysis, and report, make up the time between initiating a test and taking action. Across the three scenarios in the hackathon, the average legacy test required a total of 524 minutes. The average for Ultrafast was 27 minutes. For each scenario, then, the average was 175 minutes – almost three hours – for the legacy result, versus 9 minutes for the Ultrafast approach.
On top of the operations time for testing, the report showed the time taken to write and maintain the test code for the legacy and Ultrafast approaches. Legacy test coding took over 1060 minutes (17 hours, 40 minutes), while Ultrafast only required an hour. And, code maintenance for legacy took 2 hours, while Ultrafast only required 5 minutes.
As the Hackathon results showed, Ultrafast testing runs more quickly and gives results more quickly.
Legacy cross-browser testing imposes a long time from test start to action. Their long run and analysis times make them unsuitable for any kind of software build validation. Most of these legacy tests get run in final end-to-end acceptance, with the hope that no visual differences get uncovered.
Ultrafast approaches enable app developers to build fast testing across multiple browsers into software build. Ultrafast analysis catches unexpected build differences quickly so they can be resolved during the build cycle.
By running tests across multiple browsers during build, Ultrafast Grid users shorten their find-to-resolve cycle to branch validation even prior to code merge. They catch the rendering differences and resolve them as part of the feature development process instead of the final QA process.
Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?
Combine the other speed differences in coding and maintenance, and it becomes clear why Ultrafast testing across multiple browsers makes it possible for developers to run Ultrafast Grid in development.
Next, we will cover code stability – the reason why Ultrafast tests take, on average, 5 minutes to maintain, instead of two hours.
The post Visual Assertions – Hype or Reality? appeared first on Automated Visual Testing | Applitools.
There is a lot of buzz around Visual Testing these days. You might have read or heard stories about the benefits of visual testing. You might have heard claims like, “more stable code,” “greater coverage,” “faster to code,” and “easier to maintain.” And, you might be wondering, is this hype or reality?
So I conducted an experiment to see how true this really is.
I used the instructions from this recently concluded hackathon to conduct my experiment.
I was blown away by the results of this experiment. Feel free to try out my code, which I published on Github, for yourself.
Before I share the details of this experiment, here are the key takeaways I had from this exercise:
Let us now look at details of the experiment.
We needed to implement the following tests to check the functionality of https://demo.applitools.com/tlcHackathonMasterV1.html
For this automation, I chose Selenium with Java, using Gradle as the build tool.
The code used for this exercise is available here: https://github.com/anandbagmar/visualAssertions
Once I had spent time understanding the functionality of the application, I was quickly able to automate the above-mentioned tests.
Here is some data from that exercise.
Refer to HolidayShoppingWithSeTest.java
Activity | Data (Time / LOC / etc.) |
---|---|
Time taken to understand the application and expected tests | 30 min |
Time taken to implement the tests | 90 min |
Number of tests automated | 3 |
Lines of code (actual Test method code) | 65 lines |
Number of locators used | 23 |
Test execution time: Part 1: Chrome browser | 32 sec |
Test execution time: Part 2: Chrome browser | 57 sec |
Test execution time: Part 3: Chrome browser | 29 sec |
Test execution time: Part 3: Firefox browser | 65 sec |
Test execution time: Part 3: Safari browser | 35 sec |
A few interesting observations from this test execution:
When I added Applitools Visual AI to the already created Functional Automation (in Step 1), the data was very interesting.
Refer to HolidayShoppingWithEyesTest.java
Activity | Data (Time / LOC / etc.) |
---|---|
Time taken to add Visual Assertions to existing Selenium tests | 10 min |
Number of tests automated | 3 |
Lines of code (actual Test method code) | 7 lines |
Number of locators used | 3 |
Test execution time: Part 1: Chrome browser | 81 sec (test execution time), 38 sec (Applitools processing time) |
Test execution time: Part 2: Chrome browser | 92 sec (test execution time), 42 sec (Applitools processing time) |
Test execution time: Part 3 (using Applitools Ultrafast Test Cloud): Chrome + Firefox + Safari + Edge + iPhone X | 125 sec (test execution time), 65 sec (Applitools processing time) |
Here are the observations from this test execution:
See these below examples of the nature of validations that were reported by Applitools:
Version Check – Test 1:
Filter Check – Test 2:
Product Details – Test 3:
Lastly, an activity I thoroughly enjoyed in Step 2 was deleting the code that became irrelevant once I was using Visual Assertions.
To conclude, the experiment made it clear – Visual Assertions are not hype. The table below summarizes the differences between the two approaches discussed earlier in the post.
Activity | Pure Functional Testing | Using Applitools Visual Assertions |
---|---|---|
Number of Tests automated | 3 | 3 |
Time taken to implement tests | 90 min (implement + add relevant assertions) | – |
Time taken to add Visual Assertions to existing Selenium tests | – | 10 min (includes time taken to delete the assertions and locators that became irrelevant) |
Lines of code (actual Test method code) | 65 lines | 7 lines |
Number of locators used | 23 | 3 |
Number of assertions in Test implementation | 16 (validates only specific behavior based on the assertions; the first failing assertion stops the test, and the remaining assertions do not even get executed) | 3 (1 for each test; validates the full screen and captures all regressions and new changes in 1 validation) |
Test execution time: Chrome + Firefox + Safari browser | 129 sec (for 3 browsers) | – |
Test execution time: Part 3 (using Applitools Ultrafast Test Cloud): Chrome + Firefox + Safari + Edge + iPhone X | – | 125 sec (test execution time), 65 sec (Applitools processing time) (for 4 browsers + 1 device) |
Visual Assertions help in the following ways:
You can get started with Visual Testing by registering for a free account here. Also, you can take the Test Automation University course “Automated Visual Testing: A Fast Path To Test Automation Success”.
The post How Easy Is Cross Browser Testing? appeared first on Automated Visual Testing | Applitools.
In June, Applitools invited any and all to its “Ultrafast Grid Hackathon”. Participants tried out the Applitools Ultrafast Grid for cross-browser testing on a number of hands-on, real-world testing tasks.
As a software tester of more than 6 years, I have spent the majority of my time on Web projects. On these projects, cross-browser compatibility is always a requirement. Since we cannot control how our customers access websites, we have to do our best to validate their potential experience. We need to validate functionality, layout, and design across operating systems, browser engines, devices, and screen sizes.
Applitools offers an easy, fast and intelligent approach to cross browser testing that requires no extra infrastructure for running client tests.
We needed to demonstrate proficiency with two different approaches to the task:
In total there were 3 tasks that needed to be automated, on different breakpoints, in all major desktop browsers, and on Mobile Safari:
These tasks would be executed against a V1 of the website (considered “bug-free”) and would then be used as a regression pack against a V2 / rewrite of the website.
I chose Cypress as I wanted a tool where I could quickly iterate, get human-readable errors and feel comfortable. The required desktop browsers (Chrome, Firefox and Edge Chromium) are all compatible. The system under test was on a single domain, which meant I would not be disadvantaged choosing Cypress. None of Cypress’ more advanced features were needed (e.g. stubbing or intercepting network responses).
The modern cross browser tests were extremely easy to set up. The only steps required were two npm package installs (Cypress and the Applitools SDK) and running `npx eyes-setup` to import the SDK.
Easy cross browser testing means easy to maintain as well. Configuring the needed browsers, layouts and concurrency happened inside `applitools.config.js`, a mighty elegant approach to the many, many lines of capabilities that plague Selenium-based tools.
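For context, a minimal `applitools.config.js` along those lines might look like the following – the browser list, viewport widths, and concurrency here are illustrative assumptions, not the exact configuration used for the hackathon:

// applitools.config.js – illustrative values only
module.exports = {
  // The API key is normally supplied via the APPLITOOLS_API_KEY environment variable
  testConcurrency: 5,                     // how many Ultrafast Grid renders run in parallel
  batchName: 'Ultrafast Grid Hackathon',  // groups all checkpoints into one dashboard batch

  // A single Cypress run is re-rendered against every entry below
  browser: [
    { width: 1200, height: 700, name: 'chrome' },
    { width: 1200, height: 700, name: 'firefox' },
    { width: 1200, height: 700, name: 'edgechromium' },
    { width: 768, height: 700, name: 'chrome' },               // tablet-width breakpoint
    { deviceName: 'iPhone X', screenOrientation: 'portrait' }, // mobile viewport
  ],
};

Adding one more target is a one-line change here, instead of another block of Selenium capabilities.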
In total, I added three short spec files (between 23 and 34 lines, including all typical boilerplate). We were instructed to execute these tasks against the V1 website then mark the runs as our baselines. We would then perform the needed refactors to execute the tasks against the V2 website and mark all the bugs in Applitools.
Applitools’ Visual AI did its job so well, all I had to do was mark the areas it detected and do a write-up!
In summary, for the modern tests:
all done in under an hour.
Performing a visual regression for all seven different configurations added no more than 20 seconds to the execution time. It all worked as advertised, on the first try. That is the proof of easy cross browser testing.
For the traditional tests I implemented features that most software testers are either used to or would implement themselves: a spec-file per layout, page objects, custom commands, (attempted) screenshot diff-ing, linting and custom logging.
This may sound like overkill compared to the above, but I aimed for feature parity and reached this end structure iteratively.
Unfortunately, neither one of the plug-ins I tried for screenshot diff-ing (`cypress-image-snapshot`, `cypress-visual-regression` and `cypress-plugin-snapshots`) gave results in any way similar to Applitools. I will not blame the plug-ins, though, as I had a limited amount of time to get everything working and most likely gave up way sooner than one should have.
Since screenshot diff-ing was off the table, I chose to check each individual element. In total, I ended up with 57 CSS selectors, and to make future refactoring easier, I implemented page objects. Additionally, I used a custom method to log test results to a text file, as this was a requirement for the hackathon.
I did not count all the lines of code in the traditional approach as the comparison would have been absurd, but I did keep track of the work needed to refactor for V2 – 12 lines of code, meaning multiple CSS selectors and assertions. This work does not need to be done if Applitools is used – “selector maintenance” just isn’t a thing!
Applitools will intelligently find every single visual difference between your pages, while traditionally you’d have to know what to look for, define it and define what the difference should be. Is the element missing? Is it of a different colour? A different font or font size? Does the button label differ? Is the distance between these two elements the same? All of this investigative work is done automatically.
All in all, it has genuinely been an eye-opening experience, as the tasks were similar to what we’d need to do “in the real world” and the total work done exceeds the scope of usual PoCs.
My thanks to everyone at Applitools for offering this opportunity, with a special shout out to Stas M.!
Dan Iosif serves as SDET at Dunelm in the United Kingdom. He participated in the recently-completed Applitools Ultrafast Cross Browser Hackathon.
The post Why Learn Modern Cross Browser Testing? appeared first on Automated Visual Testing | Applitools.
]]>Why Learn Modern Cross Browser Testing?
100 Cross Browser Testing Hackathon Winners Share the Answer.
Today, we celebrate the over 2,200 engineers who participated in the Applitools Ultrafast Cross Browser Hackathon. To complete this task, engineers needed to create their own cross-browser test environment using the legacy multi-client, repetitive test approach. Then, they ran modern cross browser tests using the Applitools Ultrafast Grid, which required just a single test run that Applitools re-rendered on the different clients and viewports specified by the engineers.
Participants discovered what you can discover as well:
Applitools Ultrafast Grid changes your approach from, “How do I justify an investment in cross-browser testing?” to “Why shouldn’t I be running cross-browser tests?”
Of the 2,200 participants, we are pleased to announce 100 winners. These engineers provided the best, most comprehensive responses to each of the challenges that made up the Hackathon.
Before we go forward, let’s celebrate the winners. Here is the table of the top prize winners:
Each of these engineers provided a high-quality effort across the hackathon tests. They demonstrated that they understood how to run both legacy and modern cross-browser tests successfully.
Collectively the 2,200 engineers provided 1,600 hours of engineering data as part of their experience with the Ultrafast Grid Hackathon. Over the coming weeks we will be sharing conclusions about modern cross-browser testing based on their experiences.
At its core, cross-browser testing guards against client-specific failures.
Let’s say you write your application code and compile it to run in containers on a cloud-based service. For your end-to-end tests, you use Chrome on Windows. You write your end-to-end browser test automation using Cypress (or Selenium, etc.). You validate for the viewport size of your display. What happens if that is all you test?
A lot depends on your application. If you have a responsive application, how do you ensure that your application resizes properly around specific viewport breakpoints? If your customers use mobile devices, have you validated the application on those devices? But, if HTML, CSS, and JavaScript are standards, who needs cross-browser testing?
Until Applitools Ultrafast Grid, that question used to define the approach organizations took to cross-browser testing. Some organizations did cross browser tests. Others avoided it.
If you have thought about cross-browser testing, you know that most quality teams possessed a prejudice about the expense of cross-browser infrastructure. If asked, most engineers would cite the cost and complexity of setting up a multi-client and mobile device lab, the effort to define and maintain cross-browser test software, and the tools to measure application behavior across multiple devices.
When you look back on how quality teams approached cross-browser testing, most avoided it. Given the assumed expense, teams needed justification to run cross-browser tests. They approached the problem like insurance. If the probability of a loss exceeded the cost of cross-browser testing, they did it. Otherwise, no.
Even when companies provided the hardware and infrastructure as a cross-browser testing service, the costs still ran high enough that most organizations skipped cross-browser testing.
Some of our first customers recognized that Applitools Visual AI provides huge productivity gains for cross-browser tests. Some of our customers used popular third-party services for cross-browser infrastructure. All the companies that ran cross-browser tests did have significant risk associated with an application failure. Some had experienced losses associated with browser-specific failures.
We had helped our customers use Applitools to validate the visual output of cross-browser tests. We even worked with some of the popular third-party services that helped customers run cross-browser tests without having to install or maintain an on-premise cross-browser lab.
Our experience with cross-browser testing gave us several key insights.
First, we rarely saw applications that had been coded separately for different clients. The vast majority of applications depended on HTML, CSS and JavaScript as standards for user interface. No matter which client ran the tests, the servers responded with the same code. So, each browser at a given step in the test had the same DOM.
Second, if differences arose in cross-browser tests, they were visual differences. Often, they were rendering differences – either due to the OS, browser, or for a given viewport size. But, they were clearly differences that could affect usability and/or user experience.
This led us to realize that organizations were trying to uncover visual behavior differences for a common server response. Instead of running the server multiple times, why not grab the DOM state on one browser and then duplicate the DOM state on every other browser? You need less server hardware. And you need less software – since you only need to automate a single browser.
Using these insights, we created Applitools Ultrafast Grid. For each visual test, we capture the DOM state and reload it for every other browser/OS/viewport size we wish to test. We use cloud-based clients, but they do not need to access the server to generate test results. All we need to do is reload the server response on those cloud-based clients.
Ultrafast Grid provides a cloud-based service with multiple virtual clients. As a user, you specify the browser and viewport size to test against as part of the test specification. Applitools captures a visual snapshot and a DOM snapshot at each point you tell it to make a capture in an end-to-end, functional, or visual test. Applitools then applies the captured DOM on each target client and captures the visual output. This approach requires fewer resources and increases flexibility.
This infrastructure provides huge savings for anyone used to a traditional approach to cross-browser testing. And, Applitools is by far the most accurate visual testing solution, meaning we are the right solution for measuring cross-browser differences.
You might also be interested in using a flexible but limited test infrastructure. For example, Cypress.io has been a Chrome-only JavaScript browser driver. Would you rewrite tests in Selenium to run them on Firefox, Safari, or Android? No way.
We knew that many organizations might benefit from a low-cost, highly accurate cross-browser testing solution. If cost had held people back from trying cross-browser testing, a low-cost, easy-to-deploy, accurate cross-browser solution might succeed. But, how do we get the attention of organizations that have avoided cross-browser testing because their risks could not justify the costs?
We came up with the idea of a contest – the Ultrafast Grid Hackathon. This is our second Hackathon. In the first, the Applitools Visual AI Rockstar Hackathon, we challenged engineers who used assertion code to validate their functional tests to use Applitools Visual AI for the assertion instead. The empirical data we uncovered from our first Hackathon made it clear to participants that using Applitools increased test coverage even as it reduced coding time and code maintenance effort.
We hoped to upskill a similar set of engineers by getting them to learn Ultrafast Grid with a hackathon. So, we announced the Applitools Ultrafast Grid Hackathon. Today, we announced the winners. Shortly, we will share some of the empirical data and lessons gleaned from the experiences of hackathon participants.
These participants are engineers just like you. We think you will find their experiences insightful.
Here are two of the insights.
“The efforts to implement a comprehensive strategy using traditional approaches are astronomical. Applitools has TOTALLY changed the game with the Ultrafast Grid. What took me days of work with other approaches, only took minutes with the Ultrafast Grid! Not only was it easier, it’s smarter, faster, and provides more coverage than any other solution out there. I’ll be recommending the Ultrafast Grid to all of the clients I work with from now on.” – Oluseun Olugbenga Orebajo, Lead Test Practitioner at Fujitsu
“It was a wonderful experience which was challenging in multiple aspects and offered a great opportunity to learn cross browser visual testing. It’s really astounding to realize the coding time and effort that can be saved. Hands down, Applitools Ultrafast Grid is the tool to go for when making the shift to modern cross environment testing. Cheers to the team that made this event possible.” – Tarun Narula, Technical Test Manager at Naukri.com
Look out for more insights and empirical data about the Applitools Ultrafast Grid Hackathon. And, think about how running cross-browser tests could help you validate your application and reduce some support costs you might have been incurring because you couldn’t justify the cost of cross-browser testing. With Applitools Ultrafast Grid, adding an affordable cross-browser testing solution to your application test infrastructure just makes sense.
The post How Do You Catch More Bugs In Your End-To-End Tests? appeared first on Automated Visual Testing | Applitools.
How much is it worth to catch more bugs early in your product release process? Depending on where you are in your release process, you might be writing unit or systems tests. But, you need to run end-to-end tests to prove behavior, and quality engineers require a high degree of skill to write end-to-end tests successfully.
What would you say if a single validation engine could help you ensure data integrity, functional integrity, and graphical integrity in your web and mobile applications? And, as a result, catch more bugs earlier in your release process?
Let’s start with the dirty truth: all software has bugs. Your desire to create bug-free code conflicts with the reality that you often lack the tools to uncover all the bugs until someone finds them way late in the product delivery process. Like, say, the customer.
With all the potential failure modes you design for – and then test against – you begin to realize that not all failure modes are created equal. You might even have your own triage list:
So, where does graphical integrity and consistency fit on your list? For many of your peers, graphical integrity might not even show up on their list. They might consider graphical integrity as managing cosmetic issues. Not a big deal.
Lots of us don’t have reliable tools to validate graphical integrity. We rely on our initial unit tests, systems tests, and end-to-end tests to uncover graphical issues – and we think that they’re solved once they’re caught. Realistically, though, any application evolution process introduces changes that can introduce bugs – including graphical bugs. But, who has an automation system to do visual validation with a high degree of accuracy?
Your web and mobile apps behave at several levels. The level that matters to your users, though, happens at the user interface on the browser or the device. Your server code, database code, and UI code turn into this representation of visual elements, with some kind of visual cursor that moves across a plane (or keyboard equivalent) to settle on different elements. The end-to-end test exercises all the levels of your code, and you can use it to validate the integrity of your code.
So, why don’t people think to run more of these end-to-end tests? You know the answers.
First, end-to-end tests run more slowly. Page rendering takes time – your test code needs to manipulate the browser or your mobile app, execute an HTTP request, receive an HTTP response, and render the received HTML, CSS, and JavaScript. Even if you run tests in parallel, they’re slower than unit or system tests.
Second, it takes a lot of effort to write good end-to-end tests. Your tests must exercise the application properly. You develop data and logic pre-conditions for each test so it can be run independently of others. And, you build test automation.
Third, you need two kinds of automation. You need a controller that allows you to control your app by entering data and clicking buttons in the user interface. And, most importantly, you need a validation engine that can capture your output conditions and match those with the ones a user would expect.
You can choose among many controllers for browsers or mobile devices. Still, why do your peers still write code that effectively spot-checks the DOM? Why not use a visual validation engine that can catch more bugs?
You have peers who continue to rely on coded assertions to spot-check the DOM. Then there are the 288 of your peers who did something different: they participated in the Applitools Visual AI Rockstar Hackathon. And they got to experience first-hand the value of Visual AI for building and maintaining end-to-end tests.
As I wrote previously, we gave participants five different test cases, asked them to write conventional tests for those cases, and then to write test cases using Applitools Visual AI. For each submission, we checked the conditions each test writer covered, as well as the failing output behaviors each test-writer caught.
As a refresher, we chose five cases that one might encounter in any application:
For these test cases, we discovered that the typical engineer writing conventional tests to spot-check the DOM spent the bulk of their time writing assertion code. Unfortunately, the typical spot-check assertions missed failure modes. The typical submission got about 65% coverage. Alternatively, the engineers who wrote the tests that provided the highest coverage spent about 50% more time writing tests.
However, when using Visual AI for visual validation, two good things happened. First, everyone spent way less time writing test code. The typical engineer went from 7 hours of coding tests and assertions to about 1.2 hours of coding tests and Visual AI. Second, the average test coverage jumped from 65% to 95%. So, simultaneously, engineers took less time and got more coverage.
When you find more bugs, more quickly, with less effort, that’s significant to your quality engineering efforts. You’re able to validate data, functional, and graphical integrity by focusing on the end-to-end test cases you run. You spend less time thinking about and maintaining all the assertion code checking the result of each test case.
Using Visual AI makes you more effective. How much more effective? Based on the data we reviewed, you catch 45% of your bugs earlier in your release process (and, importantly, before they reach customers).
We have previously written about some of the other benefits that engineers get when using Visual AI, including:
By comparing and contrasting the top participants – the prize winners – with the average engineer who participated in the Hackathon, we learned how Visual AI helped the average engineer greatly – and the top engineers become much more efficient.
The bottom line with Visual AI — you will catch more bugs earlier than you do today.
Applitools ran the Applitools Visual AI Rockstar Hackathon in November 2019. Any engineer could participate, and 3,000 did so from around the world. 288 people actually completed the Hackathon and submitted code. Their submissions became the basis for this article.
You can read the full report we wrote: The Impact of Visual AI on Test Automation.
In creating the report, we looked at three groups of quality engineers including:
By comparing and contrasting the time, effort, and effectiveness of these groups, we were able to draw some interesting conclusions about the value of Visual AI in speeding test-writing, increasing test coverage, increasing test code stability, and reducing test maintenance costs.
You now know five of the core benefits we calculate from engineers who use Visual AI.
So, what’s stopping you from trying out Visual AI for your application delivery process? Applitools lets you set up a free Applitools account and start using Visual AI on your own. You can download the white paper and read about how Visual AI improved the efficiency of your peers. And, you can check out the Applitools tutorials to see how Applitools might help your preferred test framework and work with your favorite test programming language.
Cover Photo by michael podger on Unsplash
The post Functional vs Visual Testing: Applitools Hackathon appeared first on Automated Visual Testing | Applitools.
Many thanks to Applitools for the exciting opportunity to learn more about Visual AI testing by competing in the hackathon. While trying to solve the five challenges step by step, you truly grasp the fact that proper UI testing is impossible without visual testing. But let’s start from the beginning.
Two versions of the same website were provided. These two versions represented different builds of the same application.
Build 1: https://demo.applitools.com/hackathon.html
Build 2: https://demo.applitools.com/hackathonV2.html
I will evaluate each task for both approaches from 1 to 5, where 1 is “not applicable” and 5 is “the best choice”, and draw some conclusions at the end.
In my tests, I used the JDI Light test automation framework that is based on Selenium, but it is more effective and easier to use.
Here is the link to my final solution: https://github.com/RomanIovlev/applitools-hackathon
Here is the link to Applitools documentation: https://help.applitools.com/hc/en-us
In the first challenge, we needed to validate the view of the Login page in general, meaning to verify that all the elements are displayed properly.
It’s obvious that if you need to validate how the page looks, you can’t rely solely on functional validation. Why?
These are a few of the tricky, yet important things that prevent you from properly testing using traditional functional tests. Sometimes it is difficult to describe all the possible failure modes, and some validations are just impossible to verify. And of course, with the functional approach, in most cases, you can’t check that something unexpected does not appear. Sometimes you can’t even imagine it.
Here are the differences detected by Applitools between Build 1 and Build 2
How can you check this using the traditional approach to test automation? There are more than a dozen UI elements on the page, and you need to create a method that validates them all.
That’s 40 lines of code and we’ve only checked what we know is there, and not some unexpected surprises.
How does this compare to using Applitools to verify this with visual testing? Only one line of code is needed!
eyes.checkWindow("Login Page view");
Here we use the checkWindow() method which allows us to validate the entire page.
This one line of code helps you cover more risks. And at the same time, thanks to the AI used in Applitools, these validations will be stable and will reduce the number of typical false-negative cases.
And one last point here: in addition to less code and broader validations, visual tests allow you to easily validate multiple issues at once, which is not a simple task with functional testing approaches.
Testing Type | Applicable (1-5) | Comment |
---|---|---|
Functional approach (JDI Light) | 2 | you can try to test most of the valuable elements, but this requires you to write a lot of code and it provides no guarantees |
Visual approach (Applitools) | 5 | With just one line of code, you can check most of the possible issues |
The second challenge represents a typical task for most applications. We needed to validate general login functionality for all valuable cases: successful login, failed login, empty and oversized values, all allowed symbols (maybe even some JS or SQL injection cases).
Definitely, this task is good for functional testing using the data-driven approach. Create test data for positive and negative cases and let it flow through your scenarios. Thanks to JDI, you can describe a simple form in one line without complex PageObjects and submit it using a data entity.
public static Form<User> loginForm;
And then you can use this form in your tests.
@Test(dataProvider = "correctUsers"...)
public void loginSuccessTest(User user) {
    loginForm.loginAs(user);
    ...
}
We call this the Entity Driven Testing approach (EDT). If you are interested in more details, please follow the link https://jdi-docs.github.io/jdi-light/?java#entity-driven-testing or see how it looks on Github.
So, let’s get back to the test cases. When I wrote the tests and ran them against build V2, they passed but… they should not have! See how the incorrect password error message looks.
The text is correct – this is what we always validate with the functional approach – but the message’s background is broken. This is a really enlightening moment where you realize that all the tests you’ve written in your career are not that thorough and are at risk of missing such issues.
I have no idea how such cases can be tested using the functional approach, and what’s worse is that you don’t even consider all these types of issues when writing the tests.
The conclusion here is that in addition to functional validation, it’s always good to have visual checks as well. And with Applitools you can do it in a simple and stable manner.
eyes.checkRegion("Alert: "+message, By.cssSelector(".alert-warning"));
The check() or checkRegion() method used here allows us to validate the view of the exact element.
Testing Type | Applicable (1-5) | Comment |
---|---|---|
Functional approach (JDI Light) | 5 | this task is mostly for Functional testing |
Visual approach (Applitools) | 3 | but in some cases, you can’t avoid visual validation |
The next challenge in the hackathon was to validate the sorting of a data table. The scenario called for clicking on the table header (Amount column) and validating that the data in the sorted column is in the correct order. But this isn’t enough:
This is a really interesting task and I would like you to try it yourself.
Let’s consider how we’d approach this using the functional approach. Programmatically, you’d need to:
With standard frameworks like Selenium, WebDriverIO, Cypress, or Selenide you could write hundreds of lines of code to properly interact with the table (especially with the cells containing images and color-related elements).
Thanks to JDI Light you can describe this table in just one line to test the data.
public static DataTable<TransactionRow, Transaction> transactionsTable;
And a few more lines if you would like to describe the complex structure of each row in detail. (See more details on GitHub.)
Via the functional approach, the validation has four steps:
So, using the functional approach, the test script is about 5-7 lines of code (including row comparisons):
List<Transaction> unsortedTransactions = transactionsTable.allData();
transactionsTable.headerUI().select("AMOUNT");
transactionsTable.assertThat()
    .rows(hasItems(toArray(unsortedTransactions)))
    .sortedBy((prev, next) -> prev.amount.value() < next.amount.value());
And what about Visual validation with Applitools? Frankly speaking, we can do this validation with just two lines of code:
amountHeader.click();
eyes.checkElement(transactions, "Transactions Ascending");
But this approach has its limitations:
The best solution here is to mix functional and visual validations with Applitools:
See full code below:
List<Transaction> unsortedTransactions = transactionsTable.allData();
List images = transactionsTable.rowsImages();
transactionsTable.headerUI().select("AMOUNT");
transactionsTable.assertThat()
    .rows(hasItems(toArray(unsortedTransactions)))
    .sortedBy((prev, next) -> prev.amount.value() < next.amount.value())
    .rowsVisualValidation("Description", images);
Testing Type | Applicable (1-5) | Comment |
---|---|---|
Functional approach (JDI Light) | 5 | This task is mostly for functional testing. |
Visual approach (Applitools) | 4 | In the simple case of a table with hardcoded data, you can check the whole table in just one line of code. And in some cases, you can’t verify it without visual validation. |
The next task was to validate a chart view, which is definitely a task for visual validation.
At first glance, you have no way to get the data from the canvas because the details of the chart do not exist in the DOM. With Applitools, you can validate the chart view “before” and “after”.
compareExpenses.click();
eyes.checkElement(chart, "Expenses Chart 2017-2018");
showNextYear.click();
eyes.checkElement(chart, "Expenses Chart 2017-2019");
P.S. I would like to show you one trick that is useful in this particular case. (Note: in most cases, you can’t get the data out of a chart.)
Using the following JavaScript snippet, you can get the data for this chart and use it in functional testing. Even so, you can’t check the colors or the chart’s layout this way.
public ChartData get() {
    Object rowChartData = jsExecute("return { " +
        "labels: window.barChartData.labels, " +
        "dataset: window.barChartData.datasets.map(ds => ({ " +
            "bgColor: ds.backgroundColor, " +
            "borderColor: ds.borderColor, " +
            "label: ds.label, " +
            "data: ds.data })) " +
        "}");
    return gson.fromJson(gson.toJson(rowChartData), ChartData.class);
}
See more details on GitHub.
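For completeness, here is one possible shape of the ChartData classes that Gson deserializes into above. This is an assumption on my part; the field names simply mirror the keys built in the JavaScript snippet.
import java.util.List;

// Assumed data classes: field names must match the keys returned by the JS snippet
// ("labels", "dataset", "bgColor", "borderColor", "label", "data") for Gson to map them.
public class ChartData {
    public List<String> labels;
    public List<Dataset> dataset;

    public static class Dataset {
        public String bgColor;      // may be a single color or an array, depending on the chart config
        public String borderColor;
        public String label;
        public List<Double> data;
    }
}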
Testing Type | Applicable (1-5) | Comment |
---|---|---|
Functional approach (JDI Light) | 1 | Only if you are lucky can you get some data from the canvas; in general, this is not a case for functional testing. |
Visual approach (Applitools) | 5 | Visual validation is the best choice for testing canvas content. |
The last task was to validate the advertisement content that comes from an external site.
I think the Applitools team added this task to show Applitools’ MatchLevel: Layout capability, which is great for validating an application’s layout without comparing the exact content.
Here’s the code to do this with visual validation:
eyes.setMatchLevel(MatchLevel.LAYOUT);
eyes.checkElement(advertisements, "Dynamic Advertisement");
But in this case, with the functional approach we can just check isDisplayed() (or validate the size of the advert blocks).
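A quick sketch of that functional alternative (the flashSale element names are assumed names for the two ad blocks on the demo page):
// Functional check only: confirms the ad blocks are rendered and have some size,
// but says nothing about what actually appears inside them.
assertTrue(flashSale1.isDisplayed(), "First ad block is not displayed");
assertTrue(flashSale2.isDisplayed(), "Second ad block is not displayed");
assertTrue(flashSale1.getSize().getHeight() > 0, "First ad block has zero height");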
Here, visual validation is a little better, because it also validates the layout, but the difference is not so big for a single ad. However, I can see how visual validation of an entire page, where all of the content is different, would be really powerful.
Testing Type | Applicable (1-5) | Comment |
---|---|---|
Functional approach (JDI Light) | 4 | You can check that the ad is displayed, but you cannot validate its layout. |
Visual approach (Applitools) | 5 | Validates all possible layout problems. |
The traditional approach that we often use for functional test automation is good for validating scenarios and data on the site; but if you validate only these things, you can easily miss layout or visual issues. Users abandon sites with significant visual bugs, and in some cases such bugs make the app unusable altogether.
Your application is the face of your organization. If you really care about your clients’ experience and want to show them that they can trust your company to deliver quality solutions, you must include visual testing alongside functional testing in your overall testing strategy.
Roman Iovlev participated in the Applitools Hackathon in November 2019.
Cover Photo by Markus Spiske on Unsplash
The post Functional vs Visual Testing: Applitools Hackathon appeared first on Automated Visual Testing | Applitools.
The post Visual AI Rockstar Hackathon Winners Announced! appeared first on Automated Visual Testing | Applitools.
We are thrilled to announce the Visual AI Rockstar Hackathon winners. We think the people who participated and won the Hackathon are not just some of the top QA engineers but also trailblazers who are leading the way in pushing the QA industry forward. Congrats to all of them!
You can find all the Hackathon results here.
In this blog, we provide a summary of the Hackathon concept, share how we designed it, and describe some of the things we learned.
Our idea behind the Hackathon started with a question: what incentive would get an engineer to try a new approach to functional testing? Our customers described how they use Applitools to accelerate the development of automated functional tests. We hoped engineers who weren’t regular Applitools users could have similar experiences and get comparable value from Visual AI. The Hackathon seemed like a good way to give engineers like you an incentive to compare traditional functional test code with Visual AI.
When you think about it, you realize that all user-observable functionality has an associated UI change. So if you simply take a screenshot after the function has run and the UI didn’t change as expected, you’ve found a functional bug. And if the functionality worked but something else in the UI didn’t, you’ve found a visual bug. Since the screenshot captures both, you can very easily do both visual and functional testing through our Visual AI.
Visual validation overcomes a lot of functional test coding challenges. For example, many apps include tables – like a comparison list of purchase options. If you let your customers sort the table by price, how do you validate that the sorted output is in the correct order – and that all the row contents beyond the sort column behave as expected? Or what happens when you rely on a graphics library that must behave correctly but for which you only have HTML-level checks? For example, your app creates bar charts using Canvas technology. How do you automate the validation of the Canvas output?
Since we simply take a screenshot after the functionality runs, we capture everything. That simplifies all your validation tasks.
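As a rough illustration (not code from the Hackathon itself), the pattern is simply to trigger the functionality and then take one visual checkpoint; the element name and tag below are assumptions:
// One screenshot after the action covers the functional result and the rest of the UI.
sortByPriceHeader.click();                      // the functionality under test
eyes.checkWindow("Table sorted by price");      // Visual AI flags any unexpected change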
Realistically, we know that free items have real costs. To use our free Applitools account, you need to take the time to learn Visual AI and try it out. While you might be willing to try the free account, would your selected tests highlight the value of Applitools’ Visual AI? We were confident that giving you the right experience would make it easy to see the value of Applitools. So, we built the test environment for the Hackathon in which you could run your tests.
We built a sample Hackathon app. Next, we designed five common use cases where Applitools and Visual AI result in simpler functional test code or make test automation possible at all. Finally, we ran the Hackathon and gave people like you the chance to compare writing tests using the traditional approach versus using Applitools. Engineers who tried the Hackathon generated many surprising and valuable experiences. Cumulatively, their experiences show the value of Visual AI in the workflows of many app development teams.
We graded on a scale of 1-100 points, divided across all five use cases. Within each case, we scored the Visual AI and traditional approaches separately. What mattered included:
Our judges spent weeks looking through all the submissions and judged each one carefully. The winners scored anywhere from 79 points all the way to a perfect 100!
Part of our test design included building on-page functionality that required thoughtful test engineering. Hackathon winners needed to cover the relevant test cases as well as ensure page functionality. Generally speaking, the people who scored the highest wrote a lot of code and spent a lot of time on the traditional approach. Even with economical coding, the winners wrote many lines of code to validate a large number of on-page changes, such as sorting a table.
Unfortunately, we also found that many participants struggled to write proper tests with the traditional approach. Some struggled with test design, some with validating a given page structure, and some with other technical limitations of the code-based approach.
While participants either struggled or succeeded with traditional test code, pretty much every participant excelled at using Visual AI. Many of them succeeded on their first try! We found this to be very gratifying.
We plan to discuss this more in a future webinar, so stay tuned. But in the meantime, check out the Hackathon winners below.
After judging, we found that some participants had the same score or were very close calls. So we decided to award an additional 9 people a $200 prize each! Instead of 80 $200 winners, we’ll have 89!
We wanted to leave you with quotes from the Hackathon winners. We are glad to recognize them for their achievements and are pleased with their success.
Blogs about chapters in Raja Rao DV’s series on Modern Functional Testing:
Actions you can take today:
The post Visual AI Rockstar Hackathon Winners Announced! appeared first on Automated Visual Testing | Applitools.