AI developer toolset series: Applitools on AI-assisted root cause analysis

The Computer Weekly Developer Network is in the engine room, covered in grease and looking for Artificial Intelligence (AI) tools for software application developers to use.

This post is part of a series which also runs as a main feature in Computer Weekly.

With so much AI power in development and so many new neural network brains to build for our applications, how should programmers ‘kit out’ their AI toolbox?

How much grease and gearing should they get their hands dirty with… and, which robot torque wrench should we start with?

Applitools

The following text is written by Al Sargent in his role as VP of product marketing at Applitools — the company is known for its AI-powered visual testing and monitoring software capabilities.

Sargent discusses AI-assisted root cause analysis…

Visual testing is the act of verifying that a User Interface (UI) appears correctly to its users.

Sometimes confused with functional testing, visual testing helps ensure that each UI element appears in the right colour, shape, position and size across any and all viewports, browsers and device types. Visual testing also ensures that UI elements don’t hide or overlap with each other on the screen.

Because visual testing is difficult to automate, these tests are often done manually.

Front-end debugging

A visual test starts with capturing a screenshot, which serves as the baseline image against which later screenshots are compared. Once the baseline is established, developers run their test code; each subsequent run captures a new screenshot of the same area of the application. The test runner then compares these screenshots to the baseline image for this area of the code and, if differences are detected, the test fails.
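
To make that workflow concrete, here is a minimal sketch of a baseline-and-compare visual test, using Playwright for screenshots and the pixelmatch library for image comparison. Neither tool is specified in the article, and the checkAgainstBaseline function and file paths are illustrative assumptions, not Applitools' implementation:

import fs from 'fs';
import { chromium } from 'playwright';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

// Capture a screenshot of a page and compare it to a stored baseline image.
// Returns true when the page matches the baseline (or when a baseline is first created).
async function checkAgainstBaseline(url: string, baselinePath: string): Promise<boolean> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const shot = PNG.sync.read(await page.screenshot());
  await browser.close();

  if (!fs.existsSync(baselinePath)) {
    // First run: store this screenshot as the baseline for future comparisons.
    fs.writeFileSync(baselinePath, PNG.sync.write(shot));
    return true;
  }

  const baseline = PNG.sync.read(fs.readFileSync(baselinePath));
  const { width, height } = baseline; // assumes both screenshots share dimensions
  const diff = new PNG({ width, height });

  // pixelmatch fills in a highlighted diff image and returns the count of mismatched pixels.
  const mismatched = pixelmatch(baseline.data, shot.data, diff.data, width, height, {
    threshold: 0.1,
  });
  fs.writeFileSync('diff.png', PNG.sync.write(diff)); // differences highlighted for review
  return mismatched === 0; // any mismatched pixels fail the test
}

In a real suite, the baseline would be updated deliberately when a UI change is intentional, which is the review step described next.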

After the test code runs and a report is generated, developers need to review the images that differ from the baseline. Automated visual testing clearly highlights the visual differences between images; in our own tool [Applitools Eyes], for example, the differences are highlighted in pink. If a difference was caused by a bug, developers can fix the bug and rerun the test to check that the bug was, indeed, fixed. If the difference was caused by an intentional change in the UI, developers can review the screenshot and update the baseline so that future test runs pass.

Developers also need to know which differences in their application's Document Object Model (DOM) and Cascading Style Sheets (CSS) rules underpin the visual differences they see, which is not always easy to determine. Combining automated visual testing with existing developer tests makes it possible to detect visual bugs in the early stages of a development cycle.
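
For the DOM and CSS side of this, the snippet below is a generic sketch, not the Applitools mechanism, of capturing a serialized DOM plus a few computed CSS properties alongside each screenshot, so that later diffs have something to compare. The snapshotDomAndCss name, the selector argument and the property list are all assumptions:

import { Page } from 'playwright';

// Snapshot the serialized DOM and a handful of computed CSS properties
// for every element matching the selector, at the moment of capture.
async function snapshotDomAndCss(page: Page, selector: string) {
  const dom = await page.content(); // serialized DOM at capture time
  const styles = await page.$$eval(selector, (els) =>
    els.map((el) => {
      const cs = window.getComputedStyle(el);
      return {
        tag: el.tagName,
        color: cs.color,
        position: cs.position,
        width: cs.width,
        height: cs.height,
      };
    })
  );
  return { dom, styles };
}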

Line by line sifting

When a developer discovers a visual bug in an app, they have to sift through line after line of DOM and CSS rules to find the root cause. Getting the DOM and CSS rules for the current version of an app is straightforward, but finding the baseline DOM and CSS rules can be hard. Developers need to access the source code management system, fire up the baseline version of their app… and once the app builds, get it into exactly the right state.

Only then can a developer grab their baseline DOM and CSS rules, in order to run their diffs.

But doing a simple diff of DOM and CSS rules will turn up many differences, many of which have nothing to do with the visual bug. Oftentimes, developers end up chasing dead-end leads, a tedious, time-consuming process. Meanwhile, if a developer is expected to release multiple times per day or week, they have less time (but more pressure) to fix the bug before the next release.
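
The noise problem is easy to demonstrate. In this illustration, which uses the 'diff' npm package (any line-based differ behaves similarly, and the markup is invented), an auto-generated ID and a build stamp show up as differences alongside the one change that actually affects rendering:

import { diffLines } from 'diff';

const baselineDom = `<div id="app-83fd1" data-build="2024-01-02">
  <button class="btn primary">Buy</button>
</div>`;

const currentDom = `<div id="app-91ac7" data-build="2024-01-03">
  <button class="btn">Buy</button>
</div>`;

for (const part of diffLines(baselineDom, currentDom)) {
  if (part.added || part.removed) {
    // Both the id/build churn and the real class change are reported as diffs;
    // only the dropped "primary" class actually affects how the page looks.
    console.log(part.added ? '+' : '-', part.value.trim());
  }
}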

What if an automated visual testing solution could show the specific DOM differences likely to cause a visual bug, along with the relevant CSS rule differences? That would be significant, since it's often CSS that is the root cause of a visual bug.

[Ed spoiler alert: it sounds like Sargent is lining up to promote and extol the virtues of his own product here — a good point is being well made here, but the reader will please note that the writer is explaining a possible (albeit effective) solution predominantly within the realms of his firm’s own product set.]

AI-assisted root cause analysis

Applitools Root Cause Analysis shows which DOM and CSS differences underpin each visual difference between a baseline screenshot and a test screenshot, and so make up the 'root cause' of the bug. Rather than digging through potentially thousands of lines of DOM and CSS to find the root cause, developers typically only need to look at a handful of lines.
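
The article does not describe Applitools' algorithm, but the narrowing idea can be sketched generically: keep only the elements that overlap the region of the visual difference and whose captured computed styles changed. Everything below (the types, likelyRootCauses and overlaps) is a hypothetical illustration:

interface Box { x: number; y: number; w: number; h: number; }
interface ElementSnapshot { selector: string; box: Box; styles: Record<string, string>; }

// Axis-aligned rectangle overlap test.
function overlaps(a: Box, b: Box): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;
}

// For each element inside the visual diff region, report which of its
// captured style properties changed between baseline and current snapshots.
function likelyRootCauses(
  diffRegion: Box,
  baseline: ElementSnapshot[],
  current: ElementSnapshot[]
): { selector: string; changed: [string, string, string][] }[] {
  const results: { selector: string; changed: [string, string, string][] }[] = [];
  for (const cur of current) {
    if (!overlaps(cur.box, diffRegion)) continue; // ignore elements outside the visual diff
    const base = baseline.find((b) => b.selector === cur.selector);
    if (!base) continue;
    const changed: [string, string, string][] = [];
    for (const [prop, value] of Object.entries(cur.styles)) {
      if (base.styles[prop] !== value) changed.push([prop, base.styles[prop], value]);
    }
    if (changed.length > 0) results.push({ selector: cur.selector, changed });
  }
  return results;
}

This kind of filtering is what reduces thousands of raw diff lines to the handful a developer actually needs to inspect.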

Relatedly, Applitools has extended its user interface version control so that it now includes DOM and CSS associated with each screenshot. This helps development teams to see not only how the visual appearance of a web application has evolved over time, but also how its underlying DOM and CSS have changed – making it easier to roll back new features.
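
As a rough illustration of that idea (an assumed structure, not Applitools' storage format), each screenshot could be versioned together with the DOM and CSS that produced it:

import fs from 'fs';

interface UiSnapshot {
  takenAt: string;        // ISO timestamp
  screenshotPath: string; // PNG captured at test time
  dom: string;            // serialized DOM, e.g. from page.content()
  cssRules: string[];     // flattened CSS rules in effect
}

// Append a snapshot to a JSON history file; an append-only history
// makes it easy to diff any two versions or roll back to an earlier one.
function saveSnapshot(historyFile: string, snap: UiSnapshot): void {
  const history: UiSnapshot[] = fs.existsSync(historyFile)
    ? JSON.parse(fs.readFileSync(historyFile, 'utf8'))
    : [];
  history.push(snap);
  fs.writeFileSync(historyFile, JSON.stringify(history, null, 2));
}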

Applitools presents DOM differences in a similar way to Google Developer Tools, and CSS differences in a similar way to GitHub, so developers don't have to spend time learning a new interface.

Visual differences are displayed in the Applitools Eyes Test Manager, where clicking on a visual difference highlights it and developers can instantly see which DOM and CSS rules relate to that change. From there, developers get a link to the exact view they are looking at, rather like a blog post's permalink, which they can add to a Jira bug report, Slack message, or email.

Sargent: line-by-line sifting is for the birds; adopt some AI-assisted root cause analysis.
