Posted by Data Stems ● Jun 28, 2022 11:45:00 AM
AI for QA and QA for AI: Control AI and reap the benefits
As Artificial Intelligence (AI) software starts to live up to its hype, it finds an extensive range of uses in computer systems. How can AI help with system testing? How do you effectively test AI systems?
Many current AI solutions use machine learning technologies, where a base set of algorithms/rules is used to “learn” by analyzing initial and ongoing data sets to “understand” new input data and “reach conclusions” about that data. For example, having assessed a large pool of horse pictures, an AI program can review a picture of a cow and decide it is not a horse. Developers use various AI tools to establish the rules and consume diverse data pools.
Quality assurance testing uses various techniques to assess applications to identify issues and increase confidence in the application. The traditional testing lifecycle is:
based on technical and functional criteria, a series of test scenarios is identified,
test scenarios are converted into test cases,
based on testing goals, test cases are assembled into test runs,
test runs are executed in a test environment,
analysis of the test results determines the success or failure of the test run and identifies issues with the test scripts and/or the application, and
the application and/or test scripts are remediated as required.
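The lifecycle above can be sketched as a few simple data structures. This is a minimal illustration, not any particular tool's API; the names (TestCase, TestRun, execute) are hypothetical.

```python
# Hypothetical sketch of the testing lifecycle: scenarios become cases,
# cases are assembled into runs, runs are executed and analyzed.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    steps: list        # ordered actions to perform against the application
    expected: str      # expected outcome for result analysis

@dataclass
class TestRun:
    goal: str
    cases: list = field(default_factory=list)
    results: dict = field(default_factory=dict)

def execute(run):
    """Execute each case and record pass/fail (stubbed here)."""
    for case in run.cases:
        # a real runner would drive the application; we stub a pass
        run.results[case.name] = "pass"
    return run

run = TestRun(goal="smoke test",
              cases=[TestCase("login", ["open app", "enter credentials"], "dashboard shown")])
execute(run)
print(run.results)   # {'login': 'pass'}
```

A real framework would replace the stubbed executor with drivers for the application under test, but the flow from cases to runs to results is the same.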
Manual testing was demanding but doable when application updates were only released a few times a year. With the recent move to Agile development techniques and DevOps approaches to application deployments, application builds and deployments now happen daily or weekly. Adoption of these approaches has created a high demand for QA automation services.
AI for QA
AI tools are well suited to assist with nearly every aspect of the QA lifecycle and are regularly integrated into testing tools and frameworks. These tools and frameworks are available as both open source and commercial solutions.
Test Scenario Identification
AI tools can add significant value to identifying what to test. AI tools can analyze system source code and identify test scenarios that can provide 100% coverage of the application code. Additionally, AI tools can analyze existing testing scenarios and identify and eliminate duplicate scenarios.
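As a rough illustration of duplicate elimination, the sketch below flags scenarios whose normalized descriptions match. Real AI tools use semantic similarity rather than string normalization, but the idea is the same; the function names are hypothetical.

```python
# Sketch: eliminate duplicate test scenarios by comparing normalized text.
def normalize(scenario: str) -> str:
    # lowercase and collapse whitespace so trivial variants compare equal
    return " ".join(scenario.lower().split())

def dedupe(scenarios):
    seen, unique = set(), []
    for s in scenarios:
        key = normalize(s)
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique

scenarios = [
    "Verify login with valid credentials",
    "verify  login with valid credentials",   # duplicate: spacing/case differ
    "Verify login lockout after 3 failures",
]
print(dedupe(scenarios))   # two unique scenarios remain
```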
Test Case Creation
Automated test cases are recorded and stored in a testing repository. Based on identified test scenarios, AI tools can generate test cases automatically; alternatively, test cases can be recorded through human interaction with the system.
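Generation from a scenario can be pictured as filling in a skeletal case from the scenario text. This toy sketch keys a hypothetical step template on a keyword; real tools infer steps from a model of the application.

```python
# Hypothetical sketch: derive a skeletal test case from a scenario description.
def generate_case(scenario: str):
    steps = ["log in", "navigate to the feature under test"]
    if "query" in scenario.lower():
        # scenario mentions querying, so add a query step
        steps.append("enter search criteria and run the query")
    steps.append("verify the expected result")
    return {"scenario": scenario, "steps": steps}

case = generate_case("Query an accounts payable invoice")
print(case["steps"])
```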
Test Run Creation
Test runs have specific objectives. AI services can assemble test cases into test runs that meet different testing objectives. For example, you may want a simple “lights-on” test that navigates the system and confirms that things are operational. This test run would include test cases for each type of technology or feature, such as navigating to a screen and querying a record, launching a report, submitting a processing job, etc.
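A lights-on run like the one described can be assembled by picking one representative case per feature, so every technology gets touched once. This is a hand-rolled sketch with hypothetical case names, not a specific tool's selection logic.

```python
# Sketch: assemble a "lights-on" run by choosing one case per feature tag.
def lights_on_run(cases):
    """Pick one representative case per feature so the run exercises
    each technology once (navigation, reporting, batch jobs, ...)."""
    run, covered = [], set()
    for case in cases:
        if case["feature"] not in covered:
            covered.add(case["feature"])
            run.append(case["name"])
    return run

cases = [
    {"name": "open_invoice_screen", "feature": "navigation"},
    {"name": "query_invoice",       "feature": "navigation"},
    {"name": "run_aging_report",    "feature": "reporting"},
    {"name": "submit_posting_job",  "feature": "batch"},
]
print(lights_on_run(cases))
# ['open_invoice_screen', 'run_aging_report', 'submit_posting_job']
```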
With Agile development approaches, an application is continuously improved. The result is frequent releases, but as a percentage of the code base, there is very little change during each deployment cycle. AI services can do a code comparison to identify new, changed, and unchanged portions of the application. As a result, regression test runs can be prepared for unchanged portions of the application, more thorough test runs can be prepared for modified code areas, and new test cases can be created or requested for new code.
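The change-driven planning described above can be sketched with a simple module-to-tests mapping: thorough runs for changed modules, lighter regression everywhere else. The module and test names below are hypothetical.

```python
# Sketch: plan test depth from a code comparison. Assumes the diff has
# already been reduced to a set of changed module names.
TEST_MAP = {
    "billing": ["test_invoice_entry", "test_invoice_post"],
    "reports": ["test_aging_report"],
    "auth":    ["test_login"],
}

def plan_runs(changed_modules):
    """Thorough runs for changed modules, regression runs for the rest."""
    thorough = [t for m in changed_modules for t in TEST_MAP.get(m, [])]
    regression = [t for m, tests in TEST_MAP.items()
                  if m not in changed_modules for t in tests]
    return {"thorough": thorough, "regression": regression}

plan = plan_runs({"billing"})
print(plan["thorough"])    # ['test_invoice_entry', 'test_invoice_post']
print(plan["regression"])  # ['test_aging_report', 'test_login']
```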
Test Run Analysis
There are various ways that AI can improve test run result analysis, at both the micro and macro levels. An AI service can conduct root-cause analysis of a test case exception. A common problem occurs when minor UI changes break test scripts. If you are entering an accounts payable invoice, you may enter the header information and then press the “Lines” button to navigate to the lines portion of the screen. For each line, you may need to enter different accounting distributions. To do that, you would press the “Dist.” button to navigate to the distributions area. However, user feedback has shown that “Dist.” is confusing, so developers changed the button to “Distrib.” If the testing script used the label to identify the correct UI element to select, the test would fail with an invalid object error message.
AI tools can maintain self-healing test scripts. In the AP invoice distribution button example above, the invalid object error is automatically corrected by identifying the appropriate substitute element in the script. Versioning of the test cases and AI change reporting let you follow script changes.
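The self-healing idea can be sketched with fuzzy label matching: when the recorded label no longer exists, find the closest surviving label and log the substitution for audit. This uses the standard library's difflib as a stand-in for the smarter matching real tools perform.

```python
# Sketch: self-healing element lookup. If the recorded button label is
# gone, fall back to the closest label in the current UI and record the
# substitution so script changes can be reviewed.
import difflib

def find_button(label, current_labels, healing_log):
    if label in current_labels:
        return label
    # the closest surviving label is the likely substitution
    match = difflib.get_close_matches(label, current_labels, n=1, cutoff=0.5)
    if match:
        healing_log.append((label, match[0]))   # keep an audit trail
        return match[0]
    raise LookupError(f"no element found for {label!r}")

log = []
ui = ["Lines", "Distrib.", "Save"]
print(find_button("Dist.", ui, log))   # 'Distrib.'
print(log)                             # [('Dist.', 'Distrib.')]
```

The audit log mirrors the versioning/change-reporting point above: a healed script should never change silently.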
QA for AI
We have discussed QA practices around testing traditional code. Now let’s look a bit at how to QA AI programs. AI solutions involve continuous learning to improve the decision-making of the algorithms. How do you assess if things are getting better and not worse?
Continuous learning means continuous testing. Therefore, ongoing regression testing confirms that baseline decisions provide the expected outcomes. Baseline test cases are regularly executed to ensure that the AI service still provides correct responses. In the above horse picture assessment service, does it still identify the cow as not being a horse? When new versions of algorithms or rules are being developed, regression testing and targeted testing also need to assess whether the new approach produces the desired results. Also, prior false-negative and false-positive test cases should periodically be re-run to determine if learning is happening.
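A baseline regression harness for a learning model can be as simple as a pinned set of decisions that must not drift. The classify() stub below stands in for the real AI service; the image identifiers and labels are invented for illustration.

```python
# Sketch: baseline regression check for a continuously learning model.
# The baseline pins decisions (including the cow example) that must
# still hold after each round of learning.
BASELINE = [
    ("horse_photo_001", "horse"),
    ("cow_photo_001", "not_horse"),   # the cow must still be rejected
]

def classify(image_id):
    # stand-in for the deployed model; returns a label
    return "not_horse" if "cow" in image_id else "horse"

def regression_check(baseline, model):
    """Return every baseline case where the model's answer has drifted."""
    return [(img, expected, model(img))
            for img, expected in baseline if model(img) != expected]

print(regression_check(BASELINE, classify))   # [] means no drift
```

Re-running the same harness over prior false-positive and false-negative cases gives a simple measure of whether learning is actually improving the model.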
While there is still a lot of hype around AI, AI tools are making significant gains in living up to their promise. Organizations are frequently changing and deploying code by adopting modern Agile and DevOps practices. Manually testing these deployments is too slow and costly. Modern QA tools and platforms have been integrating AI components to speed up and improve testing capabilities. Furthermore, new QA approaches can monitor the ongoing performance of AI services. You can benefit from AI and control AI evolution by using AI for QA and QA for AI.