The landscape for testing software has never been so broad. Applications today interact with other applications through APIs, leverage legacy systems and quickly grow in complexity, and many are now starting to integrate AI-based components. What does that mean for testers?

1. Testers need to learn to drive more conversations about AI quality

Discussions about requirements and quality are often lacking in many AI fields. There are many reasons for this, and even data scientists acknowledge it is a problem. One reason these conversations matter is that AI techniques are imperfect: some level of failure is usually to be expected - but does everyone understand that?

Quality issues with AI systems go well beyond simple accuracy: bias, security, drift, self-learning behaviour and many other areas all raise interesting questions.

2. Testers need to get a better understanding of statistics

Tests run on machine learning models need thousands or millions of variations before the results are statistically significant. Test results are expressed using mathematical terminology and unfamiliar metrics, and statistical measurements are key to identifying quality issues like bias in AI systems. Testers need to improve their understanding of statistics to interpret these results properly - and, as we all know, statistics can be very misleading!
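As a rough illustration, the Python sketch below computes a normal-approximation confidence interval for a measured accuracy and a simple demographic parity gap between two groups - two of the statistical measures a tester might be asked to interpret. All figures, names and thresholds here are hypothetical.

```python
# Illustrative only: the figures, function names and groups below are hypothetical.
import math

def accuracy_confidence_interval(correct: int, total: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for an observed accuracy."""
    p = correct / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return p - margin, p + margin

def demographic_parity_gap(positive_rate_a: float, positive_rate_b: float) -> float:
    """Difference in positive-prediction rates between two groups;
    values far from zero can indicate bias."""
    return abs(positive_rate_a - positive_rate_b)

# 930 correct predictions out of 1,000 test cases still leaves real uncertainty...
low, high = accuracy_confidence_interval(930, 1000)
print(f"Accuracy 93.0%, 95% CI: {low:.3f} to {high:.3f}")

# ...and a gap in positive-prediction rates between two groups is worth investigating.
print("Demographic parity gap:", demographic_parity_gap(0.62, 0.48))
```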

3. Testers need to develop an understanding of the technical workflows used to develop complex AI and ML components

The process is quite different to conventional software development, and understanding the technology at a conceptual level will help testers work effectively with AI developers and data scientists.

One common pitfall is that the data scientist conducts extensive testing of the model in isolation, but the statistical tests are not repeated once the model is wired into the integrated data pipeline. This means plain old coding bugs creep in and invalidate the model testing results.
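A minimal sketch of the kind of regression check that catches this, assuming scikit-learn and a hypothetical accuracy threshold agreed with the data scientist - the point is simply that the statistical check is repeated end-to-end through the integrated pipeline:

```python
# A minimal sketch (scikit-learn and the threshold are assumptions for illustration).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data; in a real project this would be the project's holdout set.
X, y = make_classification(n_samples=2000, class_sep=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model wired together exactly as they will be deployed.
pipeline = Pipeline([("scale", StandardScaler()),
                     ("model", LogisticRegression(max_iter=1000))])
pipeline.fit(X_train, y_train)

MIN_ACCEPTABLE_ACCURACY = 0.90  # hypothetical threshold agreed with the data scientist

def test_accuracy_through_integrated_pipeline():
    # Repeat the data scientist's accuracy check through the full pipeline,
    # so data-preparation bugs surface before release.
    accuracy = accuracy_score(y_test, pipeline.predict(X_test))
    assert accuracy >= MIN_ACCEPTABLE_ACCURACY, (
        f"Accuracy {accuracy:.3f} through the integrated pipeline is below the "
        "offline baseline - check for data-preparation bugs."
    )
```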

4. Testers need to learn new test techniques that help with oracle problems

An oracle problem is the difficulty of specifying the expected results for a test. Many things contribute to oracle problems with AI systems, from the unpredictability of the operating environment, to the lack of specified requirements for machine learning models, to the complexity of self-learning systems.

Techniques like metamorphic testing, A/B testing and expert panels can help with oracle problems, but they are far from perfect!
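To make one of these concrete, here is a minimal metamorphic-testing sketch (scikit-learn and the chosen relation are assumptions for illustration). With no oracle for individual predictions, the test checks a relation that must hold between related runs: reordering a batch of test cases must not change any individual prediction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in model and data; any stateless model should satisfy this relation.
X, y = make_classification(n_samples=500, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

def test_predictions_invariant_under_input_order():
    rng = np.random.default_rng(42)
    order = rng.permutation(len(X))
    original = model.predict(X)
    shuffled = model.predict(X[order])
    # Each case must receive the same class whether scored in the original
    # or the reordered batch - if not, something stateful and suspect is going on.
    assert np.array_equal(original[order], shuffled)
```

In practice, relations are chosen per domain - for example, synonym substitution for text classifiers or small rotations for image classifiers - and a violated relation signals a defect even though no single 'correct' answer was ever specified.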

5. Testers need to learn about AI testing tools

AI testing tools are emerging to help with test design, test execution and test data management, as well as predicting interesting things like defect concentration. These tools are starting to look very promising, but on the flip side, they suffer from the same imperfections as other AI technologies. You need to really understand a tool's limitations before relying on it to test production software.

Find out more about AI

You can learn more about AI and how it will affect your role by studying modules of the BCS Artificial Intelligence pathway, or by reading the forthcoming BCS book on the topic: Artificial Intelligence and Software Testing.