Right into the Metaverse with Digital Twin Testing

Jonathon Wright

The Metaverse blurs the boundaries between technology and the real world. The matrix of digital experiences (DX) from companies like Facebook and Microsoft introduces fully immersive digital workforce collaboration within Virtual, Augmented and Mixed Reality (VR/AR/MR) worlds. This leads to complex questions such as: how do you test new technologies like NFTs, whose digital artefacts span blockchain, oracles and crypto wallets, when the merchandise is rendered onto the 3D avatar of your digital twin self during your daily stand-up on Microsoft Teams Mesh?

How to shape your testing in the Metaverse for digital worlds like Meta and NFTs:

  • Simple automated tests for VR/AR & MR via OVR
  • Model-Based Testing (MBT) patterns from ISO 29119-8
  • Image-Based Testing (IBT) of NFTs, crypto-wallets and blockchain (Ethereum)

Takeaways

  • Digital twin testing in the physical world (MBT/IBT)
  • Observability of complex real-world oracle meta test data (i.e. hyper-baselining)
  • Digital personas, experiences and interactions (i.e. Polygamification)

Random Exploration of a Chatbot API

James Thomas

Three related coverage risks stood out when I joined a new project to build a chatbot API for Ada Health's medical symptom checker. With an infinite space of possible chats, how could we:

  • look for unintended consequences of changes as we built the API from the ground up
  • discover some of the edge case and corner case bugs that would surely exist
  • exercise the API to any significant extent before the very tight first deadline for delivery

To help mitigate these risks I built a client which would randomly walk through dialogs, unattended, and report on what it had found.

In this talk, I'll describe how I implemented that client by iteratively adding functionality that I hoped would facilitate my exploration of changes and fixes to the emerging API. I'll give examples of features that worked well (such as configuration of probabilities for different types of answers) and those that did not (such as checking for specific classes of medical outcome), explain how I built on top of the client to make a load testing tool, and think about what I'd do differently next time.
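To make the shape of such a client concrete, here is a minimal sketch of a random dialog walker with configurable answer-strategy probabilities. The endpoints, payload shapes and names are hypothetical illustrations, not Ada Health's actual API:

```typescript
// Hypothetical dialog step returned by the chatbot API.
type Step = { id: string; options: string[]; done: boolean };

// Relative weights per answer strategy; tuning these steers the exploration.
const weights: Record<string, number> = { first: 1, last: 1, random: 3 };

// Pick a strategy name with probability proportional to its weight.
function pickStrategy(): string {
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (const [name, w] of Object.entries(weights)) {
    r -= w;
    if (r <= 0) return name;
  }
  return "random";
}

// Walk one dialog to completion, unattended, logging every choice made.
async function walkDialog(baseUrl: string): Promise<string[]> {
  const log: string[] = [];
  let res = await fetch(`${baseUrl}/chat/start`, { method: "POST" });
  let step = (await res.json()) as Step;
  while (!step.done) {
    const strategy = pickStrategy();
    const answer =
      strategy === "first"
        ? step.options[0]
        : strategy === "last"
          ? step.options[step.options.length - 1]
          : step.options[Math.floor(Math.random() * step.options.length)];
    log.push(`${step.id} -> ${answer} (${strategy})`);
    res = await fetch(`${baseUrl}/chat/answer`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ stepId: step.id, answer }),
    });
    step = (await res.json()) as Step;
  }
  return log; // the transcript doubles as a report of what the walk found
}
```

Run many such walks in a loop and the same client becomes a crude load testing tool, which is roughly the progression the talk describes.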

Key takeaways:

  • automation is a tool for exploration
  • starting small, and early, and building incrementally, can be powerful
  • you can use tools to review your bulk machine-generated results

How Test Teams Can Help Reduce AI Bias and Improve Product Quality

Nicola Martin

When testing AI models, Data Scientists need to consider numerous factors when working with data to ensure they are taking steps to minimise bias. How can quality and testing teams help Data Scientists and engineers create processes and approaches that test models and reduce bias effectively?

The issue of diversity in tech is gaining more attention every day. As an example, only 19% of tech workers in the UK are women, and the figures are even lower for other underrepresented groups. As QA integrates into engineering teams and gets more involved in working with Data Science teams, it is even more important that companies address the lack of representation and how it affects overall software quality. This talk will address ways in which more diverse teams can improve software quality and reduce AI bias.

Giving ‘The User’ A Face - Accessibility Testing Using Personas

Alan Giles

Feature requirements and test scenarios are commonly written with the agent as ‘the user’ - but who is this mystical user in the real world? Efficiency says make them representative of the majority, but by doing so are we missing crucial insight into real user experience? This is especially true when we consider disabled users - there is no majority. By creating personas based on real user groups, we can ‘be’ that person when designing and testing the user experience, and empathise far more effectively. In this talk we walk through some real examples from a healthtech setting, and how they might inspire yours.

Accessibility 101

Deborah Reid

Accessibility enables people with different abilities to access and interact with digital content more easily. But how can we as testers make an impact?

I will share what accessibility means, why it’s important (including some information about those affected) and how to measure it. I will also engage the audience in activities that work through some examples, and show tooling you can use to do accessibility testing yourself.

I hope participants will take away an understanding of the importance of digital accessibility and some insight into how to get started with accessibility testing themselves.

No prior knowledge or skills required (because the tools are quite simple and easy to use).
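The abstract doesn't name the tools the talk will demonstrate, but as one illustration of how approachable automated accessibility checks can be, here is a sketch using the open-source axe-core engine via @axe-core/playwright (the tool choice and URL are assumptions, not taken from the talk):

```typescript
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

// Scan a rendered page with axe-core and fail if any violations are found.
test("home page has no detectable accessibility violations", async ({ page }) => {
  await page.goto("https://example.com"); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa"]) // restrict the scan to WCAG 2.0 A/AA rules
    .analyze();
  expect(results.violations).toEqual([]);
});
```

Automated scanners like this catch only a subset of accessibility problems, so they complement rather than replace manual testing with personas and assistive technology.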

Mental Health, Testing and Me

Andy Shaw

Andy Shaw is keen on promoting mental health both within and outside the IT industry. His talk focuses on mental health within software testing and why mental health matters while testing. Andy will also talk about his own experiences of mental health and how professionals can manage theirs.

Testing Data Science Models

Laveena Ramchandani

Data is the new gold. Everyone is excited about Data Science and Machine Learning models, but as testers we get little exposure to the world of Data Science.

In this talk, I will share my journey of discovering data science model testing and how I contributed value in a field I had never tested before, with the aim of inspiring testers to explore data science models.

Together we will look at the background of data science and the vital role data plays in models: how to train a data science model with different personas in mind, and how to bring in processes and strategies that make sure we capture the right output results and that consumers still benefit. In a nutshell, it is about making sure the model’s quality is good and that we have confidence in what we provide to consumers.

By the end of this talk, you will be able to start exploring model testing and to check that the quality of a model gives the team enough confidence and helps the business.

Takeaways:

  1. An understanding of Data Science
  2. How to test models
  3. Which existing skills you can apply in a data science team

An overview of the draft EU AI Regulation (AI Regulation)

Sam De Silva

This session will provide an overview and a “walk-through” of the AI Regulation and will cover:

  • What does the AI Regulation apply to?
  • What activities are relevant?
  • What is a “prohibited AI practice”?
  • High-risk AI systems
  • The high-risk AI ecosystem: providers and other operators
  • Conforming with conformity assessments

Software Testing Standards - why do we need them and when are they useful?

Adam Leon Smith

Software testing standards have been the subject of much debate: some people don’t feel they are useful, while others think they are crucial to productivity and interoperability. Adam will explain the pros and cons of standards and give examples of how testers can actually benefit from them.

Ensemble Testing - How an Experiment Shaped the Way We Work

Andrea Jensen

I believe in collaboration, in diversity, and in the whole-team approach. However, the fact that I believe in those values does not necessarily mean others believe in them too.

So, the question is: “How do I share my message successfully with the team?” After experiments, failures, and frustration, I found my answer: ensemble testing in a cross-functional team.

I want to share the story of our diverse ensemble testing team: how it changed our way of collaborating and testing, and how it helped us stay connected while working from home.

Diversity of thought in testing remote

Callum Akehurst-Ryan

Socially designed systems and products benefit greatly from considering different viewpoints and a diversity of people. In this talk we'll discuss how smashing monoculture in engineering can greatly help with the design, development and testing of products.

We'll also discuss barriers to bringing your authentic self into the workplace, with Callum sharing his lived experience of facing pushback against being authentic within engineering.

How data is the driving force for better QA

Nuria Manuel

Stripe estimated that engineers spend 33% of their time addressing defects, and research shows that up to 56% of defects are encountered in production.

How can we better safeguard software quality and ensure better reliability?

Predictive Quality Assurance relies on measurements that collect quality data across not just the Software Development Lifecycle but the entire Product Lifecycle, helping QA, Product and Development teams improve test quality and product requirements, and identify patterns that point to product risks earlier.

Let's shift quality to the left and right

Parveen Khan

In the current era, organisations are building applications with more complex architectures such as blockchain, distributed systems, and microservices. Maintaining these systems and ensuring they work as expected has become a challenging task. Gone are the days when testers had to rely on the UI to validate an application. Now it is all about what happens under the hood and how far you shift testing. I worked on a team where we followed DevOps and started testing as early as possible by shifting left. But that wasn’t enough in terms of the quality of the product or keeping our users happy. That’s where we changed our process and started taking smaller steps into shifting right.

We had some monitoring and logging in place, but we had no clue where, how, and what to look out for whenever there was a problem.

Join this session, where I discuss my journey with the shift-left and shift-right approaches. I will share how using them has helped our team.

Introduction to Contract Testing

Lewis Prescott

Introduce API contract testing to your test suite to:

  • Open communication between siloed microservice teams
  • Get faster feedback on API changes
  • Reduce interruptions from integration issues
  • Visualise real API usage
  • Simplify API versioning and improve backwards compatibility

In this session you will learn the fundamentals of contract testing with Pact & Pactflow. Contract testing can be applied to API or messaging services, allowing you to test any integration point in isolation. This technique is ideal for confidently delivering services within a microservice architecture, for example a web front-end communicating with a backend API.
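As a taste of the technique, here is a minimal consumer-side contract test sketch using Pact JS; the service names, endpoint and payload are illustrative, not taken from the talk:

```typescript
import { PactV3, MatchersV3 } from "@pact-foundation/pact";
import { describe, expect, it } from "@jest/globals";

// The consumer declares the contract: the request it sends and the response it expects.
const provider = new PactV3({ consumer: "WebFrontend", provider: "UserService" });

describe("GET /users/42", () => {
  it("returns the user when it exists", () => {
    provider.addInteraction({
      states: [{ description: "user 42 exists" }],
      uponReceiving: "a request for user 42",
      withRequest: { method: "GET", path: "/users/42" },
      willRespondWith: {
        status: 200,
        headers: { "Content-Type": "application/json" },
        body: MatchersV3.like({ id: 42, name: "Ada" }), // match by type, not exact value
      },
    });

    // Pact starts a mock provider; the consumer code runs against it, and the
    // recorded contract can later be verified against the real provider.
    return provider.executeTest(async (mockServer) => {
      const res = await fetch(`${mockServer.url}/users/42`);
      expect(res.status).toBe(200);
      const user = (await res.json()) as { id: number };
      expect(user.id).toBe(42);
    });
  });
});
```

The contract file this produces is what a broker such as Pactflow shares between teams, which is how the communication and versioning benefits above are realised.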

Testing High Integrity Software: Methods and Importance

Beth Clarke

Understanding what High Integrity Software is and why it will play a vital role in the future of tech:

  • Explaining the value and importance of using different testing methods for High Integrity Software
  • Sharing Capgemini Engineering’s High Integrity Software Expertise Centre’s approach to the verification and validation of this software

What are you looking at? – Modern Art and Testing in the Blink of an Eye!

John McGee

Join me on a dazzling tour that will change the way you look at testing (and art) forever. While looking at a Jackson Pollock painting in New York, it struck me that although I loved the painting, I didn’t have a clue what it or most of the other paintings in the gallery represented. I therefore did what all good testers do and looked for an oracle. Quickly concluding that I couldn’t haul an art professor around to answer all my questions, I did the next best thing and bought a book about Modern Art.

Whilst reading it, I was quickly struck by how many similarities there were between what we do as testers and how artists paint, think and promote their work.

I realised the non-cubist nail in a cubist painting was the same as the long leash heuristic keeping the observer rooted in reality.
I blew my mind with Malevich’s black square and lost endless nights of sleep contemplating it, before coming to understand that oracles are as important in art as they are in testing. I found out how America’s greatest modern artist was almost dismissed as worthless by the world’s greatest art collector, until he came to the attention of someone whose opinion mattered. I discovered that the world’s great art movements have manifestos, which led me to think about mission statements and the Agile Manifesto. I learned that artists use focusing and defocusing techniques, and linked these to the techniques we use when exploring and when working out the steps to reproduce issues.
I read about how Cezanne turned the art world on its head by questioning what he saw, just as an exploratory tester does.
I saw how the surrealists painted with ideas feeding off ideas, and linked this to our own exploration.

I came across so many connections while reading this book that it’s impossible to list them all here. It gave me a greater understanding of perspectives and of how bias impacts artists and testers alike. I found out that artists use models, tools and heuristics in the same way that we do, how we use questioning techniques, and how we simplify complex thoughts. I considered stakeholders in the art and development worlds, how the familiar can be used as an oracle, how we can overcome fears and obstacles, and how artists told stories and presented images to their audiences.

So, come with me on this journey through modern (and some classical) art; we’ll weave some testing into the mix so that you’ll never look at a painting the same way again.

Component Testing with Cypress

Jordan Powell

Cypress released its public beta back in October 2017. Not long after, it became the undisputed heavyweight champion of the automated testing world. It made end-to-end and integration testing genuinely effective, fast and fun! It had just one problem... components. Thankfully, with the release of Component Testing in Cypress 10, that problem no longer exists! In this talk, you will learn how to get started with Cypress Component Testing and some best practices for writing component tests in your applications.
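As a flavour of what such a test looks like, here is a minimal sketch for a hypothetical React <Counter /> component; the component and its markup are assumptions, and cy.mount comes from the standard Cypress 10 component-testing setup:

```tsx
import React from "react";
import Counter from "./Counter"; // hypothetical component under test

describe("<Counter />", () => {
  it("increments the count when the button is clicked", () => {
    // Mount the component in isolation -- no server, no full-page navigation.
    cy.mount(<Counter initial={0} />);
    cy.contains("button", "+").click();
    cy.contains("Count: 1").should("be.visible");
  });
});
```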

Mocking in Front-end and Back-end TypeScript Tests

Rob Richardson

This talk explores the intricacies of swapping out dependencies with fakes so we can run tests faster and assert more granular behavior. It also explores how to ensure TypeScript can validate the types of our dependencies -- whether real or fake. Attendees leave with a GitHub repo of all the code we explore, to continue learning or to fork and use in their own work.
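As a minimal illustration of the idea (the interface and names below are invented for this sketch, not taken from the talk's repo), a hand-rolled fake can be type-checked against the very same interface the production dependency implements:

```typescript
import { describe, expect, it } from "@jest/globals";

// The dependency we want to swap out, and the unit under test.
interface UserRepo {
  findById(id: number): Promise<{ id: number; name: string } | undefined>;
}

async function greet(repo: UserRepo, id: number): Promise<string> {
  const user = await repo.findById(id);
  return user ? `Hello, ${user.name}!` : "Hello, stranger!";
}

describe("greet", () => {
  it("greets a known user by name", async () => {
    // The UserRepo annotation makes TypeScript verify the fake's shape,
    // so real and fake implementations stay interchangeable.
    const calls: number[] = [];
    const fakeRepo: UserRepo = {
      async findById(id) {
        calls.push(id);
        return id === 7 ? { id: 7, name: "Ada" } : undefined;
      },
    };

    await expect(greet(fakeRepo, 7)).resolves.toBe("Hello, Ada!");
    expect(calls).toEqual([7]); // granular assertion: exactly one lookup, with id 7
  });
});
```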