On 19th June 2025, the BCS London offices hosted the 2025 SIGiST conference. A hybrid event featuring in-person and online speakers split between the BCS offices’ Shirley and Spärck Jones rooms, the day covered a breadth of topics from sustainability to space and psychological safety. Georgia Smith MBCS reports.
Organised by volunteers from the BCS Software Testing Special Interest Group (SIGiST) committee, this year’s conference welcomed nearly 200 people from around the world, with in-person tickets selling out weeks ahead of the event itself. BCS SIGiST is the largest not-for-profit testing community in the UK, and it aims to deliver events that are accessible and useful for anyone whose work involves aspects of software testing, regardless of experience level or job title.
The theme of the 2025 conference was ‘Finding Calm in Chaos: Applying Testing to a Changing World’, with talks focusing on a range of topics including the growing role of AI in software testing and its future potential, particularly how to work with it rather than against it and ensure ethical deployment that maximises benefit. Other themes that emerged were dealing with the current speed of change in the testing world, and keeping on top of cyber security amongst it.
The good, the right and the fitting
Opening the day in the Spärck Jones room was Chris Briggs, Senior Software Developer at DFDS, with his talk on moral philosophy in software testing. He introduced the key concept of the ‘moral hazard’, or the idea that being unable to deliver something in the way you believe it should be delivered causes psychological harm. This idea underpinned the main theme of the talk: the importance of including ethical frameworks in the development of ideas and products. Chris gave three examples of ethical frameworks: Mill’s Utilitarianism, Kant’s Categorical Imperative, and Aristotle’s Virtue Ethics, each of which has pros and cons. Chris advised companies to develop their own ethical framework, offering three practical pointers:
- Define shared values: use organisational values to create an aspirational but grounded story, and use the ‘language of risk’ to convey the consequences of unethical decision making, such as regulatory or reputational challenges
- 'Shift left': bake ethics into the ideation process rather than retrofitting
- 'Shift right': build capacity within stakeholders and internal customers to build their ethical muscle, foster feedback and support the habit of action and reflection
Chris summed up his talk with this message: ‘Ethics are here to stay; they are formed like a habit, practised like a skill. Build yourself a Swiss army knife of philosophical tools — our future depends on it.’
Debugging sustainability — how testing makes retail greener
In the Shirley room, the next talk was from Sivaprasad Pillai, focusing on how software testing is making retail more eco-friendly. Sivaprasad highlighted some key areas driving green retail, including digital receipts, consolidated shipments and delivery optimisation.
Currently, he explained, the US alone issues 3 trillion paper receipts and generates 1.5 billion lbs of receipt waste annually, while delivery and shipping produces 19 million tonnes of CO₂ across the world’s top 100 cities. Rethinking these practices can significantly reduce environmental impact. However, he expanded, careful testing of new processes is vital, as ‘sustainability without precision can backfire, [resulting in] poor adoption, wasted resources, and greenwashing risks.’
AI powered journalism: how technology is reshaping newsrooms
The Shirley room’s third talk of the day was from Ruona Isikeh, focusing on how new technology is changing journalism. She offered her perspective that technological developments, similar to those being seen in the world of software testing, have reduced the amount of input required to build a story, and stressed the impact on accuracy.
She expressed concern about where AI is taking news, highlighting that it is not only increasing the amount of mis- and disinformation, but also affecting how we share, absorb and critique news. She commented that AI can be useful in providing information on what stories people want to hear, but that the dark side of that is news driven by trends rather than importance. She concluded that while AI can be useful for some things such as pattern recognition or topic suggestions, human editors, much like software testers, remain vital for fact checking, spotting bias and critical thinking.
Testing: a solved problem in engineering?
Callum Akehurst-Ryan, Staff Quality Engineer at ArtLogic, spoke on how to improve software testing, explaining that while many organisations think they have the problem ‘solved’, they could be doing much more.
He argued that organisations often begin and end their software testing at the ‘pass/fail’ level, analogising this with a toothpaste sandwich: pass/fail testing will tell you whether an object is or isn’t a toothpaste sandwich, but it won’t tell you if anyone wants that.
Offering an in-depth demonstration of Nicola Sedgwick’s Quality Radar, Callum urged companies to visualise gaps in their approach and to explore the art of the possible with software testing, investing in their team’s growth.
Failure is not an option
The highlight of the conference was SIGiST’s own Marketing and Communications Secretary and PhD researcher at the University of Strathclyde’s Applied Space Technology Laboratory, Beth Probert, who gave a talk on software testing and space exploration which saw the Shirley room packed to the rafters.
The presentation explored the evolution of space system testing, emphasising the transition from traditional missions to the ‘new space’ era. Beth explained the key role of catching failures early, such as when developing the James Webb telescope. She highlighted the complexity of space systems and the need for extensive, open-access test data; NASA’s ‘lesson learned’ database is a key resource which allows agencies worldwide to improve safety and efficiency.
Beth explained how the volume and complexity of the necessary testing is growing alongside increasing space activities, and highlighted how traditional testing methods suitable for one or two spacecraft missions are insufficient for the new commercialised landscape with satellite mega-constellations such as Starlink. The advent of these projects, which have many identical components, is starting a shift towards standardised but scalable testing in place of bespoke approaches. Beth also spotlighted the rise of space tourism as a critical area requiring new safety standards, especially since tourists will have minimal training.
She also highlighted the attitude change brought on by privatisation as a game changer; while government agencies are constrained by budgets and risk aversion, private companies like SpaceX can afford to embrace a ‘fail fast, learn fast’ approach which accelerates innovation and deployment.
How do you test an AI based system?
Bryan Jones MBCS, QAT Architect at 2i, gave a presentation on the intricacies of testing an AI based system, discussing its unique challenges compared to testing traditional software. He particularly highlighted the difficulties arising from a lack of clear predicted results, and from data-driven behaviour, saying: ‘the code is not the algorithm; the data defines the behaviour’.
Bryan outlined the new approaches that are now necessary, including testing data for bias and representativeness, integration testing with compounded error rates, and continuous monitoring in live environments. He argued that while traditional testing can still apply, it will certainly need to adapt, and that techniques like metamorphic testing, adversarial testing and cross-validation could help to address AI’s complexity. Bryan repeatedly emphasised that collaboration with data scientists is essential, saying ‘data scientists are our best friends’.
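Metamorphic testing, one of the techniques Bryan named, checks a *relation* between outputs for transformed inputs rather than asserting exact expected values, which is useful precisely when a model has no clear oracle. A minimal sketch of the idea (the `predict` function here is a hypothetical stand-in for an opaque trained model, not code from the talk):

```python
import random

def predict(features):
    # Stand-in for an opaque trained model: in practice we cannot
    # say what the 'correct' score for any single input should be.
    weights = [0.4, 0.3, 0.3]
    return sum(f * w for f, w in zip(features, weights))

def check_scaling_relation(trials=100, seed=0):
    """Metamorphic relation: for this (linear) model, doubling every
    input should double the score. We can test that relation across
    many random inputs without knowing any individual expected output."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.uniform(-10, 10) for _ in range(3)]
        base = predict(x)
        scaled = predict([2 * f for f in x])
        assert abs(scaled - 2 * base) < 1e-9, (x, base, scaled)
    return True
```

Real metamorphic relations depend on the system under test (for example, a search query with an extra filter should return a subset of the unfiltered results), but the pattern of asserting relations over many generated inputs is the same.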
Psychological safety: the link between speaking up, complexity and high performing teams
Jit Gosai, Principal Tester at the BBC, focused his talk on the importance of psychological safety to a functional workplace, enabling collaboration, innovation, and high performance.
Warning against the difficulties of finding the balance between ‘too nice’ and ‘too blunt’, Jit especially took care to distinguish psychological safety from ‘safe spaces’, emphasising that true psychological safety involves feeling comfortable taking interpersonal risks such as speaking up about mistakes and challenging ideas without reprimand, whereas safe spaces protect people from challenge. This is vital for software testers, who often need to deliver difficult news around things like identified bugs. He argued that leadership is critical in creating this environment, but that everyone can contribute by example and help to build trust and promote open feedback.
Ultimately, he concluded, creating psychologically safe teams allows members to step out of their comfort zones, learn from failures, and deliver better outcomes, and making that culture an explicit company value is vital.
Why and how professionals should use AI agents for specific tasks
The final presentation, from Data Mentor and AI Researcher Daniel Emakporuena, opened with the idea that AI adoption is an inevitability, within software testing and beyond, and that debates should focus on how best to work with it ethically and productively rather than futile discussions over whether to try and put the genie back in the bottle.
Daniel suggested that AI agents are at their best when used to automate tasks such as meeting transcription, and argued that while AI offers significant productivity gains and integrates smoothly into workflows, challenges including skill requirements, privacy concerns, misuse and bias still remain. He also emphasised the potential positive role of AI agents in education if they are used to guide learners rather than providing answers. Overall, he argued that AI adoption is inevitable across industries, that it is vital AI agents are built by experts but accessible to all, and that they should be deployed to handle repetitive, tedious work so that testers can focus on creative and meaningful tasks, saying that ‘the best AI integrates productivity without sacrificing creativity.’
The engaging and passionate speakers at SIGiST 2025 were matched by the enthusiasm of the crowd, leaving everyone present with real food for thought on the future of the industry in a rapidly evolving technological landscape.
You can watch all the presentations in full on YouTube.