The 43rd BCS Specialist Group for Artificial Intelligence conference saw delegates, keynote speakers, students and academics meet, share, debate and learn. Martin Cooper reports from Peterhouse, Cambridge.
During late 2023, the BCS Specialist Group for Artificial Intelligence (SGAI) held its annual conference at Peterhouse, Cambridge. Now in its 43rd year, the bustling conference welcomed speakers and attendees from industry and academia. The three-day event featured keynote sessions, practical workshops, academic paper presentations, awards, networking sessions, a panel discussion and a gala dinner.
Distilling the SGAI Conference, Nadia Abouayoub explained: ‘The conference is a sounding board — for research and industry — to come and present their thoughts and ideas and get feedback from their peers. We celebrate that work and encourage the new generation…You can see ideas growing here.’
Addressing the conference’s heritage, Nadia said: ‘We’re passionate about our science; we can see that it can bring a lot of good…And it’s important that the conference takes place in Cambridge with its history and place in science and scientific thought. History is important and helps us understand where we are today.’
Speaking in a packed lecture theatre, Max Bramer, SGAI chair, said: ‘With the arrival of ChatGPT and other LLMs, 2023 has been an exciting year for AI. [As a science], we have never been more popular and successful. Nor have we been more criticised and feared. In August, I read that AI is better than two radiologists at spotting breast cancer. In October, I read about how AI can help prepare biochemical weapons.’
AI is, Max explained, a technology with the potential to help people and society and, in equal measure, to harm them.
He said: ‘Many benefits — advances in medical research and access to accurate diagnosis — are obvious and welcome. Equally, many of the evils can be obvious, too. But the ambiguous ground in the middle, where benefit or harm is a function of both the technology and society, is complex.’
Summing up AI’s double-edged potential for good and bad, Max stated: ‘We can’t solve this...But we all must be ethically aware of the consequences of our AI.’ Calling the attending community to action, Max asked: ‘How do we steer AI in the right direction?’ — and this proved to be the event’s defining theme.
Many examples of good
Across the event’s keynotes, paper presentations and posters (one-page summaries of academic projects), there were undoubtedly many examples of AI being used positively by scientists determined to ‘head in the right direction’.
Opening the conference’s second day, Professor Robert Stevens (University of Manchester) delivered a session called What’s in a title? Drawing on the world of AI.
Professor Stevens explained that biology relies on comparison and a focus on similarity. If there is, say, an animal or protein about which we know a lot and another that appears to be similar, knowledge can be transferred from the known to the unknown. The problem, he explained, is that scientists have a habit of giving one thing many different names, making this transfer of knowledge challenging.
Summing up the challenge, he said: ‘There is far more biology to know about a thing than anyone can manage. Even specialists in a field don’t know everything there is to know about a certain part of biology…It’s tough to know what we know. That’s where machine learning comes in — it can help us extract that knowledge and [turn] it into a computationally useful form.’
Along with naming conventions making scientists’ lives harder, Professor Stevens also explored how scientific knowledge often remains locked inside research papers: papers can go unfound, unread and uncited.
In 2011, Professor Stevens explained, scientists experimented with a bespoke AI to make this knowledge more accessible. The project used papers’ titles as a basis for classification, producing good results. With the arrival of LLMs, the team revisited the project and found that the LLMs provided impressive results while, critically, requiring much less effort to set up than the previous hand-built AI.
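To make the general idea concrete, the short sketch below shows what a simple, hand-built title classifier of that flavour might look like in Python with scikit-learn. The titles, labels and model choice are invented for illustration and are not the Manchester team’s actual data or pipeline.

```python
# Illustrative sketch only: classifying papers by their titles, in the
# spirit of the approach described. Data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: paper titles with hand-assigned topic labels.
titles = [
    "Structural analysis of a novel membrane transport protein",
    "Gene expression changes in drought-stressed wheat",
    "A graph database for protein interaction networks",
    "Phenotypic plasticity in alpine plant populations",
]
labels = ["protein", "plant", "protein", "plant"]

# Bag-of-words features over the titles feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, labels)

# Predict the topic of an unseen title.
print(model.predict(["Crystal structure of a bacterial ion channel"]))
```

An LLM-based approach, by contrast, could be prompted to assign such labels with no feature engineering or training data of this kind, which is the reduction in set-up effort the talk described.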
In summary, machine learning seems to be a tool that can help scientists do science by assisting them in finding and organising knowledge. This is, of course, very different from asking generative LLMs to do science themselves.
Towards the stars
The final day’s keynote saw Professor Adam Amara (University of Sussex) deliver a talk called Euclid’s Dark Energy Journey — applying AI to frontier science experiments. In this case, Euclid refers to the Euclid telescope and its place in a project designed to understand dark energy and dark matter.
The space telescope will map the universe's large-scale structure. The map will span both space and time, covering billions of galaxies out to 10 billion light-years across more than a third of the sky.
With the telescope working, the challenge, scientists found, was making sense of the petabytes of data the experiment produced. Though striking, their maps contained trillions of what appeared to be pixelated blobs, each of which needed identifying, measuring and analysing.
The team needed an identification system that was fast, accurate, explainable, unbiased and usable by scientists who were experts in the stars but not in computers. Over time, the team developed a Generative Adversarial Network — a system in which two neural networks compete against each other — to unpick the captured images. This unpicking involved correcting images for how gravity bends light, how the atmosphere interferes with image collection, and the presence of noise.
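As a rough illustration of the general technique named in the talk, the sketch below trains a tiny generative adversarial network on toy data using PyTorch. The architecture, data and hyperparameters are placeholders chosen for brevity and bear no relation to the Euclid pipeline itself.

```python
# Minimal GAN sketch: two networks competing, one generating samples and
# one judging them. Toy 2-D data stands in for real image features.
import torch
from torch import nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # "Real" samples drawn from a target distribution (a stand-in for clean data).
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # The discriminator learns to tell real samples from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # The generator learns to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The adversarial back-and-forth is what lets such systems learn to produce, or clean up, realistic-looking data, which is why the approach suits tasks like correcting distorted or noisy survey images.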
Showing the conference’s importance as a crucible for ideas, the audience discussed how the Euclid project’s expertise in classifying small but important images could be transferred to the world of medicine.
Panel discussion
Andrew Lea, a member of the SGAI Committee, steered a panel of guest speakers through a debate focusing on AI’s future implications. The session explored grand questions around our relationship with AI, the pace and scale of change, unintended consequences and human freedoms.
Broad and animated, the discussion explored how, when new technologies appear, we tend to overestimate the short-term implications and underestimate the long-term consequences. This led the panel to focus closely on ethics: truly discerning right from wrong is seldom easy or clear cut.
To explore how society finds it hard to define good, the panel discussed the Trolley Problem — a thought experiment examining whether it is better to let a runaway trolley run over five people, or to save them by diverting it to run over one. Such experiments are acceptable in the confines of a lab, but what will happen when a self-driving car meets that dilemma? Is the least bad outcome always the best, or even acceptable?
If there was a conclusion to be taken away, it could be this. As practitioners, we must think carefully about what we engineer — not just about how we design and build but also about the implications and consequences of our products. We might not be able to stop bad actors from doing bad things, but as responsible members of the AI community, we can help steer AI in the right direction.