In the first of a quarterly series, BCS drew together a group of CIOs to gather expertise and share experiences from the coalface of IT leadership. The focus of the CIO Network Launch Event was AI for CIOs.
The inaugural BCS CIO Network focused on managing the risks posed by AI and harnessing the business opportunities the technology affords.
Held under the Chatham House Rule, the CIO Network event was hosted and facilitated by:
- Gillian Arnold - President of BCS, Managing Director of Tectre
- Richard Corbridge - Director General, Chief Digital and Information Officer, CIO of the Year 2022
- Adam Leon Smith - Chair of BCS F-TAG, CTO of Dragonfly
The event, attended by around 30 expert guests, explored key issues highlighted in BCS’ paper, Helping AI Grow Up Without Pressing Pause.
Additionally, round table debates dissected the proposal for AI to be robustly tested in regulatory sandboxes, as outlined in the AI White Paper published by the Department for Science, Innovation and Technology.
Experts delved into topics such as:
- Transparency in AI development and deployment
- Customer awareness of AI products and services
- The importance of competent ethical and inclusive IT professionals in developing AI processes
Setting the scene
At a time when AI systems can make it easy to produce streams of misinformation, it is more important than ever that IT leaders stay informed. MIT Technology Review recently commented that AI-generated content could be likened to the birth of a ‘snowball of bullsh*t’.
Whilst falsehoods can be regurgitated as fact, there is also the risk that the hype around the likes of ChatGPT distracts from the aspects of AI which can do great good. The regulation and creation of trustworthy AI is where BCS should be involved.
BCS already has a long track record — the Specialist Group in AI launched in 1980 — and the Institute has always aimed to drive professionalism around more than just the nuts and bolts of the tech itself, encompassing the academic view, personal responsibility, qualifications and more. Very recently, the expert Fellows group (F-TAG), a key provider of expert BCS commentary, produced the paper ‘Helping AI Grow Up Without Pressing Pause’.
The paper looked at balancing the dangers and benefits of the technology, with an emphasis on positive societal benefits.
For CIOs, the regulatory direction is important to understand, especially in this fast-developing area. The EU’s AI Act text has now been approved and will be taken forward within the next 18 months or so, with the aim of producing subordinate legislation. By 2025, we will see technical standards too (necessarily a slower process).
In the UK, we have a pro-innovation approach to AI regulation. Though technical standards are still in consultation, the UK aims to be a world leader and, despite recent US advances, is in a good position to do so. The UK currently sits at number four on the global AI index. Interestingly, the next EU state to appear on the list, Germany, comes in at eight.
In the EU and UK, policy fluctuates regularly and will continue to do so, but in differing ways. Take the principle of explainability – this is not an EU concern but is key for the UK’s approach, which will be especially useful for the financial services sector, where the UK is traditionally strong.
AI definitions themselves are also complex and changeable – in some contexts, even flavours of the humble spreadsheet have been cited as AI applications. Whilst the early chess emulators are now not really seen as AI — despite that being the term used at the time — the definition is not nailed down. Even the use of words such as ‘autonomous’ is still problematic — an example cited is that even a simple lift could be termed autonomous.
We know that as AI systems proliferate, regulation will cover many organisations. CIOs need to know what is going on, and the implications for different verticals and different parts of their business. An area of focus should be safety critical systems. An example given here was when humans work collaboratively with robots to help lifting. When humans enter an area where robots are present, the machines either stop operations completely or at least slow down substantially. Whilst safe, this is not efficient. Judicious deployment of AI could obviate these kinds of issues.
Guardrails need to be in place, and training needs to be undertaken — and also cascaded — by those who know the intent of the system being developed.
The vexed area of technical standards is some way from being resolved. The meeting noted that even in mature regimes, such as GDPR, there is still contention over definitions. The thing to keep an eye on here is the WTO standards, which the UK will use.
Right now, regulatory sandboxes are being created. The EU is also undertaking real-world testing as a halfway house to regulation. CIOs need to be across these developing stories.
What should you do?
Discussions amongst the group raised several areas to be aware of, and to stay abreast of.
Starting to compile asset registers for AI was recommended: these would define the planned use of systems, backed by a policy that includes risk assessment. Internal responsibility needs to be defined. Training will be an ongoing requirement, as will regulatory monitoring to prepare for the ISO/IEC standards.
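By way of illustration only, a minimal AI asset register entry could be modelled as below. The field names (`system_name`, `intended_use`, `risk_level`) and the high-risk flagging rule are hypothetical assumptions for this sketch, not drawn from any BCS or ISO/IEC template.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetEntry:
    """One illustrative row in a hypothetical AI asset register."""
    system_name: str
    intended_use: str            # the planned use of the system
    owner: str                   # internal responsibility for the system
    risk_level: str              # e.g. "low", "medium", "high"
    risk_assessment_done: bool = False
    notes: list = field(default_factory=list)

register = []

def add_to_register(entry: AIAssetEntry) -> None:
    # Flag high-risk systems that have no completed risk assessment,
    # so the register doubles as an action list.
    if entry.risk_level == "high" and not entry.risk_assessment_done:
        entry.notes.append("ACTION: risk assessment required before deployment")
    register.append(entry)

add_to_register(AIAssetEntry(
    system_name="CV screening assistant",
    intended_use="Shortlist job applications for human review",
    owner="Head of HR Systems",
    risk_level="high",
))
```

Even a sketch this small captures the points raised in discussion: planned use, defined ownership, and risk assessment as a gating step.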
Making sure all is done in accordance with ISO/IEC 42001 was also put forward as being vital.
Keeping an eye on other industries is also helpful. It was noted that the CIPD has already produced an AI plan for HR departments.
When trading outside the UK, there are also considerations — a divergence from EU regulations could hamper trade.
An interesting issue raised was around autonomous vehicles: whilst we know that overall deaths will be reduced by well-deployed systems, what are the judicial issues? It was noted that in this area, and that of medical devices, there are lots of ethics studies going on because, currently, there are no clear rules. Ethics is hard to do. Even the relatively new, and needed, role of the AI ethicist is not regulated.
The most common pushback against AI has been inherent bias. A point on nomenclature was made here: bias is not quite the correct term. The issue is unwanted bias, since bias is, by definition, needed to make decisions. A suggested shorthand for unwanted bias was deviations in the data affecting a specific group of records.
Another practical course of action would be to select a partner to help on this ethical journey. As organisations will need to be certified against ISO standards, choosing well will be key.
What of the impact of regulations on SMEs? Standards are easier to apply than regulation, and cheaper. Just in the area of compute resources, can small companies afford to train massive models? This could pave the way for a change in thinking, as smaller models may be more accurate; alternatively, an SME could take existing models and re-train them for its needs.
The attendees of this event were given an example of the benefits of using well-deployed AI systems with the DWP Lighthouse framework project in explainable AI.
The DWP gets 25,000 letters a day about accessing DWP systems and benefits, mostly from vulnerable people. These need to be opened, scanned, categorised, and actioned across the country, a process which took five weeks even for the most needy. Optical character recognition was already being used, but the project added generative AI to help it ‘understand’ the concept of vulnerability, enabling the most vulnerable to be put higher on human workers’ agendas. This has had a 91% success rate since its June launch. Now vulnerability can be spotted on the day rather than five weeks down the line. And there is a human in the chain — a big step in explainability.
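The triage idea can be sketched in miniature. The toy scorer below is purely illustrative: the real Lighthouse project uses OCR plus generative AI, whereas this sketch uses an invented keyword-weight list and threshold simply to show the shape of flag-and-prioritise triage ahead of human review.

```python
# Invented signal weights for illustration only; not DWP criteria.
VULNERABILITY_SIGNALS = {
    "eviction": 3, "homeless": 3, "carer": 2,
    "hospital": 2, "disability": 2, "arrears": 1,
}

def triage_score(letter_text: str) -> int:
    """Sum the weights of vulnerability signals found in the (OCR'd) text."""
    text = letter_text.lower()
    return sum(w for term, w in VULNERABILITY_SIGNALS.items() if term in text)

def prioritise(letters: list, threshold: int = 3) -> list:
    """Return letters scoring at or above the threshold, highest first,
    so the most vulnerable cases reach a human worker the same day."""
    flagged = [l for l in letters if triage_score(l) >= threshold]
    return sorted(flagged, key=triage_score, reverse=True)
```

The point of the shape, as in the DWP system, is that the model only re-orders the queue: every flagged letter still goes to a human, which is what makes the process explainable.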
When benefits decisions go to tribunal, they need to be accompanied with all the relevant documentation and discussion records, work undertaken by 135 people in Newcastle. This is now automated — even to the extent that relevant redaction is done automatically for those who don’t have the right to see certain content.
The result, in DWP, has been lots of requests for AI projects, which the Lighthouse project looks at to evaluate ideas, frameworks, ROI and so on.
An interesting next step from DWP was posited: with £1 billion going through DWP systems every nine days, AI’s power to expose error and fraud has become the next phase for development.
By way of warning: government departments need calm, considered decision-making. Labour market issues need to be understood. Could AI even be used to analyse labour market demand, helping the unemployed back into work?
AI has huge traction as a concept, with positive and negative connotations. It’s a multi-pronged idea, not a single approach, and when approached poorly the consequences can be severe. The misapplication of AI in the benefits system in the Netherlands brought down the government; people even lost their lives.
So, enthusiasm needs to be tempered. But, just as with the industrial revolution, this will happen.
Following the presentations, the talk was opened for the CIOs to discuss with all attendees what role BCS should play going forward. Suggestions included the provision of an information hub, with a specific toolkit for CIOs and a look at the reality of actual costs. Other areas that needed information resources included legal implications and an understanding of the scale of investments — case studies would be useful.
Information on the art of the possible would also be helpful. The risk levels associated with different ideas could be analysed. BCS’ CITP standard was also raised as relevant: it can demonstrate the competence of those involved in AI systems and build trust around their ethical development, use and deployment. Those working in cyber and data, along with AI ethicists, prompt engineers and trainers, need standards: do we need a specific code of ethics that CITP could cover?
Whoever leads tech in the business needs to be trusted. If CIOs needed a standard such as CITP, it would provide a level of trust, but balance is also required so as not to add unnecessary blocks for social mobility. If all suppliers demanded CITP for government procurement, it would be adopted very quickly.
Another interesting point raised was the need for ‘devious people, those who can spot human devious approaches’ — soft skills in addition to technical skills.
How we talk about tech doesn’t help: we need to make the user benefits clear. In this part of the discussion, terminology came up again – ‘explainability’ is also a word used in different ways.
There is plenty to do: BCS can provide professionalism and qualifications – and address some of the societal implications through, for example, the Digital Divide SG. If we can help organisations balance decision-making on investment versus risks, along with furthering policy and framework understanding, the potential is huge.
This event was held under the Chatham House Rule in June 2023.