There are some excellent examples of the exploitation of AI today, but the label has also been adopted as a generic description for the latest clever version of an IT product — even when its use is not evident. The term is now widely misused and misunderstood, but it does mark the start of the next significant phase in the development of our digital world. Geoff Codd FBCS FIoD reflects.

Today, the term AI is widely misused and misunderstood, and its true significance is the subject of much conjecture and speculation. However, ‘established’ intelligence, which flows naturally from existing systems and process modelling, continues to improve product effectiveness, usually under the AI label.

When AI was first coming to the fore, there was huge focus on creative areas such as writing or composition, where there are long-established rules of association and recognisable individual styles, and where ‘established’ intelligence can be identified, defined and then applied creatively. Huge, little-used databases — many of which were simply accumulated reservoirs of information and debate created for a variety of purposes and by a variety of authors — were suddenly revitalised as important stimuli for creative thinking, and new search engines were developed under the ‘generative AI’ label. Most of these accumulations of data were not built for today’s claimed AI uses, and the results they yield therefore need to be treated with great caution.

Generative AI should perhaps more accurately be called generative analytics. In recent impressive examples of its use, it has typically mined useful interpretations of data which had previously not been seen as relevant to improving performance, and it has provided new tools to improve analytical processes in previously unrecognised ways. Moreover, the process itself then encourages a focus on surprisingly useful scenarios that can stimulate further, more detailed exploration.

Certain non-creative specialist areas such as the health sector also have huge accumulations of useful data which can yield considerable, genuine added value.

A little bit of history

Creating a computer-based AI capability has been a long-term objective within the computer industry since papers on ‘thinking machines’ were published by Alan Turing and Viscount Nuffield in the 1940s and ’50s. At that time, research projections promised huge future increases in computer power, sufficient to enable achievement of that challenging objective. In today’s increasingly sophisticated world, such computer power has evolved naturally to meet ever more complex needs in areas such as space exploration and research, meteorological forecasting and defence.

As time passes, computers continue to become ever more powerful, and consequently more able to provide the considerable power and facilities needed to perform superhuman tasks — such as generating ‘artificial’ intelligence. The growth of true AI is therefore an inevitable consequence of the ever-increasing computer power and sophistication that exists today in almost every field of human endeavour.

But what is ‘artificial’ intelligence?

The first question to consider is whether the results returned by a prompt could have been achieved through human analysis — ‘established’ intelligence — or whether they can only be the result of a highly sophisticated process which dynamically establishes complex interrelationships, using methods and speeds not humanly possible, and then recommends actions and options accordingly — ‘artificial’ intelligence.

The subject of AI needs to be considered from several different angles, starting with the analytical perspective. One definition of intelligence in the Oxford English Dictionary is ‘quickness of understanding’. This can obviously depend on our familiarity with key elements of a subject, but it can also be severely inhibited by our brain’s limited ability to reach quick conclusions in complex, high-volume and volatile scenarios full of conflicting factors.

From this analytical ‘speed of processing’ perspective, the workings of computer-based ‘artificial’ intelligence are broadly similar to those of human intelligence, except that a computer can magnify analytical capacity in many dimensions at the same time, thanks to massive memory and processing capability. This speed and sophistication, combined with the inclusion of a huge range of interacting decision-making factors, produces an ‘artificial’ intelligence result that derives from an immediate understanding of all options and the full range of their interactions.

Broadening these intelligence ‘perspectives’ into areas such as forming emotional and spiritual judgments adds further, highly complex dimensions to the AI challenge, and these are already exercising many minds. This does not, however, diminish the huge impact currently being made simply by addressing the analytical perspective of the AI spectrum.

Some emerging areas of AI exploitation

Massive AI capabilities are already well established in many of today’s complex and fast-moving organisations, such as those mentioned above, where the growth of an AI capability is being driven and enabled by the coincidence of two factors: firstly, the increasingly complex and time-critical nature of key business drivers; and secondly, the presence of today’s powerful and sophisticated computer systems, which are capable of responding to those challenges by providing the immediacy of response and constant re-evaluation of options that is essential for success in such operations.

In the wider business community, the recent exploratory use of generative AI has also identified important but previously unrecognised trends within businesses, thereby switching attention away from what were mistakenly seen to be today’s key challenges: a much-needed refocus of management attention.

The partnership of AI techniques with spatial computing and augmented reality has also had a dramatic impact in various areas of our lives. Finely tuned expertise becomes available throughout a delicately balanced medical or other process which would otherwise have been impossible to contemplate. These are truly ground-breaking initiatives.

From the start of human history, mankind has evolved tools to assist in dealing with life’s challenges, with an early focus on the development of hand tools. Over the ages the emphasis shifted to how tools could be used more effectively, and the information technology revolution of the last half century has played a major role in furthering that objective in many industries, for example through numerical control (NC), ‘expert’ systems and the exploitation of artificial intelligence.

Today’s challenges

Today’s challenges are little different from yesterday’s, but they are certainly more urgent and more demanding. Recent hype surrounding AI has generated new-found general enthusiasm for such change, but the widespread growth of financial fraud and the emerging, damaging social impact of some new technologies, particularly on the young, have also generated deep and widespread concern. It is true that over the years there have been substantial investments in education and training related to specific applications of new technology, but there has also been wholly inadequate investment in identifying and countering such threats to society.

In business, the AI publicity has also generated renewed sensitivity to digitisation issues. However, in the business sector generally and in the public sector, it was striking following the Covid period how many organisations were singularly unprepared for the ramifications of remote working, a key facet of our new digital world. Many would argue that this failure was a major reason for subsequently reduced productivity in areas where the reverse should have been expected. How many other fundamentals of our emerging digital business world are also being ignored? Generative AI analysis may in fact stimulate the desperately needed redirection of attention to such business fundamentals.

Leadership and oversight

The story thus far is full of examples of notable professional leadership in innovation and the management of change. Organisations such as BCS have oiled the wheels of change through promoting high professional standards and process excellence.

However, there is now widespread and glaring evidence of increasingly devastating potential problems that are emerging from the digital transformation process.

This clearly tells us that we are already past the point at which strong oversight, driven by more than IT industry interests, became desperately needed; yet the direction of travel so far has been largely dictated by the IT industry’s promotion of its wares. During a period of constant innovation this is entirely understandable, but it is increasingly fraught with the risk of inadvertently sowing the seeds of future trauma for humanity. At some point, formal oversight must be introduced to ensure that the most appropriate balance is struck between what is possible and what is acceptable, in carefully defined terms.

An AI test

The next time you see a product with an AI label, ask yourself the following question: does it use established information derived from lessons learned through ongoing experience, or does it use information that could not have been derived by human means and required the massive calculation and memory capacity that only a computer can provide? If the latter, that is truly ‘artificial’ intelligence.

Where to now for AI?

Massive AI capabilities are already well established in space research, meteorology, the defence sector and many other activities where the move towards AI is driven by the increasing complexity and challenge of today’s decision making. The need for AI tools in such organisations will continue to grow, while the power that derives from exploiting such a sophisticated tool will also be recognised more widely across business, commerce and government. The prime targets for attention are wherever decision making falls short because decisions cannot be made promptly in critical areas where significant conflicts of interest are involved. This will build on the increases in efficiency already brought about by the digital systems revolution now in train.

Have we learned any lessons from our IT exploitation thus far?

Firstly, many lessons were learned from notable successes and painful failures on our journey to digitisation, and new standards of technical and professional behaviour and practice have evolved as a result. However, insufficient attention was given to identifying and combating potential criminal and antisocial threats to our own and our children’s wellbeing. This omission is proving extremely costly and damaging, and the misuse of AI has far greater potential to inflict major harm and misery, which must be avoided at all costs.

Secondly, an ideal route to a digital world needs to successfully merge best practices from two very different cultures: first, that of the IT change professionals, with their enthusiasm and zeal for a new world order; and second, the well-established and proven traditional working culture based on generations of experience. In the 1980s I was invited by the Butler-Cox Foundation to produce a research report for members on the damaging culture gap that then existed in most organisations between IT professionals and their business users. That culture gap severely inhibited deeper understanding of each side’s priorities and pressures, thereby adversely affecting the quality of the end product and increasing development costs. This was a multi-level issue in most organisations, from the board downwards, yet little was done to address it effectively.

How do we respond to that record?

There are already some official initiatives in place to deal with emerging AI security challenges. These could be expanded to include strategic user forums which encourage best practice but are also alert to potential malpractice. Variations of existing forums, such as the BCS panel of practitioners, could have an important part to play. Such bodies do, however, need to be actively coordinated by a monitoring authority whose objectives are not driven by the IT industry.

Secondly, a culture gap between the IT user community and the industry drivers of progress, which now include venture capitalists with their own agendas, still exists and impedes common acceptance of the best way forward. One cannot effectively manage any situation without properly understanding all of the driving forces and the potential for good and evil along the way. It could therefore be argued that a widespread programme to raise sensitivity to potential threats, as well as to identify targeted opportunities, would provide a sounder foundation for moving forward into a new world of AI. The time may therefore be right for organisations such as BCS and the BBC to explore combining resources to produce imaginative and authoritative documentaries that captivate the public mind with both the wonderful achievements to be celebrated and the disastrous potential consequences to be avoided. A means needs to be in place to prepare for what is to come, so that society can be sensibly prepared to encourage the good and discourage the bad.

We need a period of calm and well-informed direction, rather than hype and poorly informed promotion.