As big tech convinces businesses that Artificial Intelligence (AI) can manage uncomfortable ethical issues, leaders are allowing machines to make difficult decisions that need a human touch, say Professors Nada and Andrew Kakabadse of Henley Business School.

AI learning systems and big data are continuing to accelerate the transformation of businesses around the world, but boards and senior management are struggling with how these technologies can best reshape their organisations.

Consider the findings of a recent survey by consulting firm KPMG, which asked 250 director-level or higher executives at companies with over 1,000 employees about data privacy. Some 29% admitted the way their own companies collect personal information is ‘sometimes unethical’, while 33% said consumers should be concerned about how their company uses personal data.

While digital technologies enable new business models and environments to develop – whether in the form of products, services, revenue streams or efficiencies – they also present a significant source of risk.

Leveraging digitised products and systems leads to the growth of new organisational procedures, business models and commercial offers, with the financial industry in particular witnessing a profound evolution in its service delivery. Expanded user-interface connectivity and faster back-office information processing and procedures are hallmarks of this rapid growth.

Digitalisation has shifted from improving traditional job delivery towards the creation of fundamentally new business opportunities, exemplified by digital payments made via blockchain and new protocols for employee and customer engagement through remote sales and services.

FinTech enterprises and creative financial service providers now offer a mass of new financial innovations, businesses, software and other unique methods of client interaction – all under the banner of ‘digital finance.’

A digital transformation of industry

In effect, digital technology has transformed existing markets, industries and organisations, while also nurturing new ones, namely platform-based markets and decentralised autonomous organisations (DAOs).

Some forecasters estimate that by 2024, 50% of all AI investments will be quantified and linked to specific key performance indicators to measure return on investment. If this does come to pass, then boards will increasingly build return on investment disciplines into any AI development plan.

Despite this progressive march, the use of AI and blockchain technology also introduces significant ethical, cultural and social challenges which require careful stewardship. Board directors and executives need to find the right balance between the benefits and risks of what is possible.

It’s clear that digital technology, while offering significant efficiency gains, cost savings, and potentially improved customer satisfaction, can also involve technical complexities and risks linked to the use of third-party expertise and companies, and structural dilemmas such as unemployment.

Few boards and executives have stepped up to take on the responsibility of addressing these emerging sensitivities, a pattern that is likely to become increasingly noticeable as AI and related technologies continue to disrupt the traditional functions of organisations and society.

According to the United Nations, about 60 per cent of women work in the informal economy, which puts them at greater risk of falling into poverty. As it is, women are more likely to be employed in low-paying jobs, particularly in health and social care, arts, entertainment, the recreational industry, and food and accommodation services. In short, all jobs requiring human care and empathy are undervalued.

Add to this data from the Bank of England showing that the number of credit cards and loans was at a five-year high as of February 2022, while UK fuel and energy prices are at their highest level in 30 years, and it becomes clear why 1.5 million UK citizens are now facing absolute poverty.

The threat of societal unrest

This is a precursor to a risk of social unrest and a direct result of the ever-greater search for efficiencies. Leaders view technological advancement as a primary objective, while maintaining employment is secondary.

In this data-driven environment, the leader’s mind-set drifts towards feeling less responsibility for their decisions and actions. The underlying sentiment is that ethics can be easily fixed with the right technology, meaning that ethical decision-making is unwittingly handed over to algorithms that are programmed by technical experts in an inward-looking AI environment.

Today, the emphasis on board directors and executives developing technical competencies overshadows the necessity for human leadership capabilities.

If this trajectory continues, business leaders will feel less able, and less compelled, to deal with ethical and moral organisational dilemmas. In fact, the danger is that moral concerns will simply be delegated to AI machines.

The moral lie of AI?

If big technology succeeds in convincing the business community that AI can effectively deal with uncomfortable moral issues, leaders will increasingly feel willing to leave difficult decisions to a machine.

Furthermore, as business leaders become more out of touch with human and ethical concerns, stakeholders are likely to be treated in a manner which is markedly less humane.

We are already living in a world where algorithms are being used to design the new workplace. Employees are being supervised, hired and fired by machines, while their work schedules are judged and adjusted based on live calculations of their performance data.

Such an environment is being driven by the underlying assumption that humans work at the same pace and with the same level of consistency as machines.

Algorithmic management shapes and influences workers’ experience, the structure of their day and the norms of their job, while also setting prices and even determining pay by deciding a minimum or standard rate.

As a result, employees are likely to feel constrained and treated like mere robots, while experiencing pressure and stress levels that in any other circumstance would be recognised for what they are: inhuman.


A prime responsibility of the board is to set the tone, monitor and steward the culture and values of the organisation.

The board ultimately decides how to gain value from AI, either by using it as a tool to help employees to do their work better, or as a vehicle which effectively enslaves or replaces them.

Although roles may change due to automation, new roles will be created which require new skills. It is clear that the boardroom’s response to the challenges of AI will define the organisation’s future.

To ensure the best use of technology, boards should define company purpose in a manner that motivates employees and other stakeholders.

To make technology-driven decisions sustainable, the board has to move beyond a ‘cult of efficiency’ to an integration of efficiency and effectiveness, creating a responsible organisation that is capable of fulfilling its full purpose.

Boards must define values

Whatever the level of AI sophistication, the board must define the values of the organisation, rather than be defined itself by machine logic. It is crucial for the board to ensure that the values their company endorses are embedded at all levels of the organisation. This will ensure the right questions are asked, and the right kind of data is used to answer them.

Organisations need to develop the capabilities to enable data-led decision-making, AI-driven job augmentation, rather than replacement, and innovation that inspires new practices, processes, and organisational structures that can be embedded in humane ways of working.

Looking to the future, leaders at all levels will need to engage meaningfully with stakeholders, because in a technology-determined world the human factor needs to be positioned as a premium value-add for any organisation.

The key considerations for boards are:

  • What arrangements are in place for assessing and considering the impact of digital transformation on organisations, the environment and society?
  • What information helps the board understand the company’s technological, social, and environmental risk profiles?
  • What performance information is most relevant to monitoring and managing the impact of digital transformation on employees, society, and the environment?
  • What will it take to act upon the organisation’s technological-risk, climate-related, and social-impact profiles?

AI learning systems are becoming increasingly capable of making decisions, but to be effective those decisions require sensitive leadership focused on people’s needs and uncertainties. AI is, in fact, a tool that can help leaders make better decisions in an era where being more conscious of the human touch represents a distinct advantage.

The value and opportunity of AI is that it can help leaders envision a better strategy that will promote the interests of all stakeholders.

Make no mistake, choosing how to engage with and deliver strategy effectively is an unreservedly human responsibility. Consumers expect and seek out superior delivery of products and services. Drawing on AI systems without consideration of these realities will come at a damaging cost.

About the authors

Nada Kakabadse is Professor of Policy, Governance & Ethics, and Andrew Kakabadse is Professor of Governance & Leadership at Henley Business School.