The event was on 5 May at the Institute of Directors in London. Two expert speakers stimulated debate by giving short talks and then the 35 participants discussed the topic in small groups over dinner.
Each table included senior people from universities, IT suppliers, commerce and industry, and the public sector. At the end of the evening each table reported back to the entire gathering.
Computer systems are becoming increasingly complex, and in some cases there is no overall control over, or complete knowledge of, exactly how they work or how they are performing. The internet is a notable current example.
Another example now emerging is that of computing utilities similar in concept to water and electricity utilities: centralised and metered power accessed via high-speed networks.
Leading suppliers envisage centres housing tens of thousands of blade servers - fast PC-type processors with large memory and disc capacity - which are configured temporarily in sufficient numbers to meet a user's need and are then released to be linked in other ways for the next demand on the computing centre as a whole.
This will be complicated much further by technology research that is approaching exploitation. Researchers are working on self-organising networks of potentially millions of simple processors; each processor might communicate with 50-100 neighbours, contributing to part of a much bigger overall task without itself knowing the total picture.
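The kind of system described above - local interactions producing a global result with no central control - can be sketched as a small gossip simulation. This is a toy model, not any specific research system; the network size, neighbour counts and round count are invented for illustration:

```python
import random

# Illustrative gossip network: each node talks only to a handful of
# neighbours, yet the network as a whole converges on a global result
# (the average of all starting values) with no central control and no
# node ever seeing the total picture.

random.seed(0)  # deterministic run for the example

class Node:
    def __init__(self, value):
        self.value = float(value)
        self.neighbours = []  # each node knows only a few peers

def gossip_round(nodes):
    # Purely local interaction: each node averages its value with one
    # randomly chosen neighbour.
    for node in nodes:
        peer = random.choice(node.neighbours)
        node.value = peer.value = (node.value + peer.value) / 2

# 100 nodes in a ring, each also linked to three random extra peers.
nodes = [Node(i) for i in range(100)]
for i, node in enumerate(nodes):
    ring = [nodes[(i - 1) % 100], nodes[(i + 1) % 100]]
    node.neighbours = ring + random.sample(nodes, 3)

for _ in range(1000):
    gossip_round(nodes)

values = [n.value for n in nodes]
# Every node ends up close to the global average (49.5) despite only
# ever exchanging values with its immediate neighbours.
print(round(min(values), 3), round(max(values), 3))
```

No node directs the computation and no node could report the global answer from its own knowledge alone, yet the answer emerges - which is precisely why such systems resist the traditional notion of a single point of control.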
This idea of systems that organise, configure, manage and heal themselves leads to analogies with biological systems – animals and plants: these are complex adaptive systems but are made up of relatively simple components that interact with each other in relatively simple ways.
Just as with biological systems in nature, we might not know exactly how many components in a computer system are working or how many other components they are communicating with; and perhaps, as with the internet, there may be no single point of control over such a system or its behaviour.
This means traditional risk management can no longer be applied in the usual way. If thousands or millions of different devices are connected around the world, what happens if one particular device fails, or if 100 of them are subverted into launching a denial-of-service attack?
The traditional engineering approach
Such huge increases in complexity throw into question traditional engineering techniques for system development. These are based on breaking a problem into sub-problems and those into sub-problems until there are components small enough to be developed by an individual or a small team.
In traditional engineering disciplines - notably structural engineering - simple components are strong components. In computer system development this approach typically leads to a hierarchical structure with a central control point - which introduces delays.
Trying to apply this to systems in which tens of thousands of components or nodes work together will create systems that are slow, unwieldy and fragile.
A fundamental difference between traditional engineering and the biological systems approach is that engineering is about having control, whereas the biological approach is about not having control.
Formal and informal systems
There is another problematic aspect of computer systems: they are formal systems, precisely programmed, but they serve the informal world, the human or physical world. Bringing these formal and informal worlds together is a significant challenge.
Trying to formalise or engineer the human and physical world - where the system requirements originate and where the effects of the system are felt - has been neglected over the years, not least because computing people find computers and programming more interesting.
Software engineers tend to wait for someone to produce a precise specification for them to turn into code.
The requirements from the informal human and physical world are an issue in themselves. Requirements will change during system development: this must be accepted as normal, not regarded as exceptional.
In addition, if requirements are extremely complex, IT professionals need the courage when necessary to say they cannot meet them to the specified time or budget.
What is the user's real problem?
IT professionals and their clients or users also need to think about the problem they are trying to solve before starting system development. A current UK example is that of identity cards. Technologies are available to support them, but what is the underlying problem, and will ID cards actually solve it?
Similarly, people need to look at the wider boundaries of a system. ID cards can hold an individual's biometrics, but who is to say that the person who supplied the biometric a card contains was not pretending to be someone else at the time?
The technical ID card system might work but there could be shortcomings in the human processes surrounding it.
The use of the term 'engineering' in relation to software development is also an issue. In traditional engineering everything is subject to the laws of physics: engineers can calculate the strength of a beam, a bridge span or other physical component.
Software professionals do not have the equivalent of structural engineering disciplines to support this, or if these do exist they are not widely understood or practised.
What does software professional mean in any case? One characteristic of established engineering professions is that there are many specialists: people who design aircraft do not also design tunnels. In software there is some specialisation but not nearly enough.
Any overall system combines components of many types: a banking system includes both accounts management and control of an automated teller machine (ATM) network.
People cannot say they are banking systems specialists: a banking system has many subsystems, some of which, such as network control, are specialisms in their own right and common to many application areas.
Do systems have to be perfect?
However, do systems have to be perfect? Must software developers who are qualified as professionals or engineers produce only systems that never fail? If we think along the lines of biological systems, we find that they do not always work: animals get injured, plants die.
So the biological systems approach does not necessarily guarantee that systems will always work either. Perhaps we have to persuade users that some systems are immensely complicated and they are not always going to work - but in such systems it will be useful to be able to recover back to a known position.
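One way to make "recovering back to a known position" concrete is checkpointing: save a known good state before risky work and restore it on failure. A minimal sketch, with all class and function names invented for illustration:

```python
import copy

# Illustrative sketch: a system that cannot promise never to fail, but
# can always recover to a known good position by checkpointing its
# state before attempting risky work.

class CheckpointedSystem:
    def __init__(self, state):
        self.state = state
        self._checkpoint = copy.deepcopy(state)

    def commit(self):
        # Record the current state as the new known good position.
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        # Recover to the last known good position.
        self.state = copy.deepcopy(self._checkpoint)

    def attempt(self, operation):
        # Run an operation against the live state; commit on success,
        # roll back on any failure.
        try:
            operation(self.state)
            self.commit()
        except Exception:
            self.rollback()

system = CheckpointedSystem({"balance": 100})

def risky_update(state):
    state["balance"] -= 30
    raise RuntimeError("component failed mid-operation")

system.attempt(risky_update)
print(system.state)  # rolled back: {'balance': 100}

system.attempt(lambda s: s.update(balance=s["balance"] + 50))
print(system.state)  # committed: {'balance': 150}
```

The point is not this particular mechanism but the contract it illustrates: the system may fail, yet it can always return to a state it knows to be good.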
Even so, for some systems it is vitally important that they are correct to as high a degree as possible: for example, safety systems or banking systems. And there are some systems that can be 'good enough' but not perfect.
We need to understand this spectrum and adopt approaches appropriate to each point on it. So perhaps IT professionals do not always have to produce systems that never fail.
Impact on computer science courses
All this has implications for the computer science curriculum and for continuing professional development for computing professionals.
Computer science graduates are not educated in what science is. They need education in generic engineering principles, such as the decomposition of a problem into components.
They will need to be literate in related sciences as computer science becomes a field that is essentially horizontal: they need to understand the physics, biology, economics and social pressures of complex adaptive systems which might be working on a global scale.
They will need to understand how to approach and investigate problems scientifically, how to compare two scientific methods.
Alternatively, perhaps there will be different computer science curricula for different groups: one on traditional computer science and software development and one on the biological systems approach.
Perhaps there will be a hybrid course; or perhaps, in computer science and in software development alike, the two approaches will simply co-exist.
In addition students need to be taught not only about the formal world of software development but also how to deal with the informal human and physical world which requires and uses the systems.
Perhaps traditional computer science courses are too focused on the technical: this might explain why the number of women undergraduates is falling.
Whatever the future of computer science, software development can for now still go a long way with the traditional engineering approach.