Trends in programming

June 2014

Dr Geoffrey Sharman FBCS CITP, Chair, BCS Advanced Programming Specialist Group, sums up the current trends in programming and developing applications.

Overall, programming languages have been relatively stable for several decades. Almost all modern languages are derived originally from Algol and, more directly, from C.
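
As a minimal illustration of this shared lineage, the sketch below (an invented example, not drawn from any particular project) shows a simple C function; its typed declarations, block structure and counted loop reappear almost unchanged in C++, Java and C#, which is one reason programming skills transfer so readily between these languages.

/* Minimal illustrative sketch: an Algol/C-style function whose shape
 * carries over almost verbatim into the other mainstream languages. */
#include <stdio.h>

/* Sum the elements of an integer array. */
static int sum(const int values[], int count)
{
    int total = 0;
    for (int i = 0; i < count; i++) {   /* the familiar counted loop */
        total += values[i];
    }
    return total;
}

int main(void)
{
    int data[] = {1, 2, 3, 4, 5};
    printf("%d\n", sum(data, 5));       /* prints 15 */
    return 0;
}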

While development continues on existing languages such as C++ and Java, and on newer languages such as Python, Ruby and Groovy, these are recognisable as incremental improvements on an existing paradigm rather than new paradigms, and therefore exploit widely available programming skills. Notable exceptions are COBOL and FORTRAN, which are firmly established in particular industries and remain stable provided that skills are maintained.

Similarly, programming tools such as compilers, interpreters and debuggers have improved over many years. The introduction of integrated development environments (IDEs) just over a decade ago provided a significant increase in programming productivity, and productivity continues to improve year on year.

No other technologies are in sight that might offer significant productivity increases and, therefore, current attention is focussed on ‘agile’ development methodologies, which seek to offer shortened development cycles and increased confidence in the outcomes of development projects.

For the most part, these methods are based on iterative development techniques in which a subset of function can be demonstrated early in a development project and then reviewed against user needs, and enhanced or refined as the project progresses. The success of these techniques is based primarily on refining specifications rather than the development process itself. In other words, answering the question ‘am I developing the right thing?’ rather than ‘am I developing the thing right?’

In the light of the potential risks of project failure or error-prone operations, some respected authorities have suggested that all development projects should commence with a formal specification of requirements, expressed in a mathematically precise notation.
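
As a purely illustrative sketch (the banking example and the notation below are a generic precondition/postcondition style, not taken from any particular standard or project), a requirement such as ‘a withdrawal must never overdraw the account’ might be written as:

\[
\{\, 0 \le \mathit{amount} \le \mathit{balance} \,\} \quad \mathit{withdraw}(\mathit{amount}) \quad \{\, \mathit{balance}' = \mathit{balance} - \mathit{amount} \,\}
\]

Here \(\mathit{balance}'\) denotes the balance after the operation; the development team is then obliged to show that the delivered code satisfies both conditions.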

Whilst this advice has been followed in a few safety-critical industries such as air traffic control, it is ignored in almost all other industries, for simple yet valid reasons: the requirements are not initially known in sufficient detail to write such a specification, and they change within the lifetime of the development project. The result is that any large, multi-year development project that does not include a process for refining and revising requirements during its course has a significant probability of failure. As noted above, agile development methods are the primary response to this challenge.

One particular way in which requirements are changing at present is the rapid evolution of the technological environment in which business applications are expected to operate. The evolution from operating within privately owned and controlled networks supporting fixed function terminal devices to operating within the publicly accessible internet and supporting programmable end-user personal computers was already demanding.

The current evolution to support a wider range of end-user devices, including mobile devices, tablets, smart TVs and sensors, and the increasing likelihood that these devices are personally owned and/or managed, is much more demanding. There are as yet few stable standards for the silicon ‘systems on chips’, operating systems or programming environments used in these devices and, therefore, projects that commit to building applications for such environments must carry significant risk.

By contrast, environments for the server-based segments of business applications are much more stable, with essentially no new server architectures, operating systems or application execution environments emerging in the last decade, and many manufacturers are currently emphasising upward compatibility with future releases of hardware and software. Pressure for change arises from the rapid, cost-driven consolidation of data centre systems using virtualisation techniques and from the further potential for cost savings by outsourcing applications to cloud-based services.

Both of these approaches offer worthwhile gains with relatively low levels of change to existing applications. However, longer-term gains, including decreased operational costs, increased flexibility in provisioning services and improvements in the speed with which new applications can be introduced, are dependent on reducing complexity in the data centre environment.

This can only be achieved by taking a strategic view of the IT infrastructure deployed, including server architecture, storage, networks, operating systems and application execution environments, and may therefore require change to some application systems. Organisations that succeed in reducing complexity will benefit from much improved capability to respond to change in the business environment.

Grady Booch, who gave 2013’s BCS Lovelace Lecture, has, in the past, referred to object-oriented design and development methods for applications. He has been a proponent of these and they can help to simplify some coding. However, a well-known index of programming languages (see link below) shows that C (non-object-oriented) is still the most popular language for new projects - so object-oriented development methods have not, as yet, been universally adopted.

There is a common theme of simplification, but I am suggesting the need to simplify at a coarser level of granularity - in terms of hardware/software platforms and so on - where many organisations have a multiplicity of heterogeneous bases on which they run different applications, mainly for historical reasons.

Choosing which of these to maintain and enhance is sometimes a difficult decision as it involves discarding some and losing or replacing the applications that run on them, but it's necessary to avoid eventual ‘hardening of the arteries’ in which any change at all becomes difficult.

 


Comments (11)

  • 1
    Chris Webster wrote on 2nd Jul 2014

    There is a growing interest in Functional Programming, e.g. with Scala or F#, and in different approaches to application design such as Reactive Programming, which are claimed to offer a more effective solution to the problem of handling increasing volumes/velocity of data on multi-core platforms. No doubt there is a certain amount of hype around these trends, but real-world systems are already being built to take advantage of these approaches. OOP and C are not the only games in town.


  • 2
    Julian Chritchlow wrote on 3rd Jul 2014

    I find it interesting that, having been out of the industry for several years, it really hasn't moved on a lot. The major problem as I see it with OOD code is that, while it may make the initial project easier, later modifications and debugging of large OO systems can be extremely difficult. Also, in my experience of non-safety-critical software systems, you should always have some sort of top-level functional design to ensure that the end user is on board, but it shouldn't be written in stone and equally it shouldn't be changed on a whim. Without these caveats it's entirely possible that the goalposts will be continually moving and the engineers won't have a clear goal. The Nimrod AWACS is a classic example of constant change.


  • 3
    Chris Peacock wrote on 3rd Jul 2014

    " The introduction of integrated development environments (IDEs) just over a decade ago..." ????
    When was this written?


  • 4
    Andrew Sloss wrote on 3rd Jul 2014

    Have we reached a state of diminishing returns with programming languages? Shouldn’t we be concentrating on the problem definition, not the solution vehicle? Lots of wonderful new tools have appeared in recent years, e.g. automation, machine learning, speech recognition, big TVs and gesture technologies (NUIs) - isn't that the area we should be focusing on to improve the next generation of software? Move away from a syntax approach to a more visual and natural approach.


  • 5
    Peter wrote on 3rd Jul 2014

    Can't remember when I last got a call for a C job, C++ yes, C# yes, Java yes, Scala even but C not for a very long time.

    Here are some stats from job ads (a better trend indicator, imho), though even then I think people ask for C as a common denominator:
    1,768 javascript
    1,706 C#
    1,675 java
    1,455 C
    1,388 HTML
    544 C++
    251 Objective C
    65 Scala

    For me the trend is web, mobile and big data and their associated languages, which include functional languages, as noted earlier.


  • 6
    Paul Vincent wrote on 3rd Jul 2014

    I guess there are many interpretations of "advanced" as in advanced programming - I think of R as advanced as it is specialist!
    Among the biggest advances in recent years are model-driven development and SOA (e.g. BPMN and associated BPMS, and factoring applications into services for which appropriate programming languages can be used). For decision logic and business rules there is ongoing work on DMN, which can map to decision services (e.g. using a business rules engine and language) - I saw a related methodology used recently (KPI TDM) that reduced 17K business rule statements to 150 in a major UK bank. So maybe "advanced development" is a prerequisite for "advanced programming"?


  • 7
    Ian wrote on 15th Jul 2014

    The quoted index surely does not indicate "the most popular language for new projects" - it seems to be based on the number of web pages found.
    I also disagree with the description of agile techniques - the way that the code & test process supports incremental delivery should be fundamental in agile, so it's not just about product definition.
    For these and other reasons noted in the comments above, I found this article published by BCS disappointing.


  • 8
    John Rendall wrote on 26th Aug 2014

    I discovered "electronics" in 1987 and "Programming" in 1972 (FORTRAN). I learned "programming" because it seemed to me it might be useful, but real work was done using real hardware. At the bottom of it all, numbers are loaded into registers, operations are performed on them, and the results are stored in memory. Everything else is smoke, mirrors and someone's PhD project. In my career I've seen programming languages rise and fall, and methodologies come and go... remember Teamwork, SASD, Shlaer-Mellor? In the early days data typing was important; nowadays scant regard is paid, unless strange answers are output... I remember one script where I had to keep multiplying by 1.0 to keep it real. OK, so this year we have some new languages, Python is on the rise thanks to Raspberry Pi and Michael Gove, and some consultants will make lots of money. But, as I said, it all boils down to moving numbers around and operating upon them. I still use C, unless I have to use assembler... and I write my web pages in HTML and PHP!


  • 9
    Colin Mansfield FBCS wrote on 28th Aug 2014

    When I first started in programming back in the 1970s we were told that to program a computer was an art and not engineering science, as most facilities within a computer language were not fully understood: "anything can happen", and it usually did. S/w errors became a source for experimentation and an opportunity for new methodologies. Since then software engineering has become more like plumbing or rewiring a building. When it came to doing my Masters dissertation I picked the topic of vocalised programming via voice recognition s/w direct into a mainframe or server, to let the computer(s) select the best, most efficient path to program a situation, defining data requirements, entities, attributes, relationships, etc. This was slapped down by my tutors as it would mean the eradication of keyboard & mouse technologies ~ which had only just been standardised in the western world. And that's it: the main obstacle to creative programming lies within the UK universities and their inability to change from well-worn technical paths. A few months later I discovered that in the USA (via their Science mag) these techniques of direct speech to program, compose & run had already been launched in several languages ~ a bit like advanced satnav technology today: specify your requirement, let the onboard system find the best route, check your time & distance, put in rest breaks, and get there, etc.


  • 10
    Tony Leigh wrote on 28th Aug 2014

    C is still far and away the most popular language for embedded systems - particularly with 8- and 16-bit microprocessors. OO languages like C++ add too much bloat. They also hide all the low-level details, the exact opposite of what you want when you're developing embedded software.


  • 11
    Chris Reynolds wrote on 28th Aug 2014

    As a long retired Fellow of the Society I read about the comparative lack of progress during the years since my retirement, and I am not really very surprised. The conventional rule based programming approach lacks the flexibility of the human mind and has problems with the messier aspects of the real world. An analogy with the railways of Victorian times illustrates the problem. Both railway lines and programs need to be planned in advance and only when they have been built can “fare-paying customers” (goods/passengers in the case of trains, data for programs) use the systems. Both are prone to considerable disruption if faults occur in key places, and both are unable to cater for low volume non-standard “journeys” (which do not justify the up-front building costs) and unpredictable real world events. Many bigger and more successful computer systems work because people are more flexible and change their behaviour when offered a limited but very much cheaper service – moving to live in houses built near railway stations in late Victorian times, and using hole-in-the-wall banking today.
    However, there are many problems where requirements are very hard to fully pre-define and where low-volume and unpredictable requirements cannot be ignored. We still read of projects in such areas running into trouble. Medical records are a good example. They involve the active participation of many people to gather the data, which relates to the real-life problems of many people who each have an assortment of medical issues. At the same time medical advances lead to changes in our understanding of the diseases, new ways of monitoring the patients, new drugs and medical experiments, and problems such as the development of drug resistance.
    I have recently been looking back into the relevant computing history. Many of the early experimental programming languages got squeezed out in the rush to develop better conventional programming tools and one of the “lost” languages seems of particular interest in this context. CODIL (COntext Dependent Information Language) was conceived as the symbolic assembly language of a radically new human-friendly “white box” computer architecture, as an alternative to the human-unfriendly Von Neumann “black box” computer. The research was triggered by a study of the 1967 sales accounting package of one of the biggest commercial computer users, Shell Mex & BP, at a time when many of the sales contracts had been drawn up in pre-computer days. The initial research work into CODIL was financially supported by the LEO pioneers, David Caminer and John Pinkerton, but was axed when the old LEO research labs were closed and ICL was formed. A short talk on the first preliminary research was given to the Advanced Programming Group 45 years ago, and several papers were later published in the Computer Journal describing work with a simulator, as no hardware was ever built.
    A re-examination of the CODIL project papers suggests that the real reason for its failure was that the research concentrated on looking into the possibility of producing a competitive computer package and failed to do any essential unrushed blue sky research into why it worked!
    My current assessment is that the CODIL approach represented an alternative mathematical model of information processing to the “Universal Machine” approach of the conventional stored program computer. Instead of a top down rule based approach which uses numbers to represent instructions, addresses and data, within a precisely defined mathematical framework, CODIL takes a bottom up approach using recursive sets rather than numbers as the basic storage unit and makes no formal distinction between program and data. It uses associative addressing and automatically compares patterns to find and fill up “gaps” in incomplete patterns. It appears that the approach could be implemented on a simple neural network and work done 40 or more years ago may prove to be relevant to understanding how the brain works.
    Of course further examination may show that the CODIL approach is not the answer to building complex human-friendly open-ended systems but its very existence could indicate that there are other interesting research gems which were lost in the mad rat race in the early days of computing to capitalise on the market potential of this new invention.

