How responsive is your business? That is the question preoccupying board-level discussions and sparking a revolution in IT infrastructure.
Enterprises are seeking to become demand-driven, able to instantly respond to fluctuations in the marketplace. The five-year plan is becoming obsolete and it is no longer acceptable to spend two years developing a new application.
Plus, the fickle customer increasingly judges your business on website speed and your ability to deliver products or services immediately.
What we are all moving towards is the vision of the Real-Time Enterprise (RTE). Processes throughout the enterprise, from ordering and provisioning to data warehousing, are becoming faster and more streamlined to cope with the mountain of data generated by compliance regulations and new technology. Only businesses that can offer this type of functionality, eradicating queuing and latency, will survive.
To this end, many enterprises are now beginning to examine the performance of their systems. Data centre resources, namely servers, networks, databases and applications, will all be expected to operate in real time.
In fact, plenty of ecommerce businesses - from airline ticket sales to online book ordering - already offer near real-time processing, which, to all intents and purposes, appears to be an instant procedure in the eyes of the customer.
Yet don't be misled. The order may be accepted, credit checked and confirmation given in real time, but order processing will often be queued. And these systems seldom operate within the confines of the legacy infrastructure typically seen in most enterprise environments.
One of the main obstacles to the RTE is the latency caused during processing between the disparate platforms in this kind of infrastructure, such as mainframe, Unix and Linux servers.
Integrating these beasts is a monumental task, so any attempt to do so is often piecemeal. This makes it nigh on impossible to obtain a comprehensive overview of the processing being conducted at any one time.
In recent years, the automation of processing has led to the emergence of more flexible cross-platform systems which overlay this spaghetti of systems. An example is the automated job scheduler, which moved beyond batch to time-based processing and, more recently, to event-based scheduling.
The latter refers to the ability of the scheduler to use events, such as an online order or a debit from an account, as triggers to fire off a job process. In fact, any change in a file, be it contents, size or state, can act as an application trigger, and JMS queues, resource thresholds or web services can serve as triggers too.
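To make the idea concrete, here is a minimal sketch of a file-change trigger using Java's standard WatchService. The watched 'orders' directory and the runOrderJob method are invented placeholders for whatever process a real scheduler would dispatch; this is an illustration of the pattern, not any vendor's product.

import java.nio.file.*;

// Event-based scheduling in miniature: a file change, rather than a
// clock, fires off a job.
public class FileEventTrigger {

    public static void main(String[] args) throws Exception {
        Path inbox = Paths.get(args.length > 0 ? args[0] : "orders");
        Files.createDirectories(inbox);

        WatchService watcher = FileSystems.getDefault().newWatchService();
        // Register for new files and content changes, the two file events
        // most commonly used as application triggers.
        inbox.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        System.out.println("Watching " + inbox.toAbsolutePath() + " ...");
        while (true) {
            WatchKey key = watcher.take();           // block until an event fires
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = inbox.resolve((Path) event.context());
                runOrderJob(changed);                // the event triggers the job
            }
            if (!key.reset()) break;                 // directory no longer accessible
        }
    }

    // Hypothetical stand-in for the job a scheduler would dispatch.
    private static void runOrderJob(Path file) {
        System.out.println("Triggered job for " + file);
    }
}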
The approach taken by the event-driven job scheduler has revealed the potential of platform-agnostic systems to transform legacy systems into a responsive infrastructure.
Such schedulers are the start of a new breed of systems that can maximise the efficiency and workflow of network resources; a breed Gartner refers to as IT workload automation brokers (ITWAB) (Gartner Research, 'Hype Cycle for IT Operations Management, 2005', Milind Govekar et al., 20 July 2005).
As well as job scheduling, the ITWAB category includes application integration and process automation tools. Together, these facilitate end-to-end automation, managing dependencies across applications and infrastructure platforms, both within and between companies. By completely automating these processes, errors introduced by manual steps can be eradicated and workflow accelerated.
Each of these ITWAB tools provides batch application integration capabilities to automate straight-through processing requirements based on events, workload and schedules. But, until recently, each has also operated within the limits of its own capabilities; hence the need for an overarching technology that can coordinate this activity.
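As a toy illustration of the dependency management such tools perform, the sketch below holds a handful of hypothetical cross-platform jobs in a graph and runs each only once everything it depends on has completed. The job names, platforms and dependencies are invented purely for illustration.

import java.util.*;

// Cross-platform dependency management in miniature: each job runs
// only after its predecessors have completed, regardless of platform.
public class DependencyRunner {

    record Job(String name, String platform, List<String> dependsOn) {}

    public static void main(String[] args) {
        List<Job> jobs = List.of(
            new Job("extract-orders",   "mainframe", List.of()),
            new Job("credit-check",     "unix",      List.of("extract-orders")),
            new Job("update-warehouse", "linux",     List.of("extract-orders")),
            new Job("confirm-order",    "unix",      List.of("credit-check", "update-warehouse")));

        Set<String> done = new HashSet<>();
        // Naive scheduling loop: repeatedly run any job whose
        // dependencies are all satisfied, until nothing is left.
        while (done.size() < jobs.size()) {
            for (Job job : jobs) {
                if (!done.contains(job.name()) && done.containsAll(job.dependsOn())) {
                    System.out.printf("Running %s on %s%n", job.name(), job.platform());
                    done.add(job.name());
                }
            }
        }
    }
}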
The theory is that, unlike today's static computing islands, which monopolise resources provisioned for peak demand, real-time enterprises will use IT service governors to dynamically assign the appropriate level of resources to meet service goals based on business policies, according to Milind Govekar, research vice president at Gartner.
The IT service governor will view these resources holistically, monitoring them across varied platforms and applications to track which are in operation, which are free and which are out of operation.
It will then communicate with the ITWAB tools, instructing them to divert processing to spare resources and, in effect, preventing platforms from lying idle. In this way, the IT service governor will enable the business to comply with strict service level agreements, increasing IT service quality and boosting business agility.
In essence, a service governor works proactively, seeking the best way to accomplish a processing task in both the immediate and the long term. It weighs the available resources, and the time and capacity needed to carry out the task, against factors such as the job's priority relative to other processes.
Moreover, the service governor monitors processing speed and can intervene if this is taking too long, instructing the operating system to increase dispatch priority or divert more resources to the process at hand. This means that, in effect, the service governor transforms the environment into a self-optimising or self-healing system, able to recover without human assistance.
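The sketch below caricatures that intervention logic under stated assumptions: a governor thread periodically compares each job's elapsed time against an invented SLA deadline and boosts the priority of any job at risk of breaching it. The Job class and thresholds are illustrative only; a real governor would instruct the operating system, not the Java thread scheduler.

import java.util.*;
import java.util.concurrent.*;

// A simplified service governor: monitor running jobs and intervene
// when one risks missing its service-level deadline.
public class ServiceGovernor {

    static class Job {
        final String name;
        final long deadlineMillis;    // SLA: must finish within this window
        final long startedAt = System.currentTimeMillis();
        final Thread worker;

        Job(String name, long deadlineMillis, Runnable work) {
            this.name = name;
            this.deadlineMillis = deadlineMillis;
            this.worker = new Thread(work, name);
            this.worker.start();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<Job> jobs = List.of(
            new Job("nightly-batch", 2_000, ServiceGovernor::busyWork),
            new Job("order-feed",    5_000, ServiceGovernor::busyWork));

        ScheduledExecutorService governor = Executors.newSingleThreadScheduledExecutor();
        governor.scheduleAtFixedRate(() -> {
            for (Job job : jobs) {
                long elapsed = System.currentTimeMillis() - job.startedAt;
                // Intervene once a job has used 80% of its SLA window.
                if (job.worker.isAlive() && elapsed > 0.8 * job.deadlineMillis
                        && job.worker.getPriority() < Thread.MAX_PRIORITY) {
                    job.worker.setPriority(Thread.MAX_PRIORITY);
                    System.out.println("Boosting " + job.name + " to avoid SLA breach");
                }
            }
        }, 500, 500, TimeUnit.MILLISECONDS);

        for (Job job : jobs) job.worker.join();    // wait for all work to finish
        governor.shutdown();
    }

    // Placeholder workload; in reality this would be a batch process.
    private static void busyWork() {
        try { Thread.sleep(3_000); } catch (InterruptedException e) { }
    }
}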
With such governors in place, entire business systems could in future be built without the need to write legacy code, implement manual processes or invoke scripts.
Of course, the prospect of a truly open architecture will not be to everyone's taste initially. It will require a change in mindset for those businesses which have traditionally 'owned' their own elements of the network and may be averse to the idea of sharing their resources.
In a typical scenario, each line of business controls its own customer touch points and its own UNIX and Wintel servers, a measure implemented to help prevent system overloads. But this way of working will have to become obsolete to enable new methods of computing.
The emerging ITWAB systems will help ease the pain of this transition by straddling the current maelstrom of disparate operating systems, servers and applications and utilising these resources on demand.
Challenging as the shift to the real-time enterprise may sound now, it will become inevitable in the face of new technologies and concepts such as virtualisation, service-oriented architectures (SOA), on-demand computing and grid computing.
Anyone planning to adopt these methods of processing will have no choice but to automate and manage the environment.
Resource allocation can no longer be department-specific but must be determined by business priorities and service level agreements (SLAs): in short, the creation of a utility-based model. Only then will the real-time enterprise become a reality.
Charles Crouchman is VP product management and marketing at Cybermation.