The route to success

In the next decade the core of the internet will become much simpler, with optical technology dominating the switching and routing. That was the view of Nick McKeown at this year's Lovelace Lecture.

Nick McKeown is famous for his algorithm inventions and is one of the world's leading experts on router design. His lecture, 'Internet Routers (Past, Present and Future)', followed his acceptance of the 2005 BCS Lovelace Medal.

He explained how the emergence of the World Wide Web changed everything for router technology, and how processing has moved from line cards to switch operation.

'In recent years there has been an explosion of complexity in routers (IPv6, multicast, ACLs, firewalls, virtual routing, etc),' says McKeown.

'This has signalled the end of the internet "end-to-end" model, created a lot of uncertainty about the future, and led to a situation where there is very little competition between those who manufacture routers.'

Predictions for the next 10-15 years

Due to the increased need for dependability and reliability throughout the whole network, routers will grow in size but become simpler, with fewer features.

In future, routers might function via load balancing over passive optics, with packets distributed randomly across the lines. A passive optical switch, which consumes no power itself, will regulate data flow, eliminating the need for arbiters (the schedulers that decide when each packet may cross the switch) and increasing performance.
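The core of the idea can be sketched in a few lines. This is a toy model under my own assumptions, not the design from the lecture: each packet simply takes a randomly chosen line, so no per-packet scheduling decision is needed and the load evens out statistically.

```python
import random

def spray_packets(packets, num_lines):
    """Distribute packets uniformly at random across parallel lines.

    A toy model of the load-balancing idea described above: no arbiter
    chooses a path; each packet takes a randomly selected line, so over
    time the load on every line evens out. (Illustrative sketch only;
    a real load-balanced switch spreads traffic over two stages and
    must also deal with packet reordering.)
    """
    lines = [[] for _ in range(num_lines)]
    for pkt in packets:
        lines[random.randrange(num_lines)].append(pkt)
    return lines
```

With 10,000 packets sprayed over 8 lines, each line ends up with close to 1,250 packets, without any central controller making decisions.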

Flow-by-flow load-balancing will enable the building of a mesh network that operates over a logical mesh of optical circuits, supports all traffic patterns, is resilient against failure, uses simple routing and costs less to run.

Presently, no network provider makes a profit from providing a public internet service, which has to be subsidised by voice (especially mobile) and VPN activities. Ultimately this lack of profit will lead to consolidation among network providers, which will inevitably converge into one monopoly provider.

Potentially, optical dynamic circuit switches will be used. These are well suited to optics: they are simple, offer high capacity per unit volume and per watt, cost little, and have no queues and no delay variation.

However, they are unfashionable as they originate from old technologies. Nick concluded by adding: 'If they do make a return they'll probably put router developers, me included, out of work!'

The background

Internet architecture consists of computers connected by links and routers. These need to be simple and fast, and it is often the routers that limit performance. For example, when an image is downloaded it crosses the network as packets of data, each of which the routers along the path must process.

Routers have three primary tasks:

  • Look up each packet's destination address;
  • Check the packet's age (its time-to-live) and drop it if too old;
  • Update the header so the packet is correctly received by the next hop.
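The three steps above can be sketched as a single forwarding decision. This is an illustrative simplification under my own assumptions (exact-match on the first address byte instead of real longest-prefix matching, and a dict as the routing table), not a real router API:

```python
def forward(packet, routing_table):
    """One toy forwarding decision covering the three router tasks.

    `packet` is a dict with 'dst' and 'ttl' fields; `routing_table`
    maps address prefixes to next hops. Lookup is simplified to the
    first byte of the address for illustration.
    """
    # 1. Look up the destination address.
    prefix = packet['dst'].split('.')[0]
    next_hop = routing_table.get(prefix)
    if next_hop is None:
        return None          # no route: drop the packet
    # 2. Check the packet's age and drop it if too old.
    if packet['ttl'] <= 1:
        return None          # TTL expired: drop the packet
    # 3. Update the header for the next hop.
    packet['ttl'] -= 1       # real IP also updates the header checksum
    return next_hop
```

A core router must make a decision like this for every packet, which is why each step has at some point been a bottleneck.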

Routers require buffers to hold data at times of congestion. These allow packets to queue until they can be dealt with further down the line. However, no one can predict when congestion will occur, and such uncertainty can cause problems.
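A minimal model of such a buffer, assuming the simplest possible policy (first-in-first-out with 'drop-tail' — discard new arrivals when full; real routers may use more sophisticated schemes):

```python
from collections import deque

class DropTailBuffer:
    """Toy router buffer: packets queue during congestion and are
    dropped when the buffer is full (illustrative drop-tail policy)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.queue) >= self.capacity:
            self.dropped += 1    # congestion: no room, drop the packet
            return False
        self.queue.append(pkt)
        return True

    def dequeue(self):
        # Serve packets in arrival (FIFO) order.
        return self.queue.popleft() if self.queue else None
```

The hard engineering question is not this logic but how large `capacity` should be, and how to build a memory fast enough to keep up with the line, as the rest of the lecture explains.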

FedEx processes 300,000 packages an hour. Routers process considerably more than this: about 100 million packets a second in the core of the internet. Bottlenecks can occur at each of the routing stages. Address lookup problems have now mostly been solved, but buffering remains problematic due to memory speed limitations.

Instead of routers trying to run at the same speed as the rest of the system, arbiters were used to allow easier and more controlled transfer of data packets.

However, arbiters themselves became another bottleneck as packets stacked up behind them. To improve the situation, routers were designed with a switch in the middle to help separate data into more manageable units.

The subsequent generation of routers has involved a multi-rack approach with optics inside, e.g. Cisco's CRS-1.

The biggest problem now is the power consumption within a single chassis. A typical unit, approximately the volume of a man, will use over 10 kW and generates excessive heat.

Another major consideration in the design of routers is making fast queues. Problems arise when packets arrive faster than the DRAM (Dynamic Random Access Memory) can handle.

SRAM (Static Random Access Memory) is much faster but fifty times more expensive. Each year the network industry buys more than $500M of SRAM in order to keep up with line speeds.

The internet itself is a relatively small market for manufacturers of DRAM, so memory is created for large-scale storage, not for speed of access. DRAM is designed to maximise the number of bytes, not access time, which has remained roughly constant. There is, therefore, a need for manufacturers to build a new type of memory.

The computer industry has its own solution: caching in SRAM and storing in DRAM. However, networking cannot tolerate a miss rate, so it will not follow suit. The networking industry is therefore looking for a memory that is as fast as SRAM, as big as DRAM and never misses.
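One direction explored in the packet-buffer research literature (my attribution — the lecture summary above does not describe it in detail) is a hybrid: keep the head and tail of each queue in a small, fast SRAM and the bulk of it in large, slow DRAM, moving packets between them in batches. A toy sketch, with sizes and batching policy as illustrative assumptions rather than any published design:

```python
from collections import deque

class HybridBuffer:
    """Toy hybrid packet buffer: the head of the queue lives in fast
    (small) SRAM, the rest in slow (large) DRAM, refilled in batches
    so the DRAM is only touched at a rate it can sustain."""

    def __init__(self, sram_slots=4):
        self.head = deque()      # SRAM: served at line rate
        self.bulk = deque()      # DRAM: large, slow, batch access
        self.sram_slots = sram_slots

    def write(self, pkt):
        # New arrivals go to DRAM unless the SRAM head has room
        # and nothing is queued behind it (preserves FIFO order).
        if len(self.head) < self.sram_slots and not self.bulk:
            self.head.append(pkt)
        else:
            self.bulk.append(pkt)

    def read(self):
        # Refill the SRAM head from DRAM when it runs low.
        while self.bulk and len(self.head) < self.sram_slots:
            self.head.append(self.bulk.popleft())
        return self.head.popleft() if self.head else None
```

Because reads are always served from SRAM, the queue never 'misses' the way a processor cache can — at the cost of careful bookkeeping to keep the head topped up in time.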

Routers will typically have 40,000-50,000 flows going through them at any given time, hence it is important that flow is unhindered and buffers can cope with growing queues.

Memory technology is not keeping up well enough to maximise the efficiency of such a system. Routers have on average a one-million-packet buffering capacity, but with improved rates of flow this could in future drop to as few as 20 packets.
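The gap between the million-packet figure and the 20-packet figure reflects buffer-sizing research (my attribution — the lecture summary does not cite it): the classical rule of thumb sizes a buffer at bandwidth × round-trip time, while later results argue for dividing that by the square root of the number of flows, and for buffers of just tens of packets under further assumptions. A rough calculation with illustrative figures of my own choosing:

```python
import math

# Illustrative figures (assumptions, not from the lecture):
link_rate_bps = 10e9      # 10 Gb/s line
rtt_s = 0.25              # 250 ms round-trip time
num_flows = 50_000        # ~the 40,000-50,000 flows mentioned above
pkt_bytes = 1_000         # nominal packet size

# Classical rule of thumb: buffer = bandwidth x RTT.
rule_of_thumb = link_rate_bps * rtt_s / 8 / pkt_bytes   # in packets

# 'Small buffers' result: divide by sqrt(number of flows).
small_buffer = rule_of_thumb / math.sqrt(num_flows)

print(f"rule of thumb: {rule_of_thumb:,.0f} packets")
print(f"with sqrt(N):  {small_buffer:,.0f} packets")
```

On these assumed figures the rule of thumb demands hundreds of thousands of packets of buffering, while the square-root result needs only a few thousand — which is why the prospect of 20-packet buffers would so radically simplify router memory design.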

In the past, link speeds have closely tracked Moore's Law (router capacity has doubled every 12 months) to keep up with user demand, which has doubled year on year. Router speed has been limited by memory speed, hindered by the fact that DRAM is no faster than it was ten years ago. If the industry is lucky, line rates will not exceed Moore's Law.

At the moment the four current network operators have grossly over-provisioned (by a factor of 16); it is common to over-provision by 50 per cent to protect against two per cent failures. This is because no one can predict the traffic flow.

Nick McKeown is Associate Professor of Electrical Engineering and Computer Science at Stanford University, USA.

The BCS Lovelace Medal is named after Ada Lovelace, the mathematician and scientist who worked with the computer pioneer Charles Babbage.

The Medal is presented to individuals who have made a contribution which is of major significance in the advancement of Information Systems or which adds significantly to the understanding of Information Systems.