August 2010 Archives

Back to BASIC


Delwyn Holroyd, a volunteer at TNMOC, asks if there is anything new under the sun.

This week's news from TNMOC about schoolchildren learning to program on BBC Micros from 1981 has caused quite a stir. Although the machine itself must seem hopelessly archaic to them, the children discover that the skills required to program it are the same as for any modern machine. The simplicity of the BBC allows them to really understand what it is doing at a low level, something that is lost behind layers of complexity on a modern PC.

We are all familiar with the saying 'there is nothing new under the sun', but it may seem strange to apply this to computing, when every week there is a new must-have system or gadget. We have certainly witnessed incredible progress in implementation technology, resulting in storage densities and clock speeds that were almost inconceivable when the BBC Micro was first introduced into schools nearly three decades ago. However, if we look instead at the design principles used in modern systems, a different picture emerges.

Here are two examples, but there are many to choose from.

First, virtualisation, the ability to run several logical systems on one physical server, is a hot topic nowadays. The phrase 'bare-metal hypervisor' may have been coined relatively recently, but the technique was implemented on mainframes from IBM and ICL as far back as the 1960s.

The ICL operating system CME (or Concurrent Machine Environment) from the 1970s allowed customers to run two mainframe workloads on one system, and the designers had to grapple with all the same issues of shared access to peripherals and resources that challenge the modern hypervisor designer.

The rationale for CME was different of course: the idea was to allow customers to migrate gradually from their old system onto the (then) 'New Range' of machines. The benefits touted for virtualisation today are ease of management, greater flexibility, reduced cost of ownership compared with multiple servers, and so on. This pitch could have been lifted straight from the pages of a mainframe salesman's training manual!

Second, the mainframe systems of old were inherently unreliable. The number of discrete circuit boards, cabinets, ribbon cables and connectors, coupled with minimal protection against electromagnetic interference, ensured that bit flips happened with depressing regularity.

In the early days, this would generally lead to a system crash, but gradually designers evolved techniques to cope with the problem. Redundant functional units, parity bits, error detection and correction, and retrying failed operations were all in common use.
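
Parity is the simplest of these techniques. Here is a minimal sketch in Python, assuming an 8-bit word guarded by a single even-parity bit; the function names and the bit-flip scenario are purely illustrative, not drawn from any particular machine.

    def parity_bit(word: int) -> int:
        """Return the even-parity bit for an 8-bit word."""
        return bin(word & 0xFF).count("1") % 2

    def store(word: int) -> int:
        """Store a word with its parity bit kept alongside it as bit 8."""
        return (parity_bit(word) << 8) | (word & 0xFF)

    def fetch(stored: int) -> int:
        """Re-check parity on retrieval; a mismatch signals a single-bit flip."""
        word = stored & 0xFF
        if parity_bit(word) != (stored >> 8) & 1:
            raise ValueError("parity error: single-bit flip detected")
        return word

    # A single flipped bit in the stored value is caught on the next read,
    # at which point a real machine might retry the failed operation.
    good = store(0b10110010)
    corrupted = good ^ 0b00000100  # simulate one data bit flipping in store
    try:
        fetch(corrupted)
    except ValueError as err:
        print(err)

A lone parity bit can only detect an odd number of flipped bits; actually correcting them needs the extra check bits of error-correcting codes such as Hamming's.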

Advances in technology led to much higher inherent reliability, and many of these techniques were no longer necessary.

However, IC densities are now reaching the point where a transistor, the basic switching element, can no longer be relied upon to switch correctly every time. Future IC designs will need to cope with this new unreliability of the basic implementation technology, and designers are turning once again to the old techniques. I hope they remember to consult their old textbooks.

Delwyn Holroyd is the restoration team leader for the ICL 2966 at TNMOC. You can follow his progress on the TNMOC restoration project page.
