Despite Intel’s backing and a technology advantage, the next generation memory architecture from Rambus faces an uphill battle
The multimedia application has finally arrived, after a long time coming. The latest push from the Internet revolution means that we now have media-rich applications. High-resolution and 3D graphics coupled with high-fidelity sound, embedded compressed video and audio feeds, all require corresponding hardware resources in the end user's PC to deliver their full impact.
One requirement is therefore a faster and more capable processor, although this in isolation is never enough. Improvements to graphics subsystems, most notably the introduction of AGP and AGP 4x, coupled with necessary improvements to the adaptor bus, are combining to produce a demand for ever greater amounts of bandwidth. Once again, the binding constraint is the memory subsystem.
The pace of technology
This morning's memory technology can just about cope, but will not deliver enough for this afternoon's enhancements. This afternoon's memory technology will satisfy this afternoon's enhancements, but will fall short of tomorrow's requirements.
Tomorrow morning's memory technology is split into three camps: one which will satisfy the immediate requirement and perhaps some of the afternoon's, and two others which are designed to last the whole week.
Of the two long-term prospects - Rambus and Synclink - only one appears to be the likely candidate. This is due to an allegiance that has been created between Intel and Rambus.
Intel and Rambus
Intel, having grown frustrated with the rate at which memory technology falls behind the development rate of the processor and graphics subsystems, looked around the industry for a technology that could provide sufficient bandwidth headroom to give them four to five years of chipset stability. Rambus had been achieving some success with the adoption of their technology in the Nintendo range of games consoles and after researching the technology, Intel entered into a joint venture with Rambus to produce Direct Rambus (DRDRAM).
When Intel announced to the PC hardware community that future Intel chipsets would support DRDRAM, it had the immediate effect of undermining any work being done on the competing SLDRAM. Since then, Intel has consistently worked at building up momentum for Rambus. Statements to the effect that future chipsets would ONLY support DRDRAM served to further demonstrate Intel's commitment to Rambus and to suggest the inevitability of DRDRAM becoming the standard memory for the PC environment. At a juncture when competing chipset manufacturers could have attempted to undermine Intel's Rambus push by supporting alternative memory technology, many of them have signed up to support Rambus!
Early acceptance and adoption of Rambus by a number of PC OEMs seemed to suggest that it would be the new standard, although such signs of support have fluctuated during the "ramp" to Rambus production. Dell remains committed to Rambus, although only in high-end workstation products.
To combat this "straying", Intel has actively invested in a number of the major silicon manufacturers seemingly in an attempt to catalyse their development process. Micron and Samsung have both received substantial investments that will assist with the mammoth costs of development and subsequent enabling of production processes. Interestingly enough, both of these companies have also been very active in the development of some of the other "competing" technologies.
PC133 and DDR
During this two to three year period, other technologies such as PC 133, PC 166 and DDR have been mooted, developed and implemented. Many have suggested that they are just interim technologies and this may be true if the requirements for bandwidth predicted by Intel hold true. The first of these technologies, PC 133, is already shipping in production quantities and is competitively priced when compared to early costs for Rambus.
Delays in the Rambus ramp and in the release of Rambus-supporting chipsets from Intel have given some of the "interim" technologies an opportunity to gain acceptance with some PC OEMs and alternative chipset manufacturers. Examples of where these technologies are taking hold will be given later, but in the meantime a brief description of them is in order.
Today's standard technology, Synchronous DRAM (SDRAM), was a significant improvement over the previous Fast Page Mode and Extended Data Out technologies, shifting from an asynchronous design to a synchronous one and being developed to support the 66MHz and 100MHz front-side bus speeds.
DRAM chips were originally controlled by an internal clock that was completely separate from the system clock used by the CPU (asynchronous operation). This meant that the CPU had to wait for data to return from memory, as it didn't 'know' exactly when to expect it.
This is analogous to sending someone for a pizza and telling them to bring it back as soon as it is ready. You then have to wait around until the pizza returns. If the memory was particularly slow, an extra "wait state" had to be programmed in (usually through the BIOS setup) so the CPU wouldn't time out waiting for the data.
In order to make the memory transfers more efficient, designers added a signal pin which allowed the system clock to control the timings. With both the CPU and memory "synchronised" to the system clock, the CPU would now know how many clock cycles it would take for the data to be retrieved. This would be like telling someone to get you a pizza, and to meet you back on the corner in 30 minutes. Now you can spend that 30 minutes doing other things.
Current PC 100 SDRAM receives the triggering clock 100 million times per second and uses the rising edge of the clock waveform to trigger its activity. The next front-side bus speed to be implemented will be the 133MHz clock, which may be followed by 166MHz in non-Intel chipset designs, and more likely by a 200MHz FSB from Intel during the year 2000.
The effect of this increase is to drive the memory faster still, given that it is coupled to the same clock. Obviously, the memory has to be designed to cope with these faster speeds, and PC 133 memory is already available. PC 166 may become available towards the end of the year if there is sufficient demand from the clone manufacturers. A move to a 200MHz FSB would, however, necessitate a move to Double Data Rate SDRAM (DDR), which is an enhancement of SDRAM. With DDR, the FSB clock signal is used twice to trigger the memory: by using both the leading and trailing edges of the clock signal, it can generate 200MHz performance from a 100MHz clock and 266MHz from 133MHz.
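The double-data-rate arithmetic above can be sketched in a few lines of Python. This is purely illustrative: single-data-rate SDRAM transfers once per clock cycle (rising edge), while DDR transfers on both edges.

```python
# Effective transfer rate of SDR vs DDR SDRAM.
# SDR triggers on the rising edge only; DDR uses both edges of the clock.

def effective_rate_mhz(fsb_clock_mhz, edges_per_cycle):
    """Effective transfer rate in millions of transfers per second."""
    return fsb_clock_mhz * edges_per_cycle

print(effective_rate_mhz(100, 1))  # SDR on a 100MHz FSB -> 100
print(effective_rate_mhz(100, 2))  # DDR on a 100MHz FSB -> 200
print(effective_rate_mhz(133, 2))  # DDR on a 133MHz FSB -> 266
```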
The other long-term technology alluded to earlier in this paper is SLDRAM, also known as Synclink. SLDRAM is a further enhancement to SDRAM, giving bandwidths of approximately 1.5GB/s, which compares favourably with SDRAM's typical bandwidth of 800MB/s. There are a number of technical differences between SLDRAM and DRDRAM, and one very important commercial one: SLDRAM is a public-domain, royalty-free technology, whereas DRDRAM is licensed and therefore royalty-bearing. There are no prizes for guessing which technology is favoured by the silicon manufacturers.
The SDRAM variants (PC 100, PC 133, PC 166 and DDR) all use a 64-bit (4x16-bit) bus, whereas DRDRAM uses a 16-bit bus.
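The trade-off between bus width and transfer rate is simple arithmetic: peak bandwidth is transfers per second multiplied by bytes per transfer. A short, hedged sketch (the figures are the peak theoretical rates quoted in this article; real-world throughput is lower):

```python
def peak_bandwidth_mb_s(transfers_millions, bus_width_bits):
    """Peak theoretical bandwidth in MB/s:
    millions of transfers per second * bytes per transfer."""
    return transfers_millions * (bus_width_bits // 8)

# PC 100 SDRAM: wide 64-bit bus at 100 million transfers per second
print(peak_bandwidth_mb_s(100, 64))   # 800 MB/s

# Direct RDRAM: narrow 16-bit bus, but 800 million transfers per second
print(peak_bandwidth_mb_s(800, 16))   # 1600 MB/s, i.e. 1.6GB/s
```

This shows how a narrow, very fast channel can out-run a wide, slower one, which is exactly the design bet Rambus made.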
Rambus - the technology
Intended for use in next-generation system memory, Direct Rambus technology targets a broad range of systems from consumer digital video products to desktop computers to supercomputers. The Direct Rambus program goals were set to meet several challenges faced by memory system designers:
To help close the processor-memory performance gap, DRAMs need to provide a tenfold improvement in bandwidth per pin
To be affordable in mainstream markets, the DRAM costs in die size and package must be kept comparable to commodity DRAMs
To apply to a broad range of market segments, including consumer, computer, and communications
To provide a stable interface to OEMs, the interface must be able to span multiple DRAM and process generations
As part of meeting these goals, the following extensions were made to the existing Rambus interface:
Wider interface: two-byte wide data path
Higher clock frequency: 800MHz transfer rate
More efficient protocol: 95 per cent efficiency
Rambus and Intel claim that a Direct Rambus implementation can achieve three times the effective bandwidth of an SDRAM-100 memory system at comparable cost and lower power. This remains to be seen, as Rambus technology has a high cost of entry, not only for silicon manufacturers but also for module manufacturers.
The Direct Rambus Channel can connect memories to devices such as microprocessors, digital signal processors, graphics processors, and ASICs. The Channel uses a small number of very high-speed signals to carry all address, data, and control information. As it is able to transfer data at 1.6GB/s at a moderate cost, Rambus and Intel feel that the Direct Rambus Channel is ideal for high-performance/low-cost systems.
The Rambus solution eliminates the need for buffers and decoders and provides a modular and scalable system solution. As the Direct Rambus implementation utilises 16-bit wide silicon, it has single-chip granularity. That is, capacity can be increased by the addition of a single Direct Rambus chip, unlike traditional RAM implementations, which require a number of DRAMs to satisfy the bit width of the (typically) 64-bit module. Each Direct Rambus DRAM, called a Direct RDRAM, transfers data at up to 800MHz across a two-byte-wide Channel. Multiple Channels can be used in parallel to achieve even higher throughput.
So, is Rambus all good news?
A typical PC will have a single Rambus channel, and a single channel supports only three RIMM sockets. The total number of Direct Rambus devices that can be employed across those three RIMMs is 32, and monitoring this limit may be a source of confusion and misconfiguration. Every RIMM socket must be filled, if not with a RIMM module, then with a continuity RIMM designed to let the daisy-chained bus propagate through the entire Rambus channel.
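The configuration rules just described can be expressed as a small validation sketch. The socket and device limits come from the text; the list-based representation of a channel and the function name are invented here purely for illustration.

```python
def validate_rambus_channel(rimms):
    """Check a single Rambus channel against the rules described above.

    `rimms` has one entry per socket: an int giving the number of Direct
    RDRAM devices on a RIMM, or the string "continuity" for a continuity
    RIMM. (Hypothetical representation, for illustration only.)
    """
    MAX_SOCKETS, MAX_DEVICES = 3, 32
    if len(rimms) != MAX_SOCKETS:
        return False, "every socket must hold a RIMM or a continuity RIMM"
    devices = sum(r for r in rimms if r != "continuity")
    if devices > MAX_DEVICES:
        return False, f"{devices} devices exceeds the {MAX_DEVICES}-device limit"
    return True, f"{devices} devices across {MAX_SOCKETS} sockets"

print(validate_rambus_channel([16, 16, "continuity"]))  # valid: 32 devices
print(validate_rambus_channel([16, 16, 8]))             # invalid: 40 devices
print(validate_rambus_channel([16, 16]))                # invalid: empty socket
```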
Some critics of Rambus technology suggest that while Rambus offers extremely high bandwidth, its latency is worse than even standard SDRAM's. They feel this will compromise CPU performance: Rambus's bandwidth exceeds what today's CPUs can currently exploit, and high bandwidth does not necessarily translate to "fast".
It is described as 800MHz DRAM, but the bus actually runs off a 400MHz clock with a double-data-rate approach like AGP and DDR SDRAM. In order to operate at this clock speed, there is inevitably a restriction in bus width. At 16 bits wide, the bus is not wide enough to issue commands to the DRAM in the standard manner, so commands and data must be packetised and serialised between the controller and the DRAM chip. This adds delays in the path between the chipset and the DRAM, resulting in higher access latency.
This memory will cost more. Who will pay the bill? The end user will probably not want to pay a premium. The price of Rambus will be high until the technology becomes a volume product and the economies of scale kick in. In any event, there will be significant investment to recover.
The cost of Rambus memory will restrict it to the high end of the market initially; however, as production ramps and manufacturing processes are refined, we should see it cascade down the tiers of specification.
Lastly, the bandwidth capabilities of Direct Rambus will hopefully provide sufficient capacity for at least the next three to four years. It will not control the whole playing field however, as many other technologies now have a firm hold on certain sectors of the market. They will not be given up easily.
Compiled by Ajith Ram
(c) 1999, Kingston Technology Company