Feature

White Paper: Facts about memory

The memory architecture inside desktop PCs is set to undergo a sea change. But it's too early to tell which technology will prevail

Between now and the year 2000, the memory landscape will undergo numerous changes. Faster microprocessors and more advanced software applications are driving the need for speed. In fact, over the past 17 years the speed of microprocessors has increased roughly 60-fold (from 5MHz to 300MHz). To keep pace with this race, memory technology is gaining horsepower in the form of faster, next-generation memory architectures.

To gain a practical overview of these evolving DRAM technologies, some brief definitions and comparisons will help to map the rapidly changing memory landscape. Fast-page mode was the most common type of DRAM from 1991 through 1995. Over the next three to four years, however, at least five DRAM technologies, rather than one, will significantly influence the PC industry.

EDO (Extended Data Out) DRAM technology shortens the read cycle between memory and the CPU. On computer systems designed to support it, EDO memory allows a CPU to access memory 10 to 20 per cent faster than comparable fast-page mode chips.

Synchronous DRAM uses a clock to synchronise signal input and output on a memory chip. The clock is co-ordinated with the CPU clock so the timing of the memory chips and the timing of the CPU are "in synch". Synchronous DRAM saves time in executing commands and transmitting data, thereby increasing the overall performance of the computer. In pure speed tests, SDRAM is about 50 per cent faster than EDO memory, with actual performance gains of around 25 per cent.

Double-data rate SDRAM is a faster version of SDRAM that is able to read data on both the rising and the falling edge of the system clock, thus doubling the data rate of the memory chip. In music, this would be similar to playing a note on both the upbeat and the downbeat.
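
To put a number on that doubling, a back-of-the-envelope calculation in C helps. The 100MHz clock and 64-bit bus width below are our illustrative assumptions, not figures from this paper:

#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions (not from this paper): 100MHz memory
       clock and a 64-bit, i.e. 8-byte, data bus. */
    double clock_mhz = 100.0;
    double bus_bytes = 8.0;

    /* MHz multiplied by bytes per transfer gives MB/s directly. */
    double sdram_peak = clock_mhz * bus_bytes;        /* one transfer per clock  */
    double ddr_peak   = 2.0 * clock_mhz * bus_bytes;  /* both edges of the clock */

    printf("SDRAM peak: %4.0f MB/s\n", sdram_peak);   /* prints  800 MB/s */
    printf("DDR peak:   %4.0f MB/s\n", ddr_peak);     /* prints 1600 MB/s */
    return 0;
}

On the same bus, DDR's two transfers per clock cycle double the peak figure; real-world gains are smaller because not every cycle carries data.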

RDRAM is a unique design developed by a company called Rambus, Inc. It is extremely fast and uses a narrow, high-bandwidth "channel" to transmit data at speeds about ten times faster than standard DRAM. Two other flavours of RDRAM are also soon to arrive: Concurrent and Direct RDRAM. Concurrent RDRAM is based on the fundamental design of standard RDRAM, enhanced to increase speed and performance. Direct RDRAM builds on the same design and, through additional enhancements, will be faster still.

Neither RDRAM nor Concurrent RDRAM is currently utilised for PC main memory; both are instead targeted as memory for various consumer, workstation, and PC multimedia applications such as the Nintendo 64 video game system and Creative Labs PC add-in cards. In late 1996, Rambus signed a development and licensing agreement with Intel under which Intel's PC chipsets will support Rambus memory from late 1999. As a result, Direct RDRAM has the potential to become the prevalent technology for PC main memory from 1999 onwards.

SLDRAM is a "joint effort" DRAM that may be Rambus's closest competitor on speed. Development is co-ordinated through a consortium of 12 DRAM manufacturers and system companies. SLDRAM is an enhanced line extension of the SDRAM architecture, expanding the current four-bank design to 16 banks. It is currently in development and is scheduled for production in 1999.
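
The point of more banks is that consecutive accesses are less likely to queue behind the same busy bank. The short C sketch below illustrates the idea; the modulo interleaving scheme and the strided address stream are our illustrative assumptions, not details of the SLDRAM specification:

#include <stdio.h>

int main(void)
{
    /* Illustrative interleaving: the bank is the address modulo the
       bank count. The strided address stream is our own example. */
    int addrs[] = {0, 4, 8, 12, 16};
    int n = sizeof addrs / sizeof addrs[0];

    for (int i = 0; i < n; i++) {
        int bank4  = addrs[i] % 4;    /* four banks: every access collides  */
        int bank16 = addrs[i] % 16;   /* 16 banks: accesses are spread out  */
        printf("addr %2d -> bank %d of 4, bank %2d of 16\n",
               addrs[i], bank4, bank16);
    }
    return 0;
}

In this example every access lands on bank 0 when there are only four banks, so each must wait for the previous one to finish; with 16 banks the same stream is spread across four different banks.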

In the final analysis, predicting where the "DRAM dust" will settle is difficult. All of the top 10 DRAM manufacturers, including Samsung, Toshiba, and Hitachi, are developing Direct RDRAM, yet they are also continuing aggressive R&D into alternative next-generation DRAM technologies such as DDR and SLDRAM.

None of these types of memory should be confused with cache memory. Cache memory is a special high-speed memory designed to supply the processor with the most frequently requested instructions and data. Instructions and data located in cache memory can be accessed many times faster than those located in main memory. The more instructions and data the processor can access directly from cache memory, the faster the computer runs as a whole.

In general, there are two levels of cache memory: internal cache, which is typically located inside the CPU chip, and external cache, which is normally located on the system board. Internal cache is sometimes referred to as primary cache or level 1 (L1) cache. External cache is sometimes referred to as secondary cache or level 2 (L2) cache. In most desktop PCs, the internal cache ranges from 1KB to 32KB in size. In contrast, external cache configurations are usually much larger, ranging from 64KB to 1MB.

When we talk about upgrading cache, we are most often talking about external cache. Upgrading external cache may involve plugging individual cache components into sockets located on the system board or plugging a cache module into a dedicated cache expansion socket. In most cases, upgrading internal cache would require the replacement of the CPU.

An interesting way to look at cache is to imagine yourself at a party whose host is required to serve you the exact beverage you request. The beverages are the data, the corner store is main memory, and the refrigerator is cache memory. If someone at the party requests a diet Pepsi, the host goes to the refrigerator first, to see if it is there. If the diet Pepsi is in the refrigerator, the requester can have it right away. If it is not, the host has to run to the corner store to get it, which may take considerably longer. The host can save a lot of time by buying a six-pack at the store, which ensures that, most of the time, the next request can be fulfilled directly from the refrigerator.

In the same way, when the cache controller retrieves an instruction from main memory, it also takes the next several instructions back to cache with it. This increases the chances that the next instruction requested by the CPU is already in cache. (When a request from the CPU is found in cache, this is referred to as a "cache hit".)
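
This prefetching behaviour can be sketched in a few lines of C. The toy simulation below holds just one cache line at a time; the eight-word line size and the purely sequential access pattern are our illustrative assumptions:

#include <stdio.h>

#define LINE_WORDS 8   /* words fetched per cache line: our assumption */

int main(void)
{
    int hits = 0, misses = 0;
    long cached_line = -1;   /* index of the single line currently held */

    /* The CPU walks through 64 sequential instruction addresses. */
    for (long addr = 0; addr < 64; addr++) {
        long line = addr / LINE_WORDS;
        if (line == cached_line) {
            hits++;               /* "cache hit": already in the cache   */
        } else {
            misses++;             /* miss: fetch the whole line instead  */
            cached_line = line;
        }
    }
    printf("hits: %d, misses: %d\n", hits, misses);   /* 56 hits, 8 misses */
    return 0;
}

Because the program walks memory sequentially, seven out of every eight requests are satisfied from the line already fetched, which is exactly the behaviour the cache controller is betting on.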

On a typical 100MHz Intel motherboard, it takes the CPU as much as 180ns to get information from main memory versus as little as 45ns to get information from cache memory. (This represents the total memory retrieval process, including request, verification, and data access time.) With the incredible performance advantage cache memory offers, it would seem logical to use cache for all the computer's main memory.
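
Those two figures can be combined into an average access time once a hit rate is known. In the short C sketch below, the 45ns and 180ns values come from the paragraph above, while the 90 per cent hit rate is our illustrative assumption:

#include <stdio.h>

int main(void)
{
    double t_cache  = 45.0;   /* ns, cache access time (from the article)   */
    double t_main   = 180.0;  /* ns, main memory access time (from above)   */
    double hit_rate = 0.90;   /* 90 per cent hit rate: our assumption only  */

    /* Average access time: hits served from cache, misses from memory. */
    double avg = hit_rate * t_cache + (1.0 - hit_rate) * t_main;
    printf("average access time: %.1f ns\n", avg);    /* prints 58.5 ns */
    return 0;
}

At a 90 per cent hit rate the processor waits, on average, only about a third as long as it would if every request went to main memory.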

However, cache memory typically uses SRAM (Static RAM) chips, which cost more than six times as much as the DRAM chips normally used for main memory, so it is not cost-effective to use a large amount of cache in a system. In our party example, using cache as main memory would be like buying the corner store in order to stock every beverage that exists. While having one refrigerator saves a lot of time and inconvenience, the added benefit of having the corner store in the back yard may not be worth the investment. Cache behaves the same way. The first 256KB of cache saves the computer a lot of time by holding the most frequently used instructions. However, adding another 256KB for a total of 512KB does not increase overall performance as much as the first 256KB does.

Compiled by Ajith Ram

(c) 1999, Kingston Technology Company


This was first published in January 1999