Random access memory is a key component of any modern computing system. System vendors have begun to offer DDR5 memory as an option on their latest systems, replacing the DDR4 memory found in systems sold over the past 8 or so years. This leaves many wondering: what is DDR5 memory, and how is it better than the DDR4 memory it replaces?
THE FUNCTION OF RANDOM ACCESS MEMORY
The primary function of RAM is to hold data in a high-speed container so that the CPU can quickly access it when needed. System RAM acts as a sort of middleman in the memory hierarchy. At one end sits the small but incredibly high-speed cache memory located directly on the CPU die (itself broken down into several levels of increasingly small-but-fast memory), which holds the data the computer is actively processing. At the other end of the spectrum, non-volatile memory in the form of storage drives is used to store data long-term; it offers far greater capacity, but communicates with the processor over a much slower bus. RAM sits between the CPU cache and the storage drives: it has a relatively high-speed connection to the CPU while offering far more capacity than the on-die cache memory. It can buffer the slower data from the storage drives and hold it ready to be transferred at high speed to the CPU cache. As an analogy, data in the cache is like food in your mouth and digestive system, data held in RAM is like the rest of your meal sitting on your plate, and data held on storage drives is like the food and ingredients put away in the fridge or pantry.
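The hierarchy described above can be sketched as a chain of lookups, where a miss at each tier falls through to the next, slower one. The latency figures below are rough illustrative orders of magnitude chosen for the sketch, not measurements from this article:

```python
# Sketch: the memory hierarchy as successive lookups with fallback.
# Latency values are illustrative orders of magnitude only.
ILLUSTRATIVE_LATENCY_NS = {
    "L1 cache": 1,           # on-die: smallest, fastest tier
    "RAM": 100,              # the middleman: larger, slower than cache
    "SSD storage": 100_000,  # non-volatile: far larger, far slower
}

def access(level_holding_data: str) -> int:
    """Total time to reach the data: each miss falls through to the next tier."""
    total = 0
    for level, latency in ILLUSTRATIVE_LATENCY_NS.items():
        total += latency
        if level == level_holding_data:
            return total
    raise KeyError(level_holding_data)

print(access("RAM"))  # 101 ns: an L1 miss served from RAM
```

The fall-through structure is the point of the sketch: data kept ready in RAM is reached orders of magnitude sooner than data that must come from storage, which is exactly why RAM buffers the slower drives.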
WHAT IS DDR MEMORY AND DDR5
DDR stands for double data rate. To understand the name, it helps to look briefly at the signaling involved in operating DDR memory. Digital systems are inherently binary in how they understand and process data: computers are effectively an organized series of switches that turn on and off, typically pictured as the 1s and 0s associated with digital systems. Data moves around the various boards and subsystems of a computer largely via electrical impulses over conductive connections between components. While the data itself is digital, engineers can take advantage of the analog nature of the electrical signaling to package and transfer data much more efficiently. DDR memory exploits the waveform nature of the electrical signal to transfer data on both the rising and falling edges of each clock cycle, doubling the data rate, which is where DDR gets its name. DDR5 is the 5th generation of RAM built on this principle; its full acronym is DDR5 SDRAM, which stands for Double Data Rate 5, Synchronous Dynamic Random-Access Memory. We've already explained the DDR part of the name. Synchronous means that the system coordinates data transfers with an externally supplied clock signal; think of this as the conductor in an orchestra, making sure all the disparate "notes", or in our case packets of data, arrive with the correct timing and ordering. Dynamic Random-Access Memory denotes that data is stored in cells, logically arranged physical locations in which transistors and capacitors acting as switches hold data, and that these cells can be accessed "randomly", that is, out of order from one another. Since the electrical charge held in the cells dissipates quickly, this sort of memory is deemed volatile (it loses data quickly without power) and requires external circuitry to periodically refresh the charge.
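The doubling itself is simple arithmetic: transfers per second equal twice the I/O clock rate, since data moves on both the rising and falling edge of each cycle. A minimal sketch (the clock figures are illustrative examples, not values from the text):

```python
# Sketch: why "double data rate" doubles the transfer count.
# A DDR bus transfers on both the rising and falling edge of each
# I/O clock cycle, so transfers/second = 2 x the clock rate.

def ddr_transfer_rate(io_clock_mhz: float) -> float:
    """Effective transfer rate in mega-transfers per second (MT/s)."""
    edges_per_cycle = 2  # one transfer per rising edge, one per falling edge
    return io_clock_mhz * edges_per_cycle

print(ddr_transfer_rate(1600))  # 3200.0 MT/s, e.g. DDR4-3200
print(ddr_transfer_rate(2400))  # 4800.0 MT/s, e.g. DDR5-4800
```

A single data rate (SDR) bus at the same clock would move half as much data, which is why DDR was such a meaningful step when it first replaced SDR SDRAM.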
Being the 5th generation of this standard, DDR5 brings with it a number of improvements in performance and efficiency. Every revision of DDR has increased the available bandwidth and addressable capacity. Density has increased so that DDR5 is capable of up to 64Gb on a single-die package, quadruple the 16Gb of DDR4, allowing for DIMMs with up to 256GB of capacity. DDR5 launched with a transfer rate of 4800 MT/s (4.8 Gbps per pin, from a 2.4 GHz I/O clock), with potential to scale up to 8400 MT/s in later iterations, a substantial increase over the 3200 MT/s that DDR4 topped out at. This allows for speeds of up to 51.2 GB/s per module (at 6400 MT/s), with two memory channels per module. Various new features, such as Decision Feedback Equalization (DFE), were implemented to achieve the increase in I/O speeds and data rates. Voltage regulation has also moved onto the module itself, rather than relying on external regulators on the motherboard. This, along with changes to the data buffer and registering clock driver, allows nominal voltage to drop to 1.1V, down from the 1.2V of DDR4 and the 1.5V of DDR3. It additionally allows for higher stable clock speeds, as the on-module power management integrated circuit (PMIC) provides finer-grained power delivery, reduced signal noise, and improved signal integrity. On the I/O side, DDR5 supports two independent channels per DIMM and doubles the burst chop/length of DDR4, with each channel operating 40 bits wide (32 data, 8 ECC); DDR4 operated a single channel per DIMM on a 72-bit-wide bus (64 data, 8 ECC). While the total data width remains 64 bits per DIMM, splitting the bus into two independent channels and doubling the burst chop/length (from 8 to 16) allows for significant improvements in memory efficiency, especially as a single burst can now deliver exactly one typical 64-byte cache line.
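The headline figures above can be cross-checked with simple arithmetic. In the sketch below, the 6400 MT/s input is an assumption chosen to match the quoted 51.2 GB/s peak, while the 32-bit channel width and burst length of 16 are the DDR5 parameters described in the text:

```python
# Sketch: cross-checking the DDR5 bandwidth and burst figures above.
# 6400 MT/s is an assumed data rate matching the quoted 51.2 GB/s peak.

def module_bandwidth_gb_s(transfers_mt_s: float, data_bits: int = 64) -> float:
    """Peak module bandwidth in GB/s: transfers/s x bytes per transfer."""
    bytes_per_transfer = data_bits // 8  # 64 data bits -> 8 bytes per transfer
    return transfers_mt_s * 1e6 * bytes_per_transfer / 1e9

def burst_bytes(channel_data_bits: int = 32, burst_length: int = 16) -> int:
    """Bytes delivered by one burst on a single DDR5 sub-channel."""
    return (channel_data_bits // 8) * burst_length

print(module_bandwidth_gb_s(6400))  # 51.2 GB/s peak per module
print(burst_bytes())                # 64 bytes: one typical cache line
```

The second calculation shows why the longer burst matters: 32 data bits times a burst of 16 yields exactly 64 bytes, so one burst on one sub-channel fills a typical cache line without wasted transfers.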
Going over all these improvements, the advantages of DDR5 can be summarized quite clearly: it is faster, has more capacity, and is more efficient, all without sacrificing data integrity or latency. As a new standard in computer hardware, what more could you ask for?