GDDR5 vs GDDR5X vs HBM vs HBM2 vs GDDR6 vs GDDR6X

We explain how video memory differs from system memory as well as clarify what different types of video memory are out there.

With such an abundance of types of GPU video memory, also known as VRAM, you’re bound to be a bit confused about which is the best choice.

Before diving into the specifics of each of these memory types, we’re going to look at the significance of RAM (random access memory). It’s important to understand why there are so many different types of memory and why one might be great for a particular user but not so great for another.

First and foremost, it’s good to know what RAM, and by extension VRAM, is used for.


Random Access Memory – RAM

RGB RAM
Four RGB RAM sticks inside a PC configuration

Technically, the first RAM appeared back in 1947, and it has improved enormously since then. So, what exactly is RAM?

RAM differs from other data storage components, such as hard drives and optical discs, in that data can be read or written in any order and in roughly the same amount of time, regardless of where it physically sits in memory.

In the memory hierarchy, RAM sits between the CPU’s cache and the storage drive. The CPU works on data by first turning to cache memory, which sits directly on the CPU die, so the cost of moving data to and from it is the lowest.

At times, the cache, despite modern sizes reaching tens of megabytes, cannot hold everything the CPU needs. In such situations, the processor falls back to RAM and bears the cost of that slower transfer. Data can even spill over to the hard drive (paging), but that is a last resort and rarely helps performance.
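
To make that hierarchy concrete, here is a minimal sketch of the classic average-access-time calculation. The latencies and miss rates are illustrative assumptions, not measurements of any particular CPU.

```python
# Simplified cache -> RAM -> disk hierarchy.
# All latencies and miss rates below are assumed, illustrative values.
CACHE_NS = 1.0       # on-die cache hit
RAM_NS = 80.0        # main-memory access
DISK_NS = 100_000.0  # paging out to the storage drive

def average_access_ns(cache_miss_rate: float, ram_miss_rate: float) -> float:
    # Every miss falls through to the next, slower level.
    return CACHE_NS + cache_miss_rate * (RAM_NS + ram_miss_rate * DISK_NS)

print(f"{average_access_ns(0.05, 0.001):.1f} ns per access")  # ~10.0 ns
```

Even a small miss rate into the slower levels dominates the average, which is exactly why the CPU tries to satisfy requests from cache first.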

Video RAM – VRAM

NVIDIA GPU

VRAM is a specialized version of DRAM (dynamic random access memory). Just as RAM supplies the CPU with data, VRAM is designed to keep the graphics processor fed with the data it needs.

VRAM helps the graphics chip by holding information such as textures, frame buffers, shadow maps, bump maps, lighting, and more. Many factors determine how much VRAM is required, the most obvious being the display resolution.

If you’re gaming at Full HD, the frame buffer needs to hold images that are roughly 8 megabytes each. At 4K resolution, that number increases to a whopping 33.2 megabytes. This example illustrates why modern GPUs use more and more VRAM with each iteration, but also why manufacturers experiment with different types of VRAM to achieve the best possible results.
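
Those figures fall out of a simple calculation. The sketch below assumes an uncompressed frame buffer with 4 bytes per pixel (8-bit RGBA), which is where the roughly 8 MB and 33.2 MB numbers come from.

```python
# Uncompressed frame buffer size, assuming 4 bytes per pixel (8-bit RGBA).
def framebuffer_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / 1_000_000  # decimal megabytes

print(f"1080p: {framebuffer_mb(1920, 1080):.1f} MB")  # ~8.3 MB
print(f"4K:    {framebuffer_mb(3840, 2160):.1f} MB")  # ~33.2 MB
```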

Something else that can cause performance issues is anti-aliasing (AA). This process requires an image to be rendered multiple times, so the differences between them can be smoothed out. AA can significantly improve the look of a game, but the trade-off is a potential FPS loss.
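
As a rough illustration, here is what the simplest (and most memory-hungry) form of anti-aliasing, supersampling, does to the frame buffer. The 4x factor and the 4-bytes-per-pixel format are assumptions for the example, not figures from any particular game.

```python
# Supersampling (SSAA) renders at a multiple of the output resolution
# and downscales, so the frame buffer grows by that factor.
def ssaa_framebuffer_mb(width: int, height: int, factor: int,
                        bytes_per_pixel: int = 4) -> float:
    return width * height * factor * bytes_per_pixel / 1_000_000

print(f"1080p, 4x SSAA: {ssaa_framebuffer_mb(1920, 1080, 4):.1f} MB")  # ~33.2 MB
```

Other techniques such as MSAA or post-process AA spread this cost differently, but the underlying trade-off between image quality, VRAM, and FPS remains.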

It’s important to note that if you’re thinking about connecting two or more GPUs via CrossFire or SLI, you won’t be able to simply take both memories and add them up. In reality, the memory will be cloned across the connected cards, so you will effectively have the same amount of memory as if you were using a single card.

SLI connector

It’s also important to keep in mind that the amount of VRAM you need largely depends on what you need it for. If the game you’re playing uses one gigabyte of VRAM, and you have two, the performance won’t be improved. However, if you have only 512 MB of VRAM, you will experience a significant drop in performance.

It’s time to explore the different VRAM types and their pros and cons.

GDDR5

ASUS Radeon RX 580
AMD’s Radeon RX 580 is equipped with GDDR5 memory

This is the oldest memory type we will be looking at, but that doesn’t necessarily mean it’s the worst. While it’s no longer the best, it still has its uses.

Its full name is GDDR5 SDRAM, which stands for graphics double data rate type five synchronous dynamic random-access memory. This lengthy abbreviation might look confusing, but we’ll do our best to break it down.

The graphics double data rate part is largely self-explanatory. If that isn’t enough, just know that DDR means the bus performs a data transfer on both the rising and falling edges of the clock signal. Data therefore moves twice per clock cycle, doubling the throughput of the older SDR (single data rate) standard.
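
Put as arithmetic, the per-pin data rate is simply the bus clock times the number of transfers per cycle. A minimal sketch, with an assumed 1 GHz clock purely for illustration:

```python
# Double data rate: one transfer on the rising edge, one on the falling edge.
def transfers_per_second(clock_hz: float, transfers_per_cycle: int) -> float:
    return clock_hz * transfers_per_cycle

clock = 1_000_000_000  # assumed 1 GHz bus clock, for illustration only
print(transfers_per_second(clock, 1))  # SDR: 1 billion transfers per second
print(transfers_per_second(clock, 2))  # DDR: 2 billion on the same clock
```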

The DDR part of the name indicates a high-bandwidth interface that allows the data needed for graphical calculations to be moved far more quickly.

The “type five” part is intriguing: despite what the name suggests, GDDR5 is actually based on DDR3 SDRAM rather than on a fifth DDR generation. While there are differences, the truth is that these memory types are made for specific purposes, and each does exactly what it was designed to do.

The SDRAM part refers to the synchronous dynamic RAM. As we already went over what RAM is, let’s focus on those descriptors.

It would be easy to say that these are just broad terms used for marketing, but there’s more to it than that.

NVIDIA GeForce GTX 1650
The NVIDIA GeForce GTX 1650 can be found with GDDR5 or GDDR6 memory configurations.

In the context of RAM, dynamic means that this type of memory stores each bit of data in a memory cell made up of a transistor and a capacitor. Because the capacitor holds only a tiny charge that constantly leaks away, DRAM is a volatile type of memory: if it loses power, all data on it is lost.

To combat this, DRAM has a refresh circuit that periodically rewrites the existing data back to each cell, restoring the charge in its capacitor. The “synchronous” in SDRAM means the memory’s clock is synchronized with the processor it serves, which increases the processor’s potential for executing instructions.
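
To give a sense of scale for that refresh process, here is a back-of-the-envelope sketch using typical JEDEC-style numbers, a 64 ms retention window and 8192 rows per bank; the exact figures vary from device to device.

```python
# How often a refresh has to fire so every row is rewritten in time.
RETENTION_MS = 64     # typical DRAM retention window (assumed typical value)
ROWS_PER_BANK = 8192  # typical row count per bank (assumed typical value)

interval_us = RETENTION_MS * 1000 / ROWS_PER_BANK
print(f"One row refreshed roughly every {interval_us:.1f} us")  # ~7.8 us
```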

So, how efficient is GDDR5?

This technology was absolutely groundbreaking. However, many advances have been made since the early 2010s.

It’s important to highlight that GDDR5 was first announced in 2007 and entered mass production in early 2008. It was only dethroned in mid-2016 by GDDR5X’s debut. That means GDDR5 reigned supreme for roughly eight years, a remarkably long run in the technology world.

This is partly because GDDR5 gradually increased its memory capacity, starting at a modest 512 MB and reaching 8 GB. Some considered this to be a generational bump, although it technically wasn’t.

The latest and best GDDR5 versions are nothing to take lightly, considering they were made to handle 4K resolutions. The first crucial detail is the transfer rate of 8 Gb/s per pin (8 gigabits being equal to 1 gigabyte). That number greatly improved on the earlier standard of 3.6 Gb/s.

Each GDDR5 chip exposes a 32-bit data interface (the package has 170 balls in total, but most carry power, ground, and control signals), so 8 Gb/s per pin across those 32 data pins gives a total of 256 Gb/s, or 32 GB/s, of bandwidth per chip.
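
The arithmetic behind that figure is straightforward. The sketch below also extends it to a hypothetical card with a 256-bit bus (eight chips), just to show how the per-chip number scales.

```python
# Bandwidth = per-pin data rate x number of data pins, converted to bytes.
def bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(8, 32))   # one GDDR5 chip: 256 Gb/s = 32 GB/s
print(bandwidth_gb_s(8, 256))  # hypothetical 256-bit card: 256 GB/s
```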

Although GDDR5 has been surpassed, some graphics cards still use it, but they are primarily mobile versions.

HBM

AMD R9 Fury X
AMD’s R9 Fury X is equipped with HBM memory

HBM stands for High Bandwidth Memory. Despite not having the GDDR prefix, this is still a type of GPU video RAM. It performs the same role: it handles frame storing in the frame buffer and stores textures, lighting information, etc.

So, how is it different?

GDDR5 (and its predecessors) requires DRAM chips to be placed directly on the PCB and spread around the processor. HBM instead sits on the same package as the GPU and stacks its dies on top of each other, an approach that is unquestionably faster.

Adding more GDDR5 chips means occupying more space on the card and routing additional data and power traces. This increases manufacturing costs and consequently makes the card pricier for the end user.

On the other hand, HBM’s stacked dies communicate via microbumps and through-silicon vias (TSVs), which, as the name suggests, pass through the dies and allow for swifter communication and reduced power consumption.

However, the close proximity of the stacked dies limits their potential, as heat builds up within the stack more easily than across chips spread around a PCB.

Keep in mind that, as this type of memory is still very novel and currently less popular, the cost for one HBM chip is higher. As this technology evolves and other companies potentially accept it as a superior alternative to GDDR, it could become much cheaper to manufacture and implement into a device.

HBM’s 100+ GB/s of bandwidth per stack dwarfs the roughly 28 GB/s of a single GDDR5 chip, but, interestingly, GDDR5 runs at a much higher per-pin data rate of 7 Gb/s, while HBM sits at just 1 Gb/s.

That stat can be misleading. While it is technically true, it doesn’t tell the whole story. GDDR5 feeds the GPU through a narrow interface (32 bits per chip), so it has to clock its pins very high to deliver bandwidth. HBM’s stacked dies instead connect through an extremely wide 1024-bit interface per stack, so even at 1 Gb/s per pin, the total effective throughput beats GDDR5.
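
The same bandwidth formula shows why the wide-and-slow approach wins. The 1024-bit figure is HBM’s standard per-stack interface width; the 128 GB/s it yields is in line with the 100+ GB/s figure quoted above.

```python
# Wide-and-slow (HBM) vs narrow-and-fast (GDDR5), per stack / per chip.
def bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(1, 1024))  # HBM stack:  1 Gb/s x 1024 bits = 128 GB/s
print(bandwidth_gb_s(7, 32))    # GDDR5 chip: 7 Gb/s x 32 bits   =  28 GB/s
```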

AMD claims that HBM’s bandwidth per watt is no less than three times higher than the competition’s. While that number is up for debate, no one can deny that HBM delivers its bandwidth while consuming less power.

GDDR5X

NVIDIA GeForce GTX 1060
Some variants of the NVIDIA GeForce GTX 1060 come equipped with GDDR5X memory

This memory type is a direct successor to GDDR5 and was initially dismissed as just another update. That perception quickly changed, and it is now clear that GDDR5X improved upon GDDR5 roughly as much as GDDR6 later improved upon GDDR5X.

We can say that because GDDR5X doubled the specifications of GDDR5, the most recent type at the time. The bandwidth for GDDR5X is 56 GB/s per chip, while GDDR5 sits at 28 GB/s. The data rate is doubled as well, reaching 14 Gb/s versus its predecessor’s 7 Gb/s, although per-package bandwidth is still nowhere near an HBM stack.

Concerning energy usage, its 1.35 V power requirement is an improvement compared to GDDR5’s 1.5 V.

As previously mentioned, and as its name suggests, GDDR5X is inferior to GDDR6. Although you can still find really good graphics cards with GDDR5X, it might be best to opt for GDDR6 or even HBM2.

HBM2

NVIDIA Titan V
The NVIDIA Titan V is equipped with a whopping 12GB of HBM2 memory

Just as GDDR5X is a successor to GDDR5, so is HBM2 to HBM. Much like GDDR5X doubled the performance of GDDR5, HBM2 did the same to HBM.

To avoid confusion, it’s crucial to state that HBM2E isn’t to HBM2 what GDDR5X is to GDDR5. We’ll look at both HBM2 and HBM2E in this section.

Upon its release in 2016, HBM2 came with a maximum transfer rate of 2 Gb/s per pin, double that of HBM, and likewise doubled the maximum capacity per stack to 8 GB. Bandwidth was also increased two-fold, from 128 GB/s to 256 GB/s per stack.

Although these numbers were accurate as of the release, things have dramatically changed with HBM2E.

The transfer rate has increased to 2.4 Gb/s and the bandwidth to 307 GB/s. These slight increases make it understandable that it’s called HBM2E and not HBM3. However, the max capacity increase from 8 GB to 24 GB was the truly impressive part, something that even high-end GDDR6 cards rarely achieve.
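
Those bandwidth figures follow directly from the per-pin rate multiplied by HBM’s 1024-bit stack interface, as this quick sketch shows.

```python
# Per-stack bandwidth for HBM2 and HBM2E: per-pin rate x 1024-bit interface.
def stack_bandwidth_gb_s(data_rate_gbps_per_pin: float,
                         bus_width_bits: int = 1024) -> float:
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(stack_bandwidth_gb_s(2.0))  # HBM2:  256 GB/s per stack
print(stack_bandwidth_gb_s(2.4))  # HBM2E: ~307 GB/s per stack
```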

Unfortunately, as HBM is still novel, HBM2 is even newer, which makes it a very expensive option to add to consumer products. It’s no wonder that AMD decided to go with GDDR6 for their RX 6000 lineup.

In 2017, the estimated cost for HBM2 was between $150 and $170. That’s a lot of money when compared to the most expensive GDDR6 memory, which costs around $90 for 8 GB.

Since 2017, the cost of the HBM2 memory system has likely dropped, but probably not by enough to make it viable for mainstream consumer cards. Time will tell whether HBM3 will be able to stay competitive in terms of both performance and pricing.

GDDR6

NVIDIA GeForce RTX 2080 Ti
The NVIDIA GeForce RTX 2080 Ti is equipped with 11GB of GDDR6 memory

This update to GDDR wasn’t as eagerly awaited as GDDR5X, but the initial reception was much better. Although the story of generational jumps has already been explained, let’s compare the numbers anyway, for clarity’s sake.

GDDR5 tops out at a data rate of 8 Gb/s per pin. NVIDIA’s GTX Titan X paired 12 GB of it, running at 7 Gb/s over a 384-bit bus, for a maximum bandwidth of 336.5 GB/s.

Up next, there’s GDDR5X, which peaks at around 12 Gb/s on paper. The GTX 1080 Ti carries 11 GB of it at 11 Gb/s over a 352-bit bus, for a bandwidth of 484 GB/s.

Finally, we have GDDR6, found in the RTX 2080 Ti, NVIDIA’s consumer flagship at the time of writing. Its memory runs at 14 Gb/s (the GDDR6 spec goes up to 16 Gb/s), and it comes with 11 GB of capacity. The largest improvement is the enormous 616 GB/s of bandwidth.

Of course, these are consumer specs. Professional GPUs are getting close to 1 TB/s.
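
For the curious, all three of those bandwidth figures can be reproduced from the data rate and each card’s total bus width (384-bit for the GTX Titan X, 352-bit for the other two).

```python
# Card bandwidth = per-pin data rate x total bus width, converted to bytes.
def card_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(card_bandwidth_gb_s(7, 384))   # GTX Titan X (GDDR5):  ~336 GB/s
print(card_bandwidth_gb_s(11, 352))  # GTX 1080 Ti (GDDR5X): ~484 GB/s
print(card_bandwidth_gb_s(14, 352))  # RTX 2080 Ti (GDDR6):  ~616 GB/s
```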

GDDR6X

As we move closer to getting our hands on the next generation of GPUs from both AMD and NVIDIA, the news keeps coming. As speculated beforehand, NVIDIA ushered in the era of GDDR6X. Given that the name follows the same convention GDDR5X did for GDDR5, the reveal hardly came as a surprise.

Because GDDR6 arrived only a couple of years after GDDR5X, whereas GDDR5X itself had followed GDDR5 after a gap of nearly a decade, many people assumed that neither AMD nor NVIDIA would hurry toward GDDR6X. They were proven wrong; the competition between GPU manufacturers has clearly resumed, and neither side can risk falling behind.

The biggest feature of GDDR6X technology is its PAM4 signaling, which doubles the effective bandwidth and improves clock efficiency and speed. This isn’t the first time PAM4 has been explored. In fact, Micron has been developing it since 2006.
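
PAM4 doubles effective bandwidth because each symbol on the wire encodes two bits (four voltage levels) instead of the single bit of traditional NRZ signaling. The symbol rate in the sketch below is an assumed, illustrative value, not an official GDDR6X specification.

```python
# PAM4 vs NRZ: same symbol rate on the wire, different bits per symbol.
def data_rate_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    return symbol_rate_gbaud * bits_per_symbol

symbol_rate = 10.0  # assumed GBaud, purely for illustration
print(data_rate_gbps(symbol_rate, 1))  # NRZ:  10 Gb/s per pin
print(data_rate_gbps(symbol_rate, 2))  # PAM4: 20 Gb/s per pin
```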

Micron has commended NVIDIA for launching GDDR6X, highlighting their collaboration in assisting to develop the technology. This means NVIDIA’s RTX 3080 and RTX 3090 will incorporate GDDR6X, while AMD will have to settle for GDDR6.

What Does The Future Hold for VRAM?

A thrilling rumor exists in the HBM camp regarding the release of HBM3. Although these rumors have lost steam since they first surfaced a couple of years ago, it isn’t unreasonable to assume that the next memory generation might be near. Many now believe those HBM3 rumors actually referred to HBM2E, but it’s nice to be optimistic.

Concerning GDDR7, we can’t offer much beyond speculation. GDDR6X is still comparatively new, but with ever-shorter intervals between new products, we might hear about GDDR7 sooner than we expect.

Which VRAM Reigns Supreme?

If you meticulously consider these details, the best option right now is probably either HBM2 or GDDR6. The fact is that both types of video memory have strengths and weaknesses, and your choice should be based on your needs.

Overall, it’s safe to say that HBM2 is the best choice for tasks such as machine learning and AI simulations, as well as 3D modeling and video editing, due to its superior bus width. In contrast, as far as gaming is concerned, GDDR6 is probably what you should invest your money in.

Of course, HBM2 can also run games, and GDDR6 is also useful for editing; it’s just that each memory type is better suited to different purposes.
