With such a profusion of types of GPU video memory, also known as VRAM, you’re bound to be a bit perplexed about which is the best choice.
Before diving into the specifics of each of these memory types, we’re going to look at the importance of RAM (random access memory). It’s vital to understand why there are so many different types of memory and why one might be excellent for a particular user but not so great for another.
First and foremost, it’s crucial to know what RAM, and by extension VRAM, is used for.
Random Access Memory – RAM
Technically the first RAM appeared back in 1947. Since then, it has been greatly enhanced. So, what is RAM?
RAM differs from other data-storage components, such as hard drives and optical discs, in one key way: data can be read or written in any order, in roughly the same amount of time, regardless of where it physically sits in memory.
In the memory hierarchy, RAM sits between cache memory and the hard drive. The CPU first turns to its cache for the most frequently used data because the cache sits directly on the CPU die, so the cost of fetching data from it is lowest.
At times, however, the cache, despite reaching as much as 256 MB on some modern high-end chips, cannot hold everything the CPU needs. In those situations, the processor falls back on RAM and bears the higher transfer cost. The hard drive can also be drafted in to aid this process (as a page file), but this is a last resort and rarely helps performance.
Video RAM – VRAM
VRAM is a specialized form of DRAM (dynamic random access memory). Just as RAM supplies the CPU with data, VRAM is designed to keep the graphics processor fed with the data it needs close at hand.
VRAM helps the graphics chip by holding information such as textures, frame buffers, shadow maps, bump maps, and lighting data. Many factors determine how much VRAM is required, the most obvious being display resolution.
If you’re gaming at Full HD, the frame buffer needs to hold images of roughly 8.3 megabytes each (1920 × 1080 pixels at 4 bytes of color per pixel). At 4K, that number grows to a substantial 33.2 megabytes. This illustrates why modern GPUs use more and more VRAM with each iteration, and also why developers experiment with different types of VRAM to achieve the best possible results.
Something else that can cause performance issues is anti-aliasing (AA). This process requires an image to be rendered multiple times, so the differences between them can be smoothed out. AA can significantly improve the look of a game, but the trade-off is a potential FPS loss.
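A quick back-of-the-envelope sketch ties both of these costs together. The 32-bit color depth and the doubling of width and height for 4× supersampling are illustrative assumptions, not figures tied to any particular game or API:

```python
# Frame buffer size = width x height x bytes per pixel (assuming 32-bit
# color, i.e. 4 bytes per pixel). 4x supersampling (SSAA) renders at
# double the width and height, quadrupling pixels before downsampling.
BYTES_PER_PIXEL = 4

def frame_mb(width, height):
    return width * height * BYTES_PER_PIXEL / 1e6

print(frame_mb(1920, 1080))          # Full HD:             ~8.3 MB
print(frame_mb(3840, 2160))          # 4K:                 ~33.2 MB
print(frame_mb(1920 * 2, 1080 * 2))  # 1080p with 4x SSAA: ~33.2 MB
```

Note how 4× supersampling at 1080p costs roughly as much frame buffer as native 4K, which is why AA settings can hit VRAM and frame rates so hard.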
It’s important to note that if you’re thinking about connecting two or more GPUs via CrossFire or SLI, you can’t simply add their memory together. In reality, the data is duplicated across the connected cards, so you effectively have the same amount of usable memory as a single card.
It’s also important to keep in mind that the amount of VRAM you need largely depends on what you need it for. If the game you’re playing uses one gigabyte of VRAM, and you have two, the performance won’t be improved. However, if you have only 512 MB of VRAM, you will experience a significant drop in performance.
It’s time to explore the various VRAM types and their advantages and disadvantages.
GDDR5
This is the oldest memory type we will be looking at, but that doesn’t necessarily mean it’s the worst. While it’s no longer the best, it still has its uses.
Its full name is GDDR5 SDRAM, which stands for graphics double data rate type five synchronous dynamic random-access memory. This lengthy abbreviation might look confusing, but we’ll do our best to break it down.
The graphics double data rate part is largely self-evident. If that isn’t enough, just know that DDR means the bus transfers data on both the rising and falling edges of the clock signal. Data therefore moves twice per clock cycle, doubling the throughput of the previous standard, SDR (single data rate).
In practical terms, this higher-bandwidth interface lets the data needed for graphical calculations move far more quickly.
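In arithmetic terms (the 500 MHz clock below is an arbitrary example value, not a figure from this article):

```python
# SDR moves data once per clock cycle; DDR moves it on both the rising
# and falling edge, doubling transfers at the same clock frequency.
def transfers_per_second(clock_hz, transfers_per_cycle):
    return clock_hz * transfers_per_cycle

clock = 500e6  # example 500 MHz memory clock, assumed for illustration
print(transfers_per_second(clock, 1) / 1e6)  # SDR:  500.0 MT/s
print(transfers_per_second(clock, 2) / 1e6)  # DDR: 1000.0 MT/s
```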
The “type five” part is fascinating, as GDDR5 is actually based on DDR3, despite what the name suggests. While there are differences, the truth is that these memory types are made for specific purposes, and each does exactly what it was designed to do.
The SDRAM part refers to the synchronous dynamic RAM. As we already went over what RAM is, let’s concentrate on those particular descriptors.
It would be simple to say that these are just broad terms used for marketing, but there’s more to it than that.
In the context of RAM, dynamic means that this type of memory stores each bit of data in a memory cell consisting of its own transistor and capacitor. Because the capacitor holds only a minuscule charge that gradually leaks away, DRAM is a volatile type of memory; if it loses power, all data on it is lost.
To combat this, DRAM has an external circuit that handles memory refreshing, a process that periodically rewrites the existing data with itself, returning the charge to the capacitor. The “synchronous” in the SDRAM ensures that the memory’s clock is in sync with the microprocessor it services. This enhances the processor’s capability for executing instructions.
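A toy model makes the refresh idea concrete. The decay rate and the sense-amplifier threshold below are made-up values purely for illustration:

```python
# Toy model of a DRAM cell: the capacitor's charge leaks over time, so
# the refresh circuit must periodically read the bit and write it back
# before the charge decays past the point where a 1 looks like a 0.
class DramCell:
    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self, steps):
        for _ in range(steps):
            self.charge *= 0.9  # charge bleeds away every interval

    def read(self):
        return 1 if self.charge > 0.5 else 0  # sense threshold

    def refresh(self):
        self.charge = 1.0 if self.read() else 0.0  # rewrite data with itself

cell = DramCell(1)
cell.leak(steps=3)
cell.refresh()       # charge restored in time; the cell still reads 1
cell.leak(steps=10)
print(cell.read())   # 0 -> the bit was lost because no refresh arrived
```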
So, how efficient is GDDR5?
This technology was utterly revolutionary. However, many advances have been made since the early 2010s.
It’s crucial to highlight that GDDR5 was first announced in 2007 and entered mass production in early 2008. It was only dethroned in mid-2016 by GDDR5X’s debut, meaning GDDR5 reigned supreme for the better part of a decade.
This is partly because GDDR5 gradually increased its memory capacity, starting at a modest 512 MB and reaching 8 GB. Some considered this to be a generational leap, although it technically wasn’t.
The latest and greatest GDDR5 revisions are nothing to take lightly, considering they were built with 4K resolutions in mind. The first vital detail is the transfer rate of 8 Gb/s per pin (eight gigabits, or one gigabyte, per second). This greatly improved on the earlier standard of 3.6 Gb/s.
Each chip drives a 32-bit data bus, meaning 32 of the package’s 170 pins carry data, so 8 Gb/s per pin across those 32 pins adds up to 256 Gb/s, or 32 GB/s, of bandwidth per chip.
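A minimal sketch of that per-chip arithmetic (the 32-data-pin figure is the standard GDDR5 interface width, as noted above):

```python
# Per-chip bandwidth = per-pin data rate x number of data pins.
# GDDR5 exposes a 32-bit data bus per chip; the rest of the 170 pins
# handle power, ground, address, and control.
def chip_bandwidth_gbit(per_pin_gbit, data_pins):
    return per_pin_gbit * data_pins

total = chip_bandwidth_gbit(8, 32)         # 256 Gb/s per chip
print(total, "Gb/s =", total / 8, "GB/s")  # 256 Gb/s = 32.0 GB/s
```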
Although GDDR5 has been surpassed, some graphics cards, mainly mobile versions, still use it.
HBM
HBM stands for High Bandwidth Memory. Despite lacking the GDDR prefix, this is still a type of GPU video RAM, and it performs the same role: holding rendered frames in the frame buffer and storing textures, lighting information, and so on.
So, how is it different?
GDDR5 (and its predecessors) requires DRAM chips to be placed on the PCB and spread around the processor. HBM instead sits inside the GPU package itself, with memory dies stacked on top of each other and linked to the processor over a silicon interposer. This approach is undeniably quicker.
Increasing GDDR5 capacity means adding chips that occupy more space on the card and require additional data and power traces. This raises manufacturing costs and, consequently, the price for the end user.
HBM’s stacked dies, on the other hand, communicate via microbumps and through-silicon vias (TSVs) which, as the name suggests, pass straight through the dies, allowing for swifter communication and reduced power consumption.
However, stacking the dies so closely together concentrates heat, and those thermal limits constrain how hard the memory can be pushed.
Keep in mind that, as this type of memory is still very novel and currently less popular, the cost for one HBM chip is higher. As this technology evolves and other companies potentially accept it as a superior alternative to GDDR, it could become much cheaper to manufacture and implement into a device.
HBM’s 100+ GB/s of bandwidth per stack dwarfs GDDR5’s 28 GB/s per chip. Curiously, though, GDDR5 holds a substantial lead in per-pin speed: 7 Gb/s (875 MHz) versus HBM’s 1 Gb/s (125 MHz).
That stat can be deceptive. While it is technically true, it doesn’t tell the whole story. GDDR5 signals must travel relatively far across the PCB over a narrow 32-bit bus per chip, so a high clock is the only way to reach its bandwidth. HBM’s dies are stacked right next to the GPU and connect through a vastly wider 1024-bit interface per stack, so its modest 1 Gb/s per pin effectively outruns GDDR5.
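To see the wide-and-slow versus narrow-and-fast trade-off in numbers (the 1024-bit stack width is standard for HBM but not stated above; note the result lands a little above the 100 GB/s figure quoted earlier):

```python
# Bandwidth = per-pin data rate x bus width, regardless of clock speed.
def bandwidth_gb_s(per_pin_gbit, bus_width_bits):
    return per_pin_gbit * bus_width_bits / 8

print(bandwidth_gb_s(7, 32))    # GDDR5 chip (narrow, fast):  28.0 GB/s
print(bandwidth_gb_s(1, 1024))  # HBM stack  (wide, slow):   128.0 GB/s
```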
AMD claims HBM offers no less than three times the bandwidth per watt of the competition. While that exact figure is up for debate, no one can deny that HBM reaches its speed while consuming less power.
GDDR5X
This memory type is a direct successor to GDDR5 and was initially dismissed as just another minor update. That perception quickly changed, and it is now evident that GDDR5X improved on GDDR5 roughly as much as GDDR6 later improved on GDDR5X.
We can say that because GDDR5X doubled the specifications of GDDR5, the most recent type at the time. The bandwidth for GDDR5X is 56 GB/s per chip, while GDDR5 sits at 28 GB/s. The data rate is likewise doubled, at 14 Gb/s (1750 MHz) against its predecessor’s 7 Gb/s (875 MHz), although total bandwidth still falls short of HBM’s.
Concerning energy usage, its 1.3 V power requirement is an improvement compared to GDDR5’s 1.5 V.
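Plugging those figures into the same per-chip formula (the 32-bit bus width is carried over from GDDR5 as an assumption) confirms the doubling:

```python
# GDDR5X per-chip bandwidth, assuming the same 32-bit bus as GDDR5.
print(14 * 32 / 8)  # 56.0 GB/s, double GDDR5's 7 * 32 / 8 = 28.0 GB/s
```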
As previously mentioned, and as its name suggests, GDDR5X is inferior to GDDR6. Although you can still find outstanding graphics cards with GDDR5X, it might be best to opt for GDDR6 or even HBM2.
HBM2
Just as GDDR5X is a successor to GDDR5, so is HBM2 to HBM. Much like GDDR5X doubled the performance of GDDR5, HBM2 did the equivalent to HBM.
To avoid confusion, it’s essential to state that HBM2E isn’t to HBM2 what GDDR5X is to GDDR5. We’ll look at both HBM2 and HBM2E in this section.
Upon its release in 2016, HBM2 arrived with a maximum transfer rate of 2 Gb/s per pin, double that of HBM, and its maximum capacity per stack doubled as well, to 8 GB. Bandwidth likewise increased two-fold, from 128 GB/s to 256 GB/s.
Although these numbers were accurate as of the release, things have significantly changed with HBM2E.
The transfer rate has increased to 2.4 Gb/s and the bandwidth to 307 GB/s. These slight increases make it understandable that it’s called HBM2E and not HBM3. However, the max capacity increase from 8 GB to 24 GB was the truly noteworthy part, something that even high-end GDDR6 cards rarely achieve.
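Both figures follow from the per-pin rates above (the 1024-bit stack interface is standard for the HBM family, an assumption not stated in the text):

```python
# HBM2 and HBM2E keep HBM's 1024-bit interface per stack, so bandwidth
# scales directly with the per-pin transfer rate.
def stack_bandwidth_gb_s(per_pin_gbit, width_bits=1024):
    return per_pin_gbit * width_bits / 8

print(stack_bandwidth_gb_s(2.0))  # HBM2:  256.0 GB/s per stack
print(stack_bandwidth_gb_s(2.4))  # HBM2E: 307.2 GB/s per stack
```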
Unfortunately, as HBM is still novel, HBM2 is even more recent, which makes it a very costly option to add to consumer products. It’s no wonder that AMD decided to go with GDDR6 for their RX 6000 lineup.
In 2017, the estimated cost for an HBM2 memory system was between $150 and $170. That’s a substantial amount when compared to the priciest GDDR6 memory, which costs around $90 for 8 GB.
Since 2017, the expense of the HBM2 memory system has likely dropped, but it still might not be sufficient. Time will tell whether HBM3 will be able to stay competitive in terms of performance and cost.
GDDR6
This update to GDDR wasn’t as eagerly awaited as GDDR5X, but its initial reception was much better. Although the story of generational jumps has already been told, let’s compare the numbers anyway, for clarity’s sake.
GDDR5 reached a data rate of 7 Gb/s, with a maximum bandwidth of 336.5 GB/s over a 384-bit bus and 12 GB of capacity. These numbers come from NVIDIA’s GTX Titan X.
Next, there’s GDDR5X with an 11 Gb/s data rate, 484 GB/s of bandwidth over a 352-bit bus, and 11 GB of capacity, as found in the GTX 1080 Ti.
Finally, we have the GDDR6 found in the RTX 2080 Ti, the current top seller. Its data rate is 14 Gb/s, and it comes with 11 GB of memory. The largest improvement is the enormous 616 GB/s of bandwidth over the same 352-bit bus.
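The same bandwidth formula scales up to whole cards. A quick check of the figures above (the bus widths are public card specs added for the calculation):

```python
# Card-level bandwidth = data rate per pin x total bus width / 8.
def card_bandwidth_gb_s(data_rate_gbit, bus_width_bits):
    return data_rate_gbit * bus_width_bits / 8

print(card_bandwidth_gb_s(7, 384))   # GTX Titan X: 336.0 GB/s (~336.5 quoted)
print(card_bandwidth_gb_s(11, 352))  # GTX 1080 Ti: 484.0 GB/s
print(card_bandwidth_gb_s(14, 352))  # RTX 2080 Ti: 616.0 GB/s
```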
Of course, these are consumer specs. Professional GPUs are getting near to 1 TB/s.
GDDR6X
As we move closer to getting our hands on the next generation of GPUs from both AMD and NVIDIA, the news keeps coming. As speculated beforehand, NVIDIA has ushered in the era of GDDR6X; given the naming convention already established by GDDR5X, the reveal itself was hardly an astonishment.
Still, because GDDR6 arrived only a couple of years after GDDR5X, whereas GDDR5X had followed GDDR5 by nearly a decade, many assumed that neither AMD nor NVIDIA would hurry toward GDDR6X. This was proven wrong: the competition between GPU manufacturers has clearly resumed, and neither side can risk falling behind.
The biggest feature of GDDR6X is its PAM4 signaling, which transmits two bits per symbol instead of one, doubling effective bandwidth at a given clock speed. This isn’t the first time PAM4 has been explored; in fact, Micron has been developing it since 2006.
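A minimal sketch of why four signal levels double throughput (the 10 Gbaud symbol rate is an arbitrary example, not a GDDR6X spec):

```python
# NRZ sends one of 2 voltage levels per symbol (1 bit); PAM4 sends one
# of 4 levels (2 bits), doubling data rate at the same symbol rate.
import math

def bits_per_symbol(levels):
    return int(math.log2(levels))

symbol_rate_gbaud = 10  # example symbol rate, assumed for illustration
print(symbol_rate_gbaud * bits_per_symbol(2))  # NRZ (2 levels):  10 Gb/s
print(symbol_rate_gbaud * bits_per_symbol(4))  # PAM4 (4 levels): 20 Gb/s
```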
Micron has commended NVIDIA for launching GDDR6X, highlighting their collaboration in assisting to develop the technology. This means NVIDIA’s RTX 3080 and RTX 3090 will incorporate GDDR6X, while AMD will have to settle for GDDR6.
What Does The Future Hold for VRAM?
An exhilarating rumor exists in the HBM camp regarding the release of HBM3. Although these rumors have lost steam since they first surfaced a couple of years ago, it isn’t unreasonable to assume that the next memory generation might be near. Many now believe those HBM3 rumors referred to HBM2E, but it’s nice to be optimistic.
Concerning GDDR7, we can’t offer much beyond speculation. GDDR6X is still relatively new, but with ever shorter intervals between new products, we might see news of GDDR7 sooner than we expect.
Which VRAM Reigns Supreme?
If you meticulously consider these details, the best option right now is probably either HBM2 or GDDR6. The fact is that both types of video memory have strengths and weaknesses, and your choice should be based on your needs.
Overall, it’s safe to say that HBM2 is the optimal choice for tasks such as machine learning and AI simulations, as well as 3D modeling and video editing, due to its superior bus width. In contrast, as far as gaming is concerned, GDDR6 is probably what you should invest your money in.
Of course, HBM2 can also run games, and GDDR6 is also useful for editing; it’s just that each memory type is better suited to different purposes.