With such an abundant variety of types of GPU video memory, also known as VRAM, you’re bound to be a bit confused as to which is the best choice.
Before getting into the specifics of each of these memory types, we’re going to delve into the importance of RAM (random access memory). It’s crucial to understand why there are so many different types of memory and why one might be great for a certain user, but not so great for another.
First and foremost, it’s good to know what RAM, and by extension VRAM, are used for.
Random Access Memory – RAM
Although technically the first actual RAM dates all the way back to 1947, it has since been highly improved. So what is RAM?
RAM differs from other data storage media, such as hard drives and optical discs, in that data can be accessed (both read and written) in any order and in the same amount of time, regardless of its physical location in memory.
In the memory hierarchy, RAM sits somewhere between cache memory and the hard drive. The CPU works on data by first using cache memory for the most common operations, because the cache sits directly on the CPU chip and the cost of transferring data to and from it is the lowest.
Sometimes, however, the cache, even at modern sizes of up to 256 MB, simply can't service the CPU properly, in which case the processor reaches out to RAM and swallows that transfer cost. The system can even fall back on hard drive space to help out in this process, but that is fairly extreme and rarely improves performance.
Video RAM – VRAM
VRAM is a specialized version of DRAM (dynamic random access memory). Much like RAM supplies the CPU with data, VRAM is designed to feed the graphics processor.
VRAM helps the graphics chip by holding assets such as textures, frame buffers, shadow maps, bump maps, and lighting information. Many factors determine the required amount of VRAM, with the most obvious one being the display resolution.
So, if you’re gaming at Full HD, the frame buffer needs to hold images that are roughly 8 MB, while playing your games at 4K resolution will bump up that number to a whopping 33.2 MB. This example clearly showcases why modern GPUs are using more and more VRAM with each iteration, but also why they’re willing to experiment with different types of VRAM to get the best possible result.
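Those figures are straightforward to verify. Here's a minimal sketch in Python, assuming an uncompressed frame at 32-bit color (4 bytes per pixel):

```python
# Uncompressed frame buffer size: width x height x bytes per pixel.
# Assumes 32-bit color (4 bytes per pixel); real GPUs may add padding.

def framebuffer_mb(width, height, bytes_per_pixel=4):
    """Size of a single frame in megabytes (1 MB = 10**6 bytes)."""
    return width * height * bytes_per_pixel / 1e6

print(framebuffer_mb(1920, 1080))  # Full HD: ~8.3 MB
print(framebuffer_mb(3840, 2160))  # 4K: ~33.2 MB
```

Note that this covers a single frame; double or triple buffering multiplies the requirement accordingly.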
Something else that can cause performance issues is anti-aliasing (AA). This process requires an image to be rendered multiple times so that the differences between the renders can be smoothed out. AA can greatly improve the look of a game, but the trade-off is a potential FPS loss.
It's also worth noting that if you're thinking about connecting two or more GPUs via CrossFire or SLI, you can't simply add their memory together. In truth, the memory is mirrored across the connected cards, so you effectively have the same amount as if you were running a single card.
It’s also important to keep in mind that the amount of VRAM you need pretty much depends on what you need it for. If the game you’re playing uses a gigabyte of VRAM and you have two, the performance won’t be improved. However, if you have only 512 MB of VRAM, you will experience a significant drop in performance.
Now it’s time to explore the different VRAM types and their pros and cons.
GDDR5
The fact that this is the oldest memory type we'll be looking at today doesn't necessarily mean it's the worst. While it's not the best out there, it still has its uses.
Its full name is GDDR5 SDRAM, which stands for graphics double data rate type five synchronous dynamic random-access memory. The length of the abbreviation might look confusing, but we’ll try our best to break it down.
The graphics double data rate part is pretty self-explanatory, but if that isn't enough, just know that DDR means the bus transfers data on both the rising and falling edges of the clock signal. Data therefore moves twice per clock cycle, doubling the throughput of the previous standard, SDR (single data rate).
The DDR part of the name indicates that it has a high-bandwidth interface that allows the specific data needed for graphical calculations to be processed a lot faster.
The type five part is quite curious, as GDDR5 is actually based on DDR3, despite the name implying it's a step ahead. While there are differences, the truth is that both memory types are made for specific uses, and each does exactly what it was designed to do.
The SDRAM part refers to synchronous dynamic RAM. Since we already went over what RAM is, let's focus on those two qualifiers.
It would be easy to say that those are just blanket terms used for marketing, but we’re better than that.
In this context, dynamic means that this type of memory stores each bit of data in a memory cell made up of a transistor and a capacitor. Because the capacitor's charge leaks away over time, the data must be refreshed constantly, and DRAM is a volatile type of memory: if it loses power, all data on it is lost.
To combat the leakage, DRAM has dedicated circuitry that handles memory refreshing, a process that periodically rewrites the existing data with itself, restoring the charge in each capacitor. The synchronous in SDRAM means that the memory's clock is synchronized with the microprocessor it services, which increases the processor's potential for executing instructions.
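As a rough illustration of why refresh matters, here's a toy model (not how real DRAM controllers work; the leak rate and threshold are invented for the example):

```python
# Toy DRAM cell: a capacitor leaks charge every tick, and a periodic
# refresh reads the bit while it's still readable and rewrites it at
# full charge. All constants are illustrative, not real DRAM timings.

LEAK_PER_TICK = 0.05    # fraction of charge lost per tick (invented)
READ_THRESHOLD = 0.5    # below this, a stored 1 can't be read back

class DramCell:
    def __init__(self, bit):
        self.bit = bit
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        self.charge *= (1 - LEAK_PER_TICK)   # charge leaks away

    def refresh(self):
        # Read the still-valid bit, then write it back at full charge.
        self.bit = self.charge > READ_THRESHOLD
        self.charge = 1.0 if self.bit else 0.0

cell = DramCell(1)
for t in range(100):
    cell.tick()
    if t % 10 == 9:       # refresh often enough to beat the decay
        cell.refresh()
assert cell.bit == 1       # the data survives thanks to refreshing
```

Skip the refresh calls and the charge decays below the read threshold long before the 100 ticks are up.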
So, how good is GDDR5?
This technology was absolutely groundbreaking at the time; however, many advances have been made since the early 2010s.
What’s important to highlight here is that GDDR5 was first announced in 2007 and began mass production in early 2008, having only been dethroned by mid-2016 with GDDR5X’s debut. That means there were nearly ten years in which GDDR5 reigned supreme, which was unprecedented in the technology world.
Over its lifetime, GDDR5 slowly upped its memory capacity, starting at a humble 512 MB and stopping at 8 GB, which some considered a generational bump, even though it technically wasn't.
The specifics of the latest and best GDDR5 version are nothing to sneeze at, considering it was made to handle 4K resolutions. The first crucial detail is the transfer rate of 8 Gb/s per pin; that's gigabits, so it works out to 1 GB/s per pin. This largely improved on the previously used standard of 3.6 Gb/s.
A GDDR5 chip ships in a 170-ball package, but its data interface is 32 bits wide, so those 8 Gb/s per pin multiplied across the 32 data pins bring a single chip to a total of 256 Gb/s, or 32 GB/s, of bandwidth.
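That per-chip figure is just multiplication. A quick sketch (the 32-bit interface width comes from the GDDR5 standard; everything else follows from it):

```python
# Per-chip bandwidth: per-pin data rate (Gb/s) times the width of the
# chip's data interface in bits. GDDR5 chips use a 32-bit interface.

def chip_bandwidth_gbit(per_pin_gbit, bus_width_bits=32):
    """Peak bandwidth of one chip in gigabits per second."""
    return per_pin_gbit * bus_width_bits

fast = chip_bandwidth_gbit(8)       # 256 Gb/s, i.e. 32 GB/s per chip
base = chip_bandwidth_gbit(7)       # 224 Gb/s, i.e. 28 GB/s per chip
print(fast / 8, base / 8)           # in GB/s
```

The 28 GB/s result for the baseline 7 Gb/s chips is the same per-chip figure that comes up again in the HBM comparison.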
Although GDDR5 has been outperformed, there are still some graphics cards that use it, but they are mostly mobile versions.
HBM
HBM stands for High Bandwidth Memory. Despite not having the same GDDR prefix, this is still a type of GPU video RAM. It does the same job: it stores frames in the framebuffer, textures, valuable lighting information, and so on.
So in what way is it different?
Well, GDDR5 (and its predecessors) require DRAM chips to be placed directly on the PCB and spread around the processor, while HBM sits in the GPU package itself and stacks its dies on top of each other. This method has clear advantages.
If a GDDR5 card needs more chips, they take up more space on the board and require additional data and power traces. That raises the manufacturing cost and consequently the price for the end user.
On the other hand, HBM's stacked dies communicate through microbumps and through-silicon vias (TSVs) which, as the name suggests, pass through the dies and allow for faster communication and lower power consumption.
However, this close stacking limits HBM's potential to a certain degree, since packing so many dies together makes the stack prone to overheating.
Keep in mind that since this type of memory is still very new and not as widespread, the cost of a single HBM chip is ultimately higher. As the technology evolves and other companies start to accept it as a better alternative to GDDR, it might become much cheaper to manufacture and implement.
HBM's 128 GB/s of bandwidth per stack absolutely dwarfs GDDR5's 28 GB/s per chip but, interestingly enough, GDDR5 has the higher clock by a large margin: 7 Gb/s per pin (875 MHz) versus HBM's 1 Gb/s (125 MHz).
Okay, that stat can be misleading; while it is technically true, there is still a huge caveat. Because GDDR signals have to travel farther, they need a higher clock. HBM's dies, however, are stacked on top of each other right next to the GPU, so the distance is minimal, and its enormously wide interface of 1,024 bits per stack more than makes up for the modest 1 Gb/s per pin.
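The trade-off becomes obvious once you multiply rate by width. A sketch, assuming first-generation HBM's 1,024-bit interface per stack and GDDR5's 32-bit interface per chip:

```python
# Total bandwidth = per-pin data rate x interface width. HBM drives
# each pin slowly over a very wide bus; GDDR5 drives few pins fast.

def bandwidth_gbs(per_pin_gbit, width_bits):
    """Peak bandwidth in GB/s (8 bits per byte)."""
    return per_pin_gbit * width_bits / 8

hbm_stack = bandwidth_gbs(1, 1024)   # 128.0 GB/s per stack
gddr5_chip = bandwidth_gbs(7, 32)    # 28.0 GB/s per chip
print(hbm_stack, gddr5_chip)
```

Despite a clock seven times lower, the stack delivers more than four times the bandwidth of the chip.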
AMD claims that HBM's bandwidth per watt is no less than three times higher than its competitors'. While that number is certainly up for debate, no one can deny HBM's speed, nor the low power consumption its design enables.
GDDR5X
This memory type is a direct successor to GDDR5 and was at first disregarded as just another update. That perception quickly changed, and it's now clear that GDDR5X improved upon GDDR5 as much as GDDR6 later improved upon GDDR5X.
The main reason we say that is that GDDR5X doubled the specifications of GDDR5, the latest offering at the time: the bandwidth for GDDR5X is 56 GB/s per chip, while GDDR5 sits at 28 GB/s. Another doubling comes in the data rate, where it scores 14 Gb/s (1750 MHz) against its predecessor's 7 Gb/s (875 MHz), although in total bandwidth it is still nowhere near HBM.
Concerning energy consumption, its 1.35 V operating voltage is still an upgrade over GDDR5's 1.5 V.
As previously mentioned, and as the name suggests, GDDR5X is inferior to GDDR6 and even though you can still get really good graphics cards with GDDR5X, it might be best to opt for GDDR6. Or HBM2.
HBM2 And HBM2E
Just as GDDR5X is a sequel to GDDR5, HBM2 is a sequel to HBM. And much like GDDR5X doubled the performance of GDDR5, HBM2 doubled that of HBM.
In order to avoid confusion, it’s important to state that HBM2E isn’t to HBM2 what GDDR5X is to GDDR5, despite what the name might suggest. We’ll get into both HBM2 and HBM2E in this section.
Upon its release in 2016, HBM2 came equipped with 2 Gb/s maximum transfer rate for a single pin, which doubles that of HBM, much like its maximum capacity, at 8 GB. Bandwidth was also increased two-fold from 128 GB/s to 256 GB/s.
Although these numbers were accurate at the time of the release, things have dramatically changed with HBM2E.
The transfer rate has since gone up to 2.4 Gb/s per pin and the bandwidth to 307 GB/s per stack. These modest increases make it understandable why it's called HBM2E and not HBM3. The truly impressive part, however, was the maximum capacity jump from 8 GB to 24 GB per stack, something that even high-end GDDR6 cards rarely match.
Unfortunately, if HBM is new, HBM2 is even newer, which makes it a very costly option for consumer products. No wonder AMD decided to go with GDDR6 for its RX 6000 lineup.
Back in 2017, the estimated cost of an HBM2 memory system was somewhere between $150 and $170. That's a lot of money even compared to the pricier GDDR6 memory, which costs around $90 for 8 GB.
Of course, since 2017, the cost of the HBM2 memory system has probably dropped, but it may not be enough. We’ll see whether HBM3 will be able to stay competitive in terms of performance and pricing.
GDDR6
This update to GDDR wasn't as eagerly awaited as GDDR5X, but its initial reception was much better. Even though the story of generational jumps has been quite thoroughly explained, let's compare the numbers anyway, for posterity's sake.
GDDR5 had a data rate that peaked at 8 Gb/s with a peak bandwidth at 336.5 GB/s and 12 GB capacity. These numbers come from Nvidia’s GTX Titan X.
Up next, there's GDDR5X, which peaks at a 12 Gb/s data rate; the GTX 1080 Ti runs it at 11 Gb/s, for 484 GB/s of bandwidth and a total of 11 GB of capacity.
Finally, we have the GDDR6 found in the RTX 2080 Ti, the current bestseller. The GDDR6 standard reaches 16 Gb/s, with the 2080 Ti using 14 Gb/s chips, and the card comes with 11 GB of memory. The biggest improvement is the monstrous 616 GB/s of bandwidth.
Of course, these are consumer specs; professional GPUs are getting close to 1 TB/s.
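Those peak-bandwidth numbers all follow from data rate times bus width. A sketch using each card's memory-bus width (384-bit for the Titan X, 352-bit for the GTX 1080 Ti and RTX 2080 Ti); the Titan X's GDDR5 actually ran a hair above 7 Gb/s, which is where the 336.5 GB/s figure comes from:

```python
# Peak bandwidth = per-pin data rate (Gb/s) x bus width (bits) / 8.

cards = {
    "GTX Titan X (GDDR5)":  (7, 384),
    "GTX 1080 Ti (GDDR5X)": (11, 352),
    "RTX 2080 Ti (GDDR6)":  (14, 352),
}
for name, (rate_gbit, width_bits) in cards.items():
    print(f"{name}: {rate_gbit * width_bits / 8:.0f} GB/s")
```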
GDDR6X
As we get closer to getting our hands on the next generation of GPUs from both AMD and Nvidia, the news keeps on coming. As speculated before, Nvidia has ushered in the era of GDDR6X. Even following the naming convention of the jump from GDDR5 to GDDR5X, the reveal almost came as a surprise.
Seeing how GDDR6 arrived only three years after GDDR5X, compared with the five-year interval between GDDR5 and GDDR5X, many believed that neither AMD nor Nvidia would rush into GDDR6X, but that turned out to be false. The GPU war appears to be on again, and neither side can afford to lose ground.
The biggest feature of the GDDR6X technology is its PAM4 signaling, which doubles the effective bandwidth and improves clock efficiency and speed. This isn’t the first time PAM4 has been explored. In fact, Micron has been developing it since 2006.
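To see what PAM4 buys, compare it with conventional two-level (NRZ) signaling: four amplitude levels carry two bits per symbol instead of one. A minimal, purely illustrative encoder (the level mapping below is Gray-coded but not taken from the actual GDDR6X electrical spec):

```python
# PAM4: four amplitude levels encode two bits per symbol, so the same
# symbol rate carries twice the data of two-level NRZ signaling.
# The level assignment below is illustrative, not the GDDR6X spec.

PAM4_LEVELS = {0b00: 0, 0b01: 1, 0b11: 2, 0b10: 3}  # Gray-coded mapping
PAM4_BITS = {level: bits for bits, level in PAM4_LEVELS.items()}

def pam4_encode(bits):
    """Turn a bit string into PAM4 symbols, two bits at a time."""
    return [PAM4_LEVELS[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

def pam4_decode(symbols):
    return "".join(format(PAM4_BITS[s], "02b") for s in symbols)

data = "11010010"
symbols = pam4_encode(data)            # 4 symbols carry 8 bits
assert pam4_decode(symbols) == data
assert len(symbols) == len(data) // 2  # half the symbols NRZ would need
```

Gray coding the levels means adjacent voltage levels differ by only one bit, which limits the damage from a small amplitude error.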
Micron credits Nvidia for the release of GDDR6X, pointing out their cooperation in developing the technology. This means that Nvidia's RTX 3080 and RTX 3090 will feature GDDR6X, while AMD will have to make do with GDDR6.
What Does The Future Hold for VRAM?
An exciting rumor exists in the HBM camp about the release of HBM3. Although these rumors have lost steam since they first surfaced a couple of years ago, it's not beyond reason to assume that the next memory generation is in sight. Many believe those HBM3 rumors were actually referencing HBM2E, but it's nice to be hopeful.
Concerning GDDR7, we really can’t offer much besides speculation. The GDDR6X technology is relatively new, but with the increasingly shorter intervals between new products, we might be getting news from GDDR7 faster than we expect.
Which VRAM Reigns Supreme?
If you carefully consider these arguments, the best option right now is probably either HBM2 or GDDR6. The fact of the matter is that both types of video memory have their strengths and weaknesses, and your choice should be based on your needs.
Overall, it’s safe to say that HBM2 is the best choice for things like machine learning and AI simulations, but also 3D modeling and video editing, due to its superior bus width. In turn, as far as gaming is concerned, GDDR6 is probably what you should invest your money in.
Of course, HBM2 can run games as well, and GDDR6 is also useful for editing; it's just that each memory type is better suited to different purposes.