With an abundance of choice for the GPU’s video memory, otherwise known as video RAM, there’s bound to be some confusion as to what type does what and which is the best choice.
Before getting into the specifics of each memory type, we're going to delve into the basics of RAM (random access memory), because understanding those makes it clear why the types differ and why one might be great for a certain user but a drawback for another.
First and foremost, it's good to know what RAM, and by extension VRAM, is used for.
Random Access Memory – RAM
Although technically the first actual RAM dates all the way back to 1947, we are very far removed from that.
What's cool about RAM is that its data can be read and written in any order, in roughly the same amount of time, regardless of where on the chip the data physically sits. That's something other storage media, such as hard drives and optical discs, can't claim.
In the memory hierarchy, RAM sits between cache memory and the hard drive. The CPU handles the simplest operations with cache memory first, because the cache lives directly on the CPU chip and the cost of moving data to and from it is lowest.
However, there are occasions where even a modern cache as large as 256 MB simply can't keep the CPU fed, and in that case the processor reaches out to RAM and absorbs the higher transfer cost. The system can even spill over to the hard drive (paging), but that is a last resort and rarely improves performance.
Video RAM – VRAM
VRAM is a specialized version of DRAM (dynamic random access memory) and much like the system RAM is used to supply the CPU with data, VRAM is designed to help the graphics processor with its memory needs.
VRAM helps the graphics chip by holding things like textures, frame buffers, shadow maps, bump maps, lighting information, and plenty more. Many factors determine the required amount of VRAM, the most obvious being display resolution.
So if you're gaming at Full HD, the frame buffer needs to hold images that are roughly 8.3 MB each (at 32 bits per pixel), while playing at 4K bumps that number to a whopping 33.2 MB. This example clearly showcases why modern GPUs use more and more VRAM with each iteration, and also why manufacturers are willing to experiment with different VRAM types to get the best possible result.
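The frame buffer arithmetic above is easy to verify yourself. A minimal sketch in Python, assuming a standard 32-bit (4-byte) RGBA pixel format and a single buffer:

```python
def framebuffer_mb(width, height, bytes_per_pixel=4):
    """Size of one uncompressed frame in megabytes (decimal MB)."""
    return width * height * bytes_per_pixel / 1_000_000

print(framebuffer_mb(1920, 1080))  # Full HD -> 8.2944 MB
print(framebuffer_mb(3840, 2160))  # 4K -> 33.1776 MB
```

In practice a GPU double- or triple-buffers frames, so actual frame buffer usage is a multiple of these figures.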
Another thing that can strain VRAM and cause performance issues is anti-aliasing. This process renders extra samples per pixel so that jagged edges can be smoothed out. AA is worth it because it can massively improve how a game looks, but the trade-off is a potential FPS loss and additional memory use.
It's important to note that if you're thinking about connecting two or more GPUs via CrossFire or SLI, you can't simply add the cards' memory together. Sadly, the data is mirrored across the connected cards, so you'll only have access to the same amount of effective VRAM as if you were running a single card.
Another important thing to remember about VRAM amounts is that, in the end, it mostly depends on the game you're playing. For example, if the game utilizes one gigabyte of VRAM and you have two, the extra gigabyte won't improve performance. Likewise, if that same game only has 512 MB to work with, you will see a significant drop in performance.
It's time to find out what different VRAM types are out there and what their pros and cons are.
GDDR5
The fact that this is the oldest memory type we'll be looking at today doesn't necessarily mean it's the worst. It's not the best, but it has its uses.
Its full name is GDDR5 SDRAM, which stands for graphics double data rate type five synchronous dynamic random-access memory. An abbreviation with that many letters is bound to feel daunting, but we'll do our best to explain what it means.
The graphics double data rate portion is pretty self-explanatory, but if that isn't enough, just know that DDR means the bus transfers data on both the rising and falling edges of the clock signal. Data therefore moves twice per clock cycle, doubling (hence the name) the performance of the previous standard, SDR (single data rate).
The "G" in the name indicates a high-bandwidth interface tuned for graphics, which allows the data needed for graphical calculations to be moved a lot faster.
Type five is a bit of a curiosity here, as GDDR5 is actually based on DDR3, despite what the names imply. While there are differences, both of these memory types are made for specific uses, and each does exactly what it was designed to do.
SDRAM stands for synchronous dynamic RAM, and since we already went over what RAM is, let's focus on those two qualifiers.
It would be easy to say that those are blanket terms used for marketing, but we’re better than that.
Dynamic means that this type of memory stores every bit of data in a memory cell made up of its own transistor and a tiny capacitor. Because the capacitor holds so little charge and that charge leaks away, DRAM is a volatile type of memory: if it loses power, all data on it is lost. Even while powered, DRAM relies on an external circuit to handle memory refreshing, a process that periodically rewrites the existing data to restore the capacitors' charge.
Synchronous in SDRAM means that the memory's clock is synchronized with the microprocessor it serves, which increases the processor's potential for executing instructions efficiently.
So, how good is GDDR5?
Make no mistake about it, this technology was absolutely at the top of the world at one point. The problem is that that point was five years ago.
What's important to highlight as far as GDDR5 goes is that it was first announced in 2007, entered mass production in early 2008, and wasn't dethroned until GDDR5X's debut in mid-2016. That's roughly eight years in which GDDR5 reigned supreme, which is practically unprecedented in the technology world.
Full disclosure: GDDR5 cards slowly upped their memory capacity, starting at a humble 512 MB and ending at 8 GB, and some treated these bumps like generational upgrades, although technically they weren't.
The specs of the latest and best GDDR5 version are nothing to sneeze at and were made with 4K resolutions in mind. The first important detail is the data rate of 8 Gb/s per pin (that's gigabits, which works out to 1 GB/s). This largely improves on the earlier standard of 3.6 Gb/s.
Each GDDR5 chip exposes a 32-bit data interface, so those 8 Gb/s multiplied across its 32 data pins bring the total to 256 Gb/s, or 32 GB/s, of bandwidth per chip. (The 170 pins you'll see quoted refer to the chip's whole package, most of which carry power and control signals rather than data.)
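That per-chip figure is simply the per-pin rate multiplied by the interface width. A quick sketch, assuming GDDR5's standard 32-bit interface per chip:

```python
def gddr5_chip_bandwidth(per_pin_gbps=8, data_pins=32):
    """Per-chip bandwidth: per-pin rate (Gb/s) times data pins.

    Returns a (gigabits/s, gigabytes/s) pair; dividing by 8 converts
    bits to bytes."""
    total_gbit = per_pin_gbps * data_pins
    return total_gbit, total_gbit / 8

print(gddr5_chip_bandwidth())  # (256, 32.0) -> 256 Gb/s, i.e. 32 GB/s per chip
```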
Although GDDR5 has all but completely faded away, there still are some graphics cards that use it, but they are mostly mobile versions.
HBM
HBM stands for High Bandwidth Memory, and despite not sharing the GDDR prefix, it is still a type of video RAM for a GPU. It does the same job: storing frames in the framebuffer, holding textures, valuable lighting information, and so on.
How does it differ?
Well, GDDR5 (and its predecessors) require DRAM chips to be placed directly on the PCB, spread around the processor, while HBM sits inside the GPU package itself, on a silicon interposer, with its dies stacked on top of each other.
This method is better because when a GDDR5 design wants more chips, they have to take up more space on the card, which requires further data and power traces. All this raises the manufacturing cost and makes the card more expensive for the end-user as well.
On the other hand, HBM's stacked dies communicate through microbumps and through-silicon vias (TSVs), which, as the name suggests, run through the dies themselves and allow for faster communication and lower power consumption.
That said, stacking the dies so closely does limit the potential to a degree, because packing that many dies together is bound to produce overheating issues.
HBM's 128 GB/s of bandwidth per stack absolutely dwarfs GDDR5's 28 GB/s per chip, but interestingly enough, GDDR5 has the higher per-pin data rate at 7 Gb/s. And by a large margin as well, seeing how HBM sits at 1 Gb/s.
Okay, that stat is misleading on its own. Technically true, but there's a huge caveat. Because GDDR5's signals have to travel further across the board, a higher clock is a necessity. HBM's dies are stacked on top of each other, so distances are minimal, and its modest 1 Gb/s per pin is multiplied across an enormously wide 1024-bit bus per stack, which makes it faster than GDDR5 in aggregate.
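The bus-width trade-off becomes obvious once you multiply it out. A sketch comparing one HBM stack (1024-bit bus) against one GDDR5 chip (32-bit bus):

```python
def bandwidth_gb_s(per_pin_gbps, bus_width_bits):
    """Aggregate bandwidth in GB/s: per-pin data rate times bus width,
    divided by 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

gddr5_chip = bandwidth_gb_s(7, 32)    # 28.0 GB/s per chip
hbm_stack = bandwidth_gb_s(1, 1024)   # 128.0 GB/s per stack
print(gddr5_chip, hbm_stack)
```

Wide and slow beats narrow and fast here: the stack's 1024 data pins more than make up for the seven-fold lower per-pin rate.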
AMD claims as much as three times the bandwidth per watt compared to competing GDDR5 solutions. While that exact number is up for debate, no one can deny HBM's speed, and it's precisely the wide-and-slow design behind that speed that keeps its power needs low.
GDDR5X
This memory type is a direct successor to GDDR5, and many initially disregarded it as just another minor update. That perception changed very quickly, however, and it's only fair to say that GDDR5X brought as much change over GDDR5 as GDDR6 later did over GDDR5X.
The main reason why we say it like that is that 5X literally doubled the latest (at the time) GDDR5 specifications.
The bandwidth for GDDR5X is 56 GB/s per single chip, while GDDR5 sits at 28 GB/s. Another doubling comes in the memory clock department where it scores 14 Gb/s (1750 MHz) and its predecessor has 7 Gb/s (875 MHz). Still nowhere near HBM though.
Although not quite half the energy consumption, its 1.35 V operating voltage is still an improvement over GDDR5's 1.5 V.
As previously mentioned, and as the name suggests, GDDR5X is inferior to GDDR6 and even though you can still get really good graphics cards with GDDR5X, it might be best to choose GDDR6. Or HBM2.
HBM2
As much as GDDR5X is a sequel to GDDR5, HBM2 is a sequel to HBM. And much like GDDR5X doubled GDDR5's performance, HBM2 doubled HBM's.
To avoid confusion, it's important to note that there is also HBM2E, which despite the naming convention is better thought of as an enhanced revision of HBM2 than as a GDDR5X-style separate product. We'll cover both HBM2 and HBM2E in this section.
Upon its 2016 release, HBM2 came equipped with a 2 Gb/s maximum transfer rate per pin, double that of HBM. Maximum capacity also doubled, to 8 GB per stack, and bandwidth increased two-fold, from 128 GB/s to 256 GB/s per stack.
Although these numbers were accurate at the time of the release, things have dramatically changed with HBM2E.
Now, the transfer rate went up a little, to 2.4 Gb/s, and bandwidth rose from 256 GB/s to 307 GB/s. Looking at those modest increases, it's understandable why it was called HBM2E and not HBM3. Where things get truly insane, however, is the jump in maximum capacity from 8 GB to 24 GB per stack, something even high-end GDDR6 cards struggle to reach.
Of course, that number is huge, but we also have to add that, for now, it might be a bit of overkill.
GDDR6
This update on the GDDR side of things wasn't as long in the making as GDDR5X, and its initial reception was much better.
I think the story of generational jumps is well-told, but for posterity’s sake, let’s compare the numbers anyway.
So, GDDR5 peaked at a 7 Gb/s data rate with 336.5 GB/s of bandwidth and 12 GB of capacity. These numbers come from Nvidia's GTX Titan X.
Up next, there's GDDR5X with an 11 Gb/s peak data rate, 484 GB/s of bandwidth, and a total of 11 GB of capacity, found in the GTX 1080 Ti.
Finally, we have GDDR6 as found in the RTX 2080 Ti, the king at the time. Its data rate is 14 Gb/s and it comes with 11 GB of memory. The biggest improvement is the monstrous 616 GB/s of bandwidth.
Of course, these are consumer specs, whereas professional GPUs are getting close to 1 TB/s.
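Those card-level bandwidth figures fall out of the same per-pin-rate-times-bus-width arithmetic. A sketch, assuming the 352-bit bus width Nvidia published for the cards that hit 484 and 616 GB/s:

```python
def card_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Whole-card memory bandwidth in GB/s from per-pin data rate
    and total memory bus width."""
    return data_rate_gbps * bus_width_bits / 8

print(card_bandwidth_gb_s(11, 352))  # GDDR5X at 11 Gb/s on a 352-bit bus -> 484.0 GB/s
print(card_bandwidth_gb_s(14, 352))  # GDDR6 at 14 Gb/s on a 352-bit bus -> 616.0 GB/s
```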
What Does The Future Hold?
There have been some pretty credible leaks regarding the RTX 3000 series, and some of the newest ones mention GDDR6X. This certainly fits the previous GDDR5/5X naming convention, but playing devil's advocate, somebody may simply have noticed that pattern and improvised the next GDDR name.
What gives the 6X theory credence is that a leaked benchmark showed an unknown memory type. While that's certainly possible, let's err on the side of caution and assume there would be at least a formal announcement that the technology exists before it ships.
Another exciting rumor exists in the HBM camp and that is the release of HBM3. Although these rumors lost steam since they first surfaced a couple of years ago, it’s not beyond reason to assume that a next memory generation is coming. Important note: many believe that those HBM3 rumors were referencing HBM2E.
Which VRAM Reigns Supreme?
If you carefully consider the arguments, the best option right now is probably either HBM2 or GDDR6. The fact of the matter is that both types have their strengths and weaknesses and that your choice should probably come from your needs.
Overall, it's safe to say that HBM2 is the best choice for things like machine learning and AI workloads, 3D modeling, and video editing, thanks to its superior bus width, which lets the cards that use it churn through the huge number of parallel calculations those tasks demand.
As far as gaming is concerned, GDDR6 is probably where you should look to invest your money.
And that's not to say that HBM2 can't run games or that GDDR6 can't be used for editing; it's just that each type plays to different strengths.