The war between the two titans of the GPU industry has just been rekindled. The AMD vs. NVIDIA debate might have been a one-sided affair for a while, but now AMD is catching up and is ready to make an attempt for the throne.
This GPU war dates back to the 1990s. Although AMD has a much longer history in tech, NVIDIA has recently dominated the field and is far more successful financially, being valued at about twice as much.
It is worth keeping in mind that AMD devotes a large share of its resources to CPUs, meaning it focuses less on GPUs than NVIDIA does.
However, history is rarely considered in the world of technology; few people care that AMD has been around since the 1950s. Today, the GPU market has become an NVIDIA versus AMD situation. Scroll down to find the best choice for you.
AMD vs. NVIDIA: Performance
If you are considering getting a new GPU, you’re probably curious about the potential performance of each card. Hitting 60 FPS seems like the bare minimum in today’s gaming world, and a good GPU is the key to achieving that performance.
Even so, building a new PC and having it produce the best possible in-game performance isn’t just about getting the best GPU. It’s also important to know that the CPU and RAM will need to be on par with the GPU to avoid bottlenecking.
There are three general GPU classifications, and each one represents a different part of the market. These are low-end or budget, mid-tier or mid-range, and high-end.
Each of these categories is beneficial in different ways, so it’s only fair to compare AMD and NVIDIA for each.
Budget

For this category, we will look at the RX 5500 XT and the GTX 1660, as they are probably the best budget cards that AMD and NVIDIA have to offer in the $200 price range.
The reason that we aren’t making this about RDNA 2 and Ampere is that there aren’t any budget cards near this price point.
While the RX 5500 XT offers a higher base clock at 1685MHz compared to the GTX 1660’s 1530MHz, NVIDIA counters with a higher boost clock of 1785MHz, which beats AMD’s game clock of 1737MHz.
Although you are unlikely to notice a difference, it’s interesting how this competition has extended to even the smallest details.
AMD further showcases its capabilities with 8GB of GDDR6 memory, a clear step up from NVIDIA’s 6GB of GDDR5. The RX 5500 XT also features higher memory bandwidth and more L2 cache, but, as you might have already guessed, the GTX 1660 uses less power.
However, hardware is nothing without accompanying software, and, in this regard, NVIDIA reigns supreme. Despite the on-paper specifications favoring AMD, the GTX 1660 ultimately performs better in games while consuming less power.
As gamers are far less interested in power consumption than performance, NVIDIA is the winner in this category.
Mid-Range

For this category, we will look at AMD’s RX 6700 XT and NVIDIA’s RTX 3060 Ti, as they are two great options that represent their respective companies’ foray into mid-range GPUs.
This tier’s battle is far more interesting, as mid-range cards are no longer as evenly matched as they used to be.
The consensus among the benchmarkers is clear: the RX 6700 XT is simply the better card. The RTX 3060 Ti has a comparable performance and tightens the gap even further at 1440p, but, overall, the 6700 XT will simply give you more FPS.
However, one of the most important questions here is the price. The RTX 3060 Ti is considerably cheaper at $399, while the RX 6700 XT comes at a $479 price point. This leads to our next question: which card has the better value?
It is questionable whether or not these cards should even be compared, but, at this point, they are the best mid-range cards from each manufacturer.
The fact of the matter is that the RTX 3060 Ti’s performance is appropriate for its price level, and the same can be said for the RX 6700 XT. Despite the price difference, the RX 6700 XT is the better card, which is a win for AMD.
High-End

Here’s where things get a little trickier. AMD didn’t really offer anything that could compete with NVIDIA until the release of the RX 6000 series, which is why NVIDIA has a huge advantage in this area.
However, we can still compare an AMD and an NVIDIA card, but first, we need to talk about the elephant in the room.
With their latest generations, both companies released excellent enthusiast-class cards. Although AMD’s RX 6900 XT is only $200 more than NVIDIA’s high-end RTX 3080, and one could argue that these should be in the same bracket, we will compare the RX 6800 XT with the RTX 3080.
These prices are inverted compared to the mid-range, as NVIDIA’s RTX 3080 is the more expensive card. The $50 price difference probably doesn’t seem like much when you’re spending $650-$700, but what makes this particularly interesting is the performance.
Overall, the RX 6800 XT is the better card in terms of raw rasterization performance.
However, the RTX 3080 performs better in the ray-tracing department, which shouldn’t be overlooked. This technology is still relatively new, but it has still been around for long enough to expect better from AMD.
You can still give them some credit as this is their first crack at it, but the fact remains that it didn’t perform as well as NVIDIA. Still, future driver updates and better AMD-oriented game optimization should lead to better performance.
Support for ray tracing is still relatively rare in games, and the same goes for another notable NVIDIA feature: DLSS. DLSS can considerably increase your performance while only minimally reducing visual quality.
In the end, it wouldn’t be fair to give NVIDIA a point here, considering a huge percentage of games do not yet utilize ray tracing or DLSS. So, this point goes to AMD.
Total score: AMD 2 – NVIDIA 1
AMD vs. NVIDIA: Features
While features may seem less critical than actual specs, they are an important part of what makes a good GPU. AMD and NVIDIA offer similar GPUs in terms of hardware and price, but the devil is in the details or, in this case, the features.
Ray Tracing

If there is something that needs to be talked about, it’s this. Although ray tracing isn’t a requirement for good GPU performance, it makes a clear difference by offering better and more realistic visuals.
What is ray tracing anyway?
Technical explanations aside, ray tracing is a rendering technique that allows lighting to be presented more accurately by accounting for variables such as object materials and how lighting is reflected off them.
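To make the idea concrete, here is a minimal sketch of the core operation behind ray tracing: cast a ray, find what it hits, and shade that point based on its angle to a light source (Lambertian diffuse shading). This is a toy illustration of the principle, not how GPU ray-tracing hardware actually works, and all function names are our own.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest ray-sphere intersection, or None on a miss.
    The ray direction is assumed to be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # quadratic discriminant (a = 1 for a unit direction)
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def shade(hit_point, center, light_pos):
    """Lambertian (diffuse) shading: brightness follows the cosine of the angle
    between the surface normal and the direction to the light."""
    normal = [p - c for p, c in zip(hit_point, center)]
    n_len = math.sqrt(sum(n * n for n in normal))
    normal = [n / n_len for n in normal]
    to_light = [l - p for l, p in zip(light_pos, hit_point)]
    l_len = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / l_len for v in to_light]
    return max(0.0, sum(n * v for n, v in zip(normal, to_light)))
```

A real renderer repeats this per pixel, with secondary rays for reflections and shadows, which is exactly why dedicated hardware acceleration matters.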
Ray tracing awards a point to NVIDIA as they were the first to implement it in their GPUs. With the arrival of Big Navi, AMD has a chance to prove itself in this regard. So far, the RX 6000 series has performed well enough, but still not as well as the best from NVIDIA.
NVIDIA has pursued ray tracing technology since the 2000s, having introduced it to the world in 2018. This move highlighted their dominance in the GPU market, and AMD has yet to fully recover from it.
The next-generation gaming consoles, the PlayStation 5 and Xbox Series X, both run on AMD graphics hardware, and their makers wouldn’t have allowed their names to be associated with something less than top-of-the-line. AMD, for its part, was very upfront about its intention to introduce ray tracing in its next GPUs.
While things do look promising for AMD with the introduction of ray tracing, the point here has to go to the market innovator NVIDIA.
Variable Rate Shading
VRS is a technology first brought to the market by NVIDIA that has found its best use in VR. It essentially determines which areas of the frame in your field of view need to be fully shaded, and which can be shaded at a reduced rate or skipped. This significantly reduces the burden on the GPU, freeing up resources for more useful work.
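The decision logic behind VRS can be sketched very simply: shade at full rate where the viewer is looking and at a coarser rate toward the periphery. The tiers below mirror common VRS shading rates (1x1, 2x2, 4x4), but the distance thresholds and function name are illustrative assumptions, not any vendor's actual implementation.

```python
def shading_rate(px, py, gaze_x, gaze_y, width, height):
    """Pick a shading rate for a pixel based on its distance from the
    viewer's point of focus. '2x2' means one shading result is shared
    by a 2x2 block of pixels, cutting that region's shading work by 4x."""
    # Normalized distance from the gaze point (0 at focus)
    dx = (px - gaze_x) / width
    dy = (py - gaze_y) / height
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 0.15:
        return "1x1"  # full-rate shading where the player is looking
    elif dist < 0.4:
        return "2x2"  # reduced rate in the mid-periphery
    return "4x4"      # coarsest rate at the edges of the view
```

Pairing this with eye tracking, as mentioned below, would let the “gaze point” follow the player’s actual focus instead of assuming the screen center.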
AMD still hasn’t incorporated this tech into their GPUs, but it’s heavily rumored that it will be introduced in their RDNA 2 line, as AMD filed for the patent for VRS all the way back in early 2019.
There have also been talks of perfecting eye-tracking technology and using that to further improve upon VRS, which sounds like a sci-fi idea.
As we were able to see this cool piece of technology in action on NVIDIA’s side, it deserves this point too.
Deep Learning Super Sampling
Designed as another way to increase the efficiency of the GPU, DLSS is a path-breaking piece of technology. It can even be considered ahead of its time because of the process required to fully enjoy its benefits.
The biggest issue here is that game developers are required to enable DLSS support when making the game. For players to see the improvement, the game needs to be sent to NVIDIA, which then lets an AI analyze its images and learn to upscale them to a higher resolution automatically.
Initially, the fact that NVIDIA does the heavy lifting was one of the biggest drawbacks of DLSS. The process wasn’t as optimized as it needed to be, and this resulted in DLSS looking like an interesting idea rather than an effective technology.
With the arrival of NVIDIA’s Ampere architecture, this process was streamlined, and we certainly expect further improvements.
G-Sync vs. FreeSync
These are NVIDIA’s and AMD’s adaptive synchronization technologies designed to eliminate screen tearing during gameplay. Screen tearing occurs when the GPU’s output is mismatched with the display’s refresh rate.
The communication between the GPU and the monitor basically works in this way: if the monitor refreshes at 60Hz, it requires 60 frames to be sent from the GPU (same for 120Hz, 144Hz, and so on).
The issue usually occurs when the GPU is unable to produce the required frames, causing screen tearing.
Adaptive sync technology allows for the GPU to effectively change the monitor refresh rate, depending on the number of frames being produced.
If the game dips to 40 FPS, the GPU will limit the monitor to refresh at 40Hz. However, this doesn’t make the game run any smoother; it just prevents screen tearing.
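The behavior described above can be sketched in a few lines: a fixed-refresh monitor risks tearing whenever the GPU’s output doesn’t match its refresh rate, while an adaptive-sync monitor simply follows the GPU within its supported range. The numbers and function names are illustrative.

```python
def tearing_risk_frames(fps_per_second, refresh_hz):
    """Count the seconds in which a fixed-refresh monitor receives a frame
    rate that doesn't match its refresh rate (a tearing risk)."""
    return sum(1 for fps in fps_per_second if fps != refresh_hz)

def adaptive_refresh(fps_per_second, min_hz, max_hz):
    """Adaptive sync: the monitor's refresh rate tracks the GPU's output,
    clamped to the panel's supported range (e.g. 48-144Hz)."""
    return [min(max(fps, min_hz), max_hz) for fps in fps_per_second]

# GPU output over four seconds of gameplay, dipping under load
fps_trace = [60, 55, 40, 60]
```

Note the clamping: if the GPU dips below the panel’s minimum (here 48Hz), the monitor can’t follow it all the way down, which is why adaptive-sync ranges matter when choosing a monitor.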
In the not-so-distant past, the solution for this was software-based, most notably with VSync, but this is being phased out in favor of newer technologies.
G-Sync is NVIDIA’s solution to screen tearing and has drawn some criticism. As the first to experiment with adaptive sync tech, NVIDIA used that head start to its advantage, albeit with hardware requirements attached. Monitors must be G-Sync compatible and, although not specifically stated, this adds anywhere from $100 to $300 to the price.
To run G-Sync, monitors require a proprietary NVIDIA G-Sync scaler module which means they will all have similar on-screen menus and options.
This is FreeSync’s biggest advantage: AMD builds on the Adaptive-Sync standard included in the DisplayPort 1.2a specification, which allows monitor manufacturers to use any cheaper scaler.
Nevertheless, G-Sync has a better way of handling the GPU producing too many frames. It will lock the GPU’s frame rate upper limit to that of the monitor. FreeSync, when in-game VSync is turned off, will allow the GPU to produce extra frames. This can lead to screen tearing but also lowers the input lag.
The biggest issue which divides the community is the fact that not all NVIDIA cards work with FreeSync monitors, just as not all AMD cards will work with G-Sync monitors. This is being ironed out, but the fact remains that you will need to check if your monitor will work properly with your GPU.
While both sides have their pros and cons when it comes to adaptive sync technology, the fact that FreeSync is more readily available is what ultimately earns AMD the point here.
Total score: AMD 3 – NVIDIA 4
AMD vs. NVIDIA: Drivers And Software
The fact of the matter is that good hardware requires good software. Drivers are programs that control how a certain device (such as a GPU) communicates with the rest of the system. They allow software to use the hardware to the best of its ability without needing to control every aspect of how that particular part operates.
AMD’s RX 5000 series had a miserable launch, with driver issues causing black screens and crashes. Unfortunately, this issue persists despite newer drivers attempting to fix it.
NVIDIA’s issues can be just as problematic, as they are often subtler and, therefore, more difficult to identify.
AMD has significantly improved its driver capabilities with its yearly Radeon Adrenalin updates. The 2020 version allegedly offers an impressive 12% improvement over the 2019 version.
AMD also makes a conscious effort to simplify things and only uses one piece of software to update its drivers. They have followed a schedule of at least once per month with major releases.
The biggest setback for AMD is their products’ persistent issues, which take a long time to fix.
NVIDIA splits its driver software across two separate applications. The NVIDIA Control Panel enables the configuration of aspects such as 3D settings and display resolution, while GeForce Experience handles game optimization, driver updates, and extra features.
The NVIDIA Control Panel has not seen a UX or UI change for more than a decade. The design is outdated, and it can be incredibly slow at times.
GeForce Experience sounds like a great idea in general, but it isn’t what users hoped it would be. Users must log in to use the available features such as automatic driver updates, recording, and the FPS counter. Many consider GeForce Experience to be bloatware.
In comparison, Radeon Software is much quicker, far more intuitive, requires no account, and provides other useful features such as Radeon Chill, Radeon Boost, manual and automatic overclocking, undervolting, manual fan curve, and more.
In the end, while AMD has its downsides, the efficient simplicity with which its software operates earns them a point in this category.
Total score: AMD 4 – NVIDIA 4
AMD vs. NVIDIA: Power Consumption and Efficiency
When AMD introduced Navi and announced its gamble on TSMC’s 7nm FinFET process, it probably expected the promised 50% performance-per-watt improvement to bridge the efficiency gap.
Unfortunately, this was not the case: Navi didn’t even out-perform older NVIDIA GPUs built on TSMC’s last-gen 12nm node.
The future looks brighter for AMD, as its RDNA 2 cards delivered another 50% performance-per-watt improvement over RDNA.
This isn’t completely black and white. In the extreme performance range, NVIDIA’s RTX 3090 uses a lot of power, with only the RX 6900 XT coming close.
We could say that the RX 6800 XT is not as power-hungry as the RTX 3090, but that would completely disregard the fact that the latter is simply a better GPU.
This does switch in the medium range, where the RTX 3070 is the better performer, but only with a slight margin. That margin is the same in terms of overall performance, so we can’t really chalk this up as an NVIDIA win.
For now, NVIDIA is simply a better performer in the budget category, and there isn’t much room for debate here. Both companies are preparing their respective budget cards for their latest generation, so this section might need an update soon.
NVIDIA narrowly edges out AMD in performance per watt, both in the latest generation and the previous few. Although AMD is catching up, it should be concerned that NVIDIA achieves this efficiency on previous-generation lithography.
Total score: AMD 4 – NVIDIA 5
AMD vs. NVIDIA: Dollar Value
While top-level performance is what most gamers are looking for in their GPUs, the price also needs to be considered. As previously discussed, there are three basic categories for both price and performance.
NVIDIA has a clear advantage in the extreme price range performance-wise, but that gap is gradually closing with AMD’s RX 6900 XT coming close to the RTX 3090 while being a whole $500 cheaper.
If we drop a level below, where the RX 6800 XT is $50 cheaper than the RTX 3080, it wouldn’t be unreasonable to claim it’s a better option.
However, due to the RTX 3080’s better ray-tracing performance, it’s hard to say outright that either is better. Due to the price difference, let’s say that AMD has a slight edge in this price bracket.
Dropping further down to the mid-range market, we have a much clearer picture. Although the RX 6800 is more expensive than the RTX 3070, it is also a better card, and its cost is justified.
However, as the question here is the dollar value, we feel that both cards perform appropriately for their prices, so we will call this one a tie.
Although the comparisons seem mostly evenly matched, the performance per dollar favors AMD, which is why they get the point this time around. Keep in mind that it was a close match.
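The value comparisons above boil down to a simple performance-per-dollar metric, sketched here using the prices from this article. The FPS figures are purely hypothetical placeholders for illustration, not benchmark results.

```python
def fps_per_dollar(avg_fps, price_usd):
    """Frames per second delivered per dollar spent; higher means better value."""
    return avg_fps / price_usd

# Prices from this article; FPS numbers are hypothetical stand-ins.
value_3060ti = fps_per_dollar(100, 399)  # RTX 3060 Ti at $399
value_6700xt = fps_per_dollar(110, 479)  # RX 6700 XT at $479
```

This is why a cheaper card can still “win” on value even while losing on raw FPS: the metric rewards every dollar saved as much as every frame gained.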
And The Winner is…
With a total score of 5 to 5, neither manufacturer comes out ahead. Of course, the AMD vs. NVIDIA debate is subjective, and you shouldn’t blindly purchase either company’s card.
The general advice before investing in a PC part is to know your requirements and budget. You should carefully research all available cards before making your final decision.
In both companies’ latest generations, the answer to which you should buy essentially boils down to ray tracing. If you don’t consider that of the utmost importance, then AMD is a better option. However, if you want the highest possible image quality, you should use NVIDIA.