The war between the two titans of the industry has just been re-kindled. The AMD vs Nvidia debate might’ve been a one-sided affair for a while, but now one of them is catching up and is ready to make a claim for the throne.
This GPU war goes back to the 1990s. Although AMD has a much longer history in tech, Nvidia has recently dominated the field and is financially in a much better spot, being worth around twice as much.
However, we have to keep in mind that AMD devotes a great deal of its resources to its CPUs, and this fact shouldn't be overlooked.
That said, history doesn't count for much in the world of technology: no one really cares that AMD has been around since 1969. This has now become a Nvidia versus AMD situation. Scroll down to find the best choice for you.
AMD vs Nvidia: Performance
If you’re thinking about getting a new GPU, you’re probably wondering about the potential performance of each card. Hitting 60 FPS is the bare minimum in today’s gaming world, and a good GPU is the key to getting that performance.
Nonetheless, building a new PC and getting the best possible in-game performance isn’t just about getting the best GPU. It’s also important to know that the CPU and RAM need to be on par with the GPU to avoid bottlenecking.
There are three general GPU classifications and each represents a valid part of the market. These are low-end or budget, mid-tier or mid-range, and high-end. Seeing how each of these categories is beneficial in different ways, it’s only fair to compare AMD and Nvidia for each.
Budget

For this category, we can look at the RX 5500 XT and the GTX 1660, as they’re probably the best budget cards AMD and Nvidia have to offer in the $200 price range.
The reason we’re not making this about RDNA 2 and Ampere is that there aren’t any budget cards from those generations near this price point.
While the RX 5500 XT offers a higher base clock at 1685MHz compared to the GTX 1660’s 1530MHz, Nvidia counters with a higher boost clock of 1785MHz, which edges out AMD’s Game Clock of 1737MHz. Although this may look irrelevant, it’s interesting to see how this competition has reached even the smallest details.
AMD further showcases its capabilities with 8GB of GDDR6 memory, which is categorically better than Nvidia’s 6GB of GDDR5. It also holds firm with higher memory bandwidth and more L2 cache, but as you might’ve assumed already, the GTX 1660 draws less power.
However, the hardware is nothing without software, and in this regard Nvidia reigns supreme. Despite the previously mentioned specifications favoring AMD, and although it consumes less power, Nvidia ultimately performs better.
As gamers are not as interested in power consumption as in performance, Nvidia is the winner in this category.
Mid-Range

For this category, we’ll look at AMD’s RX 6700 XT and Nvidia’s RTX 3060 Ti as they are two great choices and fair representations of the companies’ foray into mid-range.
This generation’s duel is a lot more interesting as cards aren’t as evenly matched in the mid-range category as they used to be.
The consensus among benchmarkers is clear: the RX 6700 XT is simply the better card. The RTX 3060 Ti has comparable performance and tightens the gap further at 1440p, but overall, the 6700 XT will give you more FPS.
However, the most important question here is the price. The RTX 3060 Ti is considerably cheaper at $399, while the RX 6700 XT comes in at a $479 price point. This leads to our next question: which card has the better value proposition?
There might be some speculation on if these cards should’ve even been compared, but at this point, they are the best mid-range cards from each manufacturer.
The fact of the matter is that the RTX 3060 Ti performs at the price level at which it’s being offered, and the same can be said for the RX 6700 XT. So despite the price difference, the RX 6700 XT is the better card, and this is a point for AMD.
High-End

Here’s where things get a little tricky, mainly because AMD didn’t really offer anything that could compete with Nvidia until the release of the RX 6000 series, which is why Nvidia has a huge advantage in this area.
However, we will still compare an AMD and an Nvidia card, but first, we have to talk about the elephant in the room.
With their latest generations, both companies made incredible enthusiast-class cards. Although AMD’s RX 6900 XT is only $200 more than Nvidia’s high-end RTX 3080, and one could argue that these should be in the same bracket, we will compare the RX 6800 XT and the RTX 3080.
The prices here are inverted when compared to the mid-range as Nvidia’s RTX 3080 is the more expensive card here. The $50 price difference probably doesn’t seem like a lot when you’re shelling out $650-$700, but what makes it particularly interesting is the performance.
Overall, the RX 6800 XT is a better performing card in terms of raw rasterization performance.
However, the RTX 3080 performs better in the ray-tracing department, and that shouldn’t be overlooked. This technology is relatively “new”, but it has been around long enough to expect better from AMD.
Of course, you could give AMD some credit, as this is its first crack at it, but the fact is that it didn’t perform as well as Nvidia. Still, future driver updates and better AMD-oriented game optimization could lead to better performance.
Games that use ray tracing are still pretty scarce, and the same goes for Nvidia’s amazing new feature, DLSS, which can considerably increase your performance while minimally reducing visual quality.
In the end, it wouldn’t be fair to give Nvidia a point here considering that a huge percentage of games do not utilize ray tracing or DLSS. So, the point goes to AMD.
Total score: AMD 2 – Nvidia 1
AMD vs Nvidia: Features
While features may seem less significant than actual specs, they are an important part of what makes a good GPU. Both AMD and Nvidia offer similar GPUs in terms of hardware and price, but the devil is in the details; in this case, in the features.
Ray Tracing

If one feature has to be talked about, it’s this. Although ray tracing isn’t a requirement for GPU performance, it makes a clear difference, offering better and more realistic visuals.
So, what is ray tracing anyway?
Technical definitions aside, ray tracing is a rendering technique that allows for lighting to be tracked more accurately by accounting for things such as object materials and how lighting reflects off of them.
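The core of that tracking is simple vector math. As a minimal, purely illustrative sketch (not any vendor's actual implementation), here is the standard mirror-reflection formula r = d − 2(d·n)n that ray tracers apply whenever a ray bounces off a surface:

```python
# A minimal sketch of one step of ray tracing: when a ray hits a surface,
# its reflected direction is computed from the surface normal using
# r = d - 2(d . n)n. Vectors are plain tuples; this is illustrative only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(direction, normal):
    """Reflect an incoming ray direction off a surface with the given unit normal."""
    d_dot_n = dot(direction, normal)
    return tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))

# A ray travelling down-right hits a horizontal floor (normal pointing up)
# and bounces up-right, just like light off a mirror.
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 1.0, 0.0)
```

A real GPU ray tracer repeats this kind of bounce millions of times per frame, also weighting each bounce by the object's material, which is exactly why dedicated hardware matters.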
Ray tracing awards a point to Nvidia as they were the first to implement it in their GPUs. With the arrival of Big Navi, AMD has a chance to prove themselves in this regard. So far, the RX 6000 series performed well enough, but still not up to par with the best from Nvidia.
Nvidia has pursued ray tracing technology since the 2000s, having introduced it to the world in 2018. This move particularly highlighted their dominance in the GPU market and AMD is yet to recover from it.
Since the next generation of gaming consoles, the PlayStation 5 and Xbox Series X, wouldn’t have allowed their names to be linked with anything less than top-of-the-line hardware, AMD was very upfront about its intention to introduce ray tracing in its next GPUs.
While things do look promising for AMD with the introduction of ray tracing, the point here goes to the market innovator Nvidia.
Variable Rate Shading
VRS is a technology first brought to the market by Nvidia that has found its best use in VR. It essentially determines which regions of the frame in your field of vision need to be fully shaded and which can be shaded at a coarser rate, or even skipped. This significantly lowers the load on the GPU, freeing up resources for more useful work.
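The decision VRS makes can be sketched in a few lines. The tile radii and shading rates below are made-up illustrative values, not anything from Nvidia's or AMD's actual implementations:

```python
# A toy sketch of the VRS idea: pick a coarser shading rate for screen
# tiles farther from the point the player is looking at. The thresholds
# and rates are illustrative assumptions only.

import math

def shading_rate(tile_center, gaze, full_res_radius=300, half_res_radius=700):
    """Return shaded samples per pixel for a tile, based on distance from the gaze point."""
    dist = math.dist(tile_center, gaze)
    if dist <= full_res_radius:
        return 1.0   # shade every pixel near the focus
    if dist <= half_res_radius:
        return 0.5   # 2x1 coarse shading in the mid-periphery
    return 0.25      # 2x2 coarse shading in the far periphery

gaze = (960, 540)  # player looking at the center of a 1080p frame
print(shading_rate((960, 540), gaze))   # full detail at the focus
print(shading_rate((100, 100), gaze))   # cheap shading in the corner
```

In VR, where the headset knows roughly where your eyes can resolve detail, this kind of per-region decision is what makes the technique pay off so well.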
AMD was slower to incorporate this tech into its GPUs, finally introducing it with the RDNA 2 line, having filed a patent for VRS all the way back in early 2019.
There have also been talks of perfecting the eye-tracking technology and using that to further improve upon VRS, which totally sounds like sci-fi stuff.
Since we were able to see this cool piece of technology in action from Nvidia’s side, it deserves this point too.
Deep Learning Super Sampling
Designed as another way to increase the efficiency of the GPU, DLSS is a path-breaking piece of technology. It can even be considered a little ahead of its time because of the process required to fully enjoy its benefits.
The biggest issue here is that game developers have to enable DLSS support when making the game, and for players to see the improvement, the game needs to be sent to Nvidia, which then lets an AI analyze its images and learn to upscale them to a higher resolution.
Initially, the fact that Nvidia does the heavy lifting was one of the biggest drawbacks of DLSS. The whole process wasn’t as optimized as it needed to be and this largely caused DLSS to come off as an idea rather than an efficient concept.
With the arrival of Nvidia’s Ampere architecture, this process was streamlined, and we certainly expect future improvements. And since DLSS remains an Nvidia exclusive, the point here goes to Nvidia.
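To see why DLSS helps performance so much, consider the resolutions involved. The per-axis scale factors below are illustrative assumptions for the common quality modes, not Nvidia's published numbers for every mode:

```python
# A rough sketch of what DLSS-style upscaling buys you: the GPU renders at
# a lower internal resolution and the AI upscales to the target. The scale
# factors per mode are illustrative assumptions.

MODE_SCALE = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.5}

def internal_resolution(out_w, out_h, mode):
    """Resolution the GPU actually renders before upscaling to (out_w, out_h)."""
    s = MODE_SCALE[mode]
    return round(out_w * s), round(out_h * s)

# In "performance" mode a 4K output is rendered at just 1080p internally,
# roughly quartering the shading work per frame.
print(internal_resolution(3840, 2160, "performance"))  # (1920, 1080)
```

Shading a quarter of the pixels and letting a trained network fill in the rest is where the large FPS gains come from.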
G-Sync vs FreeSync
These are Nvidia’s and AMD’s adaptive synchronization technologies designed to eliminate screen tearing during gameplay. Screen tearing occurs when the GPU’s output is mismatched with the display’s refresh rate.
The communication between the GPU and the monitor basically works this way: if the monitor refreshes at 60Hz, it expects 60 frames per second from the GPU (the same goes for 120Hz, 144Hz, and so on).
The issue usually happens when the GPU is unable to produce the required frames, causing screen tearing.
Adaptive sync technology allows for the GPU to effectively change the monitor refresh rate, depending on the number of frames it produces.
So, if the game dips to 40 FPS, the monitor will drop its refresh rate to 40Hz to match. However, this doesn’t make games run any smoother; it just prevents screen tearing.
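The matching behavior described above can be sketched as a simple clamp. The 48–144Hz window is a typical adaptive sync range used here as an assumption; real panels vary:

```python
# A simple sketch of adaptive sync: the monitor's refresh rate follows the
# GPU's frame rate, clamped to the panel's supported window. The 48-144Hz
# range is an assumed, typical example.

def effective_refresh(fps, min_hz=48, max_hz=144):
    """Refresh rate an adaptive sync monitor would use for a given frame rate."""
    return max(min_hz, min(fps, max_hz))

for fps in (40, 60, 100, 200):
    print(f"GPU at {fps} FPS -> panel refreshes at {effective_refresh(fps)}Hz")
```

Outside that window the two technologies differ: below the minimum, and above the maximum, G-Sync and FreeSync handle the mismatch in the different ways described below.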
In the not so distant past, the solution for this was software-based, most notably with VSync, but this is being phased out in favor of newer technologies.
G-Sync is Nvidia’s solution to screen tearing, and it has drawn some criticism. Having been first to market with adaptive sync tech, Nvidia used that to its advantage by imposing hardware requirements: monitors must be G-Sync compatible, and although never officially stated, this typically added anywhere from $100 to $300 to a monitor’s price.
To run G-Sync, monitors require a proprietary Nvidia G-Sync scaler module which means they’ll all have similar on-screen menus and options. This is AMD’s FreeSync’s biggest advantage: using Adaptive Sync standard built into the DisplayPort 1.2a specification, it allows manufacturers to choose any cheaper scaler.
Nevertheless, G-Sync has a better way of handling the GPU outproducing the display. It will lock the GPU’s frame rate to the monitor’s upper limit, while FreeSync, provided in-game VSync is turned off, will let the GPU produce extra frames. This can lead to screen tearing but lowers input lag.
The biggest issue which strongly divides the community is the fact that not all Nvidia cards work with FreeSync monitors, just like not all AMD cards will work with G-Sync monitors. This is being ironed out, but the fact still remains that you will have to check if your monitor will work properly with your GPU.
While both sides have their pros and cons in the adaptive sync technology sector, the fact that FreeSync is more readily available is what ultimately earns AMD the point here.
Total score: AMD 3 – Nvidia 4
AMD vs Nvidia: Drivers And Software
The fact of the matter is: good hardware requires good software. Drivers are programs that control how a device (like a GPU) communicates with the operating system, letting software get the best out of the hardware without having to manage every aspect of how that particular part operates.
As previously mentioned, AMD’s RX 5000 series launched to a miserable reception due to driver issues causing black screens and crashes. Unfortunately, the problem has persisted despite newer drivers’ constant attempts to fix it. Nvidia’s issues are problematic in their own way: they are often subtler and therefore harder to pin down.
AMD has significantly improved their driver capabilities with their yearly Radeon Adrenalin updates. The 2020 version alone allegedly offers an impressive 12% improvement over the 2019 version.
AMD also makes a conscious effort to simplify things and only use one piece of software to update its drivers. They have also followed a schedule of at least once-per-month with major releases.
The biggest setback for AMD is their products’ persistent issues, which take a long time to fix.
In turn, Nvidia splits control over its hardware across two separate applications.
Their Nvidia Control Panel enables the configuration of things like 3D settings or display resolution.
GeForce Experience handles game optimizations, driver updates, and extra features.
The Nvidia Control Panel has not seen a UX or UI change for more than a decade. The design is outdated and it’s incredibly slow at times.
GeForce Experience sounds like a great idea in general, but it’s not what users hoped it would be. Users must log in to use the available features such as automatic driver updates, recording, and an FPS counter. Many consider GeForce Experience to be bloatware.
In comparison, Radeon Software is much quicker, much more intuitive, requires no account and provides other useful features such as Radeon Chill, Radeon Boost, manual and automatic overclocking, undervolting, manual fan curve, and more.
In the end, while AMD has its downsides, the efficient simplicity with which their software operates earns them a point in this category.
Total score: AMD 4 – Nvidia 4
AMD vs Nvidia: Power Consumption and Efficiency
When AMD introduced Navi and announced their gamble on TSMC’s 7nm FinFET process, they probably thought the promised 50% performance-per-watt improvement would bridge the efficiency gap.
But that was not the case: Navi didn’t even outperform older Nvidia GPUs that were built on the TSMC’s last-gen 12nm node.
The future seems brighter for AMD, as its RDNA 2 cards managed another 50% performance-per-watt improvement over RDNA.
But this isn’t all black and white. In the extreme performance range, Nvidia’s RTX 3090 certainly uses a lot of power with only RX 6900 XT coming close.
In short, we could say that the RX 6800 XT is not as power-hungry as the RTX 3090, but that would disregard the core of the argument: the latter is simply a better GPU.
Things do flip in the mid-range, where the RTX 3070 is the better performer, but only by a slight margin. Coincidentally, that margin is about the same in terms of overall performance, so we can’t really chalk this up as a Nvidia win.
For the time being, Nvidia is simply the better performer in the budget category and there isn’t much room for debate here. Still, both companies are preparing budget cards for their latest generations, so this section might need an update soon.
Nvidia narrowly edges out AMD in performance per watt, not just in their latest generations but for the last few. Although AMD is catching up, it should be concerned that Nvidia achieves this efficiency on previous-generation lithography.
Total score: AMD 4 – Nvidia 5
AMD vs Nvidia: Dollar Value
While top-level performance is what most gamers look for in their GPUs, the price also needs to be considered. As previously discussed, there are three basic categories for both price and performance.
Nvidia has a clear advantage in the extreme price range performance-wise, but that gap is slowly closing, with AMD’s RX 6900 XT coming really close to the RTX 3090 while being a whole $500 cheaper.
If we drop a level below where the RX 6800 XT is $50 cheaper than the RTX 3080, it wouldn’t be too far out to claim it’s a better option.
However, due to the RTX 3080’s better ray-tracing performance, it’s really hard to outright say either is better, but due to price difference, let’s say that AMD has a minuscule edge in this price bracket.
Dropping further down to the mid-range market, we have a much clearer picture. Although RX 6800 is more expensive than the RTX 3070, it is also a better card and its cost is justified. However, as the question here is the dollar value, we feel that both cards perform appropriately to their prices so we’ll call this one a tie.
Even though the comparisons seem a bit evenly matched, the performance per dollar seems to favor AMD which is why they get a point this time around. But, keep in mind that it was a close match.
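The metric behind all of these value judgments is simple arithmetic. The MSRPs below are the ones quoted above, but the average FPS figures are illustrative placeholders, not real benchmark results:

```python
# A back-of-the-envelope sketch of a performance-per-dollar comparison.
# Prices are the MSRPs quoted in the article; the FPS values are
# placeholder assumptions, not measured benchmark data.

def fps_per_dollar(avg_fps, price):
    return avg_fps / price

cards = {
    "RX 6700 XT":  {"fps": 100, "price": 479},  # fps is a placeholder
    "RTX 3060 Ti": {"fps": 90,  "price": 399},  # fps is a placeholder
}

for name, c in cards.items():
    print(f"{name}: {fps_per_dollar(c['fps'], c['price']):.3f} FPS per dollar")
```

Plugging in your own benchmark numbers for the games you actually play is a more reliable guide than any single reviewer's verdict.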
And The Winner is…
With a total score of 5 to 5, there is no outright better card manufacturer. Of course, the AMD vs Nvidia debate is subjective, and you shouldn’t blindly buy either company’s card. The general advice for making any sort of investment in a PC part is to know your needs and budget.
You should carefully research all cards individually before making your final decision. In both companies’ latest generations, the answer to which you should buy essentially boils down to ray tracing. If that’s not of the utmost importance to you, then AMD is the better option. But if you want the highest possible image quality, you should go with Nvidia.