Are NVIDIA RTX Graphics Cards Worth The Money?

NVIDIA's RTX cards introduced the gaming world to real-time ray tracing, but their prices certainly raised a lot of eyebrows. Let's see if RTX is worth it.

When NVIDIA introduced its GeForce RTX 2000 series to the world in 2018, the graphics card market was shaken to its core. These RTX graphics cards revolutionized the gaming world.

Although the concept of ray tracing had existed for a while before the actual announcement by NVIDIA, we hadn’t previously seen any real-time ray tracing from a single GPU.

This market-shifting move was amplified by NVIDIA's decision to release its best card, the flagship RTX 2080 Ti, right at launch instead of holding its top technology back.

Several years on, we’ve seen NVIDIA release the RTX 3080 and RTX 3090 as some of the best-performing cards on the market. However, we do have to admit that, this time around, AMD finally put up a solid fight against NVIDIA with its RX 6000 Series GPUs.


However, this performance comes at a cost. Technology wasn’t the only thing being stretched to its limit.

The biggest source of apprehension about NVIDIA's new generation of cards was the price. At the time, the best card available was the RTX 2080 Ti, whose eye-watering price kept away even the most loyal NVIDIA fans.

However, NVIDIA sought to rectify this. With the release of the better-performing RTX 3080 at $700, we could finally enjoy a top-of-the-line performance at a more reasonable price. There are also the mid-range RTX 3070 at $500 and RTX 3060 Ti at $400.

Still, considering that the GTX 1080, the top-of-the-line GPU back in 2016, was priced at $600 and the flagship before that, the GTX 980 (the generation before the 1000 series), was just $550, this is still a sharp increase. Both AMD and NVIDIA are to blame.

NVIDIA also released the RTX 3090 at $1500, but as that card compares favorably to the previous generation’s enthusiast-class $2500 Titan RTX, we should be more than pleased.


What You Get With RTX

Ray Tracing

Shadow of the Tomb Raider, RTX off vs. RTX on: the difference is noticeable.

Ray tracing wasn’t the only addition NVIDIA included, but it was certainly the most hyped-up feature.

At first, the more cynical PC hardware enthusiasts were skeptical about the idea and were quick to criticize and make memes mocking the technology as soon as it was announced.

They were somewhat accurate in their assessment that ray tracing would not bring a vast improvement in the looks department. However, on closer inspection, even those hard-set in their beliefs had to admit that it brought visual refinements.

How It Works

Games have long featured reflections and lighting effects, but the truth is that those were part of an elaborate smoke-and-mirrors illusion. Lighting was largely static and hard-coded: precomputed reflections and shadows could look nice, but they weren't the real deal.

The game developers had to devise ways to make their games look properly shaded and illuminated using these tricks. The fact that they often did so successfully is a testament to their inventive and innovative approaches.

RTX brings true real-time simulation of light rays to the table. The in-game world's lighting is computed dynamically, allowing for far more realistic and immersive visuals.

These visual effects can now be rendered so accurately that we’re slowly but steadily moving towards hyper-realistic video game graphics.

When light particles and reflections are calculated with RTX, the engine considers the surface material that light is reflecting off.

For example, light reflection is rendered differently if the reflecting surface is water, as opposed to glass. Similarly, the light will look different when hitting a marble floor or sand.
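As a rough illustration of that idea, here is a toy sketch of the mirror-reflection step plus a made-up material table. The reflection formula is the standard one; the per-material "specularity" numbers are invented for the example and are not NVIDIA's actual shading model.

```python
import math

def reflect(d, n):
    """Mirror-reflect direction d about surface normal n (both unit vectors):
    r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# Hypothetical material table: how "mirror-like" each surface is
# (0 = fully diffuse like sand, 1 = a perfect mirror).
MATERIALS = {"water": 0.9, "glass": 0.95, "marble": 0.4, "sand": 0.05}

def reflected_intensity(material, incoming_intensity):
    """Toy model: specular surfaces return a sharp reflection,
    diffuse surfaces scatter most of the incoming light."""
    return incoming_intensity * MATERIALS[material]

d = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)  # ray heading down and to the right
n = (0.0, 1.0, 0.0)                              # flat floor normal, pointing up
print(reflect(d, n))                  # bounces up and to the right
print(reflected_intensity("water", 1.0))  # 0.9 -- near-mirror reflection
print(reflected_intensity("sand", 1.0))   # 0.05 -- mostly scattered away
```

A real renderer would also perturb the reflected ray for rough surfaces, which is exactly why water and sand end up looking so different.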

Below is a tech demo for ray tracing capabilities, showcased for Battlefield V at CES 2019.

Interestingly, RTX wasn’t the first technology to provide its audience with the magic of ray tracing. In fact, most modern movies with high-budget CGI effects feature ray tracing.

Although The Compleat Angler (1979), produced by Bell Labs engineer Turner Whitted, is credited as the first use of ray tracing, it wasn’t until Pixar’s Monsters University in 2013 that the technology was fully adopted for a feature film.

So how come Pixar did it in 2013, and gamers had to wait until 2018?

Prerendered ray tracing vs. real-time ray tracing

The answer is simple. What Pixar did is completely different from the ray tracing we see in games. Pixar (or any other CGI animation studio) can prerender every single scene, which can take hours, weeks, or even months to process.

Once all those scenes are ready, they can be edited together and turned into a movie such as Monsters University.

In contrast, visuals in video games with ray tracing are processed in real time. RTX 2000, 3000, and RX 6000 GPUs process ray tracing constantly, which is why it has such a significant impact on in-game performance. In games such as Metro: Exodus, the average FPS could drop by 40% or more and might cause stuttering.

We also need to consider that ray tracing in video games is minimal compared to some ray-traced scenes in animated movies. An RTX 3000 or RX 6000 GPU would need months to process a complicated ray-traced scene. It would be impossible to do it in real time.

The way scenes are ray traced with RTX GPUs is also very different compared to animated movie scenes.

For each pixel, a ray is traced from the player’s camera through that pixel into the scene until it hits an object. From that hit point, a secondary ray is traced toward the light source, which also reveals whether the object’s exposure to light is partially disturbed or completely obstructed. Below is a great visual representation of how that works.
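That per-pixel process can be sketched with the classic ray-sphere intersection test. The scene below (camera, one sphere, a small blocker, a point light) is invented for the example; the geometry is what matters: a primary ray finds the surface, then a shadow ray checks whether anything sits between that surface and the light.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t where the ray
    origin + t * direction (direction unit-length) hits the sphere, else None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

camera = [0.0, 0.0, 0.0]
sphere = ([0.0, 0.0, -5.0], 1.0)          # object the pixel "sees"
blocker = ([0.0, 2.5, -4.5], 0.5)         # small sphere between surface and light
light = [0.0, 5.0, -5.0]

# Primary ray through the centre pixel hits the big sphere at t = 4...
t = hit_sphere(camera, [0.0, 0.0, -1.0], *sphere)
hit_point = [0.0, 0.0, -t]
# ...then a shadow ray from the hit point toward the light checks occlusion.
to_light = normalize([l - p for l, p in zip(light, hit_point)])
in_shadow = hit_sphere(hit_point, to_light, *blocker) is not None
print(in_shadow)  # True -- the blocker occludes the light, so the pixel is shaded dark
```

Real games fire millions of such rays per frame, which is why dedicated hardware matters.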

Diagram: ray tracing explained (credit: Techquickie).

This is accomplished using bounding volume hierarchy (BVH) traversal which, as the name suggests, is an algorithm for walking a tree of nested bounding boxes so that a ray is only tested against geometry it might actually hit. Although this greatly reduces the computational requirement, the remaining cost is still very noticeable.

GPUs that don’t have additional ray tracing hardware would be required to use shaders, which would create a tremendous bottleneck.

Enter RT cores.

NVIDIA’s simple solution to that added computational requirement is to assign dedicated cores to those calculations. The RT cores hold two separate units where one handles the bounding box tests, and the other performs ray-triangle intersection tests. This significantly reduces the strain on the GPU and allows it to perform other tasks more effectively.
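The bounding-box half of that work is the classic "slab" test. Here is a small sketch of it in pure Python, standing in for what an RT core performs in fixed-function hardware; the box and ray coordinates are invented for the example.

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray origin + t * dir hit the axis-aligned box?
    inv_dir holds 1/d per component (precomputed once per ray; no component
    of the example direction is zero, so no division-by-zero handling here)."""
    t_near, t_far = float("-inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0

direction = (0.1, 0.1, -1.0)
inv_dir = tuple(1.0 / d for d in direction)

# Box around geometry roughly in front of the camera: the ray hits it, so a
# BVH traversal would descend into this node and test the triangles inside.
print(ray_hits_aabb((0, 0, 0), inv_dir, (-1, -1, -5), (1, 1, -3)))  # True

# Box far off to the side: the ray misses, so every triangle in this node
# (and all of its child nodes) is skipped without any further tests.
print(ray_hits_aabb((0, 0, 0), inv_dir, (5, 5, -5), (6, 6, -4)))    # False
```

Because a single miss at a high level of the tree prunes everything beneath it, most rays only ever touch a tiny fraction of the scene's triangles.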

Deep Learning Super Sampling – DLSS

NVIDIA introduced several advancements for AI calculations with its RTX cards, but the most prominent use can be seen with DLSS.

DLSS can be considered a successor to anti-aliasing technology, although it functions very differently. Anti-aliasing reduces the jaggedness of edges when rendering; DLSS instead renders the game at a lower internal resolution and uses a trained neural network to upscale the image, producing comparable edge quality and detail without overloading the shader cores. This allows for a similar effect, but without the performance hit.
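DLSS's internals are proprietary, but the core cost saving is simple arithmetic: when the GPU renders at a lower internal resolution and upscales, only a fraction of the output pixels are ever shaded. The resolutions below are illustrative (DLSS's "Performance" mode commonly renders at 50% of each axis).

```python
def shaded_fraction(internal_res, output_res):
    """Fraction of output pixels actually shaded when rendering at a lower
    internal resolution and upscaling -- the core saving behind DLSS."""
    iw, ih = internal_res
    ow, oh = output_res
    return (iw * ih) / (ow * oh)

# 1440p internal render upscaled to a 4K output: ~44% of the pixels shaded.
print(round(shaded_fraction((2560, 1440), (3840, 2160)), 3))  # 0.444

# 1080p internal render upscaled to 4K: only a quarter of the shading work.
print(shaded_fraction((1920, 1080), (3840, 2160)))            # 0.25
```

The neural network's job is to win back the detail those unshaded pixels would have carried, which is why DLSS can raise frame rates instead of lowering them.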

For the same reason that we fear technological singularity, we are excited about DLSS. That might sound ominous, but there’s no reason to panic; DLSS is a friendly AI that can optimize the look of your games.

Following some growing pains when DLSS was first released, we now see a much-desired improvement following the release of the RTX 3000 series cards. It appears that NVIDIA listened to its audience and focused on working with developers to ease the process of making DLSS-compatible games.

Big Tech Equals Big Price


Despite what the internet might say about it, there’s nothing wrong with acknowledging how amazing NVIDIA’s technology is in their RTX series. What isn’t so amazing is that there simply aren’t many games that can fully utilize everything RTX cards have to offer.

On a more positive note, the industry is thoroughly impressed by the tech of ray tracing and DLSS. As time passes, we’re likely to see this tech being utilized to its full potential more often.

As NVIDIA sits firmly on its GPU throne, it has the power to dictate the price points for its cards. When it introduces the gaming world to groundbreaking technology such as real-time ray tracing, it can’t be blamed for taking advantage and testing the limits of its consumers’ wallets.

One might look at the RTX 3090 and its $1500 price and judge it even more harshly than the RTX 2080 Ti. However, it’s important to remember that the $700 RTX 3080 comfortably outperforms the RTX 2080 Ti and is considered NVIDIA’s flagship and, dare we say, its best representative.

The RTX 3070 and RTX 3060 Ti offer further price drops compared to their predecessors, and that is certainly commendable. Unfortunately, even with AMD’s release of RDNA 2 GPUs in late 2020, we still didn’t see any considerable price decrease in either high-end or mid-range GPUs.

On release, RTX might have been above the expected and comfortable price range, but in 2022, RTX appears to have found its footing in terms of both performance and price. In conclusion, we can confidently say that they are undoubtedly worth the money.

Aleksandar Cosic

Alex is a Computer Science student and a former game designer. That has enabled him to develop skills in critical thinking and fair analysis. As a CS student, Aleksandar has very in-depth technical knowledge about computers, and he also likes to stay current with new technologies.