Ray tracing, simply put, is the real-time simulation of how light interacts with virtual objects, as if they existed in the real world.
When done effectively, it raises the quality of rendered graphics considerably, making scenes look more realistic than ever before.
But as many already know, there is more to real-time ray tracing than simply making things look prettier. This is especially true for games, where the technology's pros and cons make its practicality a constant topic of debate.
Let’s jump right into it!
How Does Ray Tracing Work?
Ray tracing imitates natural light by tracing mathematically determined paths between an imaginary eye and the virtual environment that it "sees".
Each pixel shown on the screen results from a simulated light ray: the renderer calculates where that ray bounces based on where the eye (or any viewing medium, such as a camera) is looking.
As expected, the color properties of the surfaces affected by this process can change dynamically simply by switching viewing angles or moving objects around, just as objects in the real world change their shadows or brightness when moved nearer to or farther from a light source.
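The eye-to-pixel-to-surface process described above can be made concrete with a deliberately tiny sketch: one ray per pixel, one sphere, one light, and simple diffuse shading, printed as ASCII art. The scene setup and the character ramp are illustrative assumptions, not how a real renderer is structured.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance t along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 (a quadratic in t).
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def shade(point, normal, light_pos):
    """Lambertian (diffuse) brightness: how directly the light faces the surface."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    length = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / length for v in to_light]
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

def render(width, height):
    """Cast one ray per pixel from an eye at the origin toward a single sphere."""
    center, radius = (0.0, 0.0, -3.0), 1.0
    light = (2.0, 2.0, 0.0)
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a direction through an imaginary image plane at z = -1.
            dx = 2 * (x + 0.5) / width - 1
            dy = 1 - 2 * (y + 0.5) / height
            d = (dx, dy, -1.0)
            t = ray_sphere_hit((0, 0, 0), d, center, radius)
            if t is None:
                row += " "  # ray missed everything: background
            else:
                p = tuple(t * v for v in d)
                n = tuple((pv - cv) / radius for pv, cv in zip(p, center))
                b = shade(p, n, light)
                row += " .:-=+*#%@"[min(9, int(b * 10))]
        rows.append(row)
    return rows

for line in render(40, 20):
    print(line)
```

Moving the `light` position or the sphere and re-running changes every affected pixel's brightness automatically, which is exactly the dynamic behavior the paragraph above describes.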
Why Is Ray Tracing Important?
Ray tracing is important because it theoretically skips the manual labor of pre-set or engine-based graphics lighting, and instead uses those calculations for real-time, dynamic lighting effects.
Because the light ray calculations account for how these rays interact at every visible surface point, ray tracing also produces much more realistic reflections and color tones.
This is unlike traditional lighting techniques in graphics design, where lighting changes have to be painstakingly authored in advance. Lighting effects on regular graphical assets only work for the pre-set range of conditions their designer built in.
Even with modern graphics engines, there is a hard limit on how well rendering software can approximate light interactions, often leaving what should be perfectly reflective surfaces in games as generalized, pre-baked mirrored backdrops.
Rasterization vs. Ray Tracing
Rasterization is the method of mathematically converting a set of points and curves (typically triangle vertices) into a grid of pixels forming a discernible image (with depth information, if 3D).
In simpler terms, this is the standard way for graphics hardware to render images on the screen, including the pre-computed color shifting of pixels when lighting is simulated.
Because rasterization follows predetermined steps, its calculations are relatively straightforward, provided the graphics card has enough power to crunch all the necessary numbers quickly.
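The vertex-to-pixel conversion described above can be sketched with the edge-function test that many software rasterizers use: a pixel is covered by a triangle if its center falls on the same side of all three edges. The triangle coordinates and resolution below are arbitrary illustrative values.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of the edge A->B the point P falls on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixel coordinates covered by the triangle."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if the pixel sits on the same side of all three edges
            # (both sign conventions accepted, so vertex winding order doesn't matter).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered

pixels = rasterize_triangle((1, 1), (8, 2), (4, 7), 10, 8)
```

Note how the work is a fixed, predictable loop over the screen: this is why rasterization scales so well with raw GPU throughput, in contrast to rays that can bounce unpredictably around the scene.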
Ray tracing, while it should theoretically look better (or more natural), presents a much greater hardware processing challenge.
First, it has to dynamically calculate how every surface's color properties change using simulated light rays. The rays themselves require additional processing power, since their paths also need to be calculated.
On top of that, the graphics card still needs to perform a good chunk of rasterization alongside its ray tracing tasks. After all, it still has to account for the objects inside the game, as well as the overall graphical quality of the environment (material textures, surface tessellation, etc.).
Even at the current level of hardware adoption, ray tracing still cuts frame rates significantly.
Without upscaling technologies like DLSS, FSR, and XeSS, a card needs more dedicated ray tracing cores and stronger pure rasterization performance to overcome the frame rate dips.
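DLSS, FSR, and XeSS use far more sophisticated temporal and (in DLSS's case) machine-learning reconstruction, but the underlying bargain of rendering fewer pixels and then scaling the result up to the display resolution can be sketched with naive nearest-neighbor upscaling. This is only a toy illustration of the concept; real upscalers recover far more detail than this.

```python
def upscale_nearest(image, factor):
    """Nearest-neighbor upscale: each low-res pixel becomes a factor x factor block."""
    out = []
    for row in image:
        wide = []
        for px in row:
            wide.extend([px] * factor)   # widen the row
        out.extend([list(wide) for _ in range(factor)])  # repeat it vertically
    return out

# A 2x2 "rendered" image becomes a 4x4 output: only a quarter of the
# pixels were actually computed by the renderer.
low = [[1, 2],
       [3, 4]]
high = upscale_nearest(low, 2)
```

The performance win comes entirely from the renderer doing 1/factor² of the per-pixel work (ray tracing included); the upscaler's job is to hide that shortcut as well as possible.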
Ray Tracing On Nvidia GPUs
Nvidia may not have started the concept, but the company has spearheaded the hardware trend of ray tracing support with the introduction of the RTX lineup of GPUs.
RTX (Ray Tracing Texel eXtreme) started with the 20-series (Turing), evolving from the traditional GTX (Giga Texel Shader eXtreme) line of graphics cards, which ended with the GTX 10-series (Pascal).
The first-generation ray tracing cores of the 20 series were hardly impressive. But with DLSS at least, Nvidia was able to demonstrate the promising future of fully detailed ray-traced environments without making the game completely unplayable.
By the time the RTX 40-series (Lovelace) arrived, ray tracing hardware technology had improved enough that 1080p gaming was effectively “conquered” by Nvidia’s technology. And even at 1440p, the latest hardware can still provide good ray tracing features with high graphical settings without relying too much on DLSS.
Ray Tracing On AMD GPUs
AMD arrived quite late to the ray tracing game, with the company's RDNA 2 architecture focused more on rasterization efficiency than on new technological features.
But when the RX 6000 series arrived, Team Red could at least offer a baseline level of ray tracing performance that could be mildly enjoyed at good enough frame rates. That is, if you had one of the higher-end models.
As such, owners of anything below a Radeon RX 6700 XT are generally advised not to use ray tracing options.
These lower-tier cards can still provide top-level detail with maxed-out standard graphics settings in modern triple-A titles. But with their limited ray tracing hardware, frame rates take a much bigger hit, sometimes even with FSR 2.0 enabled.
Ray Tracing On Intel GPUs
Intel surprised the world with its first modern foray into the discrete GPU market: the Intel Arc series, built on the first-generation Alchemist architecture.
Despite lacking any top-tier offerings, Team Blue was at least keen on providing feature parity, pairing impressive compute-based performance with extras like XeSS and native (hardware-based) AV1 encoding.
Most impressively, the ray tracing capabilities of the Intel Arc A770 and Arc A750 more often than not surpass what equivalent RTX cards can offer.
The condition, of course, is that the game in question already runs well on Intel GPUs in the first place. Otherwise, a ray tracing-supported game may either fail to run properly or perform below the GeForce RTX 3060.
If you are playing a game with native DX12 support, turning on ray tracing is definitely a treat on either Intel card, often delivering better frame rates than the competition without relying on FSR or XeSS.
Should You Turn On Ray Tracing?
If you have a considerably high-end or upper-mid-range GPU that has dedicated ray tracing features, then you can most likely turn on ray tracing and still enjoy acceptable frame rates. However, there are a few caveats, namely:
- Basic graphics settings take a higher priority. Try to max out other graphics settings first before turning ray tracing on. Ray tracing enhances the experience of regular rasterized graphics; it would not feel as amazing or as stunning if other graphical quality settings were sacrificed for it.
- Remember that ray tracing implementation varies wildly from game to game. With ray tracing turned on, Cyberpunk 2077, for example, takes a far harder frame-rate hit than games like Call of Duty: Warzone or Doom Eternal. Some games, such as Metro Exodus: Enhanced Edition, even require ray tracing to be on at all times.
- Don’t turn on ray tracing if you can’t get respectable frame rates from reasonable graphics settings. The Radeon RX 6400 may theoretically support ray tracing with its dedicated hardware, but it is far from being able to run any ray tracing-supported game at a usable FPS.
- If the game demands high-level real-time action, ray tracing might not be as beneficial. Ray tracing works more practically for games where you can appreciate the visuals without getting too busy with the environment. Something like No Man’s Sky, or even a slower-paced game like Resident Evil: Village.