The idea of connecting multiple graphics cards isn’t new. On the contrary, it’s been around since the late 1990s. Although the premise sounds really cool on paper, SLI never really took off. Nvidia wasn’t ready to back down, though, so it created NVLink as a direct successor.
Keep reading if you want to see how NVLink differs from its forerunner and if it has finally managed to fulfil the prophecy of two graphics cards operating like one.
So, what is NVLink?
If you want the most technical version, it’s a wire-based, serial, multi-lane, near-range communications link. Speaking more broadly, it’s a way of using two graphics cards as one for a large boost in performance.
The Difference Between NVLink And SLI Has Huge Potential
Unlike SLI, NVLink uses mesh networking, which is a local network topology in which the infrastructure nodes connect directly in a non-hierarchical way.
This enables each node to relay information instead of routing it through one particular node. What’s cool about this setup is that nodes dynamically self-organize and self-configure, which allows a dynamic distribution of the workload.
In essence, where SLI struggled is the place where NVLink shines the brightest, and that is the speed at which data is exchanged.
NVLink doesn’t bother with SLI’s master-slave method, where one card in a setup of two or more, acting as the master, is responsible for gathering the slaves’ data and assembling the final output. Thanks to the mesh networking infrastructure, NVLink can treat each node equally and thus significantly improve rendering speed.
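The difference between the two topologies can be sketched with a toy link count. This is purely illustrative (the function names are made up, and real driver routing is far more involved), but it shows why the master card becomes a bottleneck while a mesh gives every pair of cards a direct path:

```python
def master_slave_links(n_gpus):
    # Master-slave (SLI-style): every slave has exactly one link to
    # the master, and all data for the final output funnels through
    # that single card.
    return n_gpus - 1

def mesh_links(n_gpus):
    # Mesh (NVLink-style): every card links directly to every other
    # card, so any pair can exchange data without an intermediary.
    return n_gpus * (n_gpus - 1) // 2

# With two GPUs the layouts look identical (one link either way)...
print(master_slave_links(2), mesh_links(2))  # 1 1

# ...but with four GPUs the mesh has a direct path between every
# pair instead of a single choke point at the master.
print(master_slave_links(4), mesh_links(4))  # 3 6
```

With only two consumer cards the topologies coincide; the mesh advantage shows up in the multi-GPU servers NVLink was really designed for.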
The biggest advantage of NVLink, when compared to SLI, is that, because of the mesh network, both graphics cards’ memories are accessible all the time.
That was a point of confusion for those unfamiliar with SLI multi-GPU setups: it seems logical that if two GPUs have a gigabyte of VRAM each, their combined memory should be two gigabytes. With SLI, however, that was simply not the case, because each card had to keep its own copy of the frame data. With NVLink, it’s finally safe to say that one plus one equals two.
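The memory arithmetic above can be captured in a back-of-the-envelope sketch. The function names and numbers are hypothetical (no real driver exposes an API like this), but the logic mirrors the point being made:

```python
def usable_vram_sli(vram_per_gpu_gb, n_gpus):
    # SLI mirrors the same working set into every card's VRAM,
    # so the usable pool never exceeds a single card's capacity,
    # no matter how many GPUs you add.
    return vram_per_gpu_gb

def usable_vram_nvlink(vram_per_gpu_gb, n_gpus):
    # With NVLink's shared access, each card can reach the other
    # card's memory, so the pools genuinely add up.
    return vram_per_gpu_gb * n_gpus

print(usable_vram_sli(1, 2))     # 1  (GB usable: 1 + 1 = 1)
print(usable_vram_nvlink(1, 2))  # 2  (GB usable: 1 + 1 = 2)
```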
To handle frame rendering, SLI uses Alternate Frame Rendering (or AFR for short), in which each connected card handles different frames. In a two-GPU setup, one card renders even frames and the other odd ones. While this is a logical solution, it was not executed well (mostly due to hardware limitations) and caused a lot of frustration with micro-stuttering.
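AFR’s frame assignment boils down to frame-number parity. A minimal sketch (illustrative only — in reality the driver handles this below the API, which is exactly why uneven per-card frame times produced the micro-stuttering):

```python
def afr_assign(frame_numbers, n_gpus=2):
    # Alternate Frame Rendering: frame i is handled by GPU (i % n_gpus).
    # In a two-GPU setup, GPU 0 gets the even frames, GPU 1 the odd ones.
    return {i: i % n_gpus for i in frame_numbers}

assignments = afr_assign(range(6))
print(assignments)  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```

If the two cards don’t finish their frames at an even cadence, the display receives frames at irregular intervals — the micro-stutter users complained about.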
Another key to faster image processing is the NVLink bridge. SLI bridges offered 2 GB/s of bandwidth at best, while the NVLink bridge promises a ridiculous 200 GB/s in the most extreme cases. Despite the sheer lunacy of that number, however, it can be deceiving.
The 160 and 200 GB/s NVLink bridges can only be used with Nvidia’s professional-grade GPUs, the Quadro GP100 and GV100, respectively. So while machines with those bandwidths do technically exist, they are built for things like AI workloads and CGI rendering.
The bridges for consumer GPUs are slower, but still an extremely significant improvement over SLI. Top-tier enthusiasts who pick up two Titan RTXs or two RTX 2080 Tis can potentially experience a whopping 100 GB/s of bandwidth.
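To put those bandwidth figures in perspective, here is a rough, idealized calculation of how long moving one gigabyte between cards would take over each bridge (latency and protocol overhead are ignored, so treat these as best-case ballpark numbers):

```python
def transfer_time_ms(data_gb, bandwidth_gb_per_s):
    # Idealized transfer time: size / bandwidth, converted to ms.
    return data_gb / bandwidth_gb_per_s * 1000

bridges = [
    ("SLI bridge (2 GB/s)", 2),
    ("Consumer NVLink (100 GB/s)", 100),
    ("GV100 NVLink (200 GB/s)", 200),
]

for name, bw in bridges:
    print(f"{name}: {transfer_time_ms(1, bw):.1f} ms per GB")

# SLI bridge (2 GB/s): 500.0 ms per GB
# Consumer NVLink (100 GB/s): 10.0 ms per GB
# GV100 NVLink (200 GB/s): 5.0 ms per GB
```

Half a second per gigabyte versus ten milliseconds is the difference between a link you must design around and one fast enough to share memory across cards.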
Will This Finally Usher In Multi-GPU Systems As The Standard?
Sadly, it would appear that we are still far from that. However, NVLink has the potential to introduce change with its “easier than ever” way for game developers to fully utilize everything a multi-GPU setup has to offer.
Older games might paradoxically produce fewer FPS with NVLink than with a single GPU, and only a handful of more modern games can actually provide that 2-as-1 experience. Not to discourage anyone, but it’s a simple fact that, although the potential is there, it’s just not worth getting two GPUs and hooking them up with an NVLink bridge. Nor with SLI, for that matter.
That’s not to say that this tech is never going to be viable, but the same problems that plagued SLI are holding NVLink down as well. With a new generation of GPUs arriving from both Nvidia and AMD, it’s going to be intriguing to see if and to what extent we are going to see support for multi-GPU systems.