It’s down to RTX vs GTX: what’s the difference, and why does it matter to you and me?
In this article, I’ll do my best to break down everything you need to know about Nvidia’s GTX and RTX graphics cards, including whether RTX graphics cards are always better than GTX and where this crazy naming scheme even comes from.
Let’s dive into it!
What does GTX mean?
GTX may seem like a meaningless series of cool-sounding letters to put in front of a graphics card’s name, but it does have an actual meaning. Specifically, it means “Giga Texel Shader eXtreme”.
Before Nvidia started manufacturing GTX GPUs, its earlier generations of graphics cards followed a similar naming scheme.
For instance, the GT series that predates GTX simply means “Giga Texel”, whereas GTS expanded on that with “Giga Texel Shader” before GTX added the “eXtreme” at the end.
But…what exactly is a texel? In short, a texel (texture element) is the smallest unit of a texture, much like a pixel is the smallest unit of an image.
To simplify, think of 3D rendering as a function of models (the 3D shapes) and textures (the 2D images projected onto those shapes to give them color and detail).
There is plenty more that goes into 3D graphics rendering than just models and textures, but these are the basics you need to know to understand what a texel is.
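To make the idea concrete, here is a minimal, purely illustrative sketch (not Nvidia code) of how a renderer looks up a texel: normalized UV coordinates on a model map to one cell of the texture grid.

```python
# Illustrative sketch: a "texel" is one cell of a texture grid,
# looked up from normalized UV coordinates on a 3D model's surface.
def sample_texel(texture, u, v):
    """Nearest-neighbor lookup: map UV in [0, 1) to a texel index."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A tiny 2x2 "texture" of color names stands in for real image data.
texture = [["red", "green"],
           ["blue", "white"]]

print(sample_texel(texture, 0.75, 0.25))  # prints "green"
```

A GPU performs lookups like this (plus filtering between neighboring texels) billions of times per second, which is where the “Giga Texel” branding comes from.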
So basically, once texel throughput crossed the “Giga Texel” threshold with the release of the GeForce 7 series in 2005, GT and GTX became commonplace in Nvidia’s product naming.
In fact, there was a great variety of GT, GTS, and GTX graphics cards in that generation of cards.
Fast forward to 2022, and “Giga Texel” throughput is much less impressive; Nvidia itself no longer seems to care to highlight it.
Nevertheless, there are three main consumer brands of Nvidia graphics cards right now: the low-end GT cards, the mid-to-high range GTX cards, and the (usually) high-end RTX cards.
These days, these prefixes serve more as shorthand for where a particular product sits in Nvidia’s hierarchy than as an explicit description of its performance or capabilities…except with RTX. More on that in a bit.
What does RTX mean?
RTX is short for “Ray Tracing Texel eXtreme”. While RTX shares a lot of its naming inspirations with GTX, that “R” for ray tracing is actually a pretty big deal.
Specifically, it advertises the ability of RTX graphics cards to do what is called real-time ray tracing, which is an incredibly impressive feat if you know what ray tracing is.
Compared to GTX graphics cards, RTX graphics cards add two new important pieces of hardware: RT cores and Tensor cores.
RT cores are used for the aforementioned real-time ray tracing, whereas Tensor cores are used to accelerate deep learning, AI, and HPC (High-Performance Computing).
As you may be able to surmise, the tasks that RT and Tensor cores handle are computationally intensive. That’s why they require dedicated hardware on the graphics card.
This isn’t the first or only example of extra hardware on a graphics card being used to accelerate other workloads, either.
For example, nearly every modern graphics card has a built-in H.264 hardware encoder that allows for high-quality video recording and rendering directly on the GPU without overly taxing overall performance.
So, now that you understand what GTX and RTX actually mean, let’s talk about what that ray tracing everyone is so hyped up about actually does.
What is Ray Tracing?
Ray tracing can basically be thought of as an extremely accurate calculation of how rays of light interact with a scene.
This doesn’t just mean the light by itself; it also takes into account how different objects in the environment absorb, reflect, and refract that light.
In the real world, light can change color based on the objects it passes through and bounces off of.
Ray tracing looks to simulate realistic light interactions with an incredibly high degree of accuracy, but as a result, it comes at a great computational cost.
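To see where that computational cost comes from, here is a toy sketch (nothing like a production renderer): even the simplest ray tracer fires at least one ray per pixel and tests it against the scene geometry, and real renderers add bounces, shadows, and many more objects on top of that.

```python
# Toy sketch: fire one primary ray per pixel at a single sphere.
# A real 1440p frame would need 2560 x 1440 rays, each tested against
# many objects and bounced several times, which is why the cost explodes.
def hit_sphere(origin, direction, center, radius):
    """Return True if a ray hits a sphere (sign of a quadratic's discriminant)."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4 * a * c >= 0  # real roots mean an intersection

width, height = 4, 4  # a deliberately tiny "frame"
sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
hits = 0
for py in range(height):
    for px in range(width):
        # Map the pixel to a direction through a simple image plane.
        dx = (px + 0.5) / width * 2 - 1
        dy = (py + 0.5) / height * 2 - 1
        if hit_sphere((0, 0, 0), (dx, dy, -1.0), sphere_center, sphere_radius):
            hits += 1

print(hits, "of", width * height, "primary rays hit the sphere")
```

RT cores exist to accelerate exactly this kind of ray-geometry intersection test in hardware, rather than grinding through it on general-purpose shader cores.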
Before real-time ray tracing was introduced to the market with Nvidia’s RTX graphics cards, ray tracing was exclusively the domain of offline, professional rendering.
Think uber-realistic CGI in live-action movies or the uncanny attention to lighting and shading in pre-rendered CG animated films, like basically everything Pixar has made.
Truthfully, the highest-end ray tracing features (like full global illumination or true matte reflections and refractions) are still out of reach for real-time rendering, but with real-time ray tracing alongside a strong graphics engine and art direction, you can achieve something that looks and feels pretty close to the real thing.
Even with dedicated RT cores on RTX graphics cards, however, real-time ray-tracing is still fairly demanding on performance in real-time applications.
There isn’t really a solution to this for the majority of non-gaming applications, but gamers, in particular, can enable another RTX-exclusive feature called DLSS to reduce the performance overhead introduced by ray tracing.
What is DLSS?
DLSS, or Deep Learning Super Sampling, is an advanced AI-powered upscaling technology from Nvidia.
If implemented in a particular game or application, it uses the onboard Tensor cores to upscale a lower-resolution internal render to your desired output resolution.
The result, especially with more recent implementations of DLSS 2.0 and newer, is a near-native (sometimes better-than-native) resolution image at a fraction of the performance cost.
Many games that support real-time ray tracing also support DLSS, which allows users to more feasibly enable ray tracing without completely destroying their framerate.
However, these features have to be manually coded into a game- while it would be amazing if DLSS were usable with all games, all the time, that simply isn’t how it works.
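Some quick back-of-the-envelope arithmetic shows why upscaling helps: the GPU only shades pixels at the internal resolution, and DLSS reconstructs the rest. The 2/3-per-axis scale factor below matches what DLSS 2.x “Quality” mode is commonly reported to use; treat the exact figures as illustrative rather than official.

```python
# Back-of-the-envelope sketch of why DLSS-style upscaling saves work:
# the GPU shades far fewer pixels at the internal resolution.
def pixels(width, height):
    return width * height

native = pixels(2560, 1440)  # a full 1440p frame
# Assumed ~2/3-per-axis internal render, roughly DLSS "Quality" mode.
internal = pixels(int(2560 * 2 / 3), int(1440 * 2 / 3))

print(f"native: {native:,} px, internal: {internal:,} px")
print(f"pixels shaded per frame: {internal / native:.0%} of native")
```

Shading well under half the pixels per frame is where most of DLSS’s framerate headroom comes from; the Tensor cores then spend a comparatively small, fixed cost reconstructing the full-resolution image.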
Using one YouTuber’s Doom Eternal benchmark as an example, let’s talk about how DLSS impacts performance at 1440p and Ultra Nightmare settings.
At 1440p Ultra Nightmare settings, without ray tracing or DLSS enabled, the RTX 2060 Super is able to push roughly 118 FPS on average, with 1% dips to 73 FPS and 0.1% dips to 35 FPS.
That’s a pretty good range of performance for the most part, but those dips are a real problem at this and higher resolutions due to the limited power of the card.
At 1440p Ultra Nightmare settings with DLSS enabled, the RTX 2060 Super is able to push a nice 142 FPS on average, with 1% dips to 91 FPS and 0.1% dips to 47 FPS.
Not only is this a great improvement to the average FPS, but DLSS also seems to keep dips from being as severe, staying well clear of the 30 FPS range and generally remaining above 60 FPS, even in intense situations.
At 1440p Ultra Nightmare settings with DLSS and ray tracing enabled, the RTX 2060 Super is able to push 89 FPS on average, with 1% dips to 71 FPS and 0.1% dips to 65 FPS.
While this is still within perfectly playable ranges, the average FPS with ray tracing on top of DLSS is still severely impacted, going from an almost-flawless 144 Hz experience to something much closer to a 75 Hz or 60 Hz experience.
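Putting the benchmark’s average figures in percentage terms makes the trade-off explicit (the FPS values are the ones quoted above from the cited video):

```python
# Average FPS figures quoted from the Doom Eternal benchmark above
# (RTX 2060 Super, 1440p, Ultra Nightmare settings).
avg_fps = {
    "no DLSS, no RT": 118,
    "DLSS only": 142,
    "DLSS + RT": 89,
}

dlss_gain = avg_fps["DLSS only"] / avg_fps["no DLSS, no RT"] - 1
rt_cost = 1 - avg_fps["DLSS + RT"] / avg_fps["DLSS only"]

print(f"DLSS alone: +{dlss_gain:.0%} average FPS")
print(f"Ray tracing on top of DLSS: -{rt_cost:.0%} average FPS")
```

In other words, DLSS buys roughly a 20% framerate boost, and enabling ray tracing then gives back considerably more than that.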
While Doom Eternal and its ray tracing look absolutely stunning in action, the game looks almost as good without ray tracing enabled, and it performs much more smoothly at that.
In such a fast-paced and ruthless game, you may well find yourself valuing the bump to framerate offered by DLSS by itself more than the extra visual frills enabled by ray tracing.
Even with DLSS to help with the raw performance cost and dedicated RT cores meant to accelerate it, real-time ray tracing comes with a fairly severe impact on frame rate.
Considering how important a high framerate is for a fluid gaming experience, many users are better served by simply leaving ray tracing disabled.
Can Non-RTX GPUs Do Ray Tracing or DLSS?
DLSS: no, not even remotely. That’s a firmly Nvidia RTX-only technology.
Ray tracing: yes and no.
You see, since AMD released its RX 6000 series, AMD graphics cards have also been capable of real-time ray tracing.
However, Nvidia seems to have the performance lead when it comes to ray tracing at the time of writing, and AMD doesn’t have DLSS to offset the performance cost incurred by enabling ray tracing.
Meanwhile, prior-generation AMD graphics cards and non-RTX Nvidia graphics cards aren’t capable of real-time ray tracing at all.
In the rare situations where you can force ray tracing despite the lack of capable hardware, you’ll be dealing with performance far too poor for it to be even remotely worthwhile.
AMD does have something called FidelityFX Super Resolution (FSR), another upscaling technology, but since it isn’t backed by AI cores, it doesn’t deliver quite the same performance gain or image quality that DLSS offers.
Plus, it still has to be enabled on a per-game basis when using Windows.
Interestingly, AMD FSR can be enabled universally on Linux, and the Steam Deck handheld gaming PC has the feature enabled for all games out of the box. That part’s pretty cool.
Workloads For GTX and RTX Graphics Cards
Now, let’s break down some workloads that can be accelerated on both GTX and RTX graphics cards- or really, most high-performance graphics cards in general.
Outside of gaming, Nvidia generally has a lead in professional applications thanks to the larger number of CUDA-optimized applications.
If your application of choice can utilize OpenCL, however, AMD remains a compelling option for pro users, even on the high-end.
- Any form of PC gaming benefits from having a discrete graphics card paired with a CPU powerful enough to push your desired framerates.
For emulation or high-refresh-rate gaming, a higher-end CPU is required to avoid bottlenecking your GPU.
- Video rendering, encoding, recording, and streaming can all be accelerated by the GPU.
In many cases, you can offload tasks like these almost entirely to the GPU, though you’ll most likely get the best performance and quality by using your CPU for these tasks as well, especially video rendering.
- 2D & 3D Animation
Workloads Only For RTX Graphics Cards, Or Enhanced By RTX Cards
Now, let’s talk about workloads that only RTX cards can do, or that RTX cards offer particularly compelling benefits for.
- PC gaming with real-time ray tracing and/or DLSS, as discussed above.
- 3D rendering and real-time previews (2D and 3D): 3D animation in particular benefits from RTX cards, since many leading rendering engines already employ ray tracing.
Since final renders don’t need to happen in real time, there isn’t a real trade-off of note here: having dedicated ray tracing hardware onboard simply lets many rendering engines do their work faster.
- Computer-aided design software (CAD) can also see benefits from RTX cards, especially if you’re rendering high-fidelity previews with lighting intact. Pretty much any kind of advanced rendering that can make use of ray tracing hardware will see a benefit from an RTX card.
- Machine learning and AI tasks are also accelerated by the AI-focused Tensor cores in RTX GPUs.
- Compositing, tracking, stabilization, denoising, and post-effect plugins in applications like After Effects, Fusion, and Nuke can be heavily accelerated by RTX GPUs, and by high-end GPUs in general.
- 3D viewports in 3D software are also accelerated by a powerful discrete GPU.
Are all RTX graphics cards better than all GTX graphics cards?
When RTX cards first launched, this was…mostly the case.
While the RTX 2060 was still outperformed in non-ray-traced rendering tasks by previous-gen cards like the GTX 1080, RTX cards led that particular generation of Nvidia hardware in performance.
Today, this still remains mostly the case, as the majority of Nvidia’s new GPU releases are high-end RTX graphics cards.
However, a recent 2022 release from Nvidia, the RTX 3050, performs about on par with the GTX 1660, a mid-range graphics card from 2019.
This makes it generally inadequate for real-time ray tracing in games, but the card does benefit from having DLSS.
Lower-end RTX cards like the 3050 may still be a compelling option for professionals who have RTX-accelerated workloads, though.
Is there an AMD equivalent to Nvidia RTX?
Not really. AMD’s RX 6000 series (and newer) GPUs have supported real-time ray tracing since late 2020, but they aren’t quite as good at it yet.
AMD cards also have no AI or deep learning acceleration technologies to speak of.
Should I spend the extra money on an RTX GPU if I don’t need real-time ray tracing or other RTX-exclusive features?
If the price is right compared to similarly-tiered AMD GPUs or past-gen Nvidia graphics cards…yeah, actually.
Even if you aren’t necessarily utilizing all of the hardware features on offer, the majority of RTX graphics cards are still fairly solid graphics cards for pushing any GPU-accelerated application.
The overwhelming majority of PC games I have in my Steam library don’t have ray tracing or DLSS and will never have those things, but can still see significant performance improvements compared to my current GTX 1070 with a high-end RTX graphics card.
Additionally, while things like video rendering and photo editing can’t really make use of ray tracing, they can still very much benefit from having a fast graphics card to help CPUs like my Intel Core i7-10700K push through long renders.
If you don’t need RTX’s extra features, that’s fine, but you should still take a look at competing options in your price range before choosing to skip over an RTX card.
Either you’ll find a better non-RTX GPU for your money or you’ll find that the RTX card is still the best choice for you, even if you don’t need ray tracing at all.
Over to You
Whew! That was about everything I could think to cover in this breakdown of GTX and RTX graphics cards.
I hope you walk away from this article with a better understanding of what makes them different, what makes them not-so-different, and which one might be better suited for your needs.