Graphics Card (GPU) based render engines such as Redshift, Octane, or VRAY have matured quite a bit and are overtaking CPU-based Render-Engines – both in popularity and speed.
But what hardware gives the best-bang-for-the-buck, and what do you have to keep in mind when building your GPU-Workstation compared to a CPU Rendering Workstation?
Building an all-round 3D Modeling and CPU Rendering Workstation can be somewhat straightforward, but optimizing GPU Rendering performance is a whole other story.
So, what are the most affordable and best PC-Parts for rendering with Octane, Redshift, VRAY, or other GPU Render Engines?
Let’s take a look:
Best Hardware for GPU Rendering
Processor
Since GPU-Render Engines use the GPU(s) to render, technically, you should go for a max-core-clock CPU like the Intel i9 10900K that clocks at 3.7GHz (5.3GHz Turbo) or the AMD Ryzen 9 5950X that clocks at 3.4GHz (4.9GHz Turbo).
At first glance, this makes sense because the CPU does help speed up some parts of the rendering process, such as scene preparation.
That said, though, there is another factor to consider when choosing a CPU: PCIe-Lanes.
GPUs are connected to the CPU via PCIe-Lanes on the motherboard. Different CPUs support a different number of PCIe-Lanes.
Top-tier GPUs usually need 16x PCIe 3.0 Lanes to run at full performance without bandwidth throttling.

Image-Credit: MSI, Unify x570 Motherboard – A typical PCIe x16 Slot
Mainstream CPUs such as the i9 10900K/5950X have 16 GPU<->CPU PCIe-Lanes, meaning you could use only one GPU at full speed with these kinds of CPUs.
If you want to use more than one GPU at full speed, you would need a different CPU that supports more PCIe-Lanes.
AMD’s Threadripper CPUs, for example, are great for driving lots of GPUs.
They have 64 PCIe-Lanes (e.g., the AMD Threadripper 2950X or Threadripper 3960X).
On Intel’s side, the i9 10900X Series CPUs support 48 PCIe-Lanes (e.g., the i9 10980XE).
GPUs, though, can also run in lower bandwidth modes such as 8x PCIe 3.0 (or 4.0) Speeds.
This also means they use up fewer PCIe-Lanes (namely 8x). Usually, there is a negligible difference in Rendering Speed when having current-gen GPUs run in 8x mode instead of 16x mode.
At x8 PCIe bandwidth, you could run two GPUs on an i9 10900K or Ryzen 9 5950X (for a total of 16 PCIe-Lanes, provided the Motherboard and Chipset support dual GPUs and have sufficient PCIe Slots).
You could theoretically run 4 GPUs in x16 mode on a Threadripper CPU (= 64 PCIe-Lanes). Unfortunately, this is not supported, and the best you can achieve with Threadripper CPUs is an x16, x8, x16, x8 configuration.
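To put rough numbers on these bandwidth modes, here is a minimal sketch. The per-lane figures are the commonly cited approximations (~0.985 GB/s per PCIe 3.0 lane and ~1.97 GB/s per PCIe 4.0 lane, after encoding overhead), not measured values:

```python
# Approximate one-directional PCIe throughput per lane in GB/s (after encoding overhead)
PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969}

def slot_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate total one-directional bandwidth of a PCIe slot."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

for gen in (3, 4):
    for lanes in (8, 16):
        print(f"PCIe {gen}.0 x{lanes}: ~{slot_bandwidth_gbps(gen, lanes):.1f} GB/s")

# PCIe 3.0 x8 (~7.9 GB/s) is half of x16 (~15.8 GB/s), yet as the benchmark
# below shows, current GPUs rarely saturate even the x8 link while rendering.
```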
CPUs with a high number of PCIe-Lanes usually fall into the HEDT (= High-End Desktop) Platform range and are often great for CPU Rendering as well, as they tend to have more cores and, therefore, higher multi-core performance.
Here’s a quick bandwidth comparison between having two Titan X GPUs run in x8/x8, x16/x8 and x16/x16 mode. The differences are within the margin of error.
Beware, though, that the Titan Xs in this benchmark certainly don’t saturate an x8 PCIe 3.0 bus, and the benchmark scene fits into the GPUs’ VRAM easily, meaning there is not much communication going on over the PCIe-Lanes.
When actively rendering, and when your scene fits nicely into the GPU’s VRAM, the speed of GPU Render Engines depends on GPU performance.
Some processes, though, that happen before and during rendering rely heavily on the performance of the CPU, Storage, and (possibly) your network.
For example, extracting and preparing Mesh Data to be used by the GPU, loading textures from your Storage, and preparing the scene data.
These processing stages can take considerable time in very complex scenes and will bottleneck the overall rendering performance if a low-end CPU, Disk, or RAM is used.
If your scene is too large to fit into your GPU’s memory, the GPU Render Engine will need to access your System’s RAM or even swap to disk, which will slow down the rendering considerably.
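If you want a rough idea of whether a scene will fit into VRAM before you hit render, you can approximate the two biggest contributors yourself. A simplified sketch (uncompressed textures and raw triangle data only; the per-triangle byte count is an assumption, and real engines add acceleration structures, framebuffers, and more, so treat the result as a lower bound):

```python
def texture_mb(width: int, height: int, channels: int = 4, bytes_per_channel: int = 1) -> float:
    """Approximate size of one uncompressed texture in MB; mipmaps add roughly a third."""
    return width * height * channels * bytes_per_channel * 1.33 / 1024**2

def geometry_mb(triangles: int, bytes_per_triangle: int = 100) -> float:
    """Very rough estimate of triangle data; the per-triangle figure is an assumption."""
    return triangles * bytes_per_triangle / 1024**2

# Hypothetical scene: fifty 4K textures plus 20 million triangles
scene_mb = 50 * texture_mb(4096, 4096) + geometry_mb(20_000_000)
print(f"Estimated footprint: ~{scene_mb / 1024:.1f} GB")  # ~6 GB - fine on a 10GB RTX 3080
```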
Best Memory (RAM) for GPU Rendering
Different kinds of RAM won’t speed up your GPU Rendering all that much. You do have to make sure that you have enough RAM, though, or else your System will grind to a halt.

Image-Source: Corsair
Keep the following rules in mind to optimize for performance as much as possible:
- To be safe, your RAM size should be at least 1.5 – 2x your combined VRAM size (see the sketch after this list)
- Your CPU can benefit from higher Memory Clocks which can in turn slightly speed up the GPU rendering
- Your CPU can benefit from more Memory Channels on certain Systems which in turn can slightly speed up your GPU rendering
- Look for lower Latency RAM (e.g. CL14 is better than CL16) which can benefit your CPU’s performance and can therefore also speed up your GPU rendering slightly
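Here is the first rule from above as a quick calculation, a minimal sketch of the 1.5 – 2x heuristic (the kit sizes mentioned at the end are just common retail configurations):

```python
def min_ram_gb(vram_per_gpu_gb: float, gpu_count: int, factor: float = 2.0) -> float:
    """System RAM recommendation: 1.5-2x the combined VRAM of all GPUs."""
    return vram_per_gpu_gb * gpu_count * factor

# Hypothetical rig: four RTX 3090s with 24GB VRAM each
print(min_ram_gb(24, 4, factor=1.5))  # 144.0 GB - the bare minimum
print(min_ram_gb(24, 4, factor=2.0))  # 192.0 GB - the safe recommendation
# In practice, you'd round up to the next standard kit size, e.g. 192GB or 256GB.
```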
Take a look at our RAM (Memory) Guide here, which should get you up to speed.
If you just need a quick recommendation, look into Corsair Vengeance Memory, as we have tested these Modules in a lot of GPU Rendering systems and can recommend them without hesitation.
Best Graphics Card for GPU Rendering
Finally, the GPU:
To use Octane and Redshift you will need a GPU that has CUDA-Cores, meaning you will need an NVIDIA GPU.
Some versions of VRAY used to additionally support OpenCL, meaning you could use an AMD GPU, but this is no longer the case.
If you are using other Render Engines, be sure to check compatibility here.
The best NVIDIA GPUs for Rendering are:
- RTX 3060 Ti (4864 CUDA Cores, 8GB VRAM)
- RTX 3070 (5888 CUDA Cores, 8GB VRAM)
- RTX 3080 (8704 CUDA Cores, 10GB VRAM)
- RTX 3090 (10496 CUDA Cores, 24GB VRAM)

Image-Source: Nvidia
Although some Quadro GPUs offer even more VRAM, these “Pro”-level GPUs are worse value for GPU rendering than mainstream or “Gaming” GPUs.
There are some features such as ECC VRAM, higher Floating Point precision, or official Support and Drivers that make them valuable in the eyes of enterprise, Machine-learning, or CAD users, to name a few.
For your GPU Rendering needs, stick to mainstream RTX GPUs for the best value.
GPU Cooling
Blower Style Cooler (Recommended for Multi-GPU setups)
- PRO: Better Cooling when closely stacking more than one card (heat is blown out of the case)
- CON: Louder than Open-Air Cooling
Open-Air Cooling (Recommended for single GPU Setups)
- PRO: Quieter than Blower Style, Cheaper, more models available
- CON: Bad Cooling when stacking cards (heat stays in the case)
Hybrid AiO Cooling (All-in-One Watercooling Loop with Fans)
- PRO: Best All-In-One Cooling for stacking cards
- CON: More Expensive, needs room for radiators in Case
Full Custom Watercooling
- PRO: Best temps when stacking cards, Quiet, some cards only use single slot height
- CON: Needs lots of extra room in the case for tank and radiators, Much more expensive
NVIDIA GPUs have a Boosting Technology that automatically overclocks your GPU to a certain degree, as long as it stays within predefined temperature and power limits.
So making sure your GPUs stay as cool as possible will allow them to boost longer and therefore improve performance.
You can observe this effect especially in Laptops, where there is little room for cooling: the GPUs tend to get very hot and loud and throttle very early. So if you are thinking of Rendering on a Laptop, keep this in mind.
A quick note on Riser Cables: with PCIe- or Riser-Cables, you can place your GPUs further away from the PCIe-Slot of your Motherboard, either to show off your GPU vertically in front of the Case’s tempered glass side panel or because you have some space constraints to solve (e.g., the GPUs don’t fit otherwise).
If this is you, take a look at our Guide on finding the right Riser-Cables for your need.
Power Supply
Be sure to get a strong enough Power supply for your system. Most GPUs have a typical Power Draw of around 180-250W, though the Nvidia RTX 3080 and 3090 GPUs can draw even more.
I recommend at least 650W for a Single-GPU-Build. Add 250W for every additional GPU that you have in your System.
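Expressed as a quick calculation, this rule of thumb looks like the sketch below (it encodes only the heuristic from this paragraph, not a replacement for the wattage calculator linked further down):

```python
def recommended_psu_watts(gpu_count: int, base: int = 650, per_extra_gpu: int = 250) -> int:
    """650W for a single-GPU build, plus 250W for every additional GPU."""
    return base + per_extra_gpu * (gpu_count - 1)

for gpus in range(1, 5):
    print(f"{gpus} GPU(s): {recommended_psu_watts(gpus)}W PSU recommended")
# 1 GPU: 650W | 2 GPUs: 900W | 3 GPUs: 1150W | 4 GPUs: 1400W
```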
Good PSU manufacturers to look out for are Corsair, beQuiet, Seasonic, and Coolermaster, but you might prefer others.

Image-Credit: Corsair
Use this Wattage-Calculator, which lets you calculate how strong your PSU will have to be by selecting your planned components.
Motherboard & PCIe-Lanes
Make sure the Motherboard has the desired number of PCIe-Lanes and does not share Lanes with SATA or M.2 slots.
Be careful what PCIe Configurations the Motherboard supports. Some have 3 or 4 physical PCIe Slots but only support one x16 PCIe Card (electrical speed).
This can get quite confusing.
Check the Motherboard manufacturer’s Website to be sure the Multi-GPU configuration you are aiming for is supported.
Here is what you should be looking for in the Motherboard specifications:

Image-Source: Asus
In the above example, with a 40-PCIe-Lane CPU, you would be able to use 1 GPU in x16 mode, OR 2 GPUs both in x16 mode, OR 3 GPUs (one in x16 mode and two in x8 mode), and so on. Beware that a 28-PCIe-Lane CPU in this example would support different GPU configurations than the 40-Lane CPU.
Currently, the AMD Threadripper CPUs will give you 64 PCIe-Lanes to hook your GPUs up to; if you want more, you will have to go the Epyc or Xeon route.
To confuse things even more, some Motherboards do offer four x16 GPU slots (which would need 64 PCIe-Lanes) on CPUs with only 44 PCIe-Lanes. How is this even possible?
Enter PLX Chips.
On some Motherboards, these chips serve as a type of switch, managing your PCIe-Lanes and leading the CPU to believe fewer Lanes are being used.
This way, you can use e.g. 32 PCIe-Lanes with a 16 PCIe-Lane CPU or 64 PCIe-Lanes on a 44-Lane CPU.
Beware, though: only a few Motherboards have these PLX Chips. The Asus WS X299 Sage is one of them, allowing up to 7 GPUs to be used at x8 speed with a 44-Lane CPU, or even 4 GPUs at x16 on a 44-Lane CPU.
This screenshot of the Asus WS X299 Sage Manual clearly states what type of GPU-Configurations are supported (Always check the manual before buying expensive stuff):

Image-Source: Asus Mainboard Manual
PCIe-Lane Conclusion
For Multi-GPU Setups, having a CPU with lots of PCIe-Lanes is important, unless you have a Motherboard that comes with PLX chips.
Having GPUs run in x8 mode instead of x16 will only marginally slow down the performance of most GPUs. (Note, though, that PLX Chips won’t increase your total GPU-to-CPU bandwidth; they just make it possible to have more cards run in higher modes.)
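To sanity-check a planned multi-GPU configuration against a CPU’s lane budget, here is a minimal sketch. It deliberately ignores lanes reserved for M.2/chipset duties and the PLX oversubscription described above, so real boards will be more restrictive (or, with PLX, more permissive):

```python
def fits_lane_budget(cpu_lanes: int, gpu_modes: list[int]) -> bool:
    """Check whether the requested per-GPU lane modes fit the CPU's lane budget."""
    # Simplification: assumes every CPU lane is available for GPUs and no PLX switch.
    return sum(gpu_modes) <= cpu_lanes

print(fits_lane_budget(16, [8, 8]))            # True  - i9 10900K / Ryzen 9 5950X
print(fits_lane_budget(64, [16, 8, 16, 8]))    # True  - Threadripper
print(fits_lane_budget(44, [16, 16, 16, 16]))  # False - needs a PLX board (WS X299 Sage)
```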
Best GPU Performance / Dollar
OK, so here it is: the lists everyone should be looking at when choosing the right GPU to buy, ranked by the best-performing GPU per Dollar!
GPU Benchmark Comparison: Octane
This List is based on OctaneBench 2020. The Performance/Dollar column is the OctaneBench score divided by the price.
GPU Name | VRAM (GB) | OctaneBench Score | Price $ | Performance/Dollar |
---|---|---|---|---|
8x RTX 2080 Ti | 11 | 2733 | 9592 | 0.285 |
4x RTX 2080 Ti | 11 | 1433 | 4796 | 0.299 |
2x RTX 2080 Ti | 11 | 693 | 2398 | 0.289 |
RTX 2080 Ti | 11 | 355 | 1199 | 0.296 |
4x RTX 2080 | 8 | 1017 | 3196 | 0.318 |
RTX 2080 | 8 | 261 | 620 | 0.421 |
4x RTX 2080 Super | 8 | 1100 | 2880 | 0.382 |
2x RTX 2080 Super | 8 | 541 | 1440 | 0.376 |
RTX 2080 Super | 8 | 285 | 720 | 0.396 |
4x RTX 2070 Super | 8 | 1057 | 2200 | 0.480 |
2x RTX 2070 Super | 8 | 514 | 1100 | 0.467 |
RTX 2070 Super | 8 | 259 | 550 | 0.471 |
2x RTX 2070 | 8 | 482 | 1000 | 0.482 |
RTX 2070 | 8 | 228 | 500 | 0.456 |
4x RTX 2060 Super | 8 | 961 | 1260 | 0.763 |
2x RTX 2060 Super | 8 | 485 | 840 | 0.577 |
RTX 2060 Super | 8 | 240 | 420 | 0.571 |
GTX 1660 Ti | 6 | 130 | 280 | 0.464 |
GTX 1660 | 6 | 113 | 230 | 0.491 |
GTX 1660 Super | 6 | 134 | 230 | 0.583 |
Titan V | 12 | 332 | 3000 | 0.111 |
4x GTX 1080 Ti | 11 | 837 | 2800 | 0.299 |
2x GTX 1080 Ti | 11 | 382 | 1400 | 0.273 |
GTX 1080 Ti | 11 | 195 | 700 | 0.279 |
RTX 2060 (6GB) | 6 | 188 | 360 | 0.522 |
Quadro RTX 8000 | 48 | 365 | 5670 | 0.064 |
Quadro RTX 6000 | 24 | 380 | 4400 | 0.086 |
Quadro RTX 4000 | 8 | 232 | 950 | 0.244 |
Quadro RTX 5000 | 16 | 222 | 2100 | 0.106 |
GTX 980 Ti | 6 | 142 | 300 | 0.473 |
GTX 980 | 4 | 94 | 200 | 0.470 |
RTX 3090 | 24 | 661 | 1499 | 0.441 |
RTX 3080 | 10 | 559 | 699 | 0.800 |
RTX 3070 | 8 | 403 | 499 | 0.808 |
RTX 3060 Ti | 8 | 376 | 399 | 0.942 |
Source: Complete OctaneBench Benchmark List
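The Performance/Dollar column above is just the OctaneBench score divided by the price; here is the same calculation as a sketch, using a few rows from the table:

```python
# (name, OctaneBench score, price $) - rows taken from the table above
gpus = [
    ("RTX 3060 Ti", 376, 399),
    ("RTX 3080",    559, 699),
    ("RTX 3090",    661, 1499),
    ("RTX 2080 Ti", 355, 1199),
]

for name, score, price in sorted(gpus, key=lambda g: g[1] / g[2], reverse=True):
    print(f"{name}: {score / price:.3f} OctaneBench points per dollar")
# The RTX 3060 Ti leads at ~0.942 points/$; the Ampere cards dominate this metric.
```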
GPU Benchmark Comparison: Redshift
The Redshift Render Engine has its own Benchmark, and here is a List based on the Redshift Benchmark 3.0.26. Render times are mm:ss (lower is better), and Perf/$ is 100,000 divided by (render time in seconds × price), matching the VRAY list below:
GPU(s) | VRAM (GB) | Render Time (mm:ss) | Price $ | Perf / $ |
---|---|---|---|---|
1x GTX 1080 Ti | 11 | 8:56 | 300 | 0.622 |
1x RTX 2060 SUPER | 8 | 6:31 | 350 | 0.731 |
1x RTX 2070 | 8 | 6:28 | 400 | 0.644 |
1x RTX 3060 Ti | 8 | 4:26 | 450 | 0.835 |
1x RTX 2070 SUPER | 8 | 6:12 | 450 | 0.597 |
1x RTX 2080 | 8 | 6:01 | 600 | 0.462 |
1x RTX 2080 SUPER | 8 | 5:47 | 650 | 0.443 |
1x RTX 3080 | 10 | 3:07 | 850 | 0.629 |
2x RTX 2070 SUPER | 8 | 3:03 | 900 | 0.607 |
1x RTX 2080 Ti | 11 | 4:27 | 1200 | 0.312 |
2x RTX 2080 | 8 | 3:10 | 1200 | 0.439 |
2x RTX 2080 SUPER | 8 | 2:58 | 1300 | 0.432 |
4x RTX 2070 | 8 | 1:56 | 1600 | 0.539 |
1x RTX 3090 | 24 | 2:42 | 1750 | 0.353 |
4x RTX 2070 SUPER | 8 | 1:42 | 1800 | 0.545 |
2x RTX 2080 Ti | 11 | 2:18 | 2400 | 0.302 |
4x RTX 2080 | 8 | 1:36 | 2400 | 0.434 |
4x RTX 2080 SUPER | 8 | 1:32 | 2600 | 0.418 |
1x RTX Titan | 24 | 4:16 | 2700 | 0.145 |
4x RTX 2080 Ti | 11 | 1:07 | 4800 | 0.311 |
8x RTX 2080 Ti | 11 | 0:49 | 9600 | 0.213 |
Source: Complete Redshift Benchmark Results List
GPU Benchmark Comparison: VRAY-RT
And here is a List based on the VRAY-RT Benchmark. Note how the GTX 1080, interestingly, seems to perform worse than the GTX 1070 in this benchmark:
GPU Name | VRAM (GB) | VRAY-Bench | Price $ MSRP | Performance/Dollar |
---|---|---|---|---|
GTX 1070 | 8 | 1:25 min | 400 | 2.941 |
RTX 2070 | 8 | 1:05 min | 550 | 2.797 |
GTX 1080 TI | 11 | 1:00 min | 700 | 2.380 |
2x GTX 1080 TI | 11 | 0:32 min | 1400 | 2.232 |
GTX 1080 | 8 | 1:27 min | 550 | 2.089 |
4x GTX 1080 TI | 11 | 0:19 min | 2800 | 1.879 |
TITAN XP | 12 | 0:53 min | 1300 | 1.451 |
8x GTX 1080 TI | 11 | 0:16 min | 5600 | 1.116 |
TITAN V | 12 | 0:41 min | 3000 | 0.813 |
Quadro P6000 | 24 | 1:04 min | 3849 | 0.405 |
Source: VRAY Benchmark List
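For time-based benchmarks like Redshift and VRAY, performance is the inverse of the render time, so Performance/Dollar here is 100,000 / (render seconds × price); the scale factor of 100,000 simply makes the numbers readable. A sketch that reproduces the VRAY column above:

```python
def perf_per_dollar(render_seconds: int, price: int, scale: int = 100_000) -> float:
    """Inverse render time per dollar; the scale factor only makes the numbers readable."""
    return scale / (render_seconds * price)

print(f"GTX 1070:       {perf_per_dollar(85, 400):.3f}")   # 2.941
print(f"GTX 1080 Ti:    {perf_per_dollar(60, 700):.3f}")   # 2.381
print(f"2x GTX 1080 Ti: {perf_per_dollar(32, 1400):.3f}")  # 2.232 - scaling is sub-linear
```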
Speed up your Multi-GPU Rendertimes
Note – This section is quite advanced. Feel free to skip it.
So, unfortunately, GPUs don’t always scale perfectly: 2 GPUs render an Image about 1.9 times faster, and 4 GPUs will sometimes render only about 3.6x faster.
Having multiple GPUs communicate with each other to render the same task costs so much performance that, in a 4-GPU rig, a large part of one GPU’s capacity is effectively spent on coordination.
One solution could be the following: When final rendering image sequences, use as few GPUs as possible per task.
Let’s look at an example:
What we usually do in a multi-GPU rig is, have all GPUs work on the same task. A single task, in this case, would be an image in our image sequence.
4 GPUs together render one Image and then move on to the next Image in the Image sequence until the entire sequence has been rendered.
We can cut down this preparation time per GPU (where the GPUs sit idle, waiting for the CPU to finish preparing the scene) and bypass some of the multi-GPU slow-downs by having each GPU render its own task: one task per GPU.
So a machine with 4 GPUs would now render 4 tasks (4 images) at once, each on one GPU, instead of 4 GPUs working on the same image, as before.
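To see what this buys you over a long image sequence, compare both strategies using the scaling numbers from above (~1.9x for 2 GPUs, ~3.6x for 4) and a hypothetical per-frame preparation time:

```python
def frames_per_hour(tasks: int, gpus_per_task: int, render_s: float, prep_s: float) -> float:
    """Throughput of a rig running `tasks` parallel tasks with `gpus_per_task` GPUs each."""
    scaling = {1: 1.0, 2: 1.9, 4: 3.6}  # empirical multi-GPU scaling from the article
    seconds_per_frame = prep_s + render_s / scaling[gpus_per_task]
    return tasks * 3600 / seconds_per_frame

# Hypothetical numbers: a frame renders in 300s on one GPU, plus 30s of CPU prep
print(f"4 GPUs on one frame: {frames_per_hour(1, 4, 300, 30):.1f} frames/h")  # ~31.8
print(f"one frame per GPU:   {frames_per_hour(4, 1, 300, 30):.1f} frames/h")  # ~43.6
```

The one-task-per-GPU setup wins because no GPU ever waits on its neighbors, and the sub-linear scaling penalty disappears entirely.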
Some 3D-Software might have this feature built-in; if not, it is best to use a Render Manager such as Thinkbox Deadline (free for up to 2 Nodes/Computers).

Option to set the number of GPUs rendering on one task in Thinkbox Deadline
Beware, though, that you might have to increase your System RAM a bit and have a strong CPU, since every GPU task needs its own share of RAM and CPU performance.
We’ve put together an in-depth Guide on How to Render faster. You might want to check that out too.
Redshift vs. Octane
Another thing I am often asked is whether one should go with Redshift or Octane.
I have used both extensively. In my experience, thanks to its Shader Graph Editor and vast Multi-Pass Manager, Redshift is the engine I reach for when work needs complex Material Setups and heavy Compositing.
Octane is great if you want results fast, as its learning curve is shallower. But this, of course, is a personal opinion, and I would love to hear yours!
Custom PC-Builder
If you want to get the best parts within your budget, have a look at the PC-Builder Tool.
Select the main purpose that you’ll use the computer for and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.
What Hardware do you want to buy? Let me know in the comments!
Comments
And what’s the best GPU for Lumion 10/11, Twinmotion 2019/2020, or UE 4/5 with the RTX 3000 series?
As all of those listed are real-time render engines, you’ll find that the RTX-series GPUs are excellent and will perform nicely, whatever your budget allows for. The RTX 3060 Ti and 3070 offer better value, while the 3080 and 3090 pack some serious performance.
Cheers,
Alex
I have a few PCs set up for rendering. I was under the impression that you don’t need a Graphics Card in the render machines to process the rendering. They have the standard VGA onboard chipsets. They have plenty of RAM, storage space, and 4 – 8 cores in each machine. Do I need to put a video card in the renderfarm machines?
Hey Gary,
It depends on what type of system you are building. Most Motherboards do not come with an onboard graphics chip. They provide a monitor connector (VGA/HDMI), but the GPU itself usually has to be integrated into the CPU, and many CPUs do not have an integrated graphics chip. Almost none of the AMD CPUs have an iGPU. So unless you are running an Intel CPU with an iGPU (such as a 10900K), you’ll need either a Motherboard with an onboard GPU (which is very rare … ASRock has some, though) or a cheap dedicated GPU.
Technically, a PC will run just fine without any kind of GPU, but the question is how you will get visual feedback on what it’s doing. Of course, after it is set up, you can just VNC or remote into the PC, but before that you’ll usually need to access the BIOS and install the OS, and you’ll need direct visual feedback for that unless you have a Motherboard with IPMI or another KVM solution.
Cheers,
Alex
https://www.microcenter.com/search/search_results.aspx?Ntt=GeForce+RTX+3080
So, I’m trying to decide between what is available at my local Microcenter for either the 3080 and 3090.
Are the overclocked versions worth it? I cannot seem to get a single one, as they sell out the moment the store opens.
Hey Waggy,
The stock situation for new GPUs is crazy right now; it’s like that all over the place. For GPU rendering, especially multi-GPU setups, I’d recommend getting blower-style dual-slot GPUs, as they tend to perform better when stacked. Right now, only some 3090s come in a dual-slot blower style, and I think one 3070 from Asus; so far, there are no 3080s. If you only need one or at most two GPUs with some room between each other, you can also go for open-air-cooled GPUs, but you’d have to make sure your case is well-ventilated. For rendering, I’d again recommend going with non-factory-overclocked GPUs, as you’ll want stability and sustained performance more than a slight increase in clocks that might jeopardize stability.
Cheers,
Alex
Hey Alex, why are non-factory-overclocked GPUs better? What sort of stability issues do factory-overclocked GPUs cause?
1. Longevity: Your rendering GPUs will be put to work much longer and at more sustained loads than gaming GPUs.
2. In a stacked config with multiple GPUs, you’ll want Nvidia’s automatic boosting behaviour to take care of the overclock depending on temperature; you’ll most likely never reach the heightened factory-clock limit anyway.
3. A disproportionate amount of additional power draw comes with the added overclock, putting strain on all of your components, especially the PSU.
I am not saying it will crash, but I find the extra 2% of performance to not be worth it. It’s not like you’ll get 3 FPS more like in games 🙂
Cheers,
Alex
Do you think I should go with the 3080 or the 6800 XT for editing and 3D work?
I was leaning more towards the 6800 XT, as it has 16GB of VRAM compared to the 10GB on the 3080.
Would 10GB of VRAM be limiting?
Hey Ima,
Check this page to see what GPU vendors your render engine of choice supports: https://www.cgdirector.com/render-engine-hardware-compatibility-cpu-gpu-hybrid/
In most cases this will be Nvidia, so that is usually what we recommend here, unless you are not using a GPU render engine at all (just CPU).
Cheers,
Alex
Hello Alex, Thank you for the article it has helped me a lot.
I do have a few questions. I’ve been having some problems with my C4D viewport. The tutorials I’m following are very smooth, but my viewport is super choppy during playback, which makes it very difficult to see the results of my work. I’ve tried multiple forums, and no thread I found has helped me. I read in one of your articles that a multi-core CPU can hurt more than help.
Could that be the reason for my viewport to be so choppy?
I’m asking because I’m planning to upgrade my PC so I can work more smoothly. I was thinking the issue would be my GPU, but after reading your CPU article, now I’m unsure.
My CPU is a Threadripper 1950X OC’d to 4.1GHz.
(I’ve disabled 8 of the cores to see if there would be any difference in the viewport, but I achieve the same result.)
GPU: EVGA 1080 Ti SC2
Motherboard: ASUS ROG Strix X399-E Gaming
RAM: 64GB
——–
I was planning to get an rtx 3090 but after reading this article I might be better with the following setup :
Motherboard: ASRock X399 Professional Gaming
CPU: TR 1900X
GPU: 2080Ti x4
What would you recommend? I know the Motherboard will need to be replaced 100%, as mine doesn’t support 4-way SLI and doesn’t have four PCIe x16 slots.
Do the 2080 Tis have to be the same model, or can they vary?
I’m also thinking about the current market price of the 2080s.
So 3x 1080 Tis might be more for the money, or one 3090 (but I know 1080s don’t support RTX acceleration, so I’m unsure about that route as well; maybe 2060s/2070s?).
If the CPU is not an issue, should I be okay upgrading to a TR 2950X?
One last thing: can I run one RTX 3090 and 3x 2080 Ti?
Thank you in advance for your reply and your helpful articles!
Hey David,
The 1950X is probably one of the worst CPUs for smooth active work and viewport performance. I did some extensive testing a while back, and while it has a good number of cores (at least back in the day), its single-core performance, cache, and core latency are quite bad.
The newer Threadrippers are much better, but still not quite as fast as their mainstream Ryzen counterparts, and the Intel CPUs are still the fastest for viewport performance. (https://www.cgdirector.com/cinema-4d-viewport-performance-benchmark-scores/)
Now, this will most likely change when AMD releases their new CPUs on November 5th, as the performance numbers they released point in that direction.
For the best active/smooth viewport performance, there is currently no better CPU than the 10900K. Of course, you won’t be able to drive 4 GPUs with that CPU, so for a 4-GPU machine, the next best thing would be a current-gen Threadripper like the 3970X.
That said, if 3 GPUs might be enough for you (like 3x 3080, which is faster than 4x 2080 Ti), then I’d suggest you get the upcoming Ryzen 5900X or 5950X with an Asus WS ACE X570 Motherboard, which allows you to drive 3 GPUs.
That would give you fast active viewport performance, a good number of cores, and 3 GPUs for GPU rendering.
Cheers,
Alex
Thank you very much for the quick reply .
I’ve been looking at the 5900X and 5950X, and I see that both of them only have 20 PCIe lanes. Will I only be able to run 2 GPUs at x8/x8 and the 3rd at x4, instead of all 3 at x16? Sorry if I’m not understanding correctly.
Thanks in advance !!
Yes, this is usually how it is on mainstream platforms. If you need to drive multiple GPUs at x16, you’ll have to go with the Threadripper platform.
Of course, if you use lower-end GPUs, those don’t even saturate x8 PCIe bandwidth, so they really only have to run at x8. The 2080 Ti is the first that barely saturates x8, as does the 3080; the 3090 definitely needs x16, or it will be heavily throttled.
Hey Alex! Thanks for great guidance.
I just got my 3090. Do I understand correctly that it’s probably not a good idea to run it together with my old 2080 Ti (blower) for Redshift rendering? I have a 9900K, so the cards are running at x8. My scenes mostly aren’t very heavy, temps are fine, and renders seem to be faster than with just the 3090.
Or should I just get rid of the 2080 before I get into trouble?
Best
Linus
You’ll limit the 3090’s 24GB of VRAM down to that of the smallest-VRAM GPU (your 2080 Ti with 11GB of VRAM).
Unless you render multiple frames at once (each GPU rendering its own frame), you’ll throttle the 3090 by pairing it with a 2080 Ti.
PCIe Bandwidths too might bottleneck you with two GPUs. What Motherboard are you using?
Cheers,
Alex
Thanks Alex. Sorry, forgot to mention that. I’m on the Strix Gaming Z390.
Guess I’ll use my 2080 Ti in a cabinet to help my Razer Blade then.
Hello Alex, I was curious about the solution you referred to, using the Asus WS ACE X570 Motherboard. I’ve read great things about this one, but is it a good idea to buy it now? Or is there a newer successor or alternative coming? Or does this Motherboard not age that fast? Thank you very much!
It’s still a very viable choice for running 3 GPUs on a mainstream platform. There is no successor that I know of yet. There might be some Motherboard upgrades once Zen 3 hits the shelves, but because all of the current Motherboards are already compatible, I’d think there won’t be too many new products.
Cheers,
Alex