Graphics Card (GPU) based render engines such as Redshift, Octane, or VRAY have matured quite a bit and are overtaking CPU-based Render-Engines – both in popularity and speed.
But what hardware gives the best-bang-for-the-buck, and what do you have to keep in mind when building your GPU-Workstation compared to a CPU Rendering Workstation?
Building an all-round 3D Modeling and CPU Rendering Workstation can be somewhat straightforward, but optimizing GPU Rendering performance is a whole other story.
So, what are the most affordable and best PC-Parts for rendering with Octane, Redshift, VRAY, or other GPU Render Engines?
Let’s take a look:
Best Hardware for GPU Rendering
Processor
Since GPU-Render Engines use the GPU(s) to render, technically, you should go for a max-core-clock CPU like the Intel i9 12900K or the AMD Ryzen 9 5950X that clocks at 3.4GHz (4.9GHz Turbo).
At first glance, this makes sense because the CPU does help speed up some parts of the rendering process, such as scene preparation.
That said, though, there is another factor to consider when choosing a CPU: PCIe-Lanes.
GPUs are connected to the CPU via PCIe-Lanes on the motherboard. Different CPUs support a different number of PCIe-Lanes.
Top-tier GPUs usually need 16x PCIe 3.0 Lanes to run at full performance without bandwidth throttling.

Image-Credit: MSI, Unify x570 Motherboard – A typical PCIe x16 Slot
Mainstream CPUs such as the i9 12900K/5950X have 16 GPU<->CPU PCIe-Lanes, meaning you could use only one GPU at full speed with these kinds of CPUs.
If you want to use more than one GPU at full speed, you would need a different CPU that supports more PCIe-Lanes.
AMD’s Threadripper CPUs, for example, are great for driving lots of GPUs.
They have 64 PCIe-Lanes (e.g., the AMD Threadripper 2950X or Threadripper 3960X).
GPUs, though, can also run in lower bandwidth modes such as 8x PCIe 3.0 (or 4.0) Speeds.
This also means they use up fewer PCIe-Lanes (namely 8x). Usually, there is a negligible difference in Rendering Speed when having current-gen GPUs run in 8x mode instead of 16x mode.
At x8 PCIe Bandwidth, you could run two GPUs on an i9 12900K or Ryzen 9 5950X (for a total of 16 PCIe-Lanes, given the Motherboard and Chipset support dual GPUs and have sufficient PCIe-Slots).
You could theoretically run 4 GPUs in x16 mode on a Threadripper CPU (= 64 PCIe-Lanes). Unfortunately, this is not supported, and the best you can achieve with Threadripper CPUs is an x16, x8, x16, x8 configuration.
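If you want to sanity-check a planned configuration yourself, the lane arithmetic is simple enough to script. Here's a minimal Python sketch – the lane counts are the CPU-to-GPU figures discussed above, and the real-world caveat from the Threadripper example is noted in the comments:

```python
# Rough PCIe-lane budget check for multi-GPU builds (illustrative sketch).
# Lane counts below are the usable CPU<->GPU lanes mentioned in this article.
CPU_GPU_LANES = {
    "Ryzen 9 5950X": 16,
    "Core i9 12900K": 16,
    "Threadripper 3960X": 64,
}

def max_gpus(cpu: str, lanes_per_gpu: int) -> int:
    """How many GPUs fit at a given lane width (8 or 16) on paper?"""
    return CPU_GPU_LANES[cpu] // lanes_per_gpu

for cpu in CPU_GPU_LANES:
    # Note: boards cap this further in practice (e.g. x16/x8/x16/x8 on
    # Threadripper instead of 4x x16) - always check the motherboard manual.
    print(f"{cpu}: {max_gpus(cpu, 16)} GPU(s) at x16, {max_gpus(cpu, 8)} at x8")
```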
CPUs with a high number of PCIe-Lanes usually fall into the HEDT (= High-End Desktop) Platform range and are often great for CPU Rendering as well, as they tend to have more cores and, therefore, higher multi-core performance.
Here’s a quick bandwidth comparison between having two Titan X GPUs run in x8/x8, x16/x8, and x16/x16 mode. The differences are within the margin of error.
Beware, though, that the Titan Xs in this benchmark certainly don’t saturate an x8 PCIe 3.0 bus, and the benchmark scene fits easily into the GPUs’ VRAM, meaning there is not much communication going on over the PCIe-Lanes.
When actively rendering and your scene fits nicely into the GPU’s VRAM (find out how much VRAM you need here), the speed of GPU Render Engines is dependent on GPU performance.
Some processes, though, that happen before and during rendering rely heavily on the performance of the CPU, Storage, and (possibly) your network.
For example, extracting and preparing Mesh Data to be used by the GPU, loading high-quality textures from your Storage, and preparing the scene data.
These processing stages can take considerable time in very complex scenes and will bottleneck the overall rendering performance if a low-end CPU, Disk, or RAM is used.
If your scene is too large to fit into your GPU’s memory, the GPU Render Engine will need to access your System’s RAM or even swap to disk, which will slow down the rendering considerably.
Best Memory (RAM) for GPU Rendering
Different kinds of RAM won’t speed up your GPU Rendering all that much. You do have to make sure that you have enough RAM, though, or else your System will crawl to a halt.

Image-Source: Corsair
Keep the following rules in mind to optimize for performance as much as possible:
- To be safe, your RAM size should be at least 1.5 – 2x your combined VRAM size (see the sketch after this list)
- Your CPU can benefit from higher Memory Clocks which can in turn slightly speed up the GPU rendering
- Your CPU can benefit from more Memory Channels on certain Systems which in turn can slightly speed up your GPU rendering
- Look for lower Latency RAM (e.g. CL14 is better than CL16) which can benefit your CPU’s performance and can therefore also speed up your GPU rendering slightly
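The first rule above is easy to turn into a quick calculation. A minimal Python sketch, using nothing but the 1.5 – 2x rule of thumb from this list:

```python
def recommended_ram_gb(combined_vram_gb: float) -> tuple[float, float]:
    """Rule of thumb: system RAM should be 1.5-2x the combined VRAM."""
    return 1.5 * combined_vram_gb, 2.0 * combined_vram_gb

# Example: two RTX 3090s with 24GB VRAM each = 48GB combined VRAM
low, high = recommended_ram_gb(2 * 24)
print(f"Aim for roughly {low:.0f}-{high:.0f}GB of system RAM")  # 72-96GB
```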
Take a look at our RAM (Memory) Guide here, which should get you up to speed.
If you just need a quick recommendation, look into Corsair Vengeance Memory, as we have tested these Modules in a lot of GPU Rendering systems and can recommend them without hesitation.
Best Graphics Card for GPU Rendering
Finally, the GPU:
To use Octane and Redshift you will need a GPU that has CUDA-Cores, meaning you will need an NVIDIA GPU.
Some versions of VRAY used to additionally support OpenCL, meaning you could use an AMD GPU, but this is no longer the case.
If you are using other Render Engines, be sure to check compatibility here.
The best NVIDIA GPUs for Rendering are:
- RTX 3060 Ti (4864 CUDA Cores, 8GB VRAM)
- RTX 3070 (5888 CUDA Cores, 8GB VRAM)
- RTX 3070 Ti (6144 CUDA Cores, 8GB VRAM)
- RTX 3080 (8704 CUDA Cores, 10GB VRAM)
- RTX 3080 Ti (10240 CUDA Cores, 12GB VRAM)
- RTX 3090 (10496 CUDA Cores, 24GB VRAM)

Image-Source: Nvidia
Although some Quadro GPUs offer even more VRAM, the value of these “Pro”-level GPUs is worse for GPU rendering compared to mainstream or “Gaming” GPUs.
There are some features such as ECC VRAM, higher Floating Point precision, or official Support and Drivers that make them valuable in the eyes of enterprise, Machine-learning, or CAD users, to name a few.
For your GPU Rendering needs, stick to mainstream RTX GPUs for the best value.
GPU Cooling
Blower Style Cooler (Recommended for Multi-GPU setups)
- PRO: Better Cooling when closely stacking more than one card (heat is blown out of the case)
- CON: Louder than Open-Air Cooling
Open-Air Cooling (Recommended for single GPU Setups)
- PRO: Quieter than Blower Style, Cheaper, more models available
- CON: Bad Cooling when stacking cards (heat stays in the case)
Hybrid AiO Cooling (All-in-One Watercooling Loop with Fans)
- PRO: Best All-In-One Cooling for stacking cards
- CON: More Expensive, needs room for radiators in Case
Full Custom Watercooling
- PRO: Best temps when stacking cards, Quiet, some cards only use single slot height
- CON: Needs lots of extra room in the case for tank and radiators, Much more expensive
NVIDIA GPUs have a boosting technology that automatically overclocks your GPU to a certain degree, as long as it stays within predefined temperature and power limits.
So making sure your GPUs stay as cool as possible will allow them to boost longer and therefore improve performance.
You can observe this effect especially in Laptops, where there is little room for cooling, and the GPUs tend to get very hot and loud and throttle early. So if you are thinking of Rendering on a Laptop, keep this in mind.
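If you want to watch this boosting (and throttling) behavior live on your own machine, you can poll the GPU with nvidia-smi. A small Python sketch, assuming an NVIDIA driver is installed and nvidia-smi is on your PATH:

```python
import subprocess
import time

# Standard nvidia-smi query fields: temperature, SM clock, and power draw.
QUERY = [
    "nvidia-smi",
    "--query-gpu=index,temperature.gpu,clocks.sm,power.draw",
    "--format=csv,noheader",
]

# Poll once per second; during a render you'll see clocks climb while the
# GPU is cool, then settle (or drop) as it approaches its temperature limit.
while True:
    print(subprocess.run(QUERY, capture_output=True, text=True).stdout.strip())
    time.sleep(1)
```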
A quick note on Riser Cables: with PCIe- or Riser-Cables, you can place your GPUs further away from the PCIe-Slot of your Motherboard – either to show off your GPU vertically in front of the Case’s tempered-glass side panel, or because you have some space constraints you are trying to solve (e.g., the GPUs don’t fit side by side).
If this is you, take a look at our Guide on finding the right Riser-Cables for your need.
Power Supply
Be sure to get a strong enough Power supply for your system. Most GPUs have a typical Power Draw of around 180-250W, though the Nvidia RTX 3080 and 3090 GPUs can draw even more.
I recommend at least 650W for a Single-GPU-Build. Add 250W for every additional GPU in your System.
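That sizing rule is simple enough to express directly. A sketch using just the 650W-plus-250W-per-extra-GPU recommendation from this paragraph:

```python
def recommended_psu_watts(num_gpus: int) -> int:
    """650W baseline for one GPU, plus 250W for every additional GPU."""
    if num_gpus < 1:
        raise ValueError("a GPU rendering build needs at least one GPU")
    return 650 + 250 * (num_gpus - 1)

for n in (1, 2, 4):
    print(f"{n} GPU(s): at least {recommended_psu_watts(n)}W")  # 650 / 900 / 1400
```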
Good PSU manufacturers to look out for are Corsair, be quiet!, Seasonic, and Cooler Master, but you might prefer others.

Image-Credit: Corsair
Use this Wattage-Calculator, which lets you calculate how strong your PSU will have to be by selecting your planned components.
Motherboard & PCIe-Lanes
Make sure the Motherboard has the desired number of PCIe-Lanes and that the GPU-Slots don’t share Lanes with SATA or M.2 slots.
Be careful what PCIe Configurations the Motherboard supports. Some have 3 or 4 physical PCIe Slots but only support one x16 PCIe Card (electrical speed).
This can get quite confusing.
Check the Motherboard manufacturer’s Website to be sure the Multi-GPU configuration you are aiming for is supported.
Here is what you should be looking for in the Motherboard specifications:

Image-Source: Asus
In the above example, with a 40-PCIe-Lane CPU, you would be able to run 1 GPU in x16 mode, 2 GPUs both in x16 mode, or 3 GPUs with one in x16 mode and two in x8 mode, and so on. Beware that a 28-PCIe-Lane CPU in this example would support different GPU configurations than the 40-Lane CPU.
Currently, the AMD Threadripper CPUs will give you 64 PCIe-Lanes to hook your GPUs up to; if you want more, you will have to go the Epyc or Xeon route.
To confuse things even more, some Motherboards offer four x16 Slots (which would need 64 PCIe-Lanes) on CPUs with only 44 PCIe-Lanes. How is this even possible?
Enter PLX Chips.
On some motherboards, these chips serve as a kind of switch, managing your PCIe-Lanes and leading the CPU to believe fewer Lanes are being used.
This way, you can use, e.g., 32 PCIe-Lanes with a 16-PCIe-Lane CPU or 64 PCIe-Lanes on a 44-Lane CPU.
Beware, though: only a few Motherboards have these PLX Chips. The Asus WS X299 Sage is one of them, allowing up to 7 GPUs to be used at x8 speed with a 44-Lane CPU, or even 4 x16 GPUs on a 44-Lane CPU.
This screenshot of the Asus WS X299 Sage Manual clearly states what type of GPU-Configurations are supported (Always check the manual before buying expensive stuff):

Image-Source: Asus Mainboard Manual
PCIe-Lane Conclusion
For Multi-GPU Setups, having a CPU with lots of PCIe-Lanes is important, unless you have a Motherboard that comes with PLX chips.
Having GPUs run in x8 mode instead of x16 will only marginally slow down performance on most GPUs. (Note, though, that PLX Chips won’t increase your total GPU-to-CPU bandwidth; they just make it possible to have more cards run in higher modes.)
Best GPU Performance / Dollar
OK, so here it is: the lists everyone should be looking at when choosing the right GPU to buy – the best-performing GPUs per dollar!
GPU Benchmark Comparison: Octane
This List is based on OctaneBench 2020.
GPU Name | VRAM (GB) | OctaneBench Score | Price $ | Performance/Dollar |
---|---|---|---|---|
8x RTX 2080 Ti | 11 | 2733 | 9592 | 0.285 |
4x RTX 2080 Ti | 11 | 1433 | 4796 | 0.299 |
4x RTX 2080 Super | 8 | 1100 | 2880 | 0.382 |
4x RTX 2070 Super | 8 | 1057 | 2200 | 0.480 |
4x RTX 2080 | 8 | 1017 | 3196 | 0.318 |
4x RTX 2060 Super | 8 | 961 | 1260 | 0.763 |
4x GTX 1080 Ti | 11 | 837 | 2800 | 0.299 |
2x RTX 2080 Ti | 11 | 693 | 2398 | 0.289 |
RTX 3090 Ti | 24 | 692 | 1999 | 0.346 |
RTX 3090 | 24 | 661 | 1499 | 0.441 |
RTX 3080 Ti | 12 | 648 | 1199 | 0.540 |
RTX A6000 | 48 | 628 | 5000 | 0.126 |
RTX A5000 | 24 | 593 | 2250 | 0.264 |
RTX 3080 | 10 | 559 | 699 | 0.800 |
2x RTX 2080 Super | 8 | 541 | 1440 | 0.376 |
2x RTX 2070 Super | 8 | 514 | 1100 | 0.467 |
2x RTX 2060 Super | 8 | 485 | 840 | 0.577 |
2x RTX 2070 | 8 | 482 | 1000 | 0.482 |
RTX 3070 Ti | 8 | 454 | 599 | 0.758 |
RTX 3070 | 8 | 403 | 499 | 0.808 |
2x GTX 1080 Ti | 11 | 382 | 1400 | 0.273 |
Quadro RTX 6000 | 24 | 380 | 4400 | 0.086 |
RTX 3060 Ti | 8 | 376 | 399 | 0.942 |
Quadro RTX 8000 | 48 | 365 | 5670 | 0.064 |
RTX Titan | 24 | 361 | 2499 | 0.144 |
RTX 2080 Ti | 11 | 355 | 1199 | 0.296 |
Titan V | 12 | 332 | 3000 | 0.111 |
RTX 3060 | 12 | 289 | 329 | 0.878 |
RTX 2080 Super | 8 | 285 | 720 | 0.396 |
RTX 2080 | 8 | 261 | 620 | 0.421 |
RTX 2070 Super | 8 | 259 | 550 | 0.471 |
RTX 2060 Super | 8 | 240 | 420 | 0.571 |
Quadro RTX 4000 | 8 | 232 | 950 | 0.244 |
RTX 2070 | 8 | 228 | 500 | 0.456 |
Quadro RTX 5000 | 16 | 222 | 2100 | 0.106 |
GTX 1080 Ti | 11 | 195 | 700 | 0.279 |
RTX 2060 (6GB) | 6 | 188 | 360 | 0.522 |
RTX 3050 | 4 | 179 | 249 | 0.719 |
GTX 980 Ti | 6 | 142 | 300 | 0.473 |
GTX 1660 Super | 6 | 134 | 230 | 0.583 |
GTX 1660 Ti | 6 | 130 | 280 | 0.464 |
GTX 1660 | 6 | 113 | 230 | 0.491 |
GTX 980 | 4 | 94 | 200 | 0.470 |
Source: Complete OctaneBench Benchmark List
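The Performance/Dollar column above is nothing more than the OctaneBench score divided by the price. Here’s the calculation as a short Python sketch, using a few rows from the table:

```python
# Performance per dollar = OctaneBench score / price (values from the table).
gpus = {
    "RTX 3060 Ti": (376, 399),
    "RTX 3080": (559, 699),
    "RTX 3090": (661, 1499),
    "Quadro RTX 8000": (365, 5670),
}

ranked = sorted(gpus.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (score, price) in ranked:
    print(f"{name}: {score / price:.3f} OctaneBench points per $")
```

Running this ranks the mainstream cards far above the Quadro, which is exactly the value argument made earlier in this article.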
GPU Benchmark Comparison: Redshift
The Redshift Render Engine has its own Benchmark, and here is a list based on the Redshift Benchmark 3.0.26 (times in minutes.seconds – lower is better):
GPU(s) | VRAM (GB) | Render Time (mm.ss) | Price $ |
---|---|---|---|
1x GTX 1080 Ti | 11 | 08.56 | 300 |
1x RTX 2060 SUPER | 8 | 06.31 | 350 |
1x RTX 3060 | 12 | 05.38 | 350 |
1x RTX 2070 | 8 | 06.28 | 400 |
1x RTX 3060 Ti | 8 | 04.26 | 450 |
1x RTX 2070 SUPER | 8 | 06.12 | 450 |
1x RTX 3070 | 8 | 03.57 | 500 |
1x RTX 3070 Ti | 8 | 03.27 | 599 |
1x RTX 2080 | 8 | 06.01 | 600 |
1x RTX 2080 SUPER | 8 | 05.47 | 650 |
1x RTX 3080 | 10 | 03.07 | 850 |
2x RTX 2070 SUPER | 8 | 03.03 | 900 |
1x RTX 3080 Ti | 12 | 02.44 | 1199 |
1x RTX 2080 Ti | 11 | 04.27 | 1200 |
2x RTX 2080 | 8 | 03.10 | 1200 |
2x RTX 2080 SUPER | 8 | 02.58 | 1300 |
1x RTX 3090 | 24 | 02.42 | 1499 |
4x RTX 2070 | 8 | 01.56 | 1600 |
4x RTX 2070 SUPER | 8 | 01.42 | 1800 |
1x RTX 3090 Ti | 24 | 02.36 | 1999 |
2x RTX 2080 Ti | 11 | 02.18 | 2400 |
4x RTX 2080 | 8 | 01.36 | 2400 |
4x RTX 2080 SUPER | 8 | 01.32 | 2600 |
1x RTX Titan | 24 | 04.16 | 2700 |
2x RTX 3090 | 24 | 01.15 | 4000 |
4x RTX 2080 Ti | 11 | 01.07 | 4800 |
4x RTX 3090 | 24 | 00.45 | 8000 |
8x RTX 2080 Ti | 11 | 00.49 | 9600 |
Source: Complete Redshift Benchmark Results List
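Since Redshift’s benchmark reports a render time rather than a score, a lower number is better, and comparing cards means inverting the time. A small Python sketch with times taken from the table above:

```python
def to_seconds(mm_ss: str) -> int:
    """Convert the benchmark's mm.ss notation (e.g. '04.26') to seconds."""
    minutes, seconds = mm_ss.split(".")
    return int(minutes) * 60 + int(seconds)

baseline = to_seconds("08.56")  # 1x GTX 1080 Ti from the table
for name, t in [("RTX 3060 Ti", "04.26"), ("RTX 3080", "03.07"), ("RTX 3090", "02.42")]:
    print(f"{name}: {baseline / to_seconds(t):.1f}x faster than a GTX 1080 Ti")
```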
GPU Benchmark Comparison: VRAY-RT
And here is a list based on the VRAY-RT Benchmark. Note how the GTX 1080, interestingly, seems to perform worse than the GTX 1070 in this benchmark:
GPU Name | VRAM | VRAY-Bench | Price $ MSRP | Performance/Dollar |
---|---|---|---|---|
GTX 1070 | 8 | 1:25 min | 400 | 2.941 |
RTX 2070 | 8 | 1:05 min | 550 | 2.797 |
GTX 1080 TI | 11 | 1:00 min | 700 | 2.380 |
2x GTX 1080 TI | 11 | 0:32 min | 1400 | 2.232 |
GTX 1080 | 8 | 1:27 min | 550 | 2.089 |
4x GTX 1080 TI | 11 | 0:19 min | 2800 | 1.879 |
TITAN XP | 12 | 0:53 min | 1300 | 1.451 |
8x GTX 1080 TI | 11 | 0:16 min | 5600 | 1.116 |
TITAN V | 12 | 0:41 min | 3000 | 0.813 |
Quadro P6000 | 24 | 1:04 min | 3849 | 0.405 |
Source: VRAY Benchmark List
Speed up your Multi-GPU Rendertimes
Note – This section is quite advanced. Feel free to skip it.
So, unfortunately, GPUs don’t always scale perfectly. 2 GPUs render an image about 1.9 times faster. 4 GPUs will sometimes only render about 3.6x faster.
Having multiple GPUs communicate with each other to render the same task costs so much performance that, in a 4-GPU rig, a large part of one GPU’s capacity is effectively spent just on coordination.
One solution could be the following: When final rendering image sequences, use as few GPUs as possible per task.
Let’s make an example:
What we usually do in a multi-GPU rig is, have all GPUs work on the same task. A single task, in this case, would be an image in our image sequence.
4 GPUs together render one Image and then move on to the next Image in the Image sequence until the entire sequence has been rendered.
We can speed up preparation time per GPU (when the GPUs sit idle, waiting for the CPU to finish preparing the scene) and bypass some of the multi-GPU slow-downs by having each GPU render its own task.
So a machine with 4 GPUs would now render 4 tasks (4 images) at once, each on one GPU, instead of all 4 GPUs working on the same image as before.
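To see why this helps, compare the two strategies with a toy throughput model. A sketch, with the rough 1.9x/3.6x scaling figures from above as the assumed factors:

```python
# Toy model: images per hour for N GPUs, all on one task vs. one task per GPU.
# Multi-GPU scaling factors are the rough figures quoted above (1 GPU = 1.0).
SCALING = {1: 1.0, 2: 1.9, 4: 3.6}

def images_per_hour(minutes_per_image_1gpu: float, gpus: int, task_per_gpu: bool) -> float:
    base_rate = 60 / minutes_per_image_1gpu  # images/hour on a single GPU
    if task_per_gpu:
        return gpus * base_rate  # each GPU renders its own image independently
    return SCALING[gpus] * base_rate  # all GPUs cooperate on one image

print(images_per_hour(10, 4, task_per_gpu=False))  # ~21.6 images/hour
print(images_per_hour(10, 4, task_per_gpu=True))   # 24.0 images/hour
```

This ignores the extra RAM and CPU each concurrent task needs, which is exactly the caveat discussed a few paragraphs below.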
Some 3D-Software might have this feature built in; if not, it is best to use some kind of Render Manager, such as Thinkbox Deadline (free for up to 2 Nodes/Computers).

Option to set the amount of GPUs rendering on one task in Thinkbox Deadline
Beware, though, that you might have to increase your System RAM a bit and have a strong CPU, since every GPU task needs its own share of RAM and CPU performance.
We’ve put together an in-depth Guide on How to Render faster. You might want to check that out too.
In case you have a Motherboard that’ll let you hook up several GPUs to it, but you don’t have any room to plug them in side-by-side, be sure to check out our PCIe-Riser Cable Guide.
Redshift vs. Octane
Another thing I am asked often is whether one should go with Redshift or Octane.
As I have used both extensively: in my experience, thanks to the Shader Graph Editor and the vast Multi-Pass Manager of Redshift, I like to use Redshift more for work that needs complex Material Setups and heavy Compositing.
Octane is great if you want results fast, as its learning curve is shallower. But this, of course, is a personal opinion, and I would love to hear yours!
Custom PC-Builder
If you want to get the best parts within your budget, have a look at the PC-Builder Tool.
Select the main purpose that you’ll use the computer for and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.
Answers to frequently asked questions (FAQ)
Is the GPU or CPU more important for rendering?
It depends on the render engine you are using. If you’re using a GPU render engine such as Redshift, Octane, or Cycles GPU, the GPU will be considerably more important for rendering than the CPU.
For CPU render engines, such as Cycles CPU, V-Ray CPU, Arnold CPU, the CPU will be more important.
Interestingly, the CPU also plays a minor role in maximizing GPU render performance. CPUs with high single-core performance are less of a bottleneck for the GPU(s).
Is RTX better than GTX for Rendering?
Yes, Nvidia’s RTX GPUs perform better in GPU Rendering than GTX GPUs. The reason is simple: RTX designates Nvidia’s higher-tiered, more expensive GPUs, which outperform the GTX line-up, and RTX cards also have Ray-Tracing Cores, which can additionally increase render performance in supported engines.
Does more RAM help for rendering?
More RAM will only speed up your rendering if you had too little to begin with. You see, RAM really only bottlenecks performance if it’s full and data has to be swapped to disk.
If you have a simple 3D scene that only needs about 16GB of RAM for rendering, then 32, 64, or 128GB of RAM will do nothing for that particular task. If you only had 8GB of RAM beforehand, then installing more RAM will considerably increase render performance.
Does more VRAM help for rendering?
Similar to RAM, more VRAM only helps increase performance in scenarios where you had too little to begin with.
If your 3D scene fits entirely into your GPU’s VRAM, then having a GPU with more VRAM won’t impact performance at all (given all other specs are the same).
Of course, many GPU render engines might be able to utilize larger ray-trees or other optimizations if there’s more free VRAM, so depending on the render engine, you could see a small speedup with more VRAM even if your scene is simple and easily fits into your GPU(s)’s VRAM.
Over to you
What Hardware do you want to buy? Let me know in the comments! Or ask us anything in our forum 🙂
387 Comments
6 April, 2022
Hi Alex,
It’s not clear to me which is the optimal configuration for real-time rendering, IPR (Interactive Production Render), in Vray. I don’t care so much about the final render time; I care more about it being accurate and fast to set up the scene in real time (camera, lights, materials) before rendering the final image. My scenes are not very big – rather small product renderings with studio lighting (beverages, cosmetics, etc.). Thank you very much for your advice and your work.
8 April, 2022
Hey Frank,
Render Preparation time is dependent – 99% of the time – on your CPU’s single-core performance. So optimal CPUs here would be something like the 5900X, 5800X3D, 12700K, or 12900K.
And also make sure your PCIe-Lanes aren’t bottlenecked or throttled by other devices.
Once the scene is prepared (lights, meshes, ray-tree, textures, etc.) and has been uploaded to your GPU’s VRAM, you rarely see the CPU do any of the heavy lifting anymore; it’s the GPU’s job then to run through the bucket-rendering phase (or progressive phase).
The CPU is much more involved in real-time previews than it is in final rendering, especially when you’re constantly updating scene settings like material color, or moving polygons, objects, and the like.
V-Ray does a pretty good job of not having to re-prepare the entire scene on every scene update, but it can only do this to a certain degree (mostly on an object level) until it does have to re-bake its meshes and recreate the ray-tree etc. from scratch.
Long story short: The CPU’s single core performance is what the IPR is most dependent on in preparation time.
This is a bit of a dilemma for multi-GPU PC builds: if you need more PCIe Lanes for multiple GPUs, you’ll most likely end up with a Threadripper or other HEDT-level CPU, which comes with more PCIe Lanes and Motherboards with more PCIe Slots. But those CPUs (because of their higher core counts) also clock lower and therefore have lower single-core performance than their mainstream counterparts (such as the 5900X, 12900K, etc.). Mainstream CPUs, though, can only drive a single GPU at full PCIe-Lane Bandwidth. If you only have one GPU, that’s the way to go for minimum viewport lag / IPR delay.
Cheers,
Alex
21 December, 2021
Hi Alex,
Very helpful thread to educate someone who doesn’t know much about building optimized workstation PC.
I’m building a PC optimized primarily for Blender and GPU rendering w/ Octane. My current build lacks viewport performance, both in Blender and in other software – I currently have a 2080/Threadripper 1920X.
Looking to build a new workstation, I’m debating between another Threadripper at 24C/48T to run 1 or 2 RTX 3090s, or a Ryzen 5950X with the same 1 or 2 RTX 3090s.
From what I understand, the Ryzen will have faster viewport performance but I’m unsure how it will affect the ability to utilize multiple GPUs if I get there.
Blender seems to benefit in some areas from multithreading, though I primarily use it for modelling, animation, compositing scenes and rendering with Octane. Plus, I regularly use other software, primarily in the order of
Photoshop, Blender, Octane, Zbrush, Marvelous Designer, Substance Painter, After Effects and Houdini.
I have a pretty high budget of 10k CAD, would love any feedback with this.
21 December, 2021
Also don’t fully understand the benefit of multiple GPUs – in my work specific case, I can afford to wait longer for render times, but idk how much better it is to use multiple
But if I were to live preview things, will multiple GPUs be useful for quick live feedback?
If I find it isn’t a huge deal, I’m considering getting an Intel Core i9-12900K
22 December, 2021
It sounds like the 5950X or 12900K are more suited to your work. Unless you absolutely need 3 or 4 GPUs, I’d definitely lean towards the faster viewport / active work performance on mainstream CPUs.
The Threadrippers are great for 3-4 GPUs, more fast storage, and for when you need cores for CPU Rendering, but the workloads you listed will definitely run better on a higher-clocked CPU.
GPU Rendering scales almost linearly, so yes you’ll have faster previews in cycles or octane and faster renders.
Cheers,
Alex
13 November, 2021
I have a dilemma. I currently have two RTX Titan GPUs in my system for 3D rendering using Redshift, Octane, and Blender Cycles. These cards suffer from overheating because of the single slot of space between them. My question is: are two RTX Titans better than a single RTX 3090 for 3D rendering? What about in DaVinci Resolve 17? I would like to hear your thoughts.
19 November, 2021
Hey Evond,
Two RTX Titans will be just slightly faster than a single RTX 3090. Both GPU Render Engines and DaVinci Resolve scale quite well with multiple GPUs. RS, Octane, and Cycles will scale slightly better than DaVinci.
Nvidia really didn’t think this through, putting open-air coolers on workstation GPUs that are often used in multi-GPU configs.
Unfortunately, you’ll need blower-style coolers if you want to stack GPUs closely on top of each other and still get good cooling performance.
Alex
8 November, 2021
Hi, I have one question.
I am planning on using NVLink with two RTX A6000 48GB cards.
So will I get 96GB of VRAM or two separate 48GB pools of VRAM?
I am really confused about this 🙁
Will I get to see a 96GB VRAM figure inside Octane or not?
What will happen when I combine these cards using NVLink?
Any help will be appreciated 🙂
Thank you!
12 November, 2021
Hey!
If set up and configured correctly, you’ll be able to pool your VRAM together and see 96GB of VRAM when rendering.
Check this article to see how to set it up: https://www.pugetsystems.com/labs/support-hardware/How-to-Enable-and-Test-NVIDIA-NVLink-on-Quadro-and-GeForce-RTX-Cards-in-Windows-10-1266/
Cheers,
Alex
13 June, 2021
Hi Alex!
I have a question regarding the amount of GPU memory to render a scene in Blender.
Unfortunately I don’t have a very powerful hardware, I should upgrade it. As a GPU I have a GTX 1650 Super with 4GB. The system RAM is 6GB.
Going to render a scene with 380,000 polygons and 4K textures with Cycles, it gives me the error “System is out of GPU and shared host memory”.
How is it possible that with only 380,000 polygons the 4GB of VRAM on the GPU is not enough?
Maybe it also depends on the textures that are all in 4k?
Does this error depend only on the graphics card or also on the low system ram?
Thanks in advance for the help!
14 June, 2021
Hey Mau,
Any Software you have running – Browsers, etc. – as well as the Operating System, will already use up a chunk of your RAM and GPU VRAM, so Blender will not have the entirety available for rendering.
4GB GPU and just 6GB of System Memory isn’t much and a moderately complex Scene can easily fill that capacity up quickly.
Textures eat into your VRAM and RAM as well, in addition to caches, micro-displacements, subdivision modifiers, and so on. Your Polycount might be a lot higher at rendertime than it is in your viewport.
Cheers,
Alex
15 June, 2021
I will update my hardware as soon as I can.
Thanks for the help Alex!