Best Hardware for GPU Rendering in Octane – Redshift – Vray (Updated)

by Alex Glawion

Graphics Card (GPU) based Render Engines such as Redshift, Octane or VRAY-RT have matured quite a bit and are starting to overtake CPU-based Render Engines.

But what hardware gives the best bang for the buck, and what do you have to keep in mind when building your GPU-Workstation compared to a CPU Rendering Workstation?

Building a 3D Modeling and CPU Rendering Workstation can be somewhat straightforward, but highly optimizing for GPU Rendering is a whole other story.

So what are the most affordable and best PC-Parts for rendering with Octane, Redshift, VRAY-RT or other GPU Render Engines?

Let’s take a look:

Best Hardware for GPU Rendering

Processor

Since GPU-Render Engines use the GPU to render, technically you should go for a max-core-clock CPU like the Intel i9 9900K that clocks at 3.6GHz (5GHz Turbo) or the AMD Ryzen 9 3900X that clocks at 3.8GHz (4.6GHz Turbo).

That said though, there is another factor to consider when choosing a CPU: PCIe-Lanes.

GPUs are attached to the CPU via PCIe-Lanes on the motherboard. Different CPUs support different numbers of PCIe-Lanes, and top-tier GPUs usually need 16x PCIe 3.0 Lanes to run at full performance.

PCIE 4 x16 Slot

Image-Credit: MSI, Unify x570 Motherboard – A typical PCIE x16 Slot

The i9 9900K/3900X have 16 GPU<->CPU PCIe-Lanes, meaning you could use only one GPU at full speed with these types of CPUs.

If you want more than one GPU at full speed, you will need a different CPU that supports more PCIe-Lanes, such as the AMD Threadripper CPUs that have 64 PCIe-Lanes (e.g. the AMD Threadripper 2950X or Threadripper 3960X) or, on the Intel side, the i9 10900X-series CPUs that support 48 PCIe-Lanes (e.g. the i9 10980XE).

GPUs, though, can also run in lower bandwidth modes such as x8 PCIe 3.0, in which case they use up fewer PCIe-Lanes (namely 8). Usually, there is a negligible difference in Rendering Speed when having current-gen GPUs run in x8 mode instead of x16 mode.

This would mean you could run two GPUs on an i9 9900K or Ryzen 9 3900X in x8 PCIe mode (for a total of 16 PCIe-Lanes).
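
To make the lane math concrete, here is a minimal sketch of this budgeting logic in Python. It is a simplification that only counts the CPU’s GPU-facing lanes and assumes every GPU runs in the same mode; the motherboard’s slot wiring always has the final say:

```python
# Rough PCIe-Lane budget: how many GPUs fit on a CPU's GPU-facing lanes?
# Assumes all GPUs run in the same mode (x16 or x8) and ignores lanes
# reserved for M.2/SATA/chipset. Real boards cap the combinations
# (e.g. Threadripper tops out at x16/x8/x16/x8), so always check the manual.

def max_gpus(gpu_lanes: int, lanes_per_gpu: int) -> int:
    return gpu_lanes // lanes_per_gpu

for cpu, lanes in [("i9 9900K / Ryzen 9 3900X", 16),
                   ("i9 10980XE", 48),
                   ("Threadripper 3960X", 64)]:
    print(f"{cpu}: {max_gpus(lanes, 16)} GPU(s) at x16, "
          f"{max_gpus(lanes, 8)} at x8")
```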

You could theoretically also run 4 GPUs in x16 mode on a Threadripper CPU (= 64 PCIe-Lanes). Unfortunately, this is not supported, and the best you can get with Threadripper CPUs is an x16/x8/x16/x8 configuration.

CPUs that have a high number of PCIe-Lanes usually fall into the HEDT (High-End Desktop) Platform range and are usually also great for CPU Rendering, as they tend to have more cores and therefore higher multi-core performance.

Here’s a quick bandwidth comparison between having two Titan X GPUs run in x8/x8, x16/x8 and x16/x16 mode. The differences are within the margin of error.

Beware, though, that the Titan Xs in this benchmark certainly don’t saturate an x8 PCIe 3.0 bus. With upcoming GPU generations this might change; a current-gen 2080 Ti, for example, already saturates an x8 PCIe 3.0 bus in terms of bandwidth.

PCIe-Lanes Comparison

When actively rendering, and when your scene fits nicely into the GPU’s VRAM, the speed of GPU Render Engines is of course mainly dependent on GPU performance.

Some processes, though, that happen before and during rendering rely heavily on the performance of the CPU, Storage, and (possibly) Network.

For example: extracting and preparing Mesh Data to be used by the GPU, loading textures from your Storage, and preparing the scene data.

In very complex scenes, these processing stages will take lots of time and can bottleneck the overall rendering speed if a low-end CPU, Disk, or RAM is employed.

If your scene is too large to fit into your GPU’s memory, the GPU Render Engine will need to access your System’s RAM or even swap to disk, which will considerably slow down the rendering.

Best Memory (RAM) for GPU Rendering

Different kinds of RAM won’t speed up your GPU Rendering all that much. You do have to make sure that you have enough RAM, though, or else your System will crawl to a halt.

Corsair Vengeance LPX

Image-Source: Corsair

I recommend keeping the following rules in mind to optimize performance as much as possible:

  • To be safe, your RAM size should be at least 1.5 – 2x your combined VRAM size (see the sketch after this list)
  • Your CPU can benefit from higher Memory Clocks which can in turn slightly speed up the GPU rendering
  • Your CPU can benefit from more Memory Channels on certain Systems which in turn can slightly speed up your GPU rendering
  • Look for lower Latency RAM (e.g. CL14 is better than CL16) which can benefit your CPU’s performance and can therefore also speed up your GPU rendering slightly
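
Here is a minimal sketch of the first rule of thumb above (the 1.5 – 2x factor is a guideline, not a hard requirement):

```python
# Rule of thumb: system RAM should be ~1.5-2x the combined VRAM of all GPUs.
def recommended_ram_gb(vram_per_gpu_gb: float, num_gpus: int) -> tuple[float, float]:
    combined_vram = vram_per_gpu_gb * num_gpus
    return (1.5 * combined_vram, 2.0 * combined_vram)

# e.g. four 11GB cards -> 66-88GB, so round up to the next common capacity
low, high = recommended_ram_gb(11, 4)
print(f"Recommended RAM: {low:.0f}-{high:.0f}GB")
```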

Take a look at our RAM (Memory) Guide here, which should get you up to speed.

If you just need a quick recommendation, look into Corsair Vengeance Memory, as we have tested these Modules in a lot of GPU Rendering systems and can recommend them without hesitation.

Best Graphics Card for Rendering

To use Octane and Redshift you will need a GPU that has CUDA-Cores, meaning you will need an NVIDIA GPU. VRAY-RT additionally supports OpenCL, meaning you could also use an AMD card here. If you are using other Render Engines, be sure to check compatibility here.

The best bang-for-the-buck NVIDIA cards are listed in the Performance/Dollar tables below.

Nvidia RTX 2070

Image-Source: Nvidia

On the high end, the currently highest possible performance is offered by the NVIDIA Titan V and Titan RTX, the latter of which also comes with 24GB of Video RAM.

These Cards, though, have worse Performance per Dollar, as they are targeted at a different audience, and VRAM is very expensive but not necessarily needed in such high capacities for GPU Rendering.

In my experience, 8GB – 11GB of VRAM is usually plenty for most scenes, unless you know you will be working on extremely complex projects.

GPU Cooling

Blower Style Cooler (Recommended for Multi-GPU setups)

  • PRO: Better Cooling when closely stacking more than one card (heat is blown out of the case)
  • CON: Louder than Open-Air Cooling

Open-Air Cooling (Recommended for single GPU Setups)

  • PRO: Quieter than Blower Style, Cheaper, more models available
  • CON: Bad Cooling when stacking cards (heat stays in the case)

Hybrid AiO Cooling (All-in-One Watercooling Loop with Fans)

  • PRO: Best All-In-One Cooling for stacking cards
  • CON: More Expensive, needs room for radiators in Case

Full Custom Watercooling

  • PRO: Best temps when stacking cards, Quiet, some cards only use single slot height
  • CON: Needs lots of extra room in the case for tank and radiators, Much more expensive

GPU Cooling Variants - Blower - open air - hybrid - water cooled

NVIDIA GPUs have a Boosting Technology that automatically overclocks your GPU to a certain degree, as long as it stays within predefined temperature and power limits. So making sure a GPU stays as cool as possible will allow it to boost longer and therefore improve performance.

You can see this effect especially in Laptops, where there is usually not much room for cooling, and the GPUs tend to get very hot and loud and throttle very early. So if you are thinking of Rendering on a Laptop, keep this in mind.

A quick note on Riser Cables. With PCIe- or Riser-Cables you can basically place your GPUs further away from the PCIe-Slot of your Motherboard. Either to show off your GPU vertically in front of the Case’s tempered glass side panel, or because you have some space-constraints that you are trying to solve (e.g. the GPUs don’t fit).

If this is you, take a look at our Guide on finding the right Riser-Cables for your need.

Power Supply

Be sure to get a strong enough Power Supply for your system. Most GPUs have a Power Draw of around 180-250W.

I recommend a 550W PSU for a Single-GPU-Build. Add 250W for every additional GPU in your System. Good PSU manufacturers to look out for are Corsair, be quiet!, Seasonic, and Cooler Master, but you might prefer others.
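
As a rough sketch of that rule of thumb (550W base for one GPU plus 250W per additional GPU; treat the result as a ballpark figure, not a replacement for a proper wattage calculator):

```python
def recommended_psu_watts(num_gpus: int) -> int:
    # 550W covers the CPU, drives, and one ~250W GPU with some headroom;
    # add ~250W for every GPU beyond the first.
    return 550 + 250 * (num_gpus - 1)

for n in range(1, 5):
    print(f"{n} GPU(s): ~{recommended_psu_watts(n)}W PSU")  # 550, 800, 1050, 1300
```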

Corsair AX760W PSU

Image-Credit: Corsair

There is a Wattage-Calculator here that lets you calculate how strong your PSU will have to be by inputting your planned components.

Mainboard & PCIe-Lanes

Make sure the Mainboard has the desired number of PCIe-Lanes and does not share Lanes with SATA or M.2 slots. Also, be careful about which PCIe configurations the Motherboard supports. Some have 3 or 4 physical PCIe x16 Slots but only support one card running at x16 (electrical) speed.

This can get quite confusing. Check the Motherboard manufacturer’s Website to be sure the Multi-GPU configuration you are aiming for is supported. Here is what you should be looking for in the Motherboard specifications:

Asus Rampage PCIE Lane Config

Image-Source: Asus

In the above example, with a 40-PCIe-Lane CPU, you would be able to use 1 GPU in x16 mode, or 2 GPUs both in x16 mode, or 3 GPUs with one in x16 mode and two in x8 mode, and so on. Beware that a 28-PCIe-Lane CPU in this example would support different GPU configurations than the 40-Lane CPU.

Currently, the AMD Threadripper CPUs will give you 64 PCIe-Lanes to hook your GPUs up to; if you want more, you will have to go the multi-CPU route with Intel Xeons.

To confuse things a bit more, some Mainboards do offer four x16 Slots (which would need 64 PCIe-Lanes) on CPUs with only 44 PCIe-Lanes. How is this even possible?

Enter PLX Chips.

On some motherboards, these chips serve as a type of switch that manages your PCIe-Lanes and leads the CPU to believe fewer Lanes are being used. This way, you can use e.g. 32 PCIe-Lanes with a 16-PCIe-Lane CPU or 64 PCIe-Lanes on a 44-Lane CPU.

Beware, though, that only a few Motherboards have these PLX Chips. The Asus WS X299 Sage is one of them, allowing up to 7 GPUs to be used at x8 speed, or even 4 GPUs at x16, on a 44-Lane CPU.

This screenshot of the Asus WS X299 Sage Manual clearly states what type of GPU-Configurations are supported (Always check the manual before buying expensive stuff):

Asus WS X299 Sage

Image-Source: Asus Mainboard Manual

PCIe-Lane Conclusion

For Multi-GPU Setups, having a CPU with lots of PCIe-Lanes is important, unless you have a Mainboard that comes with PLX Chips. Having GPUs run in x8 mode instead of x16 will only marginally slow down performance on most GPUs. (Note, though, that PLX Chips won’t increase the total GPU bandwidth to the CPU; they just make it possible to have more cards run in higher modes.)

Best GPU Performance / Dollar

OK, so here it is: the lists everyone should be looking at when choosing the right GPU to buy. The best-performing GPU per Dollar!

GPU Benchmark Comparison: Octane

This List is based on OctaneBench 4.00.

GPU Name | VRAM (GB) | OctaneBench | Price ($ MSRP) | Performance/Dollar
RTX 2060 | 6 | 170 | 350 | 0.485
RTX 2060 Super | 8 | 203 | 420 | 0.483
RTX 2070 | 8 | 210 | 500 | 0.420
RTX 2070 Super | 8 | 220 | 550 | 0.400
GTX 1070 Ti | 8 | 153 | 450 | 0.340
GTX 1070 | 8 | 133 | 400 | 0.333
RTX 2080 Super | 8 | 233 | 720 | 0.323
GTX 1080 Ti | 11 | 222 | 700 | 0.317
GTX 1060 | 6 | 94 | 300 | 0.313
RTX 2080 | 8 | 226 | 799 | 0.282
GTX 1080 | 8 | 148 | 550 | 0.269
RTX 2080 Ti | 11 | 304 | 1199 | 0.253
TITAN XP | 12 | 250 | 1300 | 0.192
Titan V | 12 | 396 | 3000 | 0.132
RTX Titan | 24 | 326 | 2700 | 0.120
GTX TITAN Z | 12 | 189 | 2999 | 0.063
Quadro GP100 | 16 | 284 | 7000 | 0.040
Quadro P6000 | 24 | 139 | 3849 | 0.036

Source: Complete OctaneBench Benchmark List
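
If you want to re-rank this table with current street prices, the metric is simply the benchmark score divided by the price. A minimal sketch in Python, with a few scores and MSRPs taken from the table above:

```python
# Performance per Dollar = OctaneBench score / price (higher is better).
cards = {            # name: (OctaneBench score, price $ MSRP)
    "RTX 2060":    (170, 350),
    "RTX 2070":    (210, 500),
    "RTX 2080 Ti": (304, 1199),
}
for name, (score, price) in sorted(cards.items(),
                                   key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name}: {score / price:.3f} OctaneBench points per dollar")
```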

GPU Benchmark Comparison: Redshift

The Redshift Render Engine has its own Benchmark, and here is a List based off of RedshiftBench (render time in minutes, shorter is better). Note how the cards scale (1080 Ti):

GPU Name | VRAM (GB) | RedshiftBench (min) | Price ($ MSRP) | Performance/Dollar
RTX 2070 | 8 | 11.35 | 500 | 1.762
GTX 1070 | 8 | 17.11 | 400 | 1.461
GTX 1080 Ti | 11 | 11.44 | 700 | 1.248
RTX 2080 | 8 | 10.59 | 799 | 1.181
4x GTX 1080 Ti | 11 | 3.07 | 2800 | 1.163
2x GTX 1080 Ti | 11 | 6.15 | 1400 | 1.161
8x GTX 1080 Ti | 11 | 1.57 | 5600 | 1.137
GTX 1080 | 8 | 16.00 | 550 | 1.136
RTX 2080 Ti | 11 | 8.38 | 1199 | 0.995
TITAN XP | 12 | 10.54 | 1300 | 0.729
Titan V | 12 | 8.50 | 3000 | 0.392
Quadro P6000 | 24 | 11.31 | 3849 | 0.229
Quadro GP100 | 16 | 9.57 | 7000 | 0.149
4x RTX 2080 Ti | 11 | 2.28 | 4796 | 0.914
RTX 2080 Super | 8 | 10.15 | 720 | 1.368
RTX 2070 Super | 8 | 11.17 | 550 | 1.627
RTX 2060 Super | 8 | 12.17 | 420 | 1.956

Source: Complete Redshift Benchmark Results List
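
Because RedshiftBench measures render time rather than a score, Performance per Dollar here is based on the inverse of the render time; the table’s values appear to correspond to 10000 / (time in minutes × price), with the scale factor chosen only to make the numbers readable. A small sketch of that conversion:

```python
# For time-based benchmarks, faster (smaller) is better, so use 1/time.
def perf_per_dollar(time_min: float, price: float, scale: float = 10000) -> float:
    return scale / (time_min * price)

print(round(perf_per_dollar(11.35, 500), 3))   # RTX 2070    -> 1.762
print(round(perf_per_dollar(8.38, 1199), 3))   # RTX 2080 Ti -> 0.995
```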

GPU Benchmark Comparison: VRAY-RT

And here is a List based off of the VRAY-RT Benchmark. Note how the GTX 1080, interestingly, seems to perform worse than the GTX 1070 in this benchmark:

GPU Name | VRAM (GB) | VRAY-Bench | Price ($ MSRP) | Performance/Dollar
GTX 1070 | 8 | 1:25 min | 400 | 2.941
RTX 2070 | 8 | 1:05 min | 550 | 2.797
GTX 1080 Ti | 11 | 1:00 min | 700 | 2.380
2x GTX 1080 Ti | 11 | 0:32 min | 1400 | 2.232
GTX 1080 | 8 | 1:27 min | 550 | 2.089
4x GTX 1080 Ti | 11 | 0:19 min | 2800 | 1.879
TITAN XP | 12 | 0:53 min | 1300 | 1.451
8x GTX 1080 Ti | 11 | 0:16 min | 5600 | 1.116
TITAN V | 12 | 0:41 min | 3000 | 0.813
Quadro P6000 | 24 | 1:04 min | 3849 | 0.405

Source: VRAY Benchmark List

Speed up your Multi-GPU Rendertimes

Note – This section is quite advanced. Feel free to skip it.

So, unfortunately, GPUs don’t always scale perfectly. 2 GPUs render an Image about 1.9 times faster. Having 4 GPUs will only render about 3.6x faster. This is quite a bummer, isn’t it?

Having multiple GPUs communicate with each other to render the same task costs so much performance that a large part of one GPU in a 4-GPU rig is, in effect, spent just on managing the coordination.
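
As a back-of-the-envelope check on those numbers (the 1.9x and 3.6x speedups quoted above; real figures vary by engine and scene):

```python
# Scaling efficiency = actual speedup / ideal speedup.
for gpus, speedup in [(2, 1.9), (4, 3.6)]:
    print(f"{gpus} GPUs: {speedup / gpus:.0%} efficiency "
          f"({gpus - speedup:.1f} GPU(s) worth of performance lost)")
```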

One solution could be the following: When final rendering image sequences, use as few GPUs as possible per task.

Let’s make an example:

What we usually do in a multi-GPU rig is, have all GPUs work on the same task. A single task, in this case, would be an image in our image sequence.

4 GPUs together render one Image and then move on to the next Image in the Image sequence until the entire sequence has been rendered.

We can speed up preparation time per GPU (when the GPUs sit idle, waiting for the CPU to finish preparing the scene) and bypass some of the multi-GPU slow-downs by having each GPU render its own task: one task per GPU.

So a machine with 4 GPUs would now render 4 tasks (4 images) at once, each on one GPU, instead of 4 GPUs working on the same image, as before.

Some 3D-Software might have this feature built in; if not, it is best to use some kind of Render Manager, such as Thinkbox Deadline (free for up to 2 Nodes/Computers).

GPUs per task

Option to set the amount of GPUs rendering on one task in Thinkbox Deadline

Beware, though, that you might have to increase your System RAM a bit and have a strong CPU, since every GPU-Task needs its own share of RAM and CPU performance.
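
If you don’t want a full Render Manager, the one-task-per-GPU idea can be sketched with a few lines of scripting, assuming your renderer has a command-line mode and respects the CUDA_VISIBLE_DEVICES environment variable (most CUDA applications do). Everything named render_cli below is a hypothetical placeholder, not a real tool:

```python
import os
import subprocess
import time

# One render task (frame) per GPU: each process only "sees" its assigned
# GPU via CUDA_VISIBLE_DEVICES, so no two tasks share a device.
# "render_cli", "scene.project" and "--frame" are hypothetical placeholders
# for your engine's standalone command-line renderer.

NUM_GPUS = 4
frames = list(range(1, 101))  # frames 1..100 of the image sequence

def launch(frame: int, gpu: int) -> subprocess.Popen:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    return subprocess.Popen(
        ["render_cli", "scene.project", "--frame", str(frame)], env=env)

# Naive scheduler: whenever a GPU is free, hand it the next frame.
running = {gpu: None for gpu in range(NUM_GPUS)}
while frames or any(p is not None for p in running.values()):
    for gpu, proc in running.items():
        if proc is not None and proc.poll() is not None:
            running[gpu] = None                 # this GPU finished its frame
        if running[gpu] is None and frames:
            running[gpu] = launch(frames.pop(0), gpu)
    time.sleep(1)  # don't busy-wait while renders are in flight
```

A Render Manager like Deadline essentially does the same thing for you, plus queuing across machines, error handling, and the GPUs-per-task option shown above.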

Redshift vs. Octane

Another thing I am often asked is whether one should go with Redshift or Octane.

As I myself have used both extensively: in my experience, thanks to the Shader Graph Editor and the vast Multi-Pass Manager of Redshift, I like to use the Redshift Render Engine more for work that needs complex Material Setups and heavy Compositing.

Octane is great if you want results fast, as its learning curve is shallower. But this, of course, is a personal opinion, and I would love to hear yours!

Custom PC-Builder

If you want to get the best parts within your budget you should have a look at the Web-Based PC-Builder Tool that we’ve created.

Select the main purpose that you’ll use the computer for and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.

CGDirector PC-Builder Tool


What Hardware do you want to buy? Let me know in the comments!




Alex Glawion - post author

Hi, I’m Alex, a Freelance 3D Generalist, Motion Designer and Compositor.

I’ve built a multitude of Computers, Workstations and Renderfarms and love to optimize them as much as possible.

Feel free to comment and ask for suggestions on your PC-Build or 3D-related Problem, I’ll do my best to help out!

Comments

Guillaume

Hi. First of all, thanks for your work on this website, it’s really useful and I learn (and use it) a lot.
Any plans for an article about the Nvidia conference and those great new GPU cards? Especially about NVLink not showing up on the 3070 and 3080. Will V-Ray still work with two 3080s? Will a 3070 work with a 2080 Ti?
Can’t wait for a V-Ray benchmark between the 2080 Ti and the 3070 or 3080 (the 3090 is sooo expensive though).

Thx again for sharing your knowledge.

Anastasiia

Hi Alex. I already have a PC and am thinking about installing a 2nd GPU to decrease rendering time. Can you suggest something?
Parts I already have:
CPU: AMD Ryzen 9 3950X
Motherboard: Gigabyte X570 Aorus Elite ATX AM4
GPU: NVIDIA RTX 2080 SUPER 8GB – MSI Gaming X Trio

Matt Gase

Hey Alex, you helped me a while back with my build, thanks for your help. A little off topic, but I was wondering: in the situation where you have 2 of the exact same cards, let’s say 2 GTX 1080s (non-Ti), and you have 2 “monitors” like a 4K standard monitor and a 4K Cintiq 16 Pro, what makes more sense: plug these both into one of the two cards, or one device into each card? I primarily want the PC geared for Redshift/C4D rendering. Thanks!

Jan Dekempeneer

Hi Alex,

Could you clarify something for me? I am looking to expand my current work PC and I’m getting a bit confused about the x16 PCIe-Lanes argument you made. I’m working with Autodesk Maya and render on GPUs with V-Ray. I’ll try to be specific about my questions:

Here’s my current setup:

CPU: Amd threadripper 32core version
RAM: 128G
Motherboard: ASrock x399 taichi
GPU: 2 x 2080 ti’s.
power supply: 1300 W

I was thinking of expanding this setup with two additional 2080 Tis.

problem 1: power supply

I needed to buy a new case because my current one does not allow me to fit 2 additional cards. so I stumbled upon this site because an open mining rig case could solve my problem.

sitelink: https://www.thegeekpub.com/11488/best-power-supply-mining-cryptocurrency/#:~:text=Rocking%20in%20at%20number%20two,comes%20in%20at%201000%20watts.

On this site the author mentions:
“It’s the CUDA cores that are busy calculating hashes for the coin you are mining.” He argues that even if a 2080ti card mentions a peak power consumption of 250 Watts, that you’ll never draw 250W per card while rendering/mining. So my question is, do I have to buy a new power supply?

I know that mining and rendering are not exactly the same, but I think the argument about only the cuda cores being used is still valid. While rendering with Vray I never get 100% performance, only the cuda cores are at 100%.

problem 2: pci risers vs 16x pci lanes

I can’t access the 2 remaining PCIe slots because my current two GPUs are blocking them. So I’ll have to use PCIe risers / extension cables.

On this site again (same guy) he mentions the use of risers:

https://www.thegeekpub.com/11070/tour-mining-rig/

he says: “Additionally, most video cards need x16 slots, and most motherboards don’t have six of those! These risers convert X1 PCIe slots to X16 slots, albeit at X1 speeds.”

Will I get a performance hit by using risers that seem to convert the lanes from x16 to x1?

Do you see any other problem with my idea to expand with 2 additional 2080ti’s?

Thank you very much for your help.

J.

Arthur

Hi Alex and thanks for this brilliant article!

I’m currently looking to build a GPU rendering workstation around a $2k budget max.
I’ve heard a lot that running multiple GPUs for that setup is highly recommended.

Therefore, which configuration would be optimal for you :

– AMD RYZEN9 3900x – 3.8GHz, 12 cores
– 1x Nvidia RTX 2070 8GB – Super blower
– 1x cheap 4GB graphics card, any recommendations?

OR

– AMD Ryzen Threadripper 1900X – 3.8 GHz, 8 cores
– 2x Nvidia RTX 2060 6GB – Turbo

Also, since the mainboard & PCIe section remains still a bit foggy in my brain, do you think an ASUS TUF Gaming X570-Plus would fit as a motherboard for those 2 configs ?

Cheers,
Arthur

Potato

Good article! I would just make a comment on the ‘RAM size should be at least 1.5 – 2x your combined VRAM size’ tip. I find this quite a strange statement. Say you’re rendering a scene in Octane that needs 24GB of memory (common with medium-complexity scenes) to render. Whether you have a single 1080 or four 1080s, the memory usage would not change at all; you’d still need 24GB of available memory regardless of how many cards you have. I know Redshift works a bit differently, but for Octane, scaling up memory based on combined VRAM doesn’t make sense.

In my experience 32GB will allow you a decent amount of wiggle room in rendering scenes, but 64GB is needed for more complex scenes, namely ones with high poly counts. I’ve run into the issue a few times that a scene would just not render because there was not enough free RAM.

In motion graphics or simpler 3D work, 16GB would probably suffice, but 32GB is a safe bet.

Maja

Hi Alex! Would this graphics card be compatible with Redshift? The answer to this question might be obvious, but I’m kinda paranoid and I don’t have a lot of experience. https://www.gigabyte.com/Graphics-Card/GV-N1660GAMING-OC-6GD#kf

Davd

With Octane X coming out and Redshift for Mac coming out soon, I’ve been researching AMD GPUs and I am totally confused. I plan on using my iMac Pro with 2 eGPUs. Any thoughts on what the best AMD GPU to get is? Great site. Thanks.

Edd

Hey Alex, I’m looking at a 2x 2080 Ti blower setup specifically for Redshift. Is there any brand you would opt for over others, like say, Asus, PNY, EVGA, Gigabyte etc.? I’m not really too knowledgeable about tech, so I just want to give myself the best chance of getting a pair of reliable cards from a brand with a good industry reputation! Many thanks.

Adrian

Hi Alex,

I built my PC for rendering, but I doubt whether my existing GPU (a 1050 Ti) is enough to do rendering jobs in V-Ray and other 3D software.

Please, I need your advice.

Here are the specs of my build:
GIGABYTE AORUS ELITE X570
RYZEN 7 3700X
G.SKILL TRIDENT Z 3200CL16 32GB
PSU 650W
SSD 970 EVO PLUS

Thank you so much.
Adrian,