Best Hardware for GPU Rendering in Octane – Redshift – Vray (Updated)

by Alex · 279 comments

Graphics Card (GPU) based render engines such as Redshift3D, Octane or VRAY-RT have matured quite a bit over the last few years and are starting to overtake CPU-based Render-Engines.

But what hardware gives the best bang for the buck, and what do you have to keep in mind when building your GPU-Workstation compared to a CPU Rendering Workstation? Building a 3D Modeling and CPU Rendering Workstation can be somewhat straightforward, but highly optimizing for GPU Rendering is a whole other story.

So what are the best Hardware-Components and best GPU for rendering with Octane, Redshift3D or VRAY-RT that are also affordable? Let’s take a look:

Best Hardware for GPU Rendering


Since GPU-Render Engines do almost all of the rendering work on the GPU, technically you should go for a max-core-clock CPU like the Intel i9 9900K that clocks at 3.6GHz (5GHz Turbo) or the AMD Ryzen 9 3900X that clocks at 3.8GHz (4.6GHz Turbo).

That said though, there is another factor to consider when choosing a CPU: PCIe-Lanes.

GPUs are attached to the CPU via PCIe-Lanes on the motherboard. Different CPUs support different amounts of PCIe-Lanes and Top-tier GPUs usually need 16x PCIe 3.0 Lanes to run at full performance.

The i9 9900K and the 3900X have 16 GPU<->CPU PCIe-Lanes, meaning you could use only one GPU at full speed with this type of CPU.

If you want more than one GPU at full speed, you would need a different CPU that supports more PCIe-Lanes, such as the AMD Threadripper CPUs that have 64 PCIe-Lanes (e.g. the AMD Threadripper 2950X), the i9 9800X (28 PCIe-Lanes) or the i9 9900X-Series CPUs that support 44 PCIe-Lanes.

GPUs, though, can also run in lower speed modes such as 8x PCIe 3.0 and then use up fewer PCIe-Lanes (8x). Usually, there is a negligible difference in Rendering Speed when having GPUs run in 8x mode instead of 16x mode.

This would mean you could run 2x GPUs on an i9 9900K in 8x PCIe mode, 3x GPUs on an i9 9800X and 5x GPUs on an i9 9900X (given the Mainboard supports this configuration).
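The lane budgeting above can be sketched as a quick calculation. The lane counts come from this article; real Mainboards may allocate lanes differently, so treat this as illustrative only:

```python
# Rough sketch: how many GPUs fit a CPU's PCIe lane budget at x16 or x8?
# Lane counts are the ones quoted in the article above.

CPU_LANES = {
    "i9 9900K": 16,
    "Ryzen 9 3900X": 16,
    "i9 9800X": 28,
    "i9 9900X": 44,
    "Threadripper 2950X": 64,
}

def max_gpus(cpu_lanes: int, lanes_per_gpu: int = 8) -> int:
    """Number of GPUs that fit if each card runs at the given link width."""
    return cpu_lanes // lanes_per_gpu

for cpu, lanes in CPU_LANES.items():
    print(f"{cpu}: {max_gpus(lanes, 16)}x GPUs @ x16, {max_gpus(lanes, 8)}x GPUs @ x8")
```

Running this reproduces the article's numbers: 2 GPUs at x8 on a 9900K, 3 on a 9800X, 5 on a 9900X.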

CPUs that have a high number of PCIe-Lanes usually fall into the HEDT Platform range and are usually also great for CPU Rendering as they tend to have more cores and therefore better multi-core performance.

PCIe-Lanes Comparison

When actively rendering, provided your scene fits nicely into the GPU's VRAM, the speed of GPU Render Engines is of course mainly dependent on GPU performance.

Some processes though that happen before and during rendering rely heavily on the performance of the CPU, Hard-Drive, and network.

For example: extracting and preparing Mesh Data to be used by the GPU, loading textures from your Hard-Drive and preparing the scene data.

In very complex scenes, these processing stages will take lots of time and can bottleneck the overall rendering speed, if a low-end CPU, Disk, and RAM are employed.

If your scene is too large for your GPU memory, the GPU Render Engine will need to access your System RAM or even swap to disk, which will considerably slow down the rendering.

Best Memory (RAM) for GPU Rendering

Different kinds of RAM won’t speed up your GPU Rendering all that much. You do have to make sure that you have enough RAM, though, or else your System will crawl to a halt.

I recommend keeping the following rules in mind to optimize performance as much as possible:

Best Graphics Card for Rendering

To use Octane and Redshift you will need a GPU that has CUDA-Cores, meaning you will need an NVIDIA GPU. VRAY-RT additionally supports OpenCL meaning you could use an AMD card here.

The best bang-for-the-buck NVIDIA cards are the RTX 2070 (2304 CUDA Cores, 8GB VRAM), RTX 2080 (2944 CUDA Cores, 8GB VRAM) and the RTX 2080 Ti (4352 CUDA Cores, 11GB VRAM).

On the high-end, the currently highest possible performance is offered by the NVIDIA Titan V and the Titan RTX, the latter of which also comes with 24GB of Video RAM.

These Cards, though, have worse Performance per Dollar, as they are targeted at a different audience, and VRAM is very expensive but not necessarily needed in such high capacities for GPU Rendering.

GPUs that have 12GB of Video RAM or more can best handle high-poly scenes with over 200 million unique objects. Take a look at the performance per dollar tables below, though, to get an overview of how expensive some of these cards can get without offering that much more performance.

GPU Cooling

Founders Edition Blower Style Cooler

  • PRO: Better Cooling when stacking more than one card
  • CON: Louder than Open-Air Cooling

Open-Air Cooling

  • PRO: Quieter than Blower Style, Cheaper
  • CON: Worse Cooling when stacking cards

Hybrid AiO Cooling (All-in-One Watercooling Loop with Fans)

  • PRO: Best All-In-One Cooling for stacking cards
  • CON: More Expensive, needs room for radiators in Case


Custom Watercooling Loop

  • PRO: Best temps when stacking cards, Quiet, can use only single slot height
  • CON: Needs lots of extra room in the case for tank and radiators, More Expensive

NVIDIA GPUs have a Boosting Technology that automatically overclocks your GPU to a certain degree, as long as it stays within predefined temperature and power limits. So making sure a GPU stays as cool as possible will improve its performance.

You can see this effect especially in Laptops, where there is usually not much room for cooling, and the GPUs tend to get very hot and loud and throttle very early. So if you are thinking of Rendering on a Laptop, keep this in mind.

Power Supply

Be sure to get a strong enough Power Supply for your system. Most Cards have a Power Draw of around 180-250W, the CPU draws around 100W, and any additional Hardware in your case adds to that.

I recommend a 500W PSU for a Single-GPU-Build. Add 250W for every additional GPU. Good PSU manufacturers to look out for are Corsair, be quiet!, Seasonic, and Cooler Master, but you might prefer others.

There is a Wattage-Calculator here that lets you calculate how strong your PSU will have to be.
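The rule of thumb above (500W for one GPU, plus 250W per additional GPU) can be sketched like this. It is only a rough estimate, not a replacement for a proper wattage calculator:

```python
# Hedged sketch of the article's PSU rule of thumb:
# 500W for a single-GPU build, +250W for every additional GPU.

def recommended_psu_watts(num_gpus: int) -> int:
    """Rule-of-thumb PSU sizing for a GPU rendering rig."""
    if num_gpus < 1:
        raise ValueError("need at least one GPU")
    return 500 + 250 * (num_gpus - 1)

for n in range(1, 5):
    print(f"{n}x GPU build -> {recommended_psu_watts(n)}W PSU")
```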

Mainboard & PCIe-Lanes

Make sure the Mainboard has the desired amount of PCIe-Lanes and does not share Lanes with SATA or M.2 slots. Also, be careful about which PCIe configurations the Motherboard supports. Some have 3 or 4 PCIe Slots but only support one x16 PCIe Card.

This can get quite confusing. Check the Motherboard manufacturer's Website to be sure the Card configuration you are aiming for is supported. Here is what you should be looking for in the Motherboard specifications:

Asus Rampage PCIE Lane Config

Image-Source: Asus

In the above example (with a 40 PCIe-Lane CPU), you would be able to use 1 GPU in x16 mode, or 2 GPUs both in x16 mode, or 3 GPUs, one in x16 mode and two in x8 mode, and so on. Beware that a 28-PCIe-Lane CPU in this example would support different GPU configurations than the 40-Lane CPU.

Currently, the AMD Threadripper CPUs will give you 64 PCIe-Lanes to hook your GPUs up to; if you want more, you will have to go the multi-CPU route with Intel Xeons.

To confuse things a bit more, some Mainboards do offer four x16 GPUs (needs 64 PCIe-Lanes) on CPUs with only 44 PCIe Lanes. How is this even possible?

Enter PLX Chips. On some Motherboards, these chips serve as a type of switch that manages your PCIe-Lanes and leads the CPU to believe fewer Lanes are being used. This way, you can use e.g. 32 PCIe-Lanes with a 16-PCIe-Lane CPU or 64 PCIe-Lanes on a 44-Lane CPU. Beware though, only a few Motherboards have these PLX Chips. The Asus WS X299 Sage is one of them, allowing up to 7 GPUs to be used at x8 speed with a 44-Lane CPU, or even 4 x16 GPUs on a 44-Lane CPU.

This screenshot of the Asus WS X299 Sage Manual clearly states what type of GPU-Configurations are supported (Always check the manual before buying expensive stuff):

Asus WS X299 Sage

Image-Source: Asus Mainboard Manual

PCIe-Lane Conclusion: For Multi-GPU Setups, having a CPU with lots of PCIe-Lanes is important, unless you have a Mainboard that comes with PLX chips. Having GPUs run in x8 mode instead of x16 will only marginally slow down performance. (Note though, the PLX Chips won't increase your GPU bandwidth to the CPU; they just make it possible to have more cards run in higher modes.)

Best GPU Performance / Dollar

Ok, so here it is: the lists everyone should be looking at when choosing the right GPU to buy. The best performing GPU per Dollar!

GPU Benchmark Comparison: Octane

This List is based on OctaneBench 4.00.

(It’s quite difficult to get an average Price for some of these cards since crypto-currency mining is so popular right now, so I used MSRP)

GPU Name | VRAM (GB) | OctaneBench | Price ($ MSRP) | Performance/Dollar
RTX 2070 | 8 | 210 | 550 | 0.381
GTX 1070 | 8 | 133 | 400 | 0.333
GTX 1070 Ti | 8 | 153 | 450 | 0.340
GTX 1060 | 6 | 94 | 300 | 0.313
GTX 1080 Ti | 11 | 222 | 700 | 0.317
GTX 1080 | 8 | 148 | 550 | 0.269
RTX 2080 | 8 | 226 | 799 | 0.282
RTX 2080 Ti | 11 | 304 | 1199 | 0.253
TITAN XP | 12 | 250 | 1300 | 0.192
RTX Titan | 24 | 326 | 2700 | 0.120
Titan V | 12 | 396 | 3000 | 0.132
GTX TITAN Z | 12 | 189 | 2999 | 0.063
Quadro P6000 | 24 | 139 | 3849 | 0.036
Quadro GP100 | 16 | 284 | 7000 | 0.040
RTX 2060 | 6 | 170 | 350 | 0.485

Source: Complete OctaneBench Benchmark List
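The Performance/Dollar column above is simply the OctaneBench score divided by the MSRP. A minimal sketch, using a few rows copied from the table:

```python
# Performance per dollar = benchmark score / price.
# Scores and MSRPs below are taken from the Octane table above.

cards = [
    ("RTX 2070", 210, 550),
    ("GTX 1080 Ti", 222, 700),
    ("RTX 2080 Ti", 304, 1199),
    ("Titan V", 396, 3000),
]

def perf_per_dollar(score: float, price: float) -> float:
    """OctaneBench points per dollar of MSRP."""
    return score / price

# Rank the sample from best to worst value:
for name, score, price in sorted(cards, key=lambda c: perf_per_dollar(c[1], c[2]), reverse=True):
    print(f"{name}: {perf_per_dollar(score, price):.3f} OctaneBench points per $")
```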

GPU Benchmark Comparison: Redshift

The Redshift Render Engine has its own Benchmark, and here is a list based on RedshiftBench. Note how the GTX 1080 Ti cards scale. (RedshiftBench measures render time in minutes; shorter is better.)

GPU Name | VRAM (GB) | RedshiftBench (min) | Price ($ MSRP) | Performance/Dollar
RTX 2070 | 8 | 11.35 | 550 | 1.601
GTX 1070 | 8 | 17.11 | 400 | 1.461
GTX 1080 Ti | 11 | 11.44 | 700 | 1.248
RTX 2080 | 8 | 10.59 | 799 | 1.181
4x GTX 1080 Ti | 11 | 3.07 | 2800 | 1.163
2x GTX 1080 Ti | 11 | 6.15 | 1400 | 1.161
8x GTX 1080 Ti | 11 | 1.57 | 5600 | 1.137
GTX 1080 | 8 | 16.00 | 550 | 1.136
RTX 2080 Ti | 11 | 8.38 | 1199 | 0.995
TITAN XP | 12 | 10.54 | 1300 | 0.729
Titan V | 12 | 8.50 | 3000 | 0.392
Quadro P6000 | 24 | 11.31 | 3849 | 0.229
Quadro GP100 | 16 | 9.57 | 7000 | 0.149
4x RTX 2080 Ti | 11 | 2.28 | 4796 | 0.914

Source: Complete Redshift Benchmark Results List
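Since RedshiftBench reports a render time rather than a score, the Performance/Dollar column has to invert it. The published values match 10,000 / (minutes × MSRP); note that the 10,000 scale factor is my inference from reverse-checking the table, not something documented by the benchmark:

```python
# For a time-based benchmark, performance per dollar inverts the render time.
# The 10,000 scale factor is an assumption inferred from the table's values.

def redshift_perf_per_dollar(minutes: float, price: float) -> float:
    """Value metric for a time-based benchmark: lower time and price are better."""
    return 10_000 / (minutes * price)

print(round(redshift_perf_per_dollar(11.35, 550), 3))   # RTX 2070
print(round(redshift_perf_per_dollar(8.38, 1199), 3))   # RTX 2080 Ti
```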

GPU Benchmark Comparison: VRAY-RT

And here is a list based on the VRAY-RT Benchmark. Note how the GTX 1080 interestingly seems to perform worse than the GTX 1070 in this benchmark:

GPU Name | VRAM (GB) | VRAY-Bench | Price ($ MSRP) | Performance/Dollar
GTX 1070 | 8 | 1:25 min | 400 | 2.941
RTX 2070 | 8 | 1:05 min | 550 | 2.797
GTX 1080 Ti | 11 | 1:00 min | 700 | 2.380
2x GTX 1080 Ti | 11 | 0:32 min | 1400 | 2.232
GTX 1080 | 8 | 1:27 min | 550 | 2.089
4x GTX 1080 Ti | 11 | 0:19 min | 2800 | 1.879
TITAN XP | 12 | 0:53 min | 1300 | 1.451
8x GTX 1080 Ti | 11 | 0:16 min | 5600 | 1.116
TITAN V | 12 | 0:41 min | 3000 | 0.813
Quadro P6000 | 24 | 1:04 min | 3849 | 0.405

Source: VRAY Benchmark List

Speed up your Multi-GPU Rendertimes

So, unfortunately, GPUs don’t scale linearly. 2 GPUs render an image about 1.8 times faster. Having 4 GPUs will only render about 3.5x faster. This is quite a bummer, isn’t it? Having multiple GPUs communicate with each other to render the same task costs so much performance that one GPU in a 4-GPU rig is mainly just managing decisions.

One solution could be the following: When final rendering image sequences, use as few GPUs as possible per task.

Let’s make an example:

What we usually do in a multi-GPU rig is, have all GPUs work on the same task. A single task, in this case, would be an image in our image sequence.

4 GPUs together render one Image and then move on to the next Image in the Image sequence until the entire sequence has been rendered.

We can speed up preparation time per GPU (when the GPUs sit idly waiting for a task to start) and bypass some of the multi-GPU slow-downs when we have each GPU render on its own task.

Now 4 GPUs are simultaneously rendering 4 Images, but each GPU is rendering its own Image.
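A back-of-the-envelope comparison of the two strategies, using the scaling factors quoted earlier (2 GPUs ≈ 1.8x, 4 GPUs ≈ 3.5x). The 100-frame sequence and 10-minute single-GPU frame time are made-up example numbers:

```python
import math

# Compare total sequence render time: all GPUs cooperating on each frame
# (sub-linear scaling) vs. one frame per GPU (near-linear, since each
# frame is a plain single-GPU task).
SCALING = {1: 1.0, 2: 1.8, 4: 3.5}  # effective speedup when GPUs share a frame

def sequence_time(frames: int, gpus: int, frame_min: float,
                  one_frame_per_gpu: bool) -> float:
    """Total minutes to render an image sequence."""
    if one_frame_per_gpu:
        return math.ceil(frames / gpus) * frame_min
    return frames * frame_min / SCALING[gpus]

# 100 frames, 4 GPUs, 10 minutes per frame on a single GPU:
print(sequence_time(100, 4, 10, one_frame_per_gpu=False))  # all GPUs per frame
print(sequence_time(100, 4, 10, one_frame_per_gpu=True))   # one frame per GPU
```

With these example numbers, one frame per GPU finishes the sequence in 250 minutes versus roughly 286 minutes for the shared-frame approach, before even counting the saved per-frame preparation time.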

Some 3D-Software might have this feature built-in, if not, it is best to use some kind of Render Manager, such as Thinkbox Deadline (Free for up to 2 Nodes/Computers).

Beware though that you might have to increase your System RAM a bit and have a strong CPU, since every GPU-Task needs its own amount of RAM and CPU performance.

Buying GPUs

NVIDIA and AMD GPUs are both hard to come by for a reasonable price nowadays since mining is so popular. Graphics Cards are mostly out of stock, and even when they are available, they are nowhere near MSRP. There is a site called nowinstock that gives you an overview of the most popular places to buy GPUs in your country and notifies you as soon as cards are available.

I put together some Builds in different Price ranges here for you to get a head start in configuring your own dream-build.

Redshift vs. Octane

Another thing I am asked often is if one should go with the Redshift Render Engine or Octane.

As I have used both extensively, in my experience, thanks to the Shader Graph Editor and the vast Multi-Pass Manager of Redshift, I like to use the Redshift Render Engine more for work that needs complex Material Setups and heavy Compositing.

Octane is great if you want results fast, as it is slightly easier to learn for beginners. But this, of course, is a personal opinion and I would love to hear yours!

Custom PC-Builder

If you want to get the best parts within your budget, you should have a look at the web-based PC-Builder Tool that I’ve created.

Select the main purpose that you’ll use the computer for and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.

CGDirector PC-Builder Tool

PC-Builder Facebook Title Image

What Hardware do you want to buy?

Alex from CGDirector - post author

Hi, I'm Alex, a Freelance 3D Generalist, Motion Designer and Compositor.

I've built a multitude of Computers, Workstations and Renderfarms and love to optimize them as much as possible.

Feel free to comment and ask for suggestions on your PC-Build or 3D-related Problem, I'll do my best to help out!



Hey Alex,

I want to update the graphics card on my machine for rendering on redshift with c4d, But I’m sure that if I do it without any help I’ll end up buying something that probably doesn’t fit in this dinosaur.

I’ve got a Hp workstation z800 from 2008, the motherboard is the HP 0aech.
specs bellow:

My budget is around 500£. Do you have any ideas of models that may suit this motherboard?

I had a read on your article but I’m still not familiar with all of this terms.

Thank you

pixel phil

The Threadripper gen3 TRX40 boards seem to be still very limited on pci speed. I can’t find any that supports at least x16/x16/x16. The highest i can find is 2 slots at x16. These boards are PCI 4.0, does that help in any way with the speed or do we need new gpu’s to take advantage?

pixel phil


I’m about to order a Supermicro workstation. They support single processor boards and they advised an AMD Epyc Rome 3.0 GHz – 3.4 GHz. I asked for an option with an i9 10900X 3.5 – 4.7 but they responded with a CPU Benchmark score which is in favor of the AMD. But isn’t single core peak performance more important than the overall CPU Benchmark score? My question is should I go with the AMD or i9 for my GPU rendering workstation or is the difference negligible?


Gary Abrehart

Hi Alex

I have a TR 1920 machine that I built with an 850watt PSU and two 980ti GPUs. I have just started to use the machine to do GPU rendering and was thinking of changing the two 980tis for two 2080tis but I thought to keep the costs down why not keep the two 980tis and just add another 2080ti.

Obviously I would have to put a 1200watt PSU in the machine but it would be cheaper than buying two 2080tis. Thing I am worried about is heat. My CPU has water cooling but the GPUs are still on air so with three GPUs in the machine is that likely to get too hot? I would be getting a blower 2080ti BTW.

Bo Heidelberg

Hey Alex.

I need some professional advice please!

I just got my hands on two GTX TITAN X (12 GB) gpu’s.

My question is how I best utilize these two babies if I want to ramp up my gpu rendering capabilities?. I’m using 3ds max 2020, Phoenix FD 4.xx and vRay next. I also use After effects and Premiere Pro a lot.

This is my workstation as it stands right now:

MB2011-V3 Asus X99 Gaming ROG STRIX
Intel Core I7-6800K 6x <3.6GHz LGA2011
CPUK Thermalright Macho Rev B
DDR4-2133 8GB Kingston RAM (x4)
SSD Samsung 850 EVO 500GB (SSD)
PSU Corsair RM750x 750W 80+ Gold
Asus DRW-24F DVD burner
CASE Vision X2 Silent Black cabinet
Nvidia Asus GTX1070 8GB STRIX
SSD Samsung 950 PRO 256GB M.2 PCIe x4

I'm not sure if the best and cheapest way to go forward is to make my workstation a 2x Titan machine and put my GTX 1070 gpu on a new cheap render machine on the side, or if I should use the 2x Titan X for the render machine, or if I should make a mix of one Titan X and my GTX 1070 on my workstation?.

I don't have a lot of money right now so I'm looking for the cheapest solution possible for now. I am however considering going for the AMD Ryzen ThreadRipper 1920X – 3.5 GHz – 12-core for my next cpu upgrade but I do fear that it will be too expensive in other upgrades (motherboard etc.) as a result – so please take that into consideration as well please.

Thank you very much in advance!



Hi Alex,

Not sure if you missed my comment so I will paste it again. Thanks.

“Hi Alex,

I currently have very old system but planning to build a new one .This mainly be for 3 modelling and visualisation in Blender, Photoshop. . If I understand correctly I would need to emphasis my budget towards GPU rather than CPU. I look to spend under $1000/£1000 ( UK based). CGIDirect build tool suggest GTX 1660TI but I know in order to get full card potential on Blender I would have to opt for RTX. In this case I would be ready to buy 2060.

My idea is to build new build in stages. At the first I want to buy graphic card GTX 2060and use it on my Asus P6TD Deluxe. I also run 1st gen i7 920 CPU Do. My question would be if my old system is capable to handle such card and if so would I have anything to adjust.?

Then alter next year I would build system from the scratch and use the new card.

Asus P6TD Deluxe
i7 920 CPU Do
Antec PSU 650W ( Silver 10 years old)

Man Thanks,

José Correa

Hi Alex. This info is super useful for me. I am a Mac user. I have an iMac Pro, entry level, I use Arnold for Maya, Physical Render and Pro Render for C4D but I would like try Octane and Redshift, obviously on PC. There are rumors that these will be compatible for Mac soon, but I don’t think that will happen (and they say only for the new Mac Pro, but I don’t know if this will happen for the iMac Pro, which I guess it will if im lucky).

I will read extra carefully your blog in order to get a really (REALLY) good machine for fast rendering using Redshift. I have never build a PC, and I just hope I don’t get a trial and error basis… that would be expensive.

Thanks for sharing your knowledge.


Hi Alex,

I currently have very old system but planning to build a new one .This mainly be for 3 modelling and visualisation in Blender, Photoshop. . If I understand correctly I would need to emphasis my budget towards GPU rather than CPU. I look to spend under $1000/£1000 ( UK based). CGIDirect build tool suggest GTX 1660TI but I know in order to get full card potential on Blender I would have to opt for RTX. In this case I would be ready to buy 2060.

My idea is to build new build in stages. At the first I want to buy graphic card GTX 2060and use it on my Asus P6TD Deluxe. I also run 1st gen i7 920 CPU Do. My question would be if my old system is capable to handle such card and if so would I have anything to adjust.?

Then alter next year I would build system from the scratch and use the new card.

Asus P6TD Deluxe
i7 920 CPU Do
Antec PSU 650W ( Silver 10 years old)

Man Thanks,

Hey Tomas,

Thanks for asking!

If you plan on using the GPU render engines, then you would have to allot a big portion of your budget for the GPU. That said, if you have your sights set on the RTX 2060 now, you can go ahead and purchase this GPU now since I see no compatibility issues with your Asus P6TD Deluxe motherboard and other components for that matter. You can then just build a new system around this card from scratch once you have the budget for it.



Hi Alex,

Thanks! It is good news then. In your opinion do I need Ryzen 3600 over 5 2600x paired with RTX2060? Will I actually utilise extra £50 in this case?

It is probably worth to mention that I want my PC to be compact so looking for a micro ATX MoBo. I didn’t see much content on your website. Maybe you could suggest any worth mentioning micro/mini motherboards for around £80-£120?


Hey Tomas,

If you have the means, I suggest that you go for the Ryzen 5 3600. That’s not to say that there’s anything wrong with the Ryzen 5 2600X. The 2600X is a fine CPU but the 3600 brings better memory support and performs at a higher level compared to its predecessor as shown in benchmarks.

As for a microATX motherboard, the X570 motherboards are a bit too expensive at the moment but if you need PCIe 4.0 and don’t mind the higher cost, the cheapest X570 microATX motherboard available at the moment is the ASRock X570M PRO4 that can be had for around $179.99.

Now, if you don’t need PCIe 4.0, going for a B450 motherboard is your best bet. Some good options are the MSI B450M Gaming Plus ($79.99), ASRock B450M Steel Legend ($89.99), Gigabyte B450M DS3H ($110.03), and the Asus TUF B450M-Plus Gaming ($118.39).



Hi Alex,

Thanks. Got the Ryzen 5 3600 next to me 🙂 Motherboard still have to wait a little.

In regards of MoBO either yours recommended ASUS TUF Gaming B450M-PRO or ASUS ROG Strix B450-I Gaming Mini ITX Motherboard, I am tempted to get Mini ATX for the reason I will have even more space in my small Silverstone SG13 case.


Salvador Robles

Hi Alex, thanks for the time and effort that you dedicate to your post and that are undoubtedly a reference on the web … I wanted to ask you about the compatibility of the RTX 2060 Super with Octane, since on their page they make a benchmark of more than 200 but indicate ? in terms of card compatibility for all versions of Octane. It is assumed that if this graphic Card has CUDAs it should be compatible. Can anyone work with the Octane plugin in Cinema4D with RTX 2060 Super ?. With what version of Octane? Thank you.


You article is very useful and it is not boring because every line contain useful information. XD

I want to build PC for render blender 2.81 cycle optix.
I use model from game engine so it is not high poly as sculpt model but I use a lot of physic like cloth and hair and other.

I have budget 2400 usd and I want the build like

RTX 2070*2
RTX 2080TI*1 and find another RTX 2080ti in next four month
RTX 2070*3

but I don’t know which cpu and AIO cooling water I should buy.

Now I use gtx980m SLI to render cycle and it take 10min per picture (Blender 2.8 with addon denoise)
I can’t make animation with this.

I will not save up for 7000 build because I want PC now.
I just want build that can upgrade in future without sale the old one.
How many time it take to render 10min picture with RTX 2070*2 optix?
Is Optix affect only first card or every card that have RTX?

Hey Chick,

Thank you for the kind words!

For your $2,400 budget, you can get a build with specs like the below:

Parts List:

CPU: AMD Ryzen 9 3900X 3.8GHz 12-Core Processor ($495.99)
CPU Cooler: AMD Wraith Prism Cooler (Included with CPU) –
Motherboard: ASUS Prime X470-Pro AM4 ($131.99)
GPU: NVIDIA RTX 2070 SUPER 8GB – MSI Gaming X ($549.99)
GPU: NVIDIA RTX 2070 SUPER 8GB – MSI Gaming X ($549.99)
Memory: 32GB (2 x 16GB) Corsair Vengeance LPX DDR4-3200 CL16 ($129.99)
Storage SSD: Samsung 860 EVO 1TB 2.5″ Solid State Drive ($109.99)
Power Supply: Corsair RMx Series RM650x 650W ATX 2.4 Power Supply ($117.99)
Case: Phanteks Enthoo Pro ATX Full Tower Case ($199.99)

The total of the build comes up to around $2,285.92, but you get a very powerful Ryzen 9 3900X CPU at the heart of your system, working with 32GB of RAM to make sure your workflow is always fast and smooth. In addition, this build comes with two (2) RTX 2070 SUPER GPUs, which will definitely help a lot in your GPU rendering tasks. There is no way to tell just how much time it will take two RTX 2070 SUPER GPUs to render a picture that currently takes you 10 minutes, as this depends on a lot of factors, but the OptiX render mode can make full use of a multi-GPU setup.