Best Hardware for GPU Rendering in Octane – Redshift – Vray (Updated)

by Alex
CGDirector is Reader-supported. When you buy through our links, we may earn an affiliate commission.

Graphics Card (GPU) based render engines such as Redshift3D, Octane or VRAY-RT have matured quite a bit over the last few years and are starting to overtake CPU-based render engines.

But what hardware gives the best-bang-for-the-buck and what do you have to keep in mind when building your GPU-Workstation compared to a CPU Rendering Workstation? Building a 3D Modeling and CPU Rendering Workstation can be somewhat straightforward, but highly optimizing for GPU Rendering is a whole other story.

So what are the best Hardware-Components and the best GPU for rendering with Octane, Redshift3D or VRAY-RT that are also affordable? Let’s take a look:

Best Hardware for GPU Rendering

Processor

Since GPU-Render Engines use the GPU to render, technically you should go for a max-core-clock CPU like the Intel i9 9900K that clocks at 3.6GHz (5.0GHz Turbo) or the AMD Ryzen 9 3900X that clocks at 3.8GHz (4.6GHz Turbo).

That said though, there is another factor to consider when choosing a CPU: PCIe-Lanes.

GPUs are attached to the CPU via PCIe-Lanes on the motherboard. Different CPUs support different amounts of PCIe-Lanes and Top-tier GPUs usually need 16x PCIe 3.0 Lanes to run at full performance.

The i9 9900K/3900X have 16 GPU<->CPU PCIe-Lanes, meaning you could use only one GPU at full speed with this type of CPU.

If you want more than one GPU at full speed, you need a CPU that supports more PCIe-Lanes, such as the AMD Threadripper CPUs with 64 PCIe-Lanes (e.g. the AMD Threadripper 2950X), the i9 9800X (28 PCIe-Lanes) or the i9 9900X-Series CPUs (44 PCIe-Lanes).

GPUs, though, can also run in lower speed modes such as 8x PCIe 3.0 Speeds and then also use up fewer PCIe-Lanes (8x). Usually, there is a negligible difference in Rendering Speed when having GPUs run in 8x mode instead of 16x mode.

This would mean you could run 2x GPUs on an i9 9900K in 8x PCIe mode, 3x GPUs on an i9 9800X and 5x GPUs on an i9 9900X. (Given the Mainboard supports this configuration)
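If you want to sanity-check this lane math yourself, here is a minimal Python sketch (the lane counts are the CPU<->GPU figures quoted above; real boards route lanes differently, so always check the manual):

```python
# Rough PCIe-lane budgeting: how many GPUs fit on a CPU at a given link width?
# Lane counts are the CPU<->GPU figures quoted in this article.
CPU_GPU_LANES = {
    "i9 9900K": 16,
    "Ryzen 9 3900X": 16,
    "i9 9800X": 28,
    "i9 9900X": 44,
    "Threadripper 2950X": 64,
}

def max_gpus(cpu: str, lanes_per_gpu: int) -> int:
    """GPUs that fit if every card runs at the given link width (x8 or x16)."""
    return CPU_GPU_LANES[cpu] // lanes_per_gpu

for cpu in CPU_GPU_LANES:
    print(f"{cpu}: {max_gpus(cpu, 16)} GPU(s) at x16, {max_gpus(cpu, 8)} at x8")
```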

CPUs that have a high number of PCIe-Lanes usually fall into the HEDT Platform range and are usually also great for CPU Rendering as they tend to have more cores and therefore better multi-core performance.

PCIe-Lanes Comparison

When actively rendering and your scene fits nicely into the GPUs VRAM, the speed of GPU Render Engines is of course mainly dependent on GPU performance.

Some processes though that happen before and during rendering rely heavily on the performance of the CPU, Hard-Drive, and network.

For example, extracting and preparing Mesh Data to be used by the GPU, loading Textures from your Hard-Drive and preparing the Scene Data.

In very complex scenes, these processing stages take lots of time and can bottleneck the overall rendering speed if a low-end CPU, Disk, or RAM is employed.

If your scene is too large for your GPU memory, the GPU Render Engine will need to access your System RAM or even swap to disk, which will considerably slow down the rendering.

Best Memory (RAM) for GPU Rendering

Different kinds of RAM won’t speed up your GPU Rendering all that much. You do have to make sure that you have enough RAM, though, or else your System will slow to a crawl.

I recommend keeping the following rules in mind to optimize performance as much as possible:

Best Graphics Card for Rendering

To use Octane and Redshift you will need a GPU that has CUDA-Cores, meaning you will need an NVIDIA GPU. VRAY-RT additionally supports OpenCL meaning you could use an AMD card here.

The best bang-for-the-buck NVIDIA cards are the RTX 2070 (2304 CUDA Cores, 8GB VRAM), RTX 2080 (2944 CUDA Cores, 8GB VRAM) and the RTX 2080 Ti (4352 CUDA Cores, 11GB VRAM).

On the high-end, the currently highest possible performance is offered by the NVIDIA Titan V and Titan RTX; the Titan RTX also comes with 24GB of Video RAM.

These Cards, though, offer worse Performance per Dollar, as they are targeted at a different audience, and VRAM is very expensive but not necessarily needed in such high capacities for GPU Rendering.

GPUs that have 12GB of Video RAM or more handle high-poly scenes with over 200 million unique objects best. Take a look at the performance-per-dollar tables below, though, to get an overview of how expensive some of these cards can get without offering that much more performance.

GPU Cooling

Founders Edition Blower Style Cooler

  • PRO: Better Cooling when stacking more than one card
  • CON: Louder than Open-Air Cooling

Open-Air Cooling

  • PRO: Quieter than Blower Style, Cheaper
  • CON: Worse Cooling when stacking cards

Hybrid AiO Cooling (All-in-One Watercooling Loop with Fans)

  • PRO: Best All-In-One Cooling for stacking cards
  • CON: More Expensive, needs room for radiators in Case

Watercooling

  • PRO: Best temps when stacking cards, Quiet, cards can be slimmed to single-slot height
  • CON: Needs lots of extra room in the case for tank and radiators, More Expensive

NVIDIA GPUs have a Boosting Technology that automatically overclocks your GPU to a certain degree, as long as it stays within predefined temperature and power limits. Making sure a GPU stays as cool as possible will therefore improve performance.

You can see this effect especially in Laptops, where there is usually not much room for cooling, and the GPUs tend to get very hot and loud and throttle very early. So if you are thinking of Rendering on a Laptop, keep this in mind.

Power Supply

Be sure to get a strong enough Power Supply for your system. Most Cards have a Power Draw of around 180-250W, the CPU draws around 100W, and any additional Hardware in your case adds to that.

I recommend a 500W PSU for a Single-GPU-Build; add 250W for every additional GPU. Good PSU manufacturers to look out for are Corsair, be quiet!, Seasonic, and Cooler Master, but you might prefer others.

There is a Wattage-Calculator here that lets you Calculate how strong your PSU will have to be.
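If you just want a quick estimate without the calculator, here is a minimal Python sketch of the rule of thumb above (500W for a single-GPU build, 250W per additional GPU; rough guidance, not measured values):

```python
def recommended_psu_watts(num_gpus: int) -> int:
    """Rule of thumb from above: 500W for a single-GPU build,
    plus 250W of headroom for every additional GPU."""
    if num_gpus < 1:
        raise ValueError("need at least one GPU")
    return 500 + 250 * (num_gpus - 1)

for n in range(1, 5):
    print(f"{n}x GPU build -> {recommended_psu_watts(n)}W PSU")
# 1x -> 500W, 2x -> 750W, 3x -> 1000W, 4x -> 1250W
```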

Mainboard & PCIe-Lanes

Make sure the Mainboard has the desired amount of PCIe-Lanes and does not share Lanes with SATA or M.2 slots. Also, be careful about which PCIe configurations the Motherboard supports. Some have 3 or 4 PCIe slots but support only one card at x16.

This can get quite confusing. Check the Motherboard manufacturers Website to be sure the Card configuration you are aiming for is supported. Here is what you should be looking for in the Motherboard specifications:

Asus Rampage PCIe-Lane Configuration

Image-Source: Asus

In the above example, you would be able to use (with a 40-PCIe-Lane CPU) 1 GPU in x16 mode, OR 2 GPUs both in x16 mode, OR 3 GPUs, one in x16 mode and two in x8 mode, and so on. Beware that a 28-PCIe-Lane CPU in this example would support different GPU configurations than the 40-Lane CPU.

Currently, the AMD Threadripper CPUs will give you 64 PCIe-Lanes to hook your GPUs up to; if you want more, you will have to go the multi-CPU route with Intel Xeons.

To confuse things a bit more, some Mainboards do offer four x16 GPU slots (which would need 64 PCIe-Lanes) on CPUs with only 44 PCIe-Lanes. How is this even possible?

Enter PLX Chips. On some Motherboards, these chips serve as a type of switch that manages your PCIe-Lanes and leads the CPU to believe fewer Lanes are being used. This way, you can use e.g. 32 PCIe-Lanes with a 16-PCIe-Lane CPU or 64 PCIe-Lanes on a 44-Lane CPU. Beware though, only a few Motherboards have these PLX Chips. The Asus WS X299 Sage is one of them, allowing up to 7 GPUs to be used at x8 speed with a 44-Lane CPU, or even 4 GPUs at x16 on a 44-Lane CPU.

This screenshot of the Asus WS X299 Sage Manual clearly states what type of GPU-Configurations are supported (Always check the manual before buying expensive stuff):

Asus WS X299 Sage

Image-Source: Asus Mainboard Manual

PCIe-Lane Conclusion: For Multi-GPU Setups, having a CPU with lots of PCIe-Lanes is important, unless you have a Mainboard that comes with PLX Chips. Having GPUs run in x8 mode instead of x16 will only marginally slow down performance. (Note, though, that PLX Chips won’t increase your total GPU bandwidth to the CPU; they just make it possible to have more cards run in higher modes.)

Best GPU Performance / Dollar

Ok, so here they are: the lists everyone should be looking at when choosing the right GPU to buy. The best-performing GPU per Dollar!

GPU Benchmark Comparison: Octane

This List is based on OctaneBench 4.00.

(It’s quite difficult to get an average Price for some of these cards since crypto-currency mining is so popular right now, so I used MSRP)

GPU Name      | VRAM | OctaneBench | Price $ MSRP | Performance/Dollar
RTX 2070      | 8GB  | 210         | 550          | 0.381
GTX 1070      | 8GB  | 133         | 400          | 0.333
GTX 1070 Ti   | 8GB  | 153         | 450          | 0.340
GTX 1060      | 6GB  | 94          | 300          | 0.313
GTX 1080 Ti   | 11GB | 222         | 700          | 0.317
GTX 1080      | 8GB  | 148         | 550          | 0.269
RTX 2080      | 8GB  | 226         | 799          | 0.282
RTX 2080 Ti   | 11GB | 304         | 1199         | 0.253
TITAN XP      | 12GB | 250         | 1300         | 0.192
RTX Titan     | 24GB | 326         | 2700         | 0.120
Titan V       | 12GB | 396         | 3000         | 0.132
GTX TITAN Z   | 12GB | 189         | 2999         | 0.063
Quadro P6000  | 24GB | 139         | 3849         | 0.036
Quadro GP100  | 16GB | 284         | 7000         | 0.040
RTX 2060      | 6GB  | 170         | 350          | 0.485

Source: Complete OctaneBench Benchmark List
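The Performance/Dollar column is simply the OctaneBench score divided by the MSRP. A minimal Python sketch, using a few rows from the table above (the table appears to truncate rather than round, so the last digit can differ):

```python
# Performance per dollar = benchmark score / price (higher is better).
cards = {
    "RTX 2060":    (170, 350),   # (OctaneBench score, MSRP in $)
    "RTX 2070":    (210, 550),
    "GTX 1080 Ti": (222, 700),
    "RTX 2080 Ti": (304, 1199),
}

for name, (score, price) in cards.items():
    print(f"{name}: {score / price:.3f} OctaneBench points per $")
# RTX 2060: 0.486, RTX 2070: 0.382, GTX 1080 Ti: 0.317, RTX 2080 Ti: 0.254
```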

GPU Benchmark Comparison: Redshift

The Redshift Render Engine has its own benchmark, and here is a list based on RedshiftBench, which measures render time in minutes (shorter is better). Note how the cards scale across the multi-GPU 1080 Ti entries:

GPU Name        | VRAM | RedshiftBench (min) | Price $ MSRP | Performance/Dollar
RTX 2070        | 8GB  | 11.35               | 550          | 1.601
GTX 1070        | 8GB  | 17.11               | 400          | 1.461
GTX 1080 Ti     | 11GB | 11.44               | 700          | 1.248
RTX 2080        | 8GB  | 10.59               | 799          | 1.181
4x GTX 1080 Ti  | 11GB | 3.07                | 2800         | 1.163
2x GTX 1080 Ti  | 11GB | 6.15                | 1400         | 1.161
8x GTX 1080 Ti  | 11GB | 1.57                | 5600         | 1.137
GTX 1080        | 8GB  | 16.00               | 550          | 1.136
RTX 2080 Ti     | 11GB | 8.38                | 1199         | 0.995
TITAN XP        | 12GB | 10.54               | 1300         | 0.729
Titan V         | 12GB | 8.50                | 3000         | 0.392
Quadro P6000    | 24GB | 11.31               | 3849         | 0.229
Quadro GP100    | 16GB | 9.57                | 7000         | 0.149
4x RTX 2080 Ti  | 11GB | 2.28                | 4796         | 0.914

Source: Complete Redshift Benchmark Results List
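Since RedshiftBench reports render time rather than a score, the Performance/Dollar column works inversely: shorter times and lower prices both score higher. The table values are consistent with 10000 / (minutes × price); note that this 10,000 scale factor is inferred from the table, not an official formula. A quick check:

```python
def perf_per_dollar(minutes: float, price: float) -> float:
    """Inverse metric for time-based benchmarks: faster and cheaper is better.
    The 10,000 scale factor is inferred from the table above."""
    return 10_000 / (minutes * price)

print(round(perf_per_dollar(17.11, 400), 3))   # 1.461 (GTX 1070)
print(round(perf_per_dollar(8.38, 1199), 3))   # 0.995 (RTX 2080 Ti)
```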

GPU Benchmark Comparison: VRAY-RT

And here is a list based off of the VRAY-RT benchmark. Note how the GTX 1080, interestingly, seems to perform worse than the GTX 1070 in this benchmark:

GPU Name        | VRAM | VRAY-Bench | Price $ MSRP | Performance/Dollar
GTX 1070        | 8GB  | 1:25 min   | 400          | 2.941
RTX 2070        | 8GB  | 1:05 min   | 550          | 2.797
GTX 1080 Ti     | 11GB | 1:00 min   | 700          | 2.380
2x GTX 1080 Ti  | 11GB | 0:32 min   | 1400         | 2.232
GTX 1080        | 8GB  | 1:27 min   | 550          | 2.089
4x GTX 1080 Ti  | 11GB | 0:19 min   | 2800         | 1.879
TITAN XP        | 12GB | 0:53 min   | 1300         | 1.451
8x GTX 1080 Ti  | 11GB | 0:16 min   | 5600         | 1.116
TITAN V         | 12GB | 0:41 min   | 3000         | 0.813
Quadro P6000    | 24GB | 1:04 min   | 3849         | 0.405

Source: VRAY Benchmark List

Speed up your Multi-GPU Rendertimes

So, unfortunately, GPUs don’t scale linearly. 2 GPUs render an image about 1.8x faster, and 4 GPUs only render about 3.5x faster. This is quite a bummer, isn’t it? Having multiple GPUs communicate with each other to render the same task costs so much performance that in a 4-GPU rig, almost an entire GPU’s worth of compute is lost to coordination overhead.
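To put a number on that overhead, here is a quick Python sketch that turns the speedups quoted above into scaling efficiency:

```python
# Multi-GPU scaling efficiency = observed speedup / ideal (linear) speedup.
observed_speedup = {1: 1.0, 2: 1.8, 4: 3.5}   # figures quoted above

for gpus, speedup in observed_speedup.items():
    efficiency = speedup / gpus
    print(f"{gpus}x GPU: {speedup:.1f}x speedup -> {efficiency:.0%} efficiency")
# 1x: 100%, 2x: 90%, 4x: 88%
```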

One solution could be the following: When final rendering image sequences, use as few GPUs as possible per task.

Let’s look at an example:

What we usually do in a multi-GPU rig is have all GPUs work on the same task. A single task, in this case, would be an image in our image sequence.

4 GPUs together render one Image and then move on to the next Image in the Image sequence until the entire sequence has been rendered.

We can speed up preparation time per GPU (when the GPUs sit idle waiting for a task to start) and bypass some of the multi-GPU slow-downs by having each GPU render its own task.

Now 4 GPUs are simultaneously rendering 4 Images, but each GPU is rendering its own Image.

Some 3D-Software might have this feature built-in, if not, it is best to use some kind of Render Manager, such as Thinkbox Deadline (Free for up to 2 Nodes/Computers).
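To illustrate the idea, here is a minimal Python sketch that launches one render task per GPU by pinning each process to a single card via the CUDA_VISIBLE_DEVICES environment variable. The render_cli command and its --frame flag are hypothetical placeholders; in practice you would use your renderer’s own command line or let a render manager like Deadline do this for you:

```python
import os
import subprocess
from itertools import cycle

NUM_GPUS = 4
FRAMES = range(1, 101)          # frames 1..100 of the image sequence

gpu_ids = cycle(range(NUM_GPUS))
batch = []

for frame in FRAMES:
    # Pin this render process to a single GPU so each card gets its own task.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(next(gpu_ids)))
    batch.append(subprocess.Popen(
        ["render_cli", "--frame", str(frame)], env=env))  # hypothetical CLI
    if len(batch) == NUM_GPUS:  # one task in flight per GPU; wait, then refill
        for proc in batch:
            proc.wait()
        batch = []

for proc in batch:              # wait for any remaining tasks
    proc.wait()
```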

Beware, though, that you might have to increase your System RAM a bit and have a strong CPU, since every GPU-Task needs its own share of RAM and CPU performance.

Buying GPUs

NVIDIA and AMD GPUs are both hard to come by for a reasonable price nowadays since mining is so popular. Graphics Cards are mostly out of stock, and even when they are available, they are nowhere near MSRP. There is a site called nowinstock that gives you an overview of the most popular places to buy GPUs in your country and notifies you as soon as cards are available.

I put together some Builds in different Price ranges here for you to get a head start in configuring your own dream-build.

Redshift vs. Octane

Another thing I am asked often is if one should go with the Redshift Render Engine or Octane.

Having used both extensively, I prefer the Redshift Render Engine for work that needs complex Material Setups and heavy Compositing, thanks to its Shader Graph Editor and vast Multi-Pass Manager.

Octane is great if you want results fast, as it is slightly easier to learn for beginners. But this, of course, is a personal opinion and I would love to hear yours!

Custom PC-Builder

If you want to get the best parts within your budget, have a look at the web-based PC-Builder Tool that I’ve created.

Select the main purpose that you’ll use the computer for and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.

CGDirector PC-Builder Tool



Alex from CGDirector - post author

Hi, I'm Alex, a Freelance 3D Generalist, Motion Designer and Compositor.

I've built a multitude of Computers, Workstations and Renderfarms and love to optimize them as much as possible.

Feel free to comment and ask for suggestions on your PC-Build or 3D-related Problem, I'll do my best to help out!

243 Comments

Rod

First off, Thank you for all the time you have put into this site which has helped so many of us. I am blown away at the amount of detail you put into helping everyone in the comments. I have recommended this site to a few others who are also in the market for a rig.

CPU: AMD Ryzen 9 3900x
CPU COOLER 240mm Liquid Cooler
GRAPHICS CARD EVGA Geforce RTX 2070 Super EVO Turbo 8GB GDDR6
MEMORY Corsair LPX 32GB (2x16GB) 3200MHz C16 DDR4 DRAM Memory Kit, Black (CMK32GX4M2B3200C16)
STORAGE – BOOT 1TB NVMe SSD Samsung 970 EVO NVMe 1TB M.2
STORAGE – FILES 4 TB 3.5″ HDD 7200RPM (WD Ultrastar 4TB 7200rpm $165.00) (WD Purple 4TB 5400rpm)
MOTHERBOARD MSI MEG X570 ACE Gaming Motherboard
POWER SUPPLY 750 W – Fully Modular – 80+ Gold (Thermaltake)
CASE ATX Mid Tower Case (Antec NX400 NX Mid Gaming Case)

I plan on adding another card (same model). I have tried researching and am still a bit confused on PCIe Lanes. I think I may have chosen the wrong board. My question is: will I be able to use both of my GPUs at full x16 in each of the allowed slots? Or will I have to use one at x16 and one at x8?
My Motherboard (https://www.newegg.com/p/N82E16813144259) says

————PCI Express 4.0 x16
2 x PCIe 4.0/3.0 x16 slots (PCI_E1, PCI_E3)
– 3rd Gen AMD Ryzen support PCIe 4.0 x16/x0, x8/x8 modes
– 2nd Gen AMD Ryzen support PCIe 3.0 x16/x0, x8/x8 modes
– Ryzen with Radeon Vega Graphics and 2nd Gen AMD Ryzen with Radeon Graphics support PCIe 3.0 x8 mode*
1 x PCIe 4.0/3.0 x16 slot (PCI_E5, supports x4 mode)

* PCI_E3 slot is only available for 2nd and 3rd Gen AMD Ryzen processors.

————PCI Express x1
2 x PCIe 4.0/3.0 x1 slots (PCIE_2,PCIE_4)*
* The PCIe x1 slots can not be used simultaneously. PCI_E2 will be unavailable when installing the PCIe card in PCI_E4 slot.

Also, I read what you said about how the CPU needs to have enough PCIe lanes available. Does the Ryzen 9 3900X have enough to run 2 GPUs at full x16 speed? Or will I have to go to the highest available CPU allowed on AM4-socket motherboards (Ryzen 9 3950X)?

Rod

also would it be ok to run the two GPUs on the x16/x8 slots and then a smaller, cheaper dedicated graphics card to run the monitor/OS on the PCIe 4.0/3.0 x4 slot? If so, which cheaper one would you recommend?

Akosh

Hi Alex. It was a very useful article; we learnt a lot from it.
We’ve bought a miner rig as a base for our future 3D rendering machine. It contains 7x 1080 Ti, 64GB memory, and a Core i7-7700K CPU on an ASUS B250H motherboard.
The video cards are connected via 1x PCIe risers. We tested this setup in both Octane and Redshift.
In Octane the rendering was pretty fast, but it crashed fairly easily. Actually, there weren’t many times when it worked without crashing.
In Redshift it works fine, but too slow. We rendered the same scene on another machine (two 1080 Ti at x16 PCIe) and got about the same render time on that. We think the reason for the slowness is the 1x PCIe risers. What do you think we should do? Is it possible to upgrade this setup without any bigger modifications? If not, what is the solution for us?

Evert

Hi Alex, I stumbled upon your website today; great content.
I’ve been thinking about jumping to GPU rendering for quite some time. One thing that stops me from doing so is the possibility of a large scene not starting at all, even after optimization. I tried to find examples of very large scenes rendered on GPU across a few different render engine forums and groups, but to no avail (archviz-focused scenes).
I suppose I can still make use of GPU rendering for faster interior images (or any small-to-medium scene)…
In that case, what would you recommend:
– 2x 2070 Super with the possibility of NVLinking them for a roughly 15GB VRAM pool (given the render engine allows it)?
– A 2080 Ti?
In both scenarios the monitors would be plugged into a 1070 Ti.
I use mostly Vray, and I’ve tried FStorm and a little bit of Redshift (the one with the steepest learning curve, imo).
Cheers from Mexico.

Hey Evert,

Thanks for asking and thank you for the kind words!

For GPU rendering, having a dual 2070 Super setup is better than a single 2080 Ti. The RTX 2080 Ti may be a beast of a graphics card, but GPU rendering in 3D software scales almost linearly with more GPUs. You basically add up the CUDA cores to get the speed, so a dual 2070 Super configuration will beat the 2080 Ti.

https://www.cgdirector.com/octanebench-benchmark-results/

Because most modern GPU render engines have an out-of-core rendering capability, they can access the system RAM if their own VRAM is insufficient. I have been rendering very large scenes for a long time in Redshift and have never had a scene fail to start up because of its complexity.

Cheers,
Alex

Boxx Builder

Hello Alex, on Evert’s question regarding NVLink: I believe there is no benefit to running the cards in NVLink mode for rendering, but would appreciate your insight.

Belay

Hey Alex!

First off thank you SO much for all the knowledge you all at CGdirector are sharing with everyone,
it is so helpful and really peels back the curtains on building a PC for us digital artists!

I’m a Motion Designer and mainly use AfterEffects and C4D (Octane/Redshift)

Here is my current PC build that I have:
CPU: intel i7-6800K
SSD: m.2 500gb Samsung Bootdrive, 2TB SSD work drive
GPU: 2x GTX-1070(2xfan open air), 2X RTX-2070 (1x fan single blower)
RAM: 4x 8GB RAM (32gb)
MOBO: ASUS ATX X99-A II
PSU: 1200w

When I built this, I wasn’t aware of PCIE spacing. All I saw was 4 slots on my mobo,
and I just assumed I could fit four in there no problem! I quickly learned otherwise.
So at the moment, I have a very janky setup using pcie risers to be able to have 4 gpu’s connected.
This also means though, that my case is just open and not closed because of this janky work around.

So, I’m ready for an upgrade in terms of CPU/MOBO that can stack 4x gpu’s. Ideally staying within $1000ish

What do you think of this CPU/MOBO combo?
CPU: AMD 2920x ($369)
MOBO: GIGABYTE X399 AORUS Xtreme ($439)

I’m at a loss for another combo that will accommodate 4x GPUs (ideally all run at a minimum of x8 lanes).

Another Idea I had was building this instead:
CPU: 3900x ($530)
MOBO: ASUS ROG Crosshair VIII Hero X570 ATX ($359)

With this build, I wouldn’t be able to run 4x GPUs, but I could sell my 2x 1070s and invest
in another RTX 2070. Also, this would mean my 3x GPUs would still hit a minimum of x8/x8/x8 lanes, right?

I would love to know your thoughts/recommendations?
Luckily for me I am not in a rush to build this, but I know Cyber Monday will be here in a month, and I’m hoping if the deals are good, I can make an educated purchase…
But I’m sure 3rd-gen Threadrippers will be announced and then the FOMO will be in full effect lol!

Hi Belay,

Thank you for the kind words!

If you have to have 4 GPUs, going with the Threadripper 2920X and GIGABYTE X399 AORUS Xtreme is your best bet. The Threadripper 2920X with its 3.5 GHz base clock and Max Boost of up to 4.3 GHz will ensure responsiveness when you’re working inside the software. The GIGABYTE X399 AORUS Xtreme, on the other hand, comes with 4 PCIe slots that are 2 slots apart. Be careful, though, with stacking your GPUs, as your 1070s have an open-air design, and these tend to take up a lot of space compared to a blower-type design like that of your RTX 2070; they also don’t cool as well when stacked.

Going with a Ryzen 9 3900X and ASUS ROG Crosshair VIII Hero X570 ATX combination is also a good option but this particular motherboard only supports NVIDIA 2-Way SLI Technology. I suggest that you get a motherboard that packs support for NVIDIA Quad SLI such as the ASRock X570 TAICHI priced at $299.99 if you want to have 3 GPUs in this build. And if you end up getting another RTX 2070 with a blower type design similar to what you have now, you shouldn’t have problems stacking all three 2070s on the motherboard.

At the end of the day, whether you go for a 2920X build or a 3900X build, you can’t go wrong. You will surely have a build that’s more than capable of handling whatever task you throw at it at the same time have a multiple GPU setup hitting a minimum of x8 lanes for that matter!

Cheers,
Alex

Belay

Sounds good, thanks again!! <3

Belay

Hey Alex,

My 4-GPU setup has now just gone down to 3 due to one of the 1070s dying :(.
Having said that it looks like I will going 3900x and either the ASUS ROG Crosshair VIII Hero X570 or ASRock X570 TAICHI as you mentioned.

My last question is this: with 3 GPUs on either of these boards, would I still be able to maintain x8 lanes, even with 2x M.2 drives installed? I only ask because I believe the 3900X only allows 24 lanes. And with all 3 GPUs running at x8, would that leave anything for the M.2 drives? Or do the M.2 drives not use PCIe at all and actually run over SATA? I’ve been doing research on this but it’s hard to get clarification from anyone.

Thanks in advance!
Belay

Michael Dueker

Hi Alex,

Thanks for the article and all your in-depth research. Really interesting stuff. Question, so I’m looking into new laptops and want to get something really high-powered. Was going to potentially go with a Razer Blade RTX 2080 with 8 GB of VRAM. Now I’m seeing how Razer is coming out with the Quadro Studio Edition. On paper, it sounds like a great deal, Quadro RTX 5000 16 GB of VRAM, but I’m hearing from other CG people that you’re better off with a 2080 or ideally 2080 TI. I’m primarily looking to do 3D with C4D with Redshift and a little After Effects for comping and light 2D motion graphics. Just curious your thoughts on the Quadro Studio and if it’s worth the money (and I should add, it’s not unreasonably more expensive than the 2080 version and it comes with 32GB of memory and 1 TB SSD drive vs 16 GB and 512 GB SSD) or if I’m better off with the 2080 version? Thanks again for all your help and really appreciate all the work you’re doing.

Best,

Michael

Hey Michael,

Thanks for dropping a line!

In general, I would only recommend a Quadro over an RTX/GTX card if you need more VRAM or know your software will benefit from Quadro drivers (only a few CAD applications offer added features or higher floating-point precision for Quadro cards, like SolidWorks). Apart from that, the RTX/GTX cards have much better value for the performance they deliver. In most cases more VRAM does not help in any way, and it is unnecessary.

The Studio-certified versions of laptops are mostly a marketing stunt to be able to ask for higher prices. Make sure to look at the underlying hardware closely, as a comparable “gaming” laptop will in most cases work the same or better for much less money.

Cheers,
Alex

Michael

Awesome! Thanks so much for the reply and for clearing up my understanding of quadros. Confirms my suspicions.

ziv

Hey Alex,

I’m planning to buy a new pc, mainly for c4d + redshift.
these are the specs:
– AMD 3rd Gen RYZEN 9 3900X AM4 BOX
– Corsair Hydro H55 Quiet Liquid CPU Cooler
– MSI MPG X570 GAMING EDGE WIFI
– Gigabyte GeForce RTX 2070 SUPER GV-N207SWF3OC-8GD
– Corsair DDR 4 32G (16Gx2) 3200 CL16 Vengeance LPX Black CMK32GX4M2E3200C16
– Corsair RM850 850W PSU 80+ Gold Fully Modula

I wanted to buy the 2080 Ti, but after some thinking I’ve decided to save some money and go for the 2070 Super, and in a year buy a 2nd 2070 Super.

Does the motherboard support 2 GPUs? And what about this power supply, is it enough for 2x 2070 Super?

cheers

Hi Ziv,

Thanks for dropping a line!

The build you put together for Cinema 4D and Redshift looks great! The Ryzen 9 3900X CPU at the heart of your system combined with the 32GB of RAM will guarantee task responsiveness when you’re actively working inside the software while the RTX 2070 Super GPU will surely bring about better render speeds thanks to its CUDA core acceleration support when using the GPU render engines.

Now, if you plan on adding another RTX 2070 Super GPU later on, you might want to change your motherboard. The MPG X570 Gaming Edge WIFI features two full-length PCIe 4.0 slots that operate at x16 and x16/x4 which means NVIDIA SLI isn’t supported so you’re out of luck there. That said, you want to look for a motherboard that packs support for NVIDIA SLI such as the ASRock X570 Taichi, MSI X570 Godlike, MSI MEG X570 ACE, Gigabyte X570 AORUS Master, and ASUS Crosshair VIII Hero.

As for your PSU, your planned build will have an approximate load wattage of close to 700 watts so the recommended PSU wattage should be higher than that. Your choice of Corsair RM850 850W PSU 80+ Gold Fully Modular is really good as the RM850 supports up to 850 watts and should be more than enough for your needs.

Cheers,
Alex

ziv

Thanks for the detailed answer Alex!
I think I’ll take the MSI MEG X570 ACE, looks like a beast.

Bernard

Hi Alex,

I’m searching for a new GPU but I am a true noob regarding tech.
I’m using an i9-9900K with 64GB 3600Mhz DDR Ram and a PNY P2000 Quadro card.
I mainly use my rig for Photoshop photoediting with Topaz AI plugins, but I’m not really satisfied with the performance of the plugins.
According to Topaz the plugins use OpenGL, but I don’t get a clear answer which card would be best for my needs.
I’ve been doing research and it looks to me as if a GTX1080 Ti would be a better choice than a RTX2070 Super card.
I based this on better scores for number of Cudas, GTexels/s, Pixelrate and FP32 score, but ‘people’ keep telling me I should buy the RTX2070 Super without providing factual info.
While reading your articles on this site I became convinced you really know about GPUs (and more), so I really would like to ask you which of these two cards would be best for my use case.

Sorry for my crappy english and thank you for reading.

Bernard (from the Netherlands)

Hey Bernard,

Thanks for dropping a line and no worries about your English – it’s not crappy and I understood you just fine!

You are correct – the GTX 1080 Ti has better specs compared to the RTX 2070 Super. After all, the 1080 Ti is a higher-tiered GPU. However, you also have to take into consideration the price to performance ratio when choosing a GPU. For example, the cheapest 1080 Ti available at the moment is priced at $1,189 while the most affordable RTX 2070 Super can be had for just $499.

If you have the budget for it, you can go ahead and buy the GTX 1080 Ti. Or better yet, go for the RTX 2080 Ti which is priced almost the same as the 1080 Ti. However, if you want to get the better price to performance ratio, go for the RTX 2070 Super. You also have the option of going for two (2) RTX 2070 Super GPUs and do a dual GPU configuration if your motherboard supports it. Having two (2) RTX 2070 Super is a lot more powerful than a single 1080 Ti or 2080 Ti, not to mention it’s cheaper.

To get a better picture of how these different GPUs perform, please check this article out: https://www.cgdirector.com/octanebench-benchmark-results/

Cheers,
Alex

Enes

Hey Alex,

great post and very informative! We are looking to build a GPU rendering machine at my office since we have been doing a lot of animations lately. At the moment we are using 3ds max + Vray (CPU) and we outsource to a render farm for final rendering production. Given the expense of the renderfarm we are now looking to allocate that money to build an in-house rendering rig and to move to a full GPU rendering system.

Worth noting we work in architecture and our models are usually huge and very RAM heavy.

We are looking at 4x RTX 2080ti system to begin with and an initial budget of $7-8k for hardware on this first setup (in the future, if profitable, we might expand and upgrade or build additional machines).

My question to you is how to deal with a 4x GPU system. From your post I can tell the best combo is to assign 1 GPU to 1 frame at a time, so as to avoid the idle downtimes related to assigning all GPUs to 1 frame sequentially. Now, since our scenes are very memory heavy, the VRAM would be capped at 11GB (in the case of an RTX 2080 Ti).

Have you ever used NVLINK in a similar system and if yes, what configuration would you suggest and what do you think is the advantage (if any) of running GPU renderings thru NVLINK vs 4x separate cards individually?

From what i have gathered online my assumptions are:

– Can’t connect more than 2 cards at a time using NVLINK.
– VRAM is doubled with NVLINK, but CUDA core instructions run pretty much the same way (instructions are sent in parallel regardless, no?)
– On 4X 2080ti system I can potentially have a 2x (2x2080ti) system with a total of 22GB VRAM per scene and can potentially send 1 frame per coupled GPUs.

Does this make sense? Is it even feasible? And if yes, what are the pros/cons compared to a typical 4x setup.
I found very little (and sometimes contradictory) information on NVLINK usage for GPU rendering that I would really like to have you and your experience shine some light on this obscure topic!

Thank you again for reading through all this and looking forward to your reply.

Cheers,
Enes

Graham Allen

Good morning Alex,

I love this site and appreciate all of the knowledge you share. The articles are so informative so thank you very much.

My fiancé is a designer using CAD and Vray to render her scenes, all on my older PC, but it really struggles in rendering the active scene (sometimes it takes 5 hours to get a good image). Below is my current PC setup:

Motherboard: Asrock H170 M Pro4 Motherboard
Ram: HyperX Fury 16gb (2x8gb)
CPU: Intel core i5 Skylake 6500/3.2 GHz
Case: Rosewill Line M Micro ATX Mini Tower

As you can see its not really designed for rendering (I built this for live streaming)

My question is this- we were thinking of building a workstation with a budget of around £2000 but would you recommend just upgrading what we have instead? I am torn whether to buy a good GPU and get the ram up to 64gb and lastly put in the i9 9900k processor.

Do you have a recommendation?

Thank you very much

Hey Graham,

Thanks for dropping a line and thank you for the kind words!

Upgrading what you have is not possible because your Asrock H170 M Pro4 Motherboard doesn’t support the i9-9900K CPU. The Asrock H170 can only support up to the i7-7700K. That said, I suggest that you start from scratch and build a new workstation dedicated for rendering from the ground up, so to speak. And since you will be doing a lot of rendering, I suggest that you invest in a CPU with high-core count like the Threadripper CPUs.

Given your budget of £2000 (which is around $ 2,491.90), I came up with a possible build for your use case scenario. Please see below:

Parts List:
CPU: AMD Threadripper 2970WX 3.0GHz 24-Core Processor ($910.00)
CPU Cooler: Noctua NH-U14S TR4 ($79.90)
Motherboard: Gigabyte X399 Designare EX ATX TR4 ($429.99)
GPU: NVIDIA GTX 1660 6GB – Gigabyte Windforce ($229.99)
Memory: 64GB (4 x 16GB) Corsair Vengeance LPX DDR4-3200 CL16 ($319.99)
Storage SSD: Crucial MX500 500GB 2.5″ Solid State Drive ($59.99)
Power Supply: Seasonic Focus Plus Gold 650W ATX 2.4 Power Supply ($99.99)
Case: Fractal Design Define XL R2 Titanium Big Tower Case ($138.37)

The total comes up to $2268.22 which is around £1820 and some change. The remaining amount could be used to invest in a better GPU like an RTX 2070 which basically brings the best price to performance ratio at the moment.

Also, you may want to check the site’s PC Builder Tool at https://www.cgdirector.com/pc-builder/ for recommendations based on budget and use case scenario.

Cheers,
Alex

Julio Verani

Hi Alex!

I’m here in a huge dilemma and I hope you can help me out.

I’m about to make an upgrade in my PC (it’s already 6 yrs old) because the rendering times are becoming hazardous for my projects timeline.

I saw the new Ryzen 3900X as a nice catch for this new rig, since rendering benefits from its high core count and good clock speeds, but now it’s out of stock and without any prediction of when the supply will normalize.

Then I look the other way and see the i9 9900K with a lower core count but faster clocks, with plenty of availability everywhere.

I ask: will I regret investing in the i9 setup now, rather than waiting longer (maybe much longer) for the 3900X?

Hi Julio,

Thanks for dropping a line!

You are right – with the type of work you do, you will definitely benefit from a CPU with a high core count and good clock speeds. If you can’t get your hands on the Ryzen 9 3900X, the i9-9900K from Intel will be a good alternative. And no, you will not regret investing in an i9 rig right now.

The i9-9900K may have fewer cores and a slightly slower base clock speed than the Ryzen 9 3900X (8 cores vs 12 cores and 3.6 GHz vs 3.8 GHz) but it does have a slightly faster Turbo Boost speed of 5.0 GHz compared to the 4.6 GHz Max Boost speed of the 3900X. I’d say this evens out the playing field a bit, so to speak, and makes the 9900K on par with the 3900X in most single-core workloads.

Lastly, I suggest that you check the site’s PC Builder Tool at https://www.cgdirector.com/pc-builder/ to see what other components will best go with your rig should you decide to go for an i9-9900K build.

You are right, AMD is really making it difficult currently to decide between i9 and Ryzen, but the availability unfortunately does not seem to be getting much better soon. If you can wait a while until 3900X prices return to normal, feel free to do so. If not, go with the i9 9900K, but beware that the LGA 1151 socket is not as future-proof as the AM4 socket currently.

Cheers,
Alex

Julio Verani

Yeah, the rig is already selected (all components) just the issue of the CPU.

AMD Ryzen 9 3900X 3.8 GHz 12-Core
MSI MPG X570 GAMING EDGE WIFI ATX AM4
Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3200
Samsung 860 Evo 1 TB 2.5″ SSD
MSI GeForce RTX 2080 8 GB GAMING X TRIO

My current bottleneck is indeed rendering, and there the 3900X rules over the i9, so I guess I’ll wait a bit longer. Since I’ve been working on my PC for 6 years, a month or two more won’t really cause me any problems.

Thanks again for the reply!