Best Hardware for GPU Rendering in Octane – Redshift – Vray (Updated)

by Alex
CGDirector is Reader-supported. When you buy through our links, we may earn an affiliate commission.

Graphics Card (GPU) based render engines such as Redshift3D, Octane or VRAY-RT have matured quite a bit over the last few years and are starting to overtake CPU-based Render-Engines.

But what hardware gives the best-bang-for-the-buck and what do you have to keep in mind when building your GPU-Workstation compared to a CPU Rendering Workstation? Building a 3D Modeling and CPU Rendering Workstation can be somewhat straightforward, but highly optimizing for GPU Rendering is a whole other story.

So what are the best Hardware-Components and the best GPU for rendering with Octane, Redshift3D or VRAY-RT that are also affordable? Let’s take a look:

Best Hardware for GPU Rendering

Processor

Since GPU-Render Engines use the GPU to render, technically you should go for a max-core-clock CPU like the Intel i9 9900K that clocks at 3.6GHz (5GHz Turbo) or the AMD Ryzen 9 3900X that clocks at 3.8GHz (4.6GHz Turbo).

That said though, there is another factor to consider when choosing a CPU: PCIe-Lanes.

GPUs are attached to the CPU via PCIe-Lanes on the motherboard. Different CPUs support different amounts of PCIe-Lanes and Top-tier GPUs usually need 16x PCIe 3.0 Lanes to run at full performance.

The i9 9900K/3900X have 16 GPU<->CPU PCIe-Lanes, meaning you could use only one GPU at full speed with this type of CPU.

If you want more than one GPU at full speed, you would need a different CPU that supports more PCIe-Lanes, like the AMD Threadripper CPUs that have 64 PCIe-Lanes (e.g. the AMD Threadripper 2950X), the i7 9800X (28 PCIe-Lanes) or the i9 9900X-series CPUs that support 44 PCIe-Lanes.

GPUs, though, can also run in lower speed modes such as 8x PCIe 3.0 Speeds and then also use up fewer PCIe-Lanes (8x). Usually, there is a negligible difference in Rendering Speed when having GPUs run in 8x mode instead of 16x mode.

This would mean you could run 2x GPUs on an i9 9900K in 8x PCIe mode, 3x GPUs on an i7 9800X and 5x GPUs on an i9 9900X (given the Mainboard supports this configuration).
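
To make the lane math explicit, here is a minimal Python sketch. It simply divides the CPU's GPU-facing lanes by the lane width per card, using the lane counts quoted above, and ignores lanes a board may reserve for M.2 slots or other onboard devices:

```python
# Rough lane math: how many GPUs fit on a CPU's GPU-facing PCIe lanes?
# Lane counts are the ones quoted in this article; real boards may
# reserve some lanes for M.2/SATA, so always check the manual.

def max_gpus(cpu_lanes: int, lanes_per_gpu: int) -> int:
    return cpu_lanes // lanes_per_gpu

cpus = {
    "i9 9900K": 16,
    "i7 9800X": 28,
    "i9 9900X": 44,
    "Threadripper 2950X": 64,
}

for name, lanes in cpus.items():
    print(f"{name:>20}: {max_gpus(lanes, 16)} GPU(s) @ x16, "
          f"{max_gpus(lanes, 8)} GPU(s) @ x8")
```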

CPUs that have a high number of PCIe-Lanes usually fall into the HEDT Platform range and are usually also great for CPU Rendering as they tend to have more cores and therefore better multi-core performance.

PCIe-Lanes Comparison

When actively rendering and your scene fits nicely into the GPUs VRAM, the speed of GPU Render Engines is of course mainly dependent on GPU performance.

Some processes though that happen before and during rendering rely heavily on the performance of the CPU, Hard-Drive, and network.

For example, extracting and preparing Mesh Data to be used by the GPU, loading textures from your Hard-Drive, and preparing the scene data.

In very complex scenes, these processing stages will take lots of time and can bottleneck the overall rendering speed if a low-end CPU, Disk, and RAM are employed.

If your scene is too large for your GPU memory, the GPU Render Engine will need to access your System RAM or even swap to disk, which will considerably slow down the rendering.

Best Memory (RAM) for GPU Rendering

Different kinds of RAM won’t speed up your GPU Rendering all that much. You do have to make sure that you have enough RAM though, or else your System will crawl to a halt.

I recommend keeping the following rule of thumb in mind to optimize performance as much as possible:

  • Get System RAM that is about double the combined VRAM of all your GPUs (e.g. 4x GPUs with 11GB each = 44GB of combined VRAM, so aim for roughly 88GB of RAM or more; see the sketch below).
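
A minimal sketch of that rule in Python (the doubling factor is the rule of thumb above, not a hard requirement):

```python
# Rule of thumb from above: System RAM ~ 2x the combined VRAM of all GPUs.

def recommended_ram_gb(vram_per_gpu_gb: int, num_gpus: int) -> int:
    return 2 * vram_per_gpu_gb * num_gpus

# e.g. 4x GTX 1080 Ti (11GB each): 2 * 44GB = 88GB,
# which you would round up to the next common kit size (128GB).
print(recommended_ram_gb(11, 4))  # 88
```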

Best Graphics Card for Rendering

To use Octane and Redshift you will need a GPU that has CUDA-Cores, meaning you will need an NVIDIA GPU. VRAY-RT additionally supports OpenCL meaning you could use an AMD card here.

The best bang-for-the-buck NVIDIA cards are the NVIDIA RTX 2070 (2304 CUDA Cores, 8GB VRAM), RTX 2080 (2944 CUDA Cores, 8GB VRAM) and the RTX 2080 Ti (4352 CUDA Cores, 11GB VRAM).

On the high-end, the currently highest possible performance is offered by the NVIDIA Titan V (12GB VRAM) and the Titan RTX, which also comes with a hefty 24GB of Video RAM.

These Cards though have worse Performance per Dollar as they are targeted at a different audience and VRAM is very expensive but not necessarily needed in such high capacities for GPU Rendering.

GPUs that have 12GB of Video RAM or more can best handle high-poly scenes with over 200 million unique objects. Take a look at the performance per dollar tables below, though, to get an overview of how expensive some of these cards can get without offering that much more performance.

GPU Cooling

Founders Edition Blower Style Cooler

  • PRO: Better Cooling when stacking more than one card
  • CON: Louder than Open-Air Cooling

Open-Air Cooling

  • PRO: Quieter than Blower Style, Cheaper
  • CON: Worse Cooling when stacking cards

Hybrid AiO Cooling (All-in-One Watercooling Loop with Fans)

  • PRO: Best All-In-One Cooling for stacking cards
  • CON: More Expensive, needs room for radiators in Case

Watercooling

  • PRO: Best temps when stacking cards, Quiet, can use only single slot height
  • CON: Needs lots of extra room in the case for tank and radiators, More Expensive

NVIDIA GPUs have a boosting technology (GPU Boost) that automatically overclocks your GPU to a certain degree, as long as it stays within predefined temperature and power limits. So making sure a GPU stays as cool as possible will improve its performance.

You can see this effect especially in Laptops, where there is usually not much room for cooling, and the GPUs tend to get very hot and loud and throttle very early. So if you are thinking of Rendering on a Laptop, keep this in mind.

Power Supply

Be sure to get a strong enough Power Supply for your system. Most cards have a power draw of around 180-250W, the CPU around 100W, plus any additional Hardware in your case.

I recommend a 500W PSU for a Single-GPU-Build. Add 250W for every additional GPU. Good PSU manufacturers to look out for are Corsair, be quiet!, Seasonic, and Cooler Master, but you might prefer others.
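
Expressed as a tiny formula (this is just the rule of thumb above; use a proper wattage calculator for the final pick):

```python
# PSU sizing rule of thumb from above: 500W for a single-GPU build,
# plus 250W for every additional GPU.

def recommended_psu_watts(num_gpus: int) -> int:
    return 500 + 250 * (num_gpus - 1)

for gpus in (1, 2, 3, 4):
    print(f"{gpus} GPU(s): {recommended_psu_watts(gpus)}W")
# 1 GPU(s): 500W ... 4 GPU(s): 1250W
```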

There is a Wattage-Calculator here that lets you calculate how strong your PSU will have to be.

Mainboard & PCIe-Lanes

Make sure the Mainboard has the desired amount of PCIe-Lanes and does not share Lanes with SATA or M.2 slots. Also, be careful what PCI-E Configurations the Motherboard supports. Some have 3 or 4 PCI-E Slots but only support one x16 PCI-E Card.

This can get quite confusing. Check the Motherboard manufacturer’s website to be sure the card configuration you are aiming for is supported. Here is what you should be looking for in the Motherboard specifications:

Asus Rampage PCIE Lane Config

Image-Source: Asus

In the above example, you would be able to use (with a 40 PCIe-Lane CPU) 1 GPU in x16 mode, OR 2 GPUs both in x16 mode, OR 3 GPUs with one in x16 mode and two in x8 mode, and so on. Beware that a 28-PCIe-Lane CPU in this example would support different GPU configurations than the 40-Lane CPU.

Currently, the AMD Threadripper CPUs will give you 64 PCIe Lanes to hook your GPUs up to, if you want more you will have to go the multi-CPU route with Intel Xeons.

To confuse things a bit more, some Mainboards do offer four x16 GPU slots (which would need 64 PCIe-Lanes) on CPUs with only 44 PCIe-Lanes. How is this even possible?

Enter PLX Chips. On some motherboards, these chips serve as a type of switch, managing your PCIe-Lanes and leading the CPU to believe fewer Lanes are being used. This way, you can use e.g. 32 PCIe-Lanes with a 16-PCIe-Lane CPU or 64 PCIe-Lanes on a 44-Lane CPU. Beware though, only a few Motherboards have these PLX Chips. The Asus WS X299 Sage is one of them, allowing up to 7 GPUs to be used at x8 speed with a 44-Lane CPU, or even 4 x16 GPUs on a 44-Lane CPU.

This screenshot of the Asus WS X299 Sage Manual clearly states what type of GPU-Configurations are supported (Always check the manual before buying expensive stuff):

Asus WS X299 Sage

Image-Source: Asus Mainboard Manual

PCIe-Lane Conclusion: For Multi-GPU Setups, having a CPU with lots of PCIe-Lanes is important, unless you have a Mainboard that comes with PLX chips. Having GPUs run in x8 mode instead of x16 will only marginally slow down the performance. (Note though, PLX Chips won’t increase your total GPU-to-CPU bandwidth; they just make it possible to have more cards run in higher modes.)

Best GPU Performance / Dollar

Ok, so here they are: the lists everyone should be looking at when choosing the right GPU to buy, ranked by performance per dollar!

GPU Benchmark Comparison: Octane

This List is based on OctaneBench 4.00.

(It’s quite difficult to get an average Price for some of these cards since crypto-currency mining is so popular right now, so I used MSRP)

GPU Name      | VRAM | OctaneBench | Price $ MSRP | Performance/Dollar
--------------|------|-------------|--------------|-------------------
RTX 2070      | 8GB  | 210         | 550          | 0.381
GTX 1070      | 8GB  | 133         | 400          | 0.333
GTX 1070 Ti   | 8GB  | 153         | 450          | 0.340
GTX 1060      | 6GB  | 94          | 300          | 0.313
GTX 1080 Ti   | 11GB | 222         | 700          | 0.317
GTX 1080      | 8GB  | 148         | 550          | 0.269
RTX 2080      | 8GB  | 226         | 799          | 0.282
RTX 2080 Ti   | 11GB | 304         | 1199         | 0.253
TITAN XP      | 12GB | 250         | 1300         | 0.192
RTX Titan     | 24GB | 326         | 2700         | 0.120
Titan V       | 12GB | 396         | 3000         | 0.132
GTX TITAN Z   | 12GB | 189         | 2999         | 0.063
Quadro P6000  | 24GB | 139         | 3849         | 0.036
Quadro GP100  | 16GB | 284         | 7000         | 0.040
RTX 2060      | 6GB  | 170         | 350          | 0.485

Source: Complete OctaneBench Benchmark List
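
If you want to sanity-check these lists or redo them with current street prices, the Performance/Dollar columns are easy to recompute. Below is a minimal Python sketch; the two conventions (score per dollar, and scaled inverse render time per dollar) are my reading of how these tables were computed, so treat the scale factors as assumptions:

```python
# Recompute the Performance/Dollar columns of the lists in this article.
# Assumed conventions (consistent with the tables here, minor rounding
# aside): score-based benchmarks divide the score by the price;
# time-based benchmarks use the inverse of the render time, scaled to a
# readable magnitude.

def perf_per_dollar_score(score: float, price_usd: float) -> float:
    """Score-based benchmarks (higher is better), e.g. OctaneBench."""
    return score / price_usd

def perf_per_dollar_time(render_time: float, price_usd: float,
                         scale: float) -> float:
    """Time-based benchmarks (shorter is better), e.g. RedshiftBench."""
    return scale / (render_time * price_usd)

# RTX 2070 on OctaneBench: 210 points at $550 MSRP
print(perf_per_dollar_score(210, 550))           # 0.3818... (listed: 0.381)

# RTX 2070 on RedshiftBench: 11.35 minutes at $550 (scale 10,000)
print(perf_per_dollar_time(11.35, 550, 10_000))  # 1.6019... (listed: 1.601)

# GTX 1070 in the V-Ray bench: 1:25 min = 85 seconds at $400 (scale 100,000)
print(perf_per_dollar_time(85, 400, 100_000))    # 2.9411... (listed: 2.941)
```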

GPU Benchmark Comparison: Redshift

The Redshift Render Engine has its own Benchmark, so here is a list based on RedshiftBench (render time in minutes; shorter is better). Note how multiple cards scale (see the 1080 Ti entries):

GPU Name        | VRAM | RedshiftBench (min) | Price $ MSRP | Performance/Dollar
----------------|------|---------------------|--------------|-------------------
RTX 2070        | 8GB  | 11.35               | 550          | 1.601
GTX 1070        | 8GB  | 17.11               | 400          | 1.461
GTX 1080 Ti     | 11GB | 11.44               | 700          | 1.248
RTX 2080        | 8GB  | 10.59               | 799          | 1.181
4x GTX 1080 Ti  | 11GB | 3.07                | 2800         | 1.163
2x GTX 1080 Ti  | 11GB | 6.15                | 1400         | 1.161
8x GTX 1080 Ti  | 11GB | 1.57                | 5600         | 1.137
GTX 1080        | 8GB  | 16.00               | 550          | 1.136
RTX 2080 Ti     | 11GB | 8.38                | 1199         | 0.995
TITAN XP        | 12GB | 10.54               | 1300         | 0.729
Titan V         | 12GB | 8.50                | 3000         | 0.392
Quadro P6000    | 24GB | 11.31               | 3849         | 0.229
Quadro GP100    | 16GB | 9.57                | 7000         | 0.149
4x RTX 2080 Ti  | 11GB | 2.28                | 4796         | 0.914

Source: Complete Redshift Benchmark Results List

GPU Benchmark Comparison: VRAY-RT

And here is a list based on the VRAY-RT Bench. Note how the GTX 1080 interestingly seems to perform worse than the GTX 1070 in this benchmark:

GPU Name        | VRAM | VRAY-Bench | Price $ MSRP | Performance/Dollar
----------------|------|------------|--------------|-------------------
GTX 1070        | 8GB  | 1:25 min   | 400          | 2.941
RTX 2070        | 8GB  | 1:05 min   | 550          | 2.797
GTX 1080 Ti     | 11GB | 1:00 min   | 700          | 2.380
2x GTX 1080 Ti  | 11GB | 0:32 min   | 1400         | 2.232
GTX 1080        | 8GB  | 1:27 min   | 550          | 2.089
4x GTX 1080 Ti  | 11GB | 0:19 min   | 2800         | 1.879
TITAN XP        | 12GB | 0:53 min   | 1300         | 1.451
8x GTX 1080 Ti  | 11GB | 0:16 min   | 5600         | 1.116
TITAN V         | 12GB | 0:41 min   | 3000         | 0.813
Quadro P6000    | 24GB | 1:04 min   | 3849         | 0.405

Source: VRAY Benchmark List

Speed up your Multi-GPU Rendertimes

So, unfortunately, GPUs don’t scale linearly. 2 GPUs render an image about 1.8 times faster. Having 4 GPUs will only render about 3.5x faster. This is quite a bummer, isn’t it? Having multiple GPUs communicate with each other to render the same task costs so much performance that in a 4-GPU rig, roughly half a GPU’s worth of compute is lost to management overhead.

One solution could be the following: When final rendering image sequences, use as few GPUs as possible per task.

Let’s make an example:

What we usually do in a multi-GPU rig is have all GPUs work on the same task. A single task, in this case, would be one image in our image sequence.

4 GPUs together render one Image and then move on to the next Image in the Image sequence until the entire sequence has been rendered.

We can speed up the preparation time per GPU (when the GPUs sit idle waiting for a task to start) and bypass some of the multi-GPU slow-downs by having each GPU render its own task.

Now 4 GPUs are simultaneously rendering 4 Images, but each GPU is rendering its own Image.
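
To put rough numbers on this, here is a small back-of-the-envelope model in Python. The 3.5x scaling factor is the one quoted above; the frame time and frame count are made-up example values:

```python
# Compare the two strategies for rendering an image sequence on a
# 4-GPU rig. Frame time and frame count are hypothetical; the 3.5x
# multi-GPU scaling factor is the one quoted in this article.

FRAME_TIME = 10.0   # minutes for ONE GPU to render ONE frame
NUM_FRAMES = 100
NUM_GPUS   = 4
SCALING_4X = 3.5    # effective speedup of 4 GPUs sharing one frame

# Strategy A: all 4 GPUs cooperate on each frame, frames in sequence.
time_all_gpus_per_frame = NUM_FRAMES * FRAME_TIME / SCALING_4X

# Strategy B: each GPU renders its own frame, 4 frames in flight at once.
time_one_gpu_per_frame = (NUM_FRAMES / NUM_GPUS) * FRAME_TIME

print(f"All GPUs per frame: {time_all_gpus_per_frame:.0f} min")  # ~286 min
print(f"One GPU per frame : {time_one_gpu_per_frame:.0f} min")   # 250 min
```

In this simple model, the one-GPU-per-frame strategy is already about 13% faster, before even counting the per-task preparation time it saves.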

Some 3D-Software might have this feature built-in, if not, it is best to use some kind of Render Manager, such as Thinkbox Deadline (Free for up to 2 Nodes/Computers).

Beware though, that you might have to increase your System RAM a bit and have a strong CPU, since every GPU-Task needs its own share of RAM and CPU performance.

Buying GPUs

NVIDIA and AMD GPUs are both hard to come by for a reasonable price nowadays, since mining is so popular. Graphics Cards are mostly out of stock, and even when they are available, they are nowhere near MSRP. There is a site called nowinstock that gives you an overview of the most popular places to buy GPUs in your country and notifies you as soon as cards are available.

I put together some Builds in different Price ranges here for you to get a head start in configuring your own dream-build.

Redshift vs. Octane

Another thing I am asked often is if one should go with the Redshift Render Engine or Octane.

As I myself have used both extensively, in my experience, thanks to the Shader Graph Editor and the vast Multi-Pass Manager of Redshift, I like to use the Redshift Render Engine more for doing work that needs complex Material Setups and heavy Compositing.

Octane is great if you want results fast, as it is slightly easier to learn for beginners. But this, of course, is a personal opinion and I would love to hear yours!

Custom PC-Builder

If you want to get the best parts within your budget, you should have a look at the web-based PC-Builder Tool that I’ve created.

Select the main purpose that you’ll use the computer for and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.

CGDirector PC-Builder Tool




Alex from CGDirector - post author

Hi, I'm Alex, a Freelance 3D Generalist, Motion Designer and Compositor.

I've built a multitude of Computers, Workstations and Renderfarms and love to optimize them as much as possible.

Feel free to comment and ask for suggestions on your PC-Build or 3D-related Problem, I'll do my best to help out!

209 Comments

Enes

Hey Alex,

great post and very informative! We are looking to build a GPU rendering machine at my office since we have been doing a lot of animations lately. At the moment we are using 3ds max + Vray (CPU) and we outsource to a render farm for final rendering production. Given the expense of the renderfarm we are now looking to allocate that money to build an in-house rendering rig and to move to a full GPU rendering system.

Worth noting we work in architecture and our models are usually huge and very RAM heavy.

We are looking at 4x RTX 2080ti system to begin with and an initial budget of $7-8k for hardware on this first setup (in the future, if profitable, we might expand and upgrade or build additional machines).

My question to you is on how to deal with a 4x GPU system. From your post I can tell the best combo is to assign 1 GPU to 1 frame at a time, so as to avoid the idle downtimes related to assigning all GPUs to 1 frame sequentially. Now, since our scenes are very memory heavy, the VRAM would be capped at 11GB (in the case of an RTX 2080 Ti).

Have you ever used NVLINK in a similar system and if yes, what configuration would you suggest and what do you think is the advantage (if any) of running GPU renderings thru NVLINK vs 4x separate cards individually?

From what i have gathered online my assumptions are:

– Can’t connect more than 2 cards at a time using NVLINK.
– VRAM is doubled on NVLINK but CUDA cores instructions run pretty much the same way (instructions are sent in parallel regardless, no?)
– On a 4x 2080ti system I can potentially have a 2x (2x 2080ti) setup with a total of 22GB VRAM per scene and can potentially send 1 frame per coupled pair of GPUs.

Does this make sense? Is it even feasible? And if yes, what are the pros/cons compared to a typical 4x setup.
I found very little (and sometimes contradictory) information on NVLINK usage for GPU rendering, so I would really love to have you and your experience shine some light on this obscure topic!

Thank you again for reading through all this and looking forward to your reply.

Cheers,
Enes

Graham Allen

Good morning Alex,

I love this site and appreciate all of the knowledge you share. The articles are so informative so thank you very much.

My fiancé is a designer using CAD and Vray to render her scenes, all on my older PC, but it really struggles in rendering the active scene (sometimes it takes 5 hours to get a good image). Below is my current PC setup:

Motherboard: Asrock H170 M Pro4 Motherboard
Ram: HyperX Fury 16gb (2x8gb)
CPU: Intel core i5 Skylake 6500/3.2 GHz
Case: Rosewill Line M Micro ATX Mini Tower

As you can see, it’s not really designed for rendering (I built this for live streaming).

My question is this- we were thinking of building a workstation with a budget of around £2000 but would you recommend just upgrading what we have instead? I am torn whether to buy a good GPU and get the ram up to 64gb and lastly put in the i9 9900k processor.

Do you have a recommendation?

Thank you very much

Hey Graham,

Thanks for dropping a line and thank you for the kind words!

Upgrading what you have is not possible because your Asrock H170 M Pro4 Motherboard doesn’t support the i9-9900K CPU. The Asrock H170 can only support up to the i7-7700K. That said, I suggest that you start from scratch and build a new workstation dedicated to rendering from the ground up, so to speak. And since you will be doing a lot of rendering, I suggest that you invest in a CPU with a high core count like the Threadripper CPUs.

Given your budget of £2000 (which is around $ 2,491.90), I came up with a possible build for your use case scenario. Please see below:

Parts List:
CPU: AMD Threadripper 2970WX 3.0GHz 24-Core Processor ($910.00)
CPU Cooler: Noctua NH-U14S TR4 ($79.90)
Motherboard: Gigabyte X399 Designare EX ATX TR4 ($429.99)
GPU: NVIDIA GTX 1660 6GB – Gigabyte Windforce ($229.99)
Memory: 64GB (4 x 16GB) Corsair Vengeance LPX DDR4-3200 CL16 ($319.99)
Storage SSD: Crucial MX500 500GB 2.5″ Solid State Drive ($59.99)
Power Supply: Seasonic Focus Plus Gold 650W ATX 2.4 Power Supply ($99.99)
Case: Fractal Design Define XL R2 Titanium Big Tower Case ($138.37)

The total comes up to $2268.22 which is around £1820 and some change. The remaining amount could be used to invest in a better GPU like an RTX 2070 which basically brings the best price to performance ratio at the moment.

Also, you may want to check the site’s PC Builder Tool at https://www.cgdirector.com/pc-builder/ for recommendations based on budget and use case scenario.

Cheers,
Alex

Julio Verani

Hi Alex!

I’m here in a huge dilemma and i hope you can help me out.

I’m about to make an upgrade in my PC (it’s already 6 yrs old) because the rendering times are becoming hazardous for my projects timeline.

I saw the new Ryzen 3900x as a nice catch for this new rig, since rendering benefits from its high core count and good clock speeds, but now it’s out of stock and without any prediction of when the supply will normalize.

Then I look the other way and see the i9 9900k with a lower core count but faster clocks, with plenty of availability everywhere.

I ask: will I regret investing in the i9 setup now, rather than waiting longer (maybe much longer) for the 3900x?

Hi Julio,

Thanks for dropping a line!

You are right – with the type of work you do, you will definitely benefit from a CPU with a high core count and good clock speeds. If you can’t get your hands on the Ryzen 9 3900X, the i9-9900K from Intel will be a good alternative. And no, you will not regret investing in an i9 rig right now.

The i9-9900K may have fewer cores and a slightly slower base clock speed than the Ryzen 9 3900X (8 cores vs 12 cores and 3.6 GHz vs 3.8 GHz), but it does have a slightly faster Turbo Boost speed of 5.0 GHz compared to the 4.6 GHz Max Boost speed of the 3900X. I’d say this evens out the playing field a bit, so to speak, and makes the 9900K on par with the 3900X in most single-core workloads.

Lastly, I suggest that you check the site’s PC Builder Tool at https://www.cgdirector.com/pc-builder/ to see what other components will best go with your rig should you decide to go for an i9-9900K build.

You are right, AMD is really making it difficult currently to decide between i9 and ryzen, but the availability unfortunately does not seem to be getting much better soon. If you can wait a while until 3900X prices return to normal, feel free to do so. If not, go with the i9 9900k but beware that the LGA 1151 socket is not as future proof as an AM4 socket currently.

Cheers,
Alex

Julio Verani

Yeah, the rig is already selected (all components) just the issue of the CPU.

AMD Ryzen 9 3900X 3.8 GHz 12-Core
MSI MPG X570 GAMING EDGE WIFI ATX AM4
Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3200
Samsung 860 Evo 1 TB 2.5″ SSD
MSI GeForce RTX 2080 8 GB GAMING X TRIO

My current bottleneck is indeed rendering, and there the 3900x rules over the i9, so I guess I’ll wait a bit longer. Since I’ve been working on my PC for 6 years, a month or two more won’t really cause me any problems.

Thanks again for the reply!

Tiffany Chau

Hi Alex!

Your site has been a great help for my first pc build! I’m on budget as a student and I’m mainly working in software such as rhino, solidworks, zbrush, keyshot, substance painter, and blender. My current build is using ryzen 5 3600, x570 asrock steel legend (for future proofing), 500w evga 80 plus psu, and I’m testing out a PNY XLR8 RTX 2060 blower style card. Do you think this build should be okay? Also I noticed the AMD RX 5700/5700 xt for around the same price as the 2060, would you happen to have any opinion on whether it’d be worth switching to these gpus instead? I’d like to leave some headroom for rendering so the only thing I was unsure about was the lack of ray tracing. Thanks for reading this and I’d love to hear your thoughts!

Thanks,. Tiffany.

Aigars

I would like to know about the 5700XT as well, whether AMD is now competitive in mainstream 2D-3D software tools for artists too, so I could just ignore the 2070 Super.
I am planning on upgrading my system myself, after 8 years of using an i7 2600k and 2012-era hardware. Planning to buy either a 5700XT or a 2070 Super.
The question about temperature and noise levels has been answered regarding the 5700XT, with the Gigabyte Windforce X3 model recently coming out and putting my worries about it to rest. The next question, and the last one, is about driver support and other technology feats that will allow me to do the same stuff at similar performance with AMD against NVIDIA.

The price difference is noticeable, but I want to make the best decision with a future-proof setup for years to come.

Jason

Alex,

First of all, great article. Kudos.
I am building my first render farm and will be using many of your suggestions. I have built multiple rigs for modeling and rendering, but never a rig with multiple GPUs. I will be using (4) 2080 Tis, and my question is on the install: do I need to bridge them using a 4-slot NVLink or not?

Thanks in advance.

Spencer Fitch

I have an Nvidia promo code for $500 off an RTX Titan. I would be upgrading from a 1080ti and I am using vray. With OptiX 7 coming out (giving vray the ability to use the RT cores), would this be a feasible upgrade?

Darko

Hey Alex, great article!
I have a question. I have an RTX 2080 TI and a GTX Titan X, a Maxwell-architecture GPU. Do you think it’s a good idea to pair them in the same system, and will it make a difference for the better or the worse?

Thanks a mill,
Darko

Sascha

Hi Alex,

I have been going through your reviews and must say: Amazing work! Thanks a lot.

What I wonder is: Why do you not report on Dual CPU motherboards/setups? Or any Setups that handle more than 8 GPUs? Is there anything that makes those options irrelevant for you?

I was looking to find a single machine setup that works with the Amfeltec rigs:

http://amfeltec.com/products/multi-gpu-cluster/

Use 4 of them (getting 16 external GPUs) and maybe even combine them with more internal cards…

Any thoughts on this kind of setup?

Thanks

Sascha

Jamie Holmes

So in terms of RAM requirements, if I am running 4x GTX 1080 Ti’s, each with 11GB of VRAM, I will need at least 128GB of system RAM, as the combined VRAM is 44GB and you recommend doubling this.

Is that correct?

Roozbeh

hi there, and thank you for sharing your knowledge… I need some help with my hardware configuration. I’m a 3D artist using this software on a daily basis: 3ds max, sketchup pro, autocad designer and Lumion pro 9, plus vray and corona render. I built my PC more than 3 years ago, and now I received my new GPU from Nvidia (RTX 2070 Super FE). I want to use it alongside my old ASUS ROG Matrix GTX 980 Ti 6GB.

But here is the question: how can I use it for GPU rendering with just a single monitor, and how should I install it in the motherboard’s PCIe slots with my limited CPU PCIe lanes?

Here is my PC Specs :

Windows 10 pro version 1903 OS Build 18363.239

Motherboard: Asus X99 Deluxe 3.1, BIOS 3802

CPU: i7-5820k overclocked to 4.00 GHz, just 28 lanes

RAM: 32GB Corsair Vengeance 3200 MHz

GPU: GTX 970 + uninstalled RTX 2070 Super FE

Case: Green X3+ Viper advanced super mid tower

CPU cooler: Green Glacier GLC-240-A 240mm water cooling (I’m afraid you may have never heard of that!)

PSU: 1200W 80+ Green GP1200B-OCDG

SSD: 500GB Samsung 860 Pro

HDD: WD 2TB + Seagate Barracuda 2TB

Monitor: Dell Quad HD 2K U2715H

Jamie Holmes

This should be fairly easy to answer.

Install the RTX 2070 in the first (TOP) PCIe slot and connect your monitor to this card. Install the GTX 970 in the third PCIe slot. Make sure both cards are properly powered by your PSU.

The 2070 will run at x16 and the 970 will run at x8 which in this config will be fine. Octane/Redshift will automatically recognize both cards and allocate workloads across them.

The only thing to be aware of here is that your available VRAM will only be 6GB, as you will be bottlenecked by the card with the lowest VRAM.