Best Hardware for GPU Rendering in Octane – Redshift – Vray (Updated)

by Alex Glawion  /  Updated

Graphics Card (GPU) based render engines such as Redshift, Octane, or VRAY have matured quite a bit and are overtaking CPU-based Render-Engines – both in popularity and speed.

But what hardware gives the best-bang-for-the-buck, and what do you have to keep in mind when building your GPU-Workstation compared to a CPU Rendering Workstation?

Building an all-round 3D Modeling and CPU Rendering Workstation can be somewhat straightforward, but optimizing GPU Rendering performance is a whole other story.

GPU render engines

So, what are the most affordable and best PC-Parts for rendering with Octane, Redshift, VRAY, or other GPU Render Engines?

Let’s take a look:

Best Hardware for GPU Rendering


Since GPU-Render Engines use the GPU(s) to render, you might think you should simply go for a max-core-clock CPU like the Intel i9 12900K or the AMD Ryzen 9 5950X, which clocks at 3.4GHz (4.9GHz Turbo).

At first glance, this makes sense because the CPU does help speed up some parts of the rendering process, such as scene preparation.

That said, though, there is another factor to consider when choosing a CPU: PCIe-Lanes.

GPUs are connected to the CPU via PCIe-Lanes on the motherboard. Different CPUs support a different number of PCIe-Lanes.

Top-tier GPUs usually need 16x PCIe 3.0 Lanes to run at full performance without bandwidth throttling.
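For context, here’s a rough sketch of what those links actually deliver. PCIe 3.0 carries roughly 0.985 GB/s per lane after encoding overhead, and each later generation doubles that (the per-lane constant is approximate):

```python
# Approximate usable PCIe bandwidth per link.
# PCIe 3.0: ~0.985 GB/s per lane (after 128b/130b encoding);
# each later generation doubles the per-lane rate.

def pcie_bandwidth_gbs(lanes: int, gen: int) -> float:
    per_lane_gen3 = 0.985  # GB/s, approximate
    return lanes * per_lane_gen3 * 2 ** (gen - 3)

print(f"x16 Gen3: {pcie_bandwidth_gbs(16, 3):.1f} GB/s")  # 15.8
print(f"x8  Gen4: {pcie_bandwidth_gbs(8, 4):.1f} GB/s")   # 15.8
print(f"x16 Gen4: {pcie_bandwidth_gbs(16, 4):.1f} GB/s")  # 31.5
```

Notice that an x8 PCIe 4.0 link matches an x16 PCIe 3.0 link, which is part of why running modern GPUs at x8 costs so little in practice.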

PCIE 4 x16 Slot

Image-Credit: MSI, Unify x570 Motherboard – A typical PCIe x16 Slot

Mainstream CPUs such as the i9 12900K/5950X have 16 GPU<->CPU PCIe-Lanes, meaning you could use only one GPU at full speed with these kinds of CPUs.

If you want to use more than one GPU at full speed, you would need a different CPU that supports more PCIe-Lanes.

AMD’s Threadripper CPUs, for example, are great for driving lots of GPUs.

They have 64 PCIe-Lanes (e.g., the AMD Threadripper 2950X or Threadripper 3960X)

GPUs, though, can also run in lower bandwidth modes such as 8x PCIe 3.0 (or 4.0) Speeds.

This also means they use up fewer PCIe-Lanes (namely 8x). Usually, there is a negligible difference in Rendering Speed when having current-gen GPUs run in 8x mode instead of 16x mode.

At x8 PCIe bandwidth, you could run two GPUs on an i9 12900K or Ryzen 9 5950X (for a total of 16 PCIe-Lanes, provided the Motherboard and Chipset support dual GPUs and there are sufficient PCIe Slots).

You could theoretically run 4 GPUs in x16 mode on a Threadripper CPU (= 64 PCIe-Lanes). Unfortunately, this is not supported, and the best you can achieve with Threadripper CPUs is an x16/x8/x16/x8 configuration.
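As a back-of-the-envelope check, you can divide a CPU’s GPU-facing lane budget by the link width you’re targeting. A simple sketch (the lane counts are the ones mentioned above; real motherboards only support fixed slot configurations, so treat the result as an upper bound):

```python
# Rough PCIe-lane budgeting: how many GPUs fit a CPU's lane budget
# at a given link width? Real motherboards only offer fixed slot
# configurations, so this is an upper bound, not a promise.

def max_gpus(cpu_lanes: int, lanes_per_gpu: int) -> int:
    return cpu_lanes // lanes_per_gpu

# Mainstream CPU (e.g. i9 12900K / Ryzen 9 5950X): 16 GPU-facing lanes
print(max_gpus(16, 16))  # 1 GPU at x16
print(max_gpus(16, 8))   # 2 GPUs at x8

# HEDT (e.g. Threadripper): 64 lanes
print(max_gpus(64, 16))  # 4 GPUs at x16 (in theory; boards cap this)
print(max_gpus(64, 8))   # 8 GPUs at x8
```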

CPUs with a high number of PCIe-Lanes usually fall into the HEDT (High-End Desktop) Platform range and are often great for CPU Rendering as well, as they tend to have more cores and, therefore, higher multi-core performance.

HEDT Processors

Here’s a quick bandwidth comparison between two Titan X GPUs running in x8/x8, x16/x8, and x16/x16 mode. The differences are within the margin of error.

Beware, though, that the Titan Xs in this benchmark certainly don’t saturate an x8 PCIe 3.0 bus, and the benchmark scene fits easily into the GPUs’ VRAM, meaning there is not much communication going on over the PCIe-Lanes.

PCIe-Lanes Comparison

When actively rendering and your scene fits nicely into the GPU’s VRAM (find out how much VRAM you need here), the speed of GPU Render Engines is dependent on GPU performance.

Some processes, though, that happen before and during rendering rely heavily on the performance of the CPU, Storage, and (possibly) your network.

For example, extracting and preparing Mesh Data to be used by the GPU, loading high-quality textures from your Storage, and preparing the scene data.

These processing stages can take considerable time in very complex scenes and will bottleneck overall rendering performance if a low-end CPU, Disk, or RAM is used.

If your scene is too large to fit into your GPU’s memory, the GPU Render Engine will need to access your System’s RAM or even swap to disk, which will slow down the rendering considerably.

Best Memory (RAM) for GPU Rendering

Different kinds of RAM won’t speed up your GPU Rendering all that much. You do have to make sure that you have enough RAM, though, or else your System will crawl to a halt.

Corsair Vengeance LPX

Image-Source: Corsair

Keep the following rules in mind to optimize for performance as much as possible:

  • To be safe, your RAM size should be at least 1.5 – 2x your combined VRAM size
  • Your CPU can benefit from higher Memory Clocks which can in turn slightly speed up the GPU rendering
  • Your CPU can benefit from more Memory Channels on certain Systems which in turn can slightly speed up your GPU rendering
  • Look for lower Latency RAM (e.g. CL14 is better than CL16) which can benefit your CPU’s performance and can therefore also speed up your GPU rendering slightly
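The first rule of thumb above is easy to apply yourself. A minimal sketch (the GPU counts and VRAM sizes below are just examples):

```python
# Rule of thumb from above: system RAM should be about 1.5-2x the
# combined VRAM of all GPUs, to leave headroom for scene extraction
# and caching.

def recommended_ram_gb(vram_per_gpu_gb: float, gpu_count: int,
                       factor: float = 2.0) -> float:
    return vram_per_gpu_gb * gpu_count * factor

# Example: 2x RTX 3090 (24 GB each) -> 96 GB RAM at the 2x factor
print(recommended_ram_gb(24, 2))        # 96.0
print(recommended_ram_gb(24, 2, 1.5))   # 72.0
# Example: 4x RTX 2080 Ti (11 GB each)
print(recommended_ram_gb(11, 4))        # 88.0
```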

RAM Speed and Latency

Take a look at our RAM (Memory) Guide here, which should get you up to speed.

If you just need a quick recommendation, look into Corsair Vengeance Memory, as we have tested these Modules in a lot of GPU Rendering systems and can recommend them without hesitation.

Best Graphics Card for GPU Rendering

Finally, the GPU:

To use Octane and Redshift you will need a GPU that has CUDA-Cores, meaning you will need an NVIDIA GPU.

Some versions of VRAY used to additionally support OpenCL, meaning you could use an AMD GPU, but this is no longer the case.

If you are using other Render Engines, be sure to check compatibility here.

You’ll find the best NVIDIA GPUs for rendering ranked in the benchmark tables further down in this article.

Nvidia RTX 2070

Image-Source: Nvidia

Although some Quadro GPUs offer even more VRAM, the value of these “Pro”-level GPUs is worse for GPU rendering compared to mainstream or “Gaming” GPUs.

There are some features such as ECC VRAM, higher Floating Point precision, or official Support and Drivers that make them valuable in the eyes of enterprise, Machine-learning, or CAD users, to name a few.

For your GPU Rendering needs, stick to mainstream RTX GPUs for the best value.

GPU Cooling

Blower Style Cooler (Recommended for Multi-GPU setups)

  • PRO: Better Cooling when closely stacking more than one card (heat is blown out of the case)
  • CON: Louder than Open-Air Cooling

Open-Air Cooling (Recommended for single GPU Setups)

  • PRO: Quieter than Blower Style, Cheaper, more models available
  • CON: Bad Cooling when stacking cards (heat stays in the case)

Blower style vs open-air GPU

Hybrid AiO Cooling (All-in-One Watercooling Loop with Fans)

  • PRO: Best All-In-One Cooling for stacking cards
  • CON: More Expensive, needs room for radiators in Case

Full Custom Watercooling

  • PRO: Best temps when stacking cards, Quiet, some cards only use single slot height
  • CON: Needs lots of extra room in the case for tank and radiators, Much more expensive

GPU Cooling Variants - Blower - open air - hybrid - water cooled

NVIDIA GPUs have a Boosting Technology that automatically overclocks your GPU to a certain degree, as long as it stays within predefined temperature and power limits.

So making sure your GPUs stay as cool as possible will allow them to boost longer and therefore improve performance.

You can observe this effect, especially in Laptops, where there is little room for cooling, and the GPUs tend to get very hot and loud and throttle very early. So if you are thinking of Rendering on a Laptop, keep this in mind.

A quick note on Riser Cables: with PCIe- or Riser-Cables, you can place your GPUs further away from the PCIe-Slot of your Motherboard, either to show off your GPU vertically in front of the Case’s tempered-glass side panel, or to solve space constraints (e.g. the GPUs don’t fit side by side).

If this is you, take a look at our Guide on finding the right Riser-Cables for your need.

Power Supply

Be sure to get a strong enough Power Supply for your system. Most GPUs have a typical Power Draw of around 180-250W, though the Nvidia RTX 3080 and 3090 GPUs can draw even more.

I recommend at least 650W for a Single-GPU Build. Add 250W for every additional GPU in your System.
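As a sketch of that rule of thumb (a rough estimate only; use a proper wattage calculator for your exact part list, as suggested below):

```python
# PSU sizing rule from the text: 650 W baseline for a single-GPU
# build, plus 250 W for each additional GPU. Rough estimate only.

def psu_wattage(gpu_count: int) -> int:
    return 650 + 250 * (gpu_count - 1)

print(psu_wattage(1))  # 650
print(psu_wattage(2))  # 900
print(psu_wattage(4))  # 1400
```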

Good PSU manufacturers to look out for, are Corsair, beQuiet, Seasonic, and Coolermaster but you might prefer others.

Corsair AX760W PSU

Image-Credit: Corsair

Use this Wattage-Calculator to find out how strong your PSU will have to be by selecting your planned components.

Motherboard & PCIe-Lanes

Make sure the Motherboard has the desired amount of PCIe-Lanes and does not share Lanes with SATA or M.2 slots.

Be careful about which PCIe configurations the Motherboard supports. Some have 3 or 4 physical PCIe Slots but only support one card at x16 (electrical) speed.

This can get quite confusing.

Check the Motherboard manufacturer’s Website to be sure the Multi-GPU configuration you are aiming for is supported.

Here is what you should be looking for in the Motherboard specifications:

Asus Rampage PCIE Lane Config

Image-Source: Asus

In the above example, with a 40-PCIe-Lane CPU, you would be able to use 1 GPU in x16 mode, OR 2 GPUs both in x16 mode, OR 3 GPUs (one in x16 mode and two in x8 mode), and so on. Beware that a 28-PCIe-Lane CPU in this example would support different GPU configurations than the 40-Lane CPU.

Currently, the AMD Threadripper CPUs will give you 64 PCIe Lanes to hook your GPUs up to, if you want more you will have to go the Epyc or Xeon route.

To confuse things even more, some Motherboards offer four x16 slots (which would need 64 PCIe-Lanes) on CPUs with only 44 PCIe-Lanes. How is this even possible?

Enter PLX Chips.

How does a PLX chip work

On some motherboards, these chips serve as a kind of switch: they manage your PCIe-Lanes and lead the CPU to believe fewer Lanes are being used.

This way, you can use e.g. 32 PCIe-Lanes with a 16-PCIe-Lane CPU, or 64 PCIe-Lanes on a 44-Lane CPU.

Beware though, only a few Motherboards have these PLX Chips. The Asus WS X299 Sage is one of them, allowing up to 7 GPUs to be used at 8x speed with a 44-Lane CPU, or even 4 x16 GPUs on a 44 Lanes CPU.

This screenshot of the Asus WS X299 Sage Manual clearly states what type of GPU-Configurations are supported (Always check the manual before buying expensive stuff):

Asus WS X299 Sage

Image-Source: Asus Mainboard Manual

PCIe-Lane Conclusion

For Multi-GPU Setups, having a CPU with lots of PCIe-Lanes is important, unless you have a Motherboard that comes with PLX chips.

Having GPUs run in x8 mode instead of x16 will only marginally slow down the performance of most GPUs. (Note, though, that PLX Chips won’t increase your GPU bandwidth to the CPU; they just make it possible to have more cards run in higher modes.)

Best GPU Performance / Dollar

Ok, so here they are: the lists everyone should be looking at when choosing the right GPU to buy, ranked by performance per Dollar!

GPU Benchmark Comparison: Octane

This List is based on OctaneBench 2020.

| GPU Name | VRAM (GB) | OctaneBench Score | Price ($) |
|---|---|---|---|
| 8x RTX 2080 Ti | 11 | 2733 | 9592 |
| 4x RTX 2080 Ti | 11 | 1433 | 4796 |
| 4x RTX 2080 Super | 8 | 1100 | 2880 |
| 4x RTX 2070 Super | 8 | 1057 | 2200 |
| 4x RTX 2080 | 8 | 1017 | 3196 |
| 4x RTX 2060 Super | 8 | 961 | 1260 |
| 4x GTX 1080 Ti | 11 | 837 | 2800 |
| 2x RTX 2080 Ti | 11 | 693 | 2398 |
| RTX 3090 | 24 | 661 | 1499 |
| RTX 3090 Ti | 24 | 692 | 1999 |
| RTX 3080 Ti | 12 | 648 | 1199 |
| RTX 3080 | 10 | 559 | 699 |
| 2x RTX 2080 Super | 8 | 541 | 1440 |
| 2x RTX 2070 Super | 8 | 514 | 1100 |
| 2x RTX 2060 Super | 8 | 485 | 840 |
| 2x RTX 2070 | 8 | 482 | 1000 |
| RTX 3070 Ti | 8 | 454 | 599 |
| RTX 3070 | 8 | 403 | 499 |
| 2x GTX 1080 Ti | 11 | 382 | 1400 |
| Quadro RTX 6000 | 24 | 380 | 4400 |
| RTX 3060 Ti | 8 | 376 | 399 |
| RTX 3060 | 12 | 289 | 329 |
| Quadro RTX 8000 | 48 | 365 | 5670 |
| RTX 2080 Ti | 11 | 355 | 1199 |
| Titan V | 12 | 332 | 3000 |
| RTX 2080 Super | 8 | 285 | 720 |
| RTX 2080 | 8 | 261 | 620 |
| RTX 2070 Super | 8 | 259 | 550 |
| RTX 2060 Super | 8 | 240 | 420 |
| Quadro RTX 4000 | 8 | 232 | 950 |
| RTX 2070 | 8 | 228 | 500 |
| Quadro RTX 5000 | 16 | 222 | 2100 |
| GTX 1080 Ti | 11 | 195 | 700 |
| RTX 2060 (6GB) | 6 | 188 | 360 |
| GTX 980 Ti | 6 | 142 | 300 |
| GTX 1660 Super | 6 | 134 | 230 |
| GTX 1660 Ti | 6 | 130 | 280 |
| GTX 1660 | 6 | 113 | 230 |
| GTX 980 | 4 | 94 | 200 |
| RTX A6000 | 48 | 628 | 5000 |
| RTX A5000 | 24 | 593 | 2250 |
| RTX Titan | 24 | 361 | 2499 |
| RTX 3050 | 8 | 179 | 249 |

Source: Complete OctaneBench Benchmark List
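For reference, the performance-per-dollar figures are simply the benchmark score divided by the price. A quick sketch using three rows from the Octane table above (prices are the approximate ones listed there):

```python
# Performance per dollar = OctaneBench score / price.
# Scores and prices taken from the table above.
cards = {
    "RTX 3060 Ti": (376, 399),
    "RTX 3080":    (559, 699),
    "RTX 3090":    (661, 1499),
}

# Rank by score-per-dollar, best value first:
for name, (score, price) in sorted(
        cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name}: {score / price:.3f} points per dollar")
```

Note how the mid-range card comes out on top: a single top-tier GPU rarely wins on pure value, only on density per PCIe Slot.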

GPU Benchmark Comparison: Redshift

The Redshift Render Engine has its own Benchmark and here is a List based on the Redshift Benchmark 3.0.26:

| GPU(s) | VRAM (GB) | Time (Minutes) | Price ($) |
|---|---|---|---|
| 1x GTX 1080 Ti | 11 | 08.56 | 300 |
| 1x RTX 2060 SUPER | 8 | 06.31 | 350 |
| 1x RTX 3060 | 12 | 05.38 | 350 |
| 1x RTX 2070 | 8 | 06.28 | 400 |
| 1x RTX 3060 Ti | 8 | 04.26 | 450 |
| 1x RTX 2070 SUPER | 8 | 06.12 | 450 |
| 1x RTX 3070 | 8 | 03.57 | 500 |
| 1x RTX 3070 Ti | 8 | 03.27 | 599 |
| 1x RTX 2080 | 8 | 06.01 | 600 |
| 1x RTX 2080 SUPER | 8 | 05.47 | 650 |
| 1x RTX 3080 | 10 | 03.07 | 850 |
| 2x RTX 2070 SUPER | 8 | 03.03 | 900 |
| 1x RTX 3080 Ti | 12 | 02.44 | 1199 |
| 1x RTX 2080 Ti | 11 | 04.27 | 1200 |
| 2x RTX 2080 | 8 | 03.10 | 1200 |
| 2x RTX 2080 SUPER | 8 | 02.58 | 1300 |
| 1x RTX 3090 | 24 | 02.42 | 1499 |
| 4x RTX 2070 | 8 | 01.56 | 1600 |
| 4x RTX 2070 SUPER | 8 | 01.42 | 1800 |
| 1x RTX 3090 Ti | 24 | 02.36 | 1999 |
| 2x RTX 2080 Ti | 11 | 02.18 | 2400 |
| 4x RTX 2080 | 8 | 01.36 | 2400 |
| 4x RTX 2080 SUPER | 8 | 01.32 | 2600 |
| 1x RTX Titan | 24 | 04.16 | 2700 |
| 2x RTX 3090 | 24 | 01.15 | 4000 |
| 4x RTX 2080 Ti | 11 | 01.07 | 4800 |
| 4x RTX 3090 | 24 | 00.45 | 8000 |
| 8x RTX 2080 Ti | 11 | 00.49 | 9600 |

Source: Complete Redshift Benchmark Results List

GPU Benchmark Comparison: VRAY-RT

And here is a list based on the VRAY-RT Benchmark. Note how the GTX 1080, interestingly, seems to perform worse than the GTX 1070 in this benchmark:

| GPU Name | VRAM (GB) | VRAY-Bench | Price ($ MSRP) | Performance/Dollar |
|---|---|---|---|---|
| GTX 1070 | 8 | 1:25 min | 400 | 2.941 |
| RTX 2070 | 8 | 1:05 min | 550 | 2.797 |
| GTX 1080 Ti | 11 | 1:00 min | 700 | 2.380 |
| 2x GTX 1080 Ti | 11 | 0:32 min | 1400 | 2.232 |
| GTX 1080 | 8 | 1:27 min | 550 | 2.089 |
| 4x GTX 1080 Ti | 11 | 0:19 min | 2800 | 1.879 |
| TITAN XP | 12 | 0:53 min | 1300 | 1.451 |
| 8x GTX 1080 Ti | 11 | 0:16 min | 5600 | 1.116 |
| TITAN V | 12 | 0:41 min | 3000 | 0.813 |
| Quadro P6000 | 24 | 1:04 min | 3849 | 0.405 |

Source: VRAY Benchmark List

Speed up your Multi-GPU Rendertimes

Note – This section is quite advanced. Feel free to skip it.

So, unfortunately, GPUs don’t always scale perfectly: 2 GPUs render an image about 1.9 times faster, and 4 GPUs will sometimes render only about 3.6x faster.
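Expressed as parallel efficiency (measured speedup divided by the ideal, linear speedup), those figures from above work out to:

```python
# Multi-GPU scaling efficiency: measured speedup vs. ideal (linear)
# speedup. The 1.9x and 3.6x figures are the ones from the text.

def efficiency(speedup: float, gpu_count: int) -> float:
    return speedup / gpu_count

print(f"2 GPUs: {efficiency(1.9, 2):.0%} efficient")  # 95%
print(f"4 GPUs: {efficiency(3.6, 4):.0%} efficient")  # 90%
```

So every GPU you add makes each individual GPU a little less productive.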

Multi-GPU scaling

Having multiple GPUs communicate with each other to render the same task costs so much performance that, in a 4-GPU rig, a large part of one GPU’s capacity is effectively spent just on coordination.

One solution could be the following: When final rendering image sequences, use as few GPUs as possible per task.

Let’s make an example:

What we usually do in a multi-GPU rig is, have all GPUs work on the same task. A single task, in this case, would be an image in our image sequence.

4 GPUs together render one Image and then move on to the next Image in the Image sequence until the entire sequence has been rendered.

We can speed up preparation time per GPU (when the GPUs sit idle, waiting for the CPU to finish preparing the scene) and bypass some of the multi-GPU slow-downs by having each GPU render its own task: one task per GPU.

So a machine with 4 GPUs would now render 4 tasks (4 images) at once, each on one GPU, instead of 4 GPUs working on the same image, as before.
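To see why this pays off, here’s a hypothetical throughput comparison. The 10-minute single-GPU render time is a made-up example; the 3.6x figure is the scaling factor mentioned above:

```python
# Hypothetical 4-GPU rig. Assumption: one GPU renders an image in
# 10 minutes (made-up number); 4 GPUs sharing one image reach only
# ~3.6x single-GPU speed (scaling figure from the text), while one
# image per GPU lets each GPU run at full single-GPU speed.

single_gpu_minutes = 10.0
gpus = 4

# All 4 GPUs on the same image:
shared = 3.6 / single_gpu_minutes        # images per minute
# One image per GPU:
independent = gpus / single_gpu_minutes  # images per minute

print(f"shared task: {shared:.2f} images/min")       # 0.36
print(f"one per GPU: {independent:.2f} images/min")  # 0.40
```

That’s roughly 11% more throughput, before even counting the per-image scene-preparation time that idle GPUs would otherwise waste.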

Some 3D-Software might have this feature built-in, if not, it is best to use some kind of Render Manager, such as Thinkbox Deadline (Free for up to 2 Nodes/Computers).

GPUs per task

Option to set the amount of GPUs rendering on one task in Thinkbox Deadline

Beware, though, that you might have to increase your System RAM a bit and have a strong CPU, since every GPU task needs its own share of RAM and CPU performance.

We’ve put together an in-depth Guide on How to Render faster. You might want to check that out too.

PCIE Gen 4.0 compatibility issues

In case you have a Motherboard that’ll let you hook up several GPUs, but you don’t have any room to plug them in side-by-side, be sure to check out our PCIe-Riser Cable Guide. Note that many riser cables are only rated for PCIe 3.0 speeds and can cause instability in PCIe 4.0 slots, so check the cable’s rating or set the slot to Gen 3 in the BIOS.

Redshift vs. Octane

Another thing I’m often asked is whether one should go with Redshift or Octane.

Having used both extensively, I prefer Redshift for work that needs complex Material Setups and heavy Compositing, thanks to its Shader Graph Editor and extensive Multi-Pass Manager.

Octane is great if you want results fast, as its learning curve is shallower. But this, of course, is a personal opinion, and I would love to hear yours!

Custom PC-Builder

If you want to get the best parts within your budget, have a look at the PC-Builder Tool.

Select the main purpose that you’ll use the computer for and adjust your budget to create the perfect PC with part recommendations that will fit within your budget.

CGDirector PC-Builder Tool


Answers to frequently asked questions (FAQ)

Is the GPU or CPU more important for rendering?

It depends on the render engine you are using. If you’re using a GPU render engine such as Redshift, Octane, or Cycles GPU, the GPU will be considerably more important for rendering than the CPU.

For CPU render engines, such as Cycles CPU, V-Ray CPU, Arnold CPU, the CPU will be more important.

Interestingly, the CPU also plays a minor role in maximizing GPU render performance. CPUs with high single-core performance are less of a bottleneck for the GPU(s).

Is RTX better than GTX for Rendering?

Yes, Nvidia’s RTX GPUs perform better in GPU Rendering than GTX GPUs. The reason is simple: RTX GPUs both designate higher-tiered and more expensive GPUs that perform better compared to Nvidia’s GTX line-up, and also have Ray-Tracing cores, which can additionally increase render performance in supported engines.

Does more RAM help for rendering?

More RAM will only speed up your rendering if you had too little to begin with. You see, RAM really only bottlenecks performance when it’s full and data has to be swapped to disk.

If you have a simple 3D scene that only needs about 16GB of RAM for rendering, then 32, 64, or 128GB of RAM will do nothing for that particular task. If you only had 8GB of RAM beforehand, then installing more RAM will considerably increase render performance.

Does more VRAM help for rendering?

Similar to RAM, more VRAM only helps increase performance in scenarios where you had too little to begin with.

If your 3D scene fits entirely into your GPU’s VRAM, then having a GPU with more VRAM won’t impact performance at all (given all other specs are the same).

Of course, many GPU render engines might be able to utilize larger ray-trees or other optimizations if there’s more free VRAM, so depending on the render engine, you could see a small speedup with more VRAM even if your scene is simple and easily fits into your GPU(s)’s VRAM.

Over to you

What Hardware do you want to buy? Let me know in the comments! Or ask us anything in our forum 🙂

CGDirector is Reader-supported. When you buy through our links, we may earn an affiliate commission.

Alex Glawion - post author

Hi, I’m Alex, a Freelance 3D Generalist, Motion Designer and Compositor.

I’ve built a multitude of Computers, Workstations and Renderfarms and love to optimize them as much as possible.

Feel free to comment and ask for suggestions on your PC-Build or 3D-related Problem, I’ll do my best to help out!


Also check out our Forum for feedback from our Expert Community.

Leave a Reply

  • frank

    Hi Alex,
    It’s not clear to me which is the optimal configuration for real time rendering, IPR (Interactive Production Render) in Vray. I don’t care so much about the final render time, I care more about it being accurate and fast to set up the scene in real time (camera, lights, materials) before rendering the final image. My scenes are not very big, rather small, product rendering with studio lighting (beverages, cosmetics, etc). Thank you very much for your advice and your work.

    • Alex Glawion

      Hey Frank,
      Render Preparation time is dependent – 99% of the time – on your CPU’s single core performance. So optimal CPUs here would be something like the 5900X, 5800X3D, 12700K, 12900K.

      And also make sure your PCIe-Lanes aren’t bottlenecked or throttled by other devices.

      Once the scene is prepared (lights, meshes, ray-tree, textures etc.) and it has been uploaded to your GPU’s VRAM, then you rarely see the CPU do any of the heavy lifting anymore, that’s the GPU’s job then, to run through the bucket-rendering phase (or progressive phase)

      The CPU is much more involved in real-time previews than it is in final-rendering. Especially when you’re constantly updating scene settings like material color, moving polygons, objects and the like.

      V-Ray does a pretty good job of not having to re-prepare the entire scene on every scene update, but it can only do this to a certain degree (mostly on an object level) before it has to rebake its meshes and recreate the ray-tree etc. from scratch.

      Long story short: The CPU’s single core performance is what the IPR is most dependent on in preparation time.

      This is a bit of a dilemma for multi-GPU PC builds: if you need more PCIe-Lanes for multiple GPUs, you’ll most likely go with a Threadripper or other HEDT-level CPU, which gives you access to more PCIe-Lanes and more PCIe Slots on the motherboard. But those CPUs (because of their higher core counts) also clock lower and therefore have lower single-core performance than their mainstream counterparts (such as the 5900X, 12900K, etc.). Mainstream CPUs, though, can only drive a single GPU at full PCIe-Lane bandwidth. If you only have one GPU, that’s the way to go for minimum viewport lag / IPR delay.


  • g1practicetest

    Hi Alex,

    Very helpful thread to educate someone who doesn’t know much about building optimized workstation PC.

    For building a PC optimized primarily for Blender and GPU rendering w/ Octane, my current build lacks viewport performance, and in different software aside from Blender – currently have a 2080/Threadripper 1920X

    Looking to build a new workstation, I’m debating between another Threadripper at 24C/48T to run 1 or 2 RTX 3090s, or a Ryzen 5950X with the same 1 or 2 RTX 3090s.

    From what I understand, the Ryzen will have faster viewport performance but I’m unsure how it will affect the ability to utilize multiple GPUs if I get there.

    Blender seems to benefit in some areas for multithreading, though I primarily use it for modelling, animation, compositing scenes and rendering with Octane. + I regularly use other software, primarily in the order of
    Photoshop, Blender, Octane, Zbrush, Marvelous Designer, Substance Painter, After Effects and Houdini.

    I have a pretty high budget of 10k CAD, would love any feedback with this.

    • g1practicetest

      Also don’t fully understand the benefit of multiple GPUs – in my work specific case, I can afford to wait longer for render times, but idk how much better it is to use multiple

      But if I were to live preview things, will multiple GPUs be useful for quick live feedback?

      If I find it isn’t a huge deal, I’m considering getting an Intel Core i9-12900K

      • Alex Glawion

        It sounds like the 5950X or 12900K are more suited to your work. Unless you absolutely need 3 or 4 GPUs, I’d definitely lean towards the faster viewport / active work performance on mainstream CPUs.

        The Threadrippers are great for 3-4 gpus, more fast storage and if you need cores for CPU Rendering, but the workloads you listed will definitely run better on a higher clocked CPU.

        GPU Rendering scales almost linearly, so yes you’ll have faster previews in cycles or octane and faster renders.


  • Evond Blake

    I have dilemma. I currently have two RTX titan gpus in my system for 3D rendering using Redshift, Octane and Blender cycles. These cards suffer from overheating because of a single slot between them. My question is, are two RTX TITAN’s better than a single RTX 3090 for 3d rendering? What about in Davinci Resolve 17? I would like to hear you thoughts

    • Alex Glawion

      Hey Evond,
      Two RTX Titans will be just slightly faster than a single RTX 3090. Both GPU Render Engines and DaVinci resolve as well scale quite well with multiple GPUs. RS, Octane, Cycles will scale slightly better than DaVinci.

      Nvidia really didn’t think this through, putting open-air coolers on workstation GPUs that are often used in multi-gpu configs.

      You’ll need blower-style coolers if you want to stack GPUs closely on top of each other unfortunately and still want good cooling performance.



    Hi, I have one question.

    I am planning on using nvlink with two quadro rtx A6000 48gb cards.

    So will I get 96 gb vram or two separate 48 gb vrams?

    I am really confused about this 🙁

    Will I get to see 96gb vram figure inside octane or not?

    What will happen when I combine these cards using NVLink?

    Any help will be appreciated 🙂

    Thank you!

  • Mau

    Hi Alex!
    I have a question regarding the amount of GPU memory to render a scene in Blender.
    Unfortunately I don’t have a very powerful hardware, I should upgrade it. As a GPU I have a GTX 1650 Super with 4GB. The system RAM is 6GB.
    Rendering a scene with 380,000 polygons and 4k textures with Cycles, it gives me the error “System is out of GPU and shared host memory”.
    How is it possible that with only 380,000 polygons the 4GB of ram in the GPU is not enough?
    Maybe it also depends on the textures that are all in 4k?

    Does this error depend only on the graphics card or also on the low system ram?

    Thanks in advance for the help!

    • Alex Glawion

      Hey Mau,
      Any Software you have running, Browsers, etc., as well as the Operating System, will already use up a large chunk of your RAM and GPU VRAM, so Blender will not have the entirety for rendering.

      4GB GPU and just 6GB of System Memory isn’t much and a moderately complex Scene can easily fill that capacity up quickly.

      Textures eat into your VRAM and RAM as well, in addition to caches, micro displacements, subdiv modifiers, and so on. Your Polycount might be a lot higher during rendertime than it is in your viewport.


      • Mau

        I will update my hardware as soon as I can.
        Thanks for the help Alex!
