
Redshift Rendering Node using Riser Cables

Walkinghome6
Tech Intern · Joined Jun 5, 2021 · Nashville
I'm putting plans together to build a render farm and want to use PCIe riser cables to lift the GPUs up and off the motherboard, both to provide better air cooling and to maximize the number of GPUs I can fit on one node. I haven't seen much online of anyone doing this, but I did find the article below that talks about it. I don't want to use liquid cooling. I want to do something similar to a mining rig, but using riser cables to keep the full bandwidth for Redshift.


How far has anyone taken this? I'm looking to use an AMD Threadripper Pro for its 128 PCIe lanes. I'm currently looking at this motherboard:


And looking at this mining case:


Thanks
 
Alex Glawion
CG Hardware Specialist @ CGDirector · Staff member · Joined Jun 12, 2020
Should be no problem. As long as the cables are high quality and not damaged or bent too sharply, you shouldn't see any drop in bandwidth.

I'd take the fastest-clocking Threadripper, as GPU rendering depends somewhat on CPU clocks as well, especially when preparing the scene, processing meshes, uploading data, and so on.

With a lot of GPUs in one node, I'd think about running multiple concurrent tasks with a render manager such as Thinkbox Deadline. It really depends on task duration: if your renders take 2 minutes and 20 seconds of that is preparation (i.e., CPU single-core dependent), you're leaving a lot of performance on the table while all of those GPUs wait for the bucket phase to start.

If your frames take hours and hours, of course, then having all your GPUs render the same frame is fine.
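To put rough numbers on that, here's a back-of-the-envelope sketch (not a benchmark, and it assumes the bucket phase scales perfectly across GPUs, which real renders won't quite match):

```python
def gpu_utilization(prep_s: float, render_s: float, gpus: int) -> float:
    """Fraction of total wall time the GPUs spend actually rendering,
    assuming prep is single-threaded CPU work during which all GPUs idle,
    and the bucket phase splits perfectly across all GPUs."""
    bucket_s = render_s / gpus          # GPU phase shrinks with more GPUs
    return bucket_s / (prep_s + bucket_s)

# Alex's example: a 2-minute frame, 20 s of which is CPU-side prep.
render_s = 120 - 20
for n in (1, 4, 7):
    print(f"{n} GPUs on one frame: {gpu_utilization(20, render_s, n):.0%} busy")
```

With 7 GPUs on a single such frame, the GPUs sit idle more than half the time, which is exactly why running concurrent tasks per node pays off for short frames.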
 
Walkinghome6
Thanks for the advice, Alex. I agree with what you’re saying. My UHD frames seem to be in the 3–10 min range with my current setup, rendering to the Picture Viewer on 3x RTX 3090 Turbos, all in one box.

People have told me the setup will never work. I think I may have to invest some time in trying out different cables. I’ll definitely be using Deadline.

I wish I could find a post online where someone documents actually doing this; I only find mining setups. I guess I’ll share my results.
 
Alex Glawion
Would be curious to see how it goes! Most CUDA Redshift workloads don't require high PCIe bandwidth, especially on RTX 3090s with 24GB of VRAM. There'll be no transfer necessary during rendering unless your scenes are crazy complex. Of course, all the data has to be uploaded to the GPU initially, but after that I don't see why there should be any performance hit.
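As a rough sanity check on what that initial upload costs over a riser, here's a sketch. The scene size and link speeds are assumptions; the throughput figures are theoretical per-direction maximums for a x16 link, and real transfers typically reach well under that:

```python
# Theoretical one-direction throughput (GB/s) of a full x16 link, per PCIe gen.
PCIE_X16_GBPS = {3: 16.0, 4: 32.0}

def upload_seconds(scene_gb: float, gen: int, lanes: int = 16) -> float:
    """Time to push scene data to one GPU, assuming throughput scales
    linearly with lane count and the link runs at its theoretical rate."""
    return scene_gb / (PCIE_X16_GBPS[gen] * lanes / 16)

# A heavy 20 GB scene over a riser negotiating Gen3 x8 vs. a full Gen4 x16:
print(f"Gen3 x8:  {upload_seconds(20, 3, 8):.2f} s")
print(f"Gen4 x16: {upload_seconds(20, 4, 16):.2f} s")
```

Even on a badly negotiated link, the one-time upload is a couple of seconds, which is noise against frames that take minutes.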
 
Walkinghome6
What about cached particle simulations, cached fluid meshes, and cached VDBs? Doesn’t that update the VRAM every frame?
 
Alex Glawion
Yes, but it's expected that a lot of scene data has to be updated after every frame. What I mean is that while rendering a single frame, e.g. switching buckets, there's not much transfer going on. That mostly happens when you're using out-of-core rendering and have to fall back to system RAM because your VRAM is too small, which shouldn't happen in your case.

But after every frame, yes, there has to be some data upload and download over the PCIe bus, even if Deadline keeps the scene data in memory in batch mode.
 
Walkinghome6
I've been lucky enough not to have worked on anything that required out-of-core yet. I upgraded from 980 Tis to 3090s a few months ago. I just try to make sure things are instanced correctly where I can.

I'm looking at the lowest-core-count, highest-clock Threadripper Pro. Not sure if the 128-lane count vs. the non-Pro's 88 will be a big deal if I manage to get 7 GPUs and an NVMe SSD all in one system.

I’m going to have some electrical work done to get adequate power to the area of my house where this is going.
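For what it's worth, a quick lane-budget sketch for that config (assuming every GPU gets a full x16 and the NVMe drive x4; many boards bifurcate some slots to x8, which Redshift tolerates fine):

```python
def lanes_needed(gpus: int, gpu_lanes: int = 16, nvme_lanes: int = 4) -> int:
    """Total PCIe lanes consumed by the GPUs plus one NVMe SSD.
    Ignores chipset-attached devices, which don't draw CPU lanes."""
    return gpus * gpu_lanes + nvme_lanes

# 7 GPUs at x16 plus one NVMe drive:
total = lanes_needed(7)
print(f"{total} lanes needed")                 # 116
print("fits in 128 (TR Pro):", total <= 128)
print("fits in 88 (non-Pro):", total <= 88)
```

So 7 full-bandwidth GPUs plus storage only works out on the Pro platform; on 88 lanes some cards would have to drop to x8.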
 