Whether you have an integrated or a dedicated GPU, your operating system will set aside as much as half of your RAM as shared GPU memory.
That, to the uninitiated, might seem strange, but there’s actually a very logical explanation behind it all.
Before we delve any deeper into the why and how, let’s first “deconstruct” this topic into four key questions:
- What is shared GPU memory?
- How does it affect your PC?
- Does it affect your rendering, 3D modeling, and gaming performance?
- Can you turn this peculiar feature off and, if so, should you?
Let’s take a closer look!
What is Shared GPU Memory?
Let’s start off with the basic definition:
Shared GPU memory is a type of virtual memory that’s typically used when your GPU runs out of dedicated video memory.
Shared GPU memory, therefore, is not the same as dedicated GPU memory. There’s a big difference between the two, even though both can serve as VRAM.
Why do operating systems use RAM as a source of memory for your GPU when there’s far more space available on hard disk drives and SSDs?
The reason is rather simple: RAM is much faster than any SSD or HDD out there.
Difference Between Dedicated and Shared GPU Memory
Dedicated and shared GPU memory aren’t the same.
Dedicated GPU memory is the physical memory located on a discrete graphics card itself. Generally, these are high-speed memory modules (GDDR / HBM) placed close to the GPU’s core chip and used by rendering software, applications, and games (amongst other things).
Shared GPU memory is “sourced” from your system RAM. It’s not physical but virtual: an allocation, or reserved area, of system RAM that can be used as VRAM (video RAM) once your dedicated GPU memory (if you have any) is full.
If you have an iGPU (a GPU integrated into your CPU), it has no dedicated VRAM of its own and therefore has to use your system RAM.
Your system will dedicate up to 50% of your physical RAM to shared GPU memory, regardless of whether you have an integrated or a dedicated GPU.
For example, my PC has an Nvidia GTX Titan with 6GB of VRAM (dedicated GPU memory), and because I have 16GB of system RAM, 8GB of it is allocated as “shared GPU memory” (half of my system RAM).
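The 50% rule above can be sketched as a quick calculation. This is just an illustration of how Windows arrives at the figure it reports; the exact value can vary slightly from system to system:

```python
def shared_gpu_memory_gb(system_ram_gb: float) -> float:
    """Windows typically reports up to half of installed
    system RAM as shared GPU memory."""
    return system_ram_gb / 2

# 16 GB of system RAM -> 8 GB reported as shared GPU memory
print(shared_gpu_memory_gb(16))  # 8.0
```

Note that this is the size of the reserved pool, not memory that is actually in use; the RAM stays available to your applications until the GPU needs it.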
Should you decrease or increase Shared GPU Memory?
An important question arises: should you tinker with these settings? Well, it really depends on your setup.
If you have a dedicated GPU, leaving things as they are would probably be for the best. Depending on your particular graphics card and its VRAM capacity, shared GPU memory might not even be used at all!
If push comes to shove and your OS has to resort to this peculiar procedure, it won’t cause any performance issues beyond stutters and frame drops at the moment your GPU’s VRAM buffer fills up.
If you don’t have a dedicated GPU but rather an integrated one (such as the Intel UHD Graphics 730, for instance), tinkering with shared GPU memory settings should still be avoided.
Modern operating systems do a great job of managing and allocating memory, so it’s best to let them do their thing.
Since up to 50% of your system RAM can already be used by your (i)GPU, chances are slim that a workload would need even more VRAM (shared memory) without also needing that RAM on the CPU side to function.
Only if you ran a simple application that heavily used your GPU’s resources but barely touched your CPU’s might you see a performance difference from allocating more shared GPU memory.
Dedicated memory represents the amount of physical VRAM a GPU possesses, whereas shared GPU memory represents a virtual amount taken from your system’s RAM.
Modern operating systems do a great job when it comes to optimizing shared video memory usage.
There’s really no need for you to manually adjust anything, though if your fingers are itching, you can do so by adjusting video memory settings in your BIOS or, in case you have an Intel or AMD APU, by making the desired changes in the Windows Registry Editor.
Is Shared GPU Memory Slower than Dedicated GPU Memory (VRAM)?
Yes. Because shared memory is essentially system RAM, a GPU that has to resort to it for its computations will take a performance hit.
Dedicated VRAM modules sit close to the GPU’s core chip and can be accessed much more quickly than data that has to travel over the PCIe bus on the motherboard to reach system RAM.
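To get a feel for the gap, here’s a back-of-the-envelope comparison. The bandwidth figures below are rough, illustrative assumptions (real numbers vary widely by hardware generation and card), not measurements of any specific GPU:

```python
# Ballpark peak bandwidth figures in GB/s -- illustrative
# assumptions only, actual hardware varies widely.
GDDR6_VRAM = 450.0   # on-card memory bus of a midrange GPU
PCIE4_X16 = 32.0     # PCIe 4.0 x16 link to the motherboard
DDR4_DUAL = 50.0     # dual-channel DDR4 system RAM

# A GPU reaching into shared memory is limited by the slower
# of the PCIe link and the system RAM itself.
effective_shared = min(PCIE4_X16, DDR4_DUAL)
ratio = GDDR6_VRAM / effective_shared
print(f"VRAM is roughly {ratio:.0f}x faster than shared memory")
```

Even with generous assumptions, the on-card VRAM comes out an order of magnitude ahead, which is why spilling into shared memory shows up as stutter.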
What is Shared GPU Memory Used For?
Shared GPU memory is borrowed from the total amount of available RAM and is used when the system runs out of dedicated GPU memory.
The OS taps into your RAM because it’s the next best thing performance-wise; RAM is a lot faster than any SSD on the market, and that’ll surely remain the case for the foreseeable future.
How to Change the Shared GPU Memory Value in Windows?
Changing the amount of your system’s shared GPU memory is not as straightforward as one might think.
You’ll need to tinker around with your BIOS settings if you want to adjust or turn off the shared GPU memory feature.
It’s not recommended, though, as making these kinds of changes isn’t going to improve the performance of your system in most workloads.
What’s the Difference Between Total, Dedicated, and Shared GPU Memory?
Dedicated video memory is the amount of physical memory on your GPU.
Shared GPU memory is the amount of virtual memory that will be used in case dedicated video memory runs out. This typically amounts to up to 50% of installed RAM.
When these two pools of memory are combined, you get the total GPU memory figure.
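Using this article’s earlier GTX Titan example, the relationship is simple addition:

```python
dedicated_gb = 6         # physical VRAM on the card (GTX Titan example)
shared_gb = 16 // 2      # half of 16 GB of system RAM
total_gb = dedicated_gb + shared_gb

# Total GPU memory = dedicated + shared
print(total_gb)  # 14
```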
Over To You
How much VRAM does your graphics card have, and did you ever tinker with any of these settings?