![Shared GPU memory has doubled since I put ram in just now. from 8Gb to 16GB (also 8GB dedicated on the 3080) : r/ZephyrusG15](https://preview.redd.it/shared-gpu-memory-has-doubled-since-i-put-ram-in-just-now-v0-83nevesmgkga1.png?width=1028&format=png&auto=webp&s=2600cda878460b1746f99e9228a7548e8c9bcaaa)
Shared GPU memory has doubled since I put ram in just now. from 8Gb to 16GB (also 8GB dedicated on the 3080) : r/ZephyrusG15
![How do I increase the shared GPU memory allocation multiplicator? - CUDA Programming and Performance - NVIDIA Developer Forums](https://global.discourse-cdn.com/nvidia/optimized/3X/d/5/d50b9a81fe57b2f3b1a075c9bf50f39cc9dd5241_2_690x410.png)
How do I increase the shared GPU memory allocation multiplicator? - CUDA Programming and Performance - NVIDIA Developer Forums
![Force Full Usage of Dedicated VRAM instead of Shared Memory (RAM) · Issue #45 · microsoft/tensorflow-directml · GitHub](https://user-images.githubusercontent.com/15016720/93714923-7f87e780-fb2b-11ea-86ff-2f8c017c4b27.png)
Force Full Usage of Dedicated VRAM instead of Shared Memory (RAM) · Issue #45 · microsoft/tensorflow-directml · GitHub
![python - How can I decrease Dedicated GPU memory usage and use Shared GPU memory for CUDA and Pytorch - Stack Overflow](https://i.stack.imgur.com/vTJJ1.png)
python - How can I decrease Dedicated GPU memory usage and use Shared GPU memory for CUDA and Pytorch - Stack Overflow