Laptop / Hybrid GPU Power Management Issue (NVIDIA, iGPU + dGPU)
Overview
I've been thoroughly enjoying Omarchy over the past few weeks. However, I've noticed a significant drop in battery life on my laptop. The main culprit appears to be the NVIDIA dGPU, which remains active even when not in use. It does not enter low-power states such as sleep, suspend, or d3cold, resulting in unnecessary power drain.
My goal is to set up a proper hybrid GPU configuration, where the Intel iGPU handles all general rendering tasks, and the NVIDIA dGPU is only activated on-demand for specific applications (e.g., via prime-run). I have achieved this setup successfully in the past on Arch Linux using tools like envycontrol and prime-run.
❌ Current Behavior
- The NVIDIA dGPU remains active at all times, even when no applications explicitly require it.
- Running `sudo lsof /dev/nvidia* 2>/dev/null` shows Hyprland, Walker, and other apps using the dGPU.
- `nvtop` confirms that `walker` is utilizing the dGPU.
- I have followed the official Hyprland Multi-GPU Guide and am using the setup described in the steps below.
- I am also using `envycontrol -s hybrid`.
✅ Expected Behavior
- iGPU as Default - All regular rendering tasks, including the Wayland compositor (Hyprland), should default to the Intel iGPU.
- dGPU On-Demand - The NVIDIA dGPU should only be activated when launching applications with prime-run or other explicit offloading methods.
- dGPU Power Saving - When idle, the NVIDIA dGPU should transition into a low-power state (e.g., d3cold), conserving battery life.
Steps I Have Taken (Guide)
📦 1. Create Persistent GPU Symlinks with udev
This prevents /dev/dri/card0 and /dev/dri/card1 from shuffling between reboots.
🔧 Script: gpu-symlinks.sh
#!/bin/bash
# Detect each GPU's PCI address and write a udev rule giving it a
# stable /dev/dri symlink, so card numbering no longer matters.
declare -A GPU_SYMLINKS=(
    ["Intel"]="intel-igpu"
    ["AMD"]="amd-igpu"
    ["NVIDIA"]="nvidia-dgpu"
)
UDEV_DIR="/etc/udev/rules.d"
# List display controllers (PCI class 03xx)
GPU_LIST=$(lspci -d ::03xx)
if [ -z "$GPU_LIST" ]; then
    echo "No GPUs detected!"
    exit 1
fi
for VENDOR in "${!GPU_SYMLINKS[@]}"; do
    # PCI address of the first matching device, e.g. "01:00.0"
    PCI_ID=$(echo "$GPU_LIST" | grep "$VENDOR" | head -n1 | cut -f1 -d' ')
    [ -z "$PCI_ID" ] && continue
    SYMLINK_NAME="${GPU_SYMLINKS[$VENDOR]}"
    RULE_PATH="$UDEV_DIR/${SYMLINK_NAME}-dev-path.rules"
    echo "Creating udev rule for $VENDOR GPU at $PCI_ID → /dev/dri/$SYMLINK_NAME"
    UDEV_RULE=$(cat <<EOF
KERNEL=="card*", \\
KERNELS=="0000:$PCI_ID", \\
SUBSYSTEM=="drm", \\
SUBSYSTEMS=="pci", \\
SYMLINK+="dri/$SYMLINK_NAME"
EOF
)
    echo "$UDEV_RULE" | sudo tee "$RULE_PATH" > /dev/null
done
echo "Reloading udev rules..."
sudo udevadm control --reload
sudo udevadm trigger
✅ Result
/dev/dri/intel-igpu -> card0
/dev/dri/nvidia-dgpu -> card1
These symlinks always point to the correct GPU regardless of boot order.
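To confirm the links resolve, for example:
# both should point at a /dev/dri/card* node
ls -l /dev/dri/intel-igpu /dev/dri/nvidia-dgpu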
⚙️ 2. Configure Hyprland to Use iGPU
In your ~/.config/uwsm/env-hyprland or systemd environment for Hyprland:
export AQ_DRM_DEVICES="/dev/dri/intel-igpu:/dev/dri/nvidia-dgpu"
This sets iGPU as the primary renderer and allows fallback to dGPU (for external monitors or specific apps).
🧪 3. Set EnvyControl to Hybrid Mode
sudo pacman -S envycontrol
sudo envycontrol -s hybrid
This enables dynamic GPU offloading: iGPU is default, dGPU available for apps via offload.
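To verify the mode took effect, envycontrol has a query flag:
envycontrol --query
# should print: hybrid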
🎮 4. Run Apps on the NVIDIA dGPU (When Needed)
Use prime-run or explicit environment variables:
prime-run blender
prime-run resolve
prime-run steam
Manual example:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia blender
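As a quick sanity check (glxinfo is in the mesa-utils package), the first command should report the Intel renderer and the second the NVIDIA one:
glxinfo | grep "OpenGL renderer"
prime-run glxinfo | grep "OpenGL renderer"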
🛑 5. Don’t Force NVIDIA Globally in Hyprland
Avoid this in your Hyprland config:
# ❌ Do NOT use when running on iGPU
# env = NVD_BACKEND,direct
# env = LIBVA_DRIVER_NAME,nvidia
# env = __GLX_VENDOR_LIBRARY_NAME,nvidia
These force NVIDIA rendering globally and can break iGPU setups.
🔋 6. Optional: Enable NVIDIA Power Management
To auto-disable the dGPU when not in use:
sudo pacman -S nvidia-prime
Create config:
sudo tee /etc/modprobe.d/nvidia-power-management.conf <<EOF
options nvidia NVreg_DynamicPowerManagement=0x02
EOF
Start persistence daemon:
sudo systemctl enable nvidia-persistenced.service
sudo systemctl start nvidia-persistenced.service
Check power state:
cat /proc/driver/nvidia/gpus/0000*/power
This would be amazing! For some reason my laptop defaults to not powering down the GPU, without even a BIOS option to change it :(
I know this is more hardware-dependent and probably not an Omarchy issue, but I do like the extra battery life.
Thanks @itsmedardan for the thorough explanation on this! It's the main reason I haven't made the switch to daily-driving it yet, to be honest!
Once this is supported by default (along with an easier way to whitelist programs to use prime-run), I believe more people with hybrid GPU setups will join :)
Nice, this worked for me as well. I was banging my head against a wall wondering why my Intel GPU wasn't working. After applying these changes, my power consumption halved with no noticeable performance decrease.
Thank you so much for this! I've been struggling to get hybrid set up on Linux for a long time. One thing for people to be aware of: don't just blindly copy the env-hyprland file. If you have an amd-igpu, this will give you a black screen, and you won't even be able to switch to a TTY. So make sure you replace the intel-igpu line with amd-igpu, and don't go through all the troubleshooting I did only to find out I had blindly copied something that didn't match my setup.
Did all that's mentioned in the Hyprland wiki, and yet my output is:
~ ❯ lsof -n -w -t /dev/nvidia*
8191
8269
~ ❯ ps -p 8191
PID TTY TIME CMD
8191 ? 00:00:44 Hyprland
~ ❯ ps -p 8269
PID TTY TIME CMD
8269 ? 00:00:02 walker
Edit:
#!/bin/bash
# Sync the dGPU's runtime power-management policy with the active power profile.
GPU_PATH="/sys/bus/pci/devices/0000:01:00.0/power/control"
PROFILE=$(powerprofilesctl get)
# Change GPU power policy based on profile
case "$PROFILE" in
    power-saver|balanced)
        echo auto | sudo tee "$GPU_PATH" >/dev/null
        ;;
    performance)
        echo on | sudo tee "$GPU_PATH" >/dev/null
        ;;
    *)
        echo auto | sudo tee "$GPU_PATH" >/dev/null
        ;;
esac
echo "GPU power mode set to $PROFILE"
sleep 1
exit 0
I made this script to sync the GPU with powerprofilesctl (a sketch for triggering it automatically is below).
- Even though lsof shows those handles, 2-3 seconds after Hyprland loads my GPU switches to the amd-igpu, which is indicated by an LED on my keyboard and the power draw in my Waybar.
- Launching Walker switches back to the dGPU.
Now I need some way for Walker to always use the iGPU.
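For completeness, a rough way to run the sync script automatically whenever the profile changes is to watch power-profiles-daemon's D-Bus signals. This is only a sketch: the script path is a placeholder, and because the script calls sudo tee you'd need passwordless sudo for that line (or run the watcher as root):
#!/bin/bash
# Re-run the sync script whenever power-profiles-daemon reports a property change.
# ~/.local/bin/gpu-power-sync.sh is a placeholder for wherever you saved the script above.
dbus-monitor --system \
    "type='signal',path='/net/hadess/PowerProfiles',interface='org.freedesktop.DBus.Properties'" |
while read -r line; do
    case "$line" in
        *ActiveProfile*) ~/.local/bin/gpu-power-sync.sh ;;
    esac
done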
* `nvtop` confirms that `walker` is utilizing the dGPU.
So, I was able to fix that on mine.
So I noticed that whenever I launched walker, it was always picking my NVIDIA RTX 3050 dGPU by default. I checked /usr/share/vulkan/icd.d/ and saw that only nvidia_icd.json was present — no AMD ICD, which meant Vulkan couldn’t even see my iGPU.
I fixed it by installing the AMD Vulkan drivers:
sudo pacman -S vulkan-radeon lib32-vulkan-radeon
After that, /usr/share/vulkan/icd.d/ showed both nvidia_icd.json and radeon_icd.x86_64.json.
Then I forced Vulkan to prefer AMD by default, while still keeping NVIDIA available, with:
env = VK_ICD_FILENAMES,/usr/share/vulkan/icd.d/radeon_icd.x86_64.json:/usr/share/vulkan/icd.d/nvidia_icd.json
When I ran vulkaninfo | grep deviceName, it showed AMD Radeon first.
The Walker menu currently uses the dGPU on first load; on subsequent launches it uses the iGPU.
I don't have a dedicated graphics card, but I still get terrible battery life when the laptop is suspended. I suspect this could be because of disk encryption: during sleep it is constantly expecting user input. The laptop gets at most a couple of hours when suspended, even when fully charged.
I want to properly benchmark this so I can track which settings had an impact and which didn't. Any suggestions?
I heard Ubuntu and other distributions have done some work to fix this in combination with disk encryption.
My system is a Lenovo ThinkPad T14 Gen 2.
How do you undo this?
I tried doing this setup, but it makes my GPU not detect any monitors other than the laptop display. I've now spent hours trying to troubleshoot why, but I can't figure it out.
Thanks, this got me a lot closer. I need to do more testing, but I think I have it now. I'm on a Zephyrus G16, and ASUS devices often need a few extra steps, so most people can ignore the asusctl / g14 kernel pieces. Also note I'm using an Intel iGPU; there would be AMD equivalents for the Intel-specific pieces.
Following @itsmedardan's steps plus a few of the ASUS-specific ones, I got to the point where most processes defaulted to the iGPU, but Xorg and Hyprland still showed up in nvidia-smi's process list, and I was still drawing ~20 W with Chromium and Ghostty open, vs ~10 W with the same apps open after envycontrol -s integrated. After the steps below, the NVIDIA GPU goes into D3 mode with no processes or handles, and with Chromium and Ghostty open I'm at ~11-13 W. Obviously not a perfect test, but I assume there is a small amount of overhead to being in hybrid mode even with the dGPU off.
Now I can:
- default to a D3cold state
- launch Steam with `VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json prime-run steam` (Steam works without the env var, but then launching a game doesn't, even if I add it before the command)
- launch a game on NVIDIA without special per-game config
- close the game and fully exit Steam
- automatically get back to a D3cold state after a bit
I've already spent more time on this than I want to, so for the time being I'll probably just make a bash alias for prime-run with the env vars VK_ICD_FILENAMES and __GLX_VENDOR_LIBRARY_NAME (sketch below). Forgive any slop below: I did a quick pass cleaning it up, but I used Claude to pull my history and piece together the steps I ended up sticking with.
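Something like this in ~/.bashrc would do it (the alias name is just a placeholder, and since prime-run already sets __GLX_VENDOR_LIBRARY_NAME itself, that part may be redundant):
# hypothetical alias wrapping prime-run with the extra Vulkan ICD var
alias nvrun='VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json __GLX_VENDOR_LIBRARY_NAME=nvidia prime-run'
# usage: nvrun steam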
The obvious big con is that if you are plugged in / set to performance mode, ideally most things would run on NVIDIA without needing to be told explicitly.
NVIDIA Hybrid GPU Power Management on Arch Linux
Guide for configuring NVIDIA dGPU to properly power off (D3cold) when idle on hybrid Intel + NVIDIA laptops running Hyprland.
System Configuration
| Component | Details |
|---|---|
| Laptop | ASUS ROG Zephyrus G16 GU605CR |
| iGPU | Intel Arc Pro 140T (Arrow Lake-H) |
| dGPU | NVIDIA RTX 5070 Ti Mobile (GB205M, Blackwell) |
| OS | Omarchy 3.2.2 Linux |
| Kernel | linux-g14 (asus-linux) |
| Compositor | Hyprland via UWSM |
| Bootloader | Limine |
| NVIDIA Driver | nvidia-open-dkms 580.105.08 |
Configuration Files
1. Kernel Parameters (Limine)
Add to your Limine boot entry's cmdline by editing /etc/default/limine and running sudo limine-mkinitcpio:
nvidia-drm.modeset=1
The generated cmdline ends up in /boot/limine/limine.conf.
2. NVIDIA Power Management Module Options
Create /etc/modprobe.d/nvidia-power-management.conf:
options nvidia NVreg_DynamicPowerManagement=0x02
options nvidia NVreg_EnableS0ixPowerManagement=1
- `NVreg_DynamicPowerManagement=0x02` - enables fine-grained power management
- `NVreg_EnableS0ixPowerManagement=1` - enables S0ix (modern standby) power management
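To confirm the driver picked the options up after a reboot, they should show in the params file (the names appear without the NVreg_ prefix there, if I remember right):
grep -E "DynamicPowerManagement|S0ix" /proc/driver/nvidia/params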
3. Runtime Power Management udev Rules
Create /etc/udev/rules.d/80-nvidia-pm.rules:
# Enable runtime PM for NVIDIA VGA/3D controller
ACTION=="add|bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
ACTION=="add|bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"
# Enable runtime PM for NVIDIA audio
ACTION=="add|bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x040300", TEST=="power/control", ATTR{power/control}="auto"
Important: Use ACTION=="add|bind" not just ACTION=="bind". If nvidia modules are in initramfs, the driver binds before udev rules load, so add catches device discovery.
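After creating the rules, you can reload and re-trigger without a reboot, then confirm runtime PM is active (0000:01:00.0 is this system's dGPU address; adjust for yours):
sudo udevadm control --reload
sudo udevadm trigger --subsystem-match=pci
cat /sys/bus/pci/devices/0000:01:00.0/power/control  # should print "auto"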
4. Stable GPU Device Symlinks
The /dev/dri/card* numbers can change between boots. Create stable symlinks using the helper script:
#!/bin/bash
## https://github.com/itsmedardan version of the hyprland recommended way to
## make static paths for gpus https://wiki.hypr.land/Configuring/Multi-GPU/#creating-consistent-device-paths-for-specific-cards
declare -A GPU_SYMLINKS=(
["Intel"]="intel-igpu"
["AMD"]="amd-igpu"
["NVIDIA"]="nvidia-dgpu"
)
UDEV_DIR="/etc/udev/rules.d"
GPU_LIST=$(lspci -d ::03xx)
if [ -z "$GPU_LIST" ]; then
echo "No GPUs detected!"
exit 1
fi
for VENDOR in "${!GPU_SYMLINKS[@]}"; do
PCI_ID=$(echo "$GPU_LIST" | grep "$VENDOR" | head -n1 | cut -f1 -d' ')
[ -z "$PCI_ID" ] && continue
SYMLINK_NAME="${GPU_SYMLINKS[$VENDOR]}"
RULE_PATH="$UDEV_DIR/${SYMLINK_NAME}-dev-path.rules"
echo "Creating udev rule for $VENDOR GPU at $PCI_ID → /dev/dri/$SYMLINK_NAME"
UDEV_RULE=$(cat <<EOF
KERNEL=="card*", \\
KERNELS=="0000:$PCI_ID", \\
SUBSYSTEM=="drm", \\
SUBSYSTEMS=="pci", \\
SYMLINK+="dri/$SYMLINK_NAME"
EOF
)
echo "$UDEV_RULE" | sudo tee "$RULE_PATH" > /dev/null
done
echo "Reloading udev rules..."
sudo udevadm control --reload
sudo udevadm trigger
This script auto-detects GPUs and creates appropriate udev rules.
This creates /dev/dri/intel-igpu and /dev/dri/nvidia-dgpu symlinks.
Reference: Hyprland Wiki - Creating consistent device paths
5. EnvyControl Hybrid Mode
Install required packages:
yay -S vulkan-intel lib32-vulkan-intel
yay -S envycontrol nvidia-prime
Set hybrid GPU mode using EnvyControl:
sudo envycontrol -s hybrid --verbose
Note: The --verbose flag is required on Omarchy systems because the mkinitcpio wrapper prompts for confirmation to run limine-mkinitcpio. Without --verbose, the command hangs waiting for input.
This configures the system for hybrid mode where the iGPU is default and dGPU is available on-demand via prime-run.
6. SDDM Wayland Configuration
Prevent SDDM from using the NVIDIA GPU by forcing Wayland.
Create /etc/sddm.conf.d/10-wayland.conf:
[General]
DisplayServer=wayland
GreeterEnvironment=wayland
7. Hyprland Environment Variables
Add to ~/.config/uwsm/env (or your Hyprland environment file):
# Force Hyprland to use the Intel iGPU only. This is the only one I'm sure is needed.
export AQ_DRM_DEVICES="/dev/dri/intel-igpu"
# Force Mesa EGL instead of NVIDIA EGL
export __EGL_VENDOR_LIBRARY_FILENAMES="/usr/share/glvnd/egl_vendor.d/50_mesa.json"
export __GLX_VENDOR_LIBRARY_NAME="mesa"
# Disable NVIDIA PRIME offload by default
export __NV_PRIME_RENDER_OFFLOAD=0
export __VK_LAYER_NV_optimus="non_NVIDIA_only"
# Use the Intel Vulkan driver by default. vulkan-intel and lib32-vulkan-intel must be
# installed for this file to exist. I've since commented it out and it doesn't seem to matter.
# export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/intel_icd.x86_64.json"
This prevents Hyprland and Wayland apps from opening handles to /dev/nvidia* devices, allowing the GPU to suspend.
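A quick check after logging back in: nothing should be holding nvidia handles anymore (no output is what you want):
sudo lsof /dev/nvidia* 2>/dev/null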
8. Disable NVIDIA Forcing in Hyprland Config
If your Hyprland config (e.g., from Omarchy) has NVIDIA environment variables, comment them out:
In ~/.config/hypr/hyprland.conf:
# NVIDIA environment variables - COMMENT THESE OUT for hybrid mode
# env = NVD_BACKEND,direct
# env = LIBVA_DRIVER_NAME,nvidia
# env = __GLX_VENDOR_LIBRARY_NAME,nvidia
9. Disable nvidia-persistenced
The persistence daemon keeps the GPU initialized, preventing D3cold:
sudo systemctl disable nvidia-persistenced
sudo systemctl stop nvidia-persistenced
Trade-off: First CUDA/GPU application launch will have slightly higher latency.
10. Regenerate initramfs (if nvidia modules are included)
I did this before some of the other changes that I confirmed had an impact, so it may be unnecessary, but I also renamed /usr/share/glvnd/egl_vendor.d/10_nvidia.json to 90_nvidia.json to give Mesa the higher priority.
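For the regeneration itself: on plain Arch it's the usual mkinitcpio call, while on Omarchy the Limine wrapper from Section 1 (limine-mkinitcpio) handles it:
# regenerate all initramfs presets
sudo mkinitcpio -P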
Verification
After rebooting, verify the GPU is in D3cold:
# Check runtime status (should be "suspended")
cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
# Check power state (should be "D3cold")
cat /sys/bus/pci/devices/0000:01:00.0/power_state
# Check power control (should be "auto")
cat /sys/bus/pci/devices/0000:01:00.0/power/control
# Check NVIDIA driver power status
cat /proc/driver/nvidia/gpus/0000:01:00.0/power
Expected output when suspended:
Runtime D3 status: Enabled (fine-grained)
Video Memory: Off
GPU Hardware Support:
Video Memory Self Refresh: Supported
Video Memory Off: Supported
S0ix Power Management:
Platform Support: Supported
Status: Enabled
Diagnostic Commands
# Check GPU power draw (note: this wakes the GPU!)
nvidia-smi
# Check what processes are using NVIDIA devices
lsof /dev/nvidia*
# Check nvidia module reference counts
lsmod | grep nvidia
# Check boot messages for nvidia errors
journalctl -b | grep -i nvidia
# Check ASUS-specific GPU settings
cat /sys/devices/platform/asus-nb-wmi/dgpu_disable # Must be 0
cat /sys/devices/platform/asus-nb-wmi/gpu_mux_mode # 1 = hybrid mode
Troubleshooting
2025 Zephyrus G16 GPU stuck in D3cold at boot (won't wake)
Symptom: nvidia-smi fails with "driver not communicating"
Cause: ASUS firmware has dGPU disabled
Fix: The armoury driver won't land in the vanilla kernel until 6.19. In the meantime, install the G14 or CachyOS kernel; see the asus-linux.org Arch guide.
# Set dgpu_disable to 0 (1 = disabled, 0 = enabled)
asusctl armoury dgpu_disable 0
GPU won't suspend (stays at 5-15W idle)
Check 1: Is power/control set to auto?
cat /sys/bus/pci/devices/0000:01:00.0/power/control
# If "on", the udev rules didn't apply
Check 2: Any processes holding nvidia handles? (Both Hyprland and Walker did not show up as processes in nvidia-smi but still held nvidia handles.)
lsof /dev/nvidia*
# Common culprits: Hyprland, electron apps, OBS
Check 3: Is Video Memory active?
cat /proc/driver/nvidia/gpus/0000:01:00.0/power
# If "Video Memory: Active", something allocated VRAM
Check 4: Is nvidia-persistenced running?
systemctl status nvidia-persistenced
# If active, stop and disable it
udev rules not applying
Cause: nvidia modules in initramfs bind before udev runs
Fix: Change ACTION=="bind" to ACTION=="add|bind" in udev rules
Hyprland/apps still opening nvidia handles
Cause: EGL probes all GPUs by default
Fix: Set __EGL_VENDOR_LIBRARY_FILENAMES to force Mesa (see Section 7)
Running Apps on dGPU
When you need GPU acceleration, use environment variables to offload:
# Use the prime-run wrapper (if installed), forcing the NVIDIA Vulkan ICD
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json prime-run app
The GPU will wake from D3cold, run the workload, then suspend again when idle.
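To watch it drop back to suspended after the app exits (same PCI address caveat as above):
watch -n 2 cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status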
@CoffeeSquirel envycontrol has a --reset flag to revert its changes. Everything else from the OP you should be able to just do in reverse (uninstall packages, unset values, etc.).
Thanks for your input. I did find out the laptop actually outputs to other monitors, and I can't for the life of me figure out why it does not output to my main monitor. It's literally just Omarchy on this device with this monitor. The monitor is a Philips Evnia 49M2C8900, if you're curious.