Configuring Kubernetes

Intel iGPU Split Passthrough - This article is part of a series.
Part 4: This Article

You now have everything in place to make Kubernetes GPU-aware! For that, you'll be using the Intel Device Plugins for Kubernetes.

Configuration

Step 1: Label GPU-Enabled Nodes

First, label each GPU-equipped node so the Intel Device Plugin Operator knows where to work its magic (replace <node-name> with your own):

kubectl label nodes <node-name> intel.feature.node.kubernetes.io/gpu=true

Step 2: Install Intel Device Plugin Components

NOTE
Update (July 2025): When I first documented this process, I was living the manual life – helm install this, kubectl apply that. Since then, I’ve migrated to FluxCD for GitOps-based deployments. For those interested in the automated approach, my home-ops repository shows how to deploy Intel Device Plugins declaratively.

Grab the Intel Helm charts and update:

helm repo add intel https://intel.github.io/helm-charts/
helm repo update

Deploy the Device Plugin Operator:

helm install --namespace=kube-system intel-device-plugins-operator intel/intel-device-plugins-operator

Step 3: Configure the GPU Device Plugin

Create a values.yaml that defines your sharing strategy:

name: i915
sharedDevNum: 1         # Maximum pods per GPU
nodeFeatureRule: false  # Disable automatic node feature discovery
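With sharedDevNum set to 1, the iGPU is exclusively claimed by a single pod. If you'd rather let several workloads (say, two transcoders and a photo-indexing job) share the one iGPU, raise the number. A sketch of such a values.yaml (the value 3 is just an illustration, pick whatever matches your workloads):

```yaml
name: i915
sharedDevNum: 3         # Up to three pods may claim the same GPU concurrently
nodeFeatureRule: false  # Still handled by our manual labels from step 1
```

Keep in mind that sharing is purely a scheduling construct: the pods time-share the hardware with no isolation between them, so a heavy transcode in one pod can slow down the others.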

Deploy the GPU plugin with the config you just created:

helm install --namespace=kube-system intel-device-plugins-gpu intel/intel-device-plugins-gpu -f values.yaml

Using GPU Resources

Resource Requests

You can now configure pods to request GPU resources through Helm charts or Deployment manifests:

resources:
  limits:
    gpu.intel.com/i915: "1"
  requests:
    gpu.intel.com/i915: "1"

Node Selection

Got a mixed cluster? Pin GPU workloads to GPU-enabled nodes:

nodeSelector:
  intel.feature.node.kubernetes.io/gpu: "true"
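Putting the two snippets together, a Deployment for a GPU-hungry workload could look like the sketch below. The name and image are placeholders, not something from this guide; substitute your own:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transcoder                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: transcoder
  template:
    metadata:
      labels:
        app: transcoder
    spec:
      nodeSelector:
        intel.feature.node.kubernetes.io/gpu: "true"  # only schedule onto labeled GPU nodes
      containers:
        - name: transcoder
          image: example/transcoder:latest            # placeholder image
          resources:
            requests:
              gpu.intel.com/i915: "1"                 # claim one (shared) GPU slot
            limits:
              gpu.intel.com/i915: "1"
```

Because gpu.intel.com/i915 is an extended resource, the request and limit must be equal, and the scheduler will only place the pod on a node that still has a free GPU slot.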

Final Check

Once you've deployed your first application with a GPU resource request, verify it inside the container by executing:

ls /dev/dri

Sweet success looks (again) like this:

by-path  card0  renderD128

Congratulations! You’ve just pulled off the triple axel of virtualization: GPU passthrough from host to VM to container. Your media stack is now hardware-accelerated, your streams are smooth, and somewhere, a CPU is breathing a sigh of relief. Happy transcoding!

