Using PCI Pass-Through

PCI pass-through assigns a physical NPU directly to a single VM. In this mode, the driver running in the assigned VM has exclusive access to the NPU; the device is not shared with other VMs.

Enable IOMMU and Configure Recommended Kernel Parameters

First, enable the following setting in the server platform BIOS.

  • VT-d/IOMMU

The BIOS menu name and configuration steps may vary depending on the system vendor.

PCI pass-through uses IOMMU to isolate DMA access from the NPU device assigned to the VM, protecting the host and other VMs' memory. PCI pass-through does not work when IOMMU is disabled.

Note

IOMMU is already enabled in the default kernels of the currently supported distributions.

The recommended kernel parameters for an NPU PCI pass-through environment are as follows. Add them to GRUB_CMDLINE_LINUX in /etc/default/grub.

Note

The example below assumes a server with an Intel processor and an RBLN-CA22. For AMD processors, use amd_iommu=on instead of intel_iommu=on. Some kernel parameters may differ depending on the server product.

transparent_hugepage=madvise pcie_aspm=force pci=pcie_bus_perf pci=bfsort pci=noats iommu=pt intel_iommu=on iommu.strict=1

Update the GRUB configuration and reboot.

$ sudo update-grub
$ sudo reboot
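After the reboot, it is worth confirming that the kernel actually created IOMMU groups before proceeding. A minimal sketch of such a check follows; the helper name is made up for illustration, and on a real host the directory to pass is /sys/kernel/iommu_groups:

```shell
# Hypothetical helper: count IOMMU groups under a sysfs-style directory.
# Each immediate subdirectory corresponds to one IOMMU group; a count of
# zero after reboot usually means the IOMMU is still disabled.
count_iommu_groups() {
  find "$1" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l
}

# On a real host (the output depends on the platform):
#   count_iommu_groups /sys/kernel/iommu_groups
```

If the count is zero, re-check the BIOS setting and the kernel parameters above before attempting pass-through.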

Using PCI Pass-Through on Linux KVM

You can configure NPU pass-through on an Ubuntu Linux Kernel-based Virtual Machine (KVM) using the virsh command.

The following prerequisites must be met before configuring NPU pass-through.

  • Ubuntu Linux KVM must be installed (sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils).
  • The VM machine type must be q35. q35 is required for NPU pass-through because it supports the PCIe native topology.
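Whether an existing VM already uses the q35 machine type can be read from its domain XML. A minimal sketch, assuming a hypothetical helper that parses the output of virsh dumpxml:

```shell
# Hypothetical helper: extract the machine type from libvirt domain XML
# read on stdin, e.g. from `virsh dumpxml vm-name`.
machine_type() {
  grep -o "machine='[^']*'" | head -n 1 | cut -d "'" -f 2
}

# Sample <os> fragment as it appears in libvirt domain XML:
echo "<type arch='x86_64' machine='pc-q35-6.2'>hvm</type>" | machine_type
# → pc-q35-6.2
```

A value containing q35 (e.g. pc-q35-6.2) indicates the VM meets the machine-type prerequisite.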

Configuring NPU Pass-Through to a VM with virsh

This procedure covers how to assign an NPU via pass-through to an existing VM configured with the q35 machine type. VM creation and basic configuration are out of scope for this document.

For details, see Ubuntu Virtualization - virsh and libvirt - PCI passthrough of host devices.

  1. Identify the PCI device BDF (bus/device/function) of the NPU to assign to the VM in pass-through mode.

    In the following example, the NPU's PCI device BDF is 1b:00.0.

    $ lspci -nn | grep accelerators
    1b:00.0 Processing accelerators [1200]: Device [1eff:1220] (rev 03)
    
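Splitting the BDF from step 1 into the domain, bus, slot, and function fields needed for the XML can be sketched as a small shell helper; the helper name is hypothetical and not part of any vendor tool:

```shell
# Hypothetical helper: turn an lspci-style BDF (e.g. "1b:00.0" or
# "0000:1b:00.0") into the libvirt <address> attribute format.
bdf_to_address() {
  local bdf="$1" domain=0000 bus slot func
  # Prepend the default domain if lspci omitted it (one colon, not two).
  case "$bdf" in
    *:*:*) domain=${bdf%%:*}; bdf=${bdf#*:} ;;
  esac
  bus=${bdf%%:*}
  slot=${bdf#*:}; func=${slot#*.}; slot=${slot%%.*}
  printf "<address domain='0x%s' bus='0x%s' slot='0x%s' function='0x%s'/>\n" \
    "$domain" "$bus" "$slot" "$func"
}

bdf_to_address 1b:00.0
# → <address domain='0x0000' bus='0x1b' slot='0x00' function='0x0'/>
```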
  2. Convert the BDF identified in step 1 into the domain, bus, slot, and function format, specify it in the XML, and assign the NPU device to the VM using that XML file.

    $ cat > npu-device.xml << EOF
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x1b' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    EOF
    
    $ virsh attach-device vm-name npu-device.xml --config
    

    Note

    You can also configure this by editing the VM XML directly with virsh edit and adding the <hostdev> entry.
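Because the attachment above uses --config, it should persist into the domain definition. One hedged way to confirm this is to count <hostdev> entries in the domain XML; the helper name below is an assumption for illustration:

```shell
# Hypothetical helper: count <hostdev> entries in libvirt domain XML read
# on stdin, e.g. `virsh dumpxml vm-name | count_hostdevs`.
count_hostdevs() {
  grep -c "<hostdev "
}

# Sample fragment as written by the attach-device step:
printf "<hostdev mode='subsystem' type='pci' managed='yes'>\n</hostdev>\n" | count_hostdevs
# → 1
```

A count of at least one indicates the NPU assignment was recorded in the persistent configuration.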

  3. Start the VM with the assigned NPU.

    $ virsh start vm-name
    
  4. Connect to the VM and verify that the NPU has been assigned correctly.

    $ virsh console vm-name
    $ lspci -nn | grep accelerators
    07:00.0 Processing accelerators [1200]: Device [1eff:1220] (rev 03)