Greetings everyone. Hoping some folks on the forum can point me in the right direction with this issue.
I'm working on a system that uses KVM and PCI passthrough of some Intel PCIe NICs. The goal is to keep the project as minimal as possible, so a lot of it has been built from source with minimal dependencies. It's been an awesome learning experience so far, but I'm getting to a point with PCI devices and kernel configuration parameters that's beyond my current know-how. What I'm seeing is as follows:
On the current 4.9 kernel from the repo, my 4 PCIe NICs each show up in their own IOMMU group: NIC 1 in group 12, NIC 2 in group 13, and so on. This is great since, as I understand it, it means I can detach each of these NICs individually from the host and pass them directly to one of the KVM guests. Having done this with one of the VMs so far, all seems well.
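For anyone who wants to check their own groupings, here's a quick Python sketch (just a throwaway script, not anything official) that walks /sys/kernel/iommu_groups in sysfs and prints the devices in each group:

#!/usr/bin/env python3
# List every PCI device in each IOMMU group by walking sysfs.
from pathlib import Path

for group in sorted(Path("/sys/kernel/iommu_groups").iterdir(),
                    key=lambda p: int(p.name)):
    devices = sorted(dev.name for dev in (group / "devices").iterdir())
    print(f"IOMMU group {group.name}: {' '.join(devices)}")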
So now I wanted to set this up on the slimmed-down, minimal kernel. I've added the following configuration options to the custom kernel (I realize they should all be CONFIG_*, and they are in the actual configuration file):
INTEL_IOMMU=y
INTEL_IOMMU_SVM=y
IOMMU_API=y
IOMMU_SUPPORT=y
VFIO_IOMMU_TYPE1=y
VFIO=y
VFIO_PCI=y
VFIO_PCI_MMAP=y
KVM_VFIO=y
Now when I boot the custom kernel, all four Intel NICs show up in the same IOMMU group (group 6, if it matters). I could detach all of them from the host, but I was hoping to avoid that if possible.
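For context, the detach itself is just a couple of sysfs writes to unbind a NIC from its host driver and hand it to vfio-pci. A rough Python sketch (needs root, vfio-pci built in or loaded, and the PCI address below is a placeholder for whatever lspci -D reports for your NIC):

#!/usr/bin/env python3
# Detach one NIC from its host driver and hand it to vfio-pci via the
# standard sysfs driver_override mechanism.
from pathlib import Path

ADDR = "0000:03:00.0"                       # placeholder PCI address
dev = Path("/sys/bus/pci/devices") / ADDR

unbind = dev / "driver" / "unbind"
if unbind.exists():                         # only if a host driver (e.g. igb) holds it
    unbind.write_text(ADDR)
(dev / "driver_override").write_text("vfio-pci")
Path("/sys/bus/pci/drivers_probe").write_text(ADDR)   # re-probe so vfio-pci claims it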
Again, what's happening at this low-level PCI/kernel layer is outside my current understanding, but I'm trying to learn. I've tried doing a comm/diff of the custom kernel's config against the larger repo kernel's, but that turns up quite a few options that are enabled only in the repo one (obviously, since it has to be more generic). Does anyone have any tips or suggestions on how to track down which configuration option might be contributing to the difference in IOMMU groupings for these NICs?
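In case it's useful, here's a rough sketch of the kind of filtered comparison I was attempting, limiting the diff to IOMMU/VFIO/PCI-related options (the file names on the command line are just examples):

#!/usr/bin/env python3
# Compare two kernel .config files, reporting only IOMMU/VFIO/PCI related
# options that differ, to cut the noise out of a full diff.
# Usage: ./configdiff.py config-repo config-custom
import re
import sys

def load(path):
    opts = {}
    with open(path) as f:
        for line in f:
            m = re.match(r"(CONFIG_\w+)=(.*)", line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
                continue
            m = re.match(r"# (CONFIG_\w+) is not set", line)
            if m:
                opts[m.group(1)] = "n"
    return opts

a, b = load(sys.argv[1]), load(sys.argv[2])
for opt in sorted(set(a) | set(b)):
    if re.search(r"IOMMU|VFIO|PCI", opt) and a.get(opt) != b.get(opt):
        print(f"{opt}: {a.get(opt, 'unset')} -> {b.get(opt, 'unset')}")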
Update just in case anyone else ever runs into this issue as well.
Looking closer at the repo kernel, it turns out the ACS override patch is applied there. So all that was needed was applying the ACS override patch to the custom kernel as well and using the pcie_acs_override boot parameter. While not an ideal situation due to the potential security issues, it doesn't seem to concern any of the major distros?? Arch, Debian, CentOS, RHEL, Ubuntu, etc. all seem to provide this patch within their kernel builds.
ACS override worked, but it's a bad idea. It looks like upgrading to a slightly newer kernel and setting CONFIG_PCI_MMCONFIG=y accomplished the same thing without the need for the ACS override patch.
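If anyone wants to double-check what their running kernel was built and booted with, a quick sketch (this assumes CONFIG_IKCONFIG_PROC is enabled so /proc/config.gz exists, which a slimmed-down kernel may not have):

#!/usr/bin/env python3
# Check the running kernel's built-in config for PCI_MMCONFIG and the
# boot command line for the ACS override parameter.
import gzip

with gzip.open("/proc/config.gz", "rt") as f:
    config = f.read()
print("CONFIG_PCI_MMCONFIG=y:", "CONFIG_PCI_MMCONFIG=y" in config)

with open("/proc/cmdline") as f:
    print("pcie_acs_override on cmdline:", "pcie_acs_override" in f.read())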