Message-ID: <20240118165838.1934853-1-seanjc@google.com>
Date: Thu, 18 Jan 2024 08:58:38 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Jason Gunthorpe <jgg@...dia.com>, Yan Zhao <yan.y.zhao@...el.com>,
David Matlack <dmatlack@...gle.com>
Subject: [ANNOUNCE] PUCK Notes - 2024.01.17 - TDP MMU for IOMMU

Recording and slides:
https://drive.google.com/corp/drive/folders/1sSr_8FE5KjjGGnpX7_QlHAX3QGoRnck7?resourcekey=0-UB_vbXfpY4Dezo9xI_-6iA

Key Takeaways:
- Having KVM notify the IOMMU (or install PTEs in its page tables) for _all_
PTEs created by KVM may not be necessary to achieve the desired performance,
e.g. proactively mapping into the IOMMU may only be necessary when swapping
memory back in for oversubscribed VMs.
- Synchronously notifying the IOMMU (or installing PTEs on its behalf) could
be a net negative for guest performance, e.g. it could add significant latency
to KVM's page fault path if a PTE operation necessitates an IOMMU TLB
invalidation.
- Despite hardware vendors' intentions/claims, CPU and IOMMU page table entries
aren't 100% interchangeable. E.g. even on Intel, where the formats are
compatible, it's still possible to create EPT PTEs (CPU) that are not usable
in the IOMMU (see the compatibility-check sketch after this list).
- Given the above, having KVM manage and/or notify IOMMU page tables would be
a premature optimization.
- Recommended next step is to explore using heterogeneous memory management
(HMM) to manage IOMMU page tables and coordinate with mmu_notifiers, and see
if HMM can be optimized to meet the performance goals without involving KVM
(a sketch of the HMM mirror pattern also follows below).
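
On the PTE-compatibility point, a minimal sketch of the kind of check a
sharing scheme would need. This assumes execute-only mappings (legal in EPT,
but commonly cited as inexpressible in the IOMMU's second-level format) as
the disqualifying attribute; the helper name is hypothetical, while the EPT
permission masks come from arch/x86/include/asm/vmx.h.

#include <asm/vmx.h>

/*
 * Hypothetical helper: can an EPT entry created for the CPU be consumed
 * as-is by the IOMMU?  Execute-only (X=1, R=0) stands in for the full
 * set of EPT-only attributes a real implementation would have to vet.
 */
static bool ept_pte_usable_by_iommu(u64 ept_pte)
{
	/* EPT supports execute-only mappings; IOMMU formats generally
	 * can't express them. */
	if ((ept_pte & VMX_EPT_EXECUTABLE_MASK) &&
	    !(ept_pte & VMX_EPT_READABLE_MASK))
		return false;

	return true;
}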
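
For the HMM recommendation, a minimal sketch of the standard HMM mirror
pattern (see Documentation/mm/hmm.rst) as it might be applied to an IOMMU
page table: an mmu_interval_notifier invalidate callback does the zap and
IOTLB flush, and the fault path uses hmm_range_fault() with the begin/retry
sequence protocol. The my_mirror structure, its lock, and the IOMMU-specific
zap/map steps are hypothetical placeholders; the notifier and HMM calls are
the real kernel APIs.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

/* Hypothetical per-domain state for an HMM-managed IOMMU page table. */
struct my_mirror {
	struct mmu_interval_notifier notifier;
	struct mm_struct *mm;
	struct mutex lock;	/* serializes map vs. invalidate */
};

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_mirror *m = container_of(mni, struct my_mirror, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&m->lock);
	mmu_interval_set_seq(mni, cur_seq);
	/* Zap [range->start, range->end) from the IOMMU page table and do
	 * the (potentially slow) IOTLB invalidation here, i.e. outside of
	 * KVM and outside of KVM's page fault path. */
	mutex_unlock(&m->lock);
	return true;
}

static const struct mmu_interval_notifier_ops my_mni_ops = {
	.invalidate = my_invalidate,
};

/* Fault in one page at @addr (page-aligned) and mirror it into the
 * IOMMU page table. */
static int my_mirror_fault(struct my_mirror *m, unsigned long addr)
{
	unsigned long pfn;
	struct hmm_range range = {
		.notifier = &m->notifier,
		.start = addr,
		.end = addr + PAGE_SIZE,
		.hmm_pfns = &pfn,
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};
	int ret;

again:
	range.notifier_seq = mmu_interval_read_begin(&m->notifier);
	mmap_read_lock(m->mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(m->mm);
	if (ret) {
		if (ret == -EBUSY)
			goto again;	/* raced with an invalidation */
		return ret;
	}

	mutex_lock(&m->lock);
	if (mmu_interval_read_retry(&m->notifier, range.notifier_seq)) {
		mutex_unlock(&m->lock);
		goto again;
	}
	/* Install pfn into the IOMMU page table under m->lock. */
	mutex_unlock(&m->lock);
	return 0;
}

Registration of the notifier (mmu_interval_notifier_insert() with my_mni_ops)
and teardown are omitted for brevity. One appeal of this split, relative to
the synchronous-notification concern above, is that the expensive IOTLB
invalidation lands in the notifier path rather than in KVM's page fault path.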