Message-ID: <YdcpIQgMZJrqswKU@google.com>
Date: Thu, 6 Jan 2022 17:38:41 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: David Stevens <stevensd@...omium.org>
Cc: Marc Zyngier <maz@...nel.org>, Paolo Bonzini <pbonzini@...hat.com>,
James Morse <james.morse@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Will Deacon <will@...nel.org>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Chia-I Wu <olv@...omium.org>
Subject: Re: [PATCH v5 4/4] KVM: mmu: remove over-aggressive warnings
On Thu, Jan 06, 2022, David Stevens wrote:
> On Thu, Jan 6, 2022 at 4:19 AM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Wed, Jan 05, 2022, Sean Christopherson wrote:
> > > Ah, I got royally confused by ensure_pfn_ref()'s comment
> > >
> > > * Certain IO or PFNMAP mappings can be backed with valid
> > > * struct pages, but be allocated without refcounting e.g.,
> > > * tail pages of non-compound higher order allocations, which
> > > * would then underflow the refcount when the caller does the
> > > * required put_page. Don't allow those pages here.
> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > that doesn't apply here because kvm_faultin_pfn() uses the low level
> > > __gfn_to_pfn_page_memslot().
> >
> > On fifth thought, I think this is wrong and doomed to fail. By mapping these pages
> > into the guest, KVM is effectively saying it supports these pages. But if the guest
> > uses the corresponding gfns for an action that requires KVM to access the page,
> > e.g. via kvm_vcpu_map(), ensure_pfn_ref() will reject the access and all sorts of
> > bad things will happen to the guest.
> >
> > So, why not fully reject these types of pages? If someone is relying on KVM to
> > support these types of pages, then we'll fail fast and get a bug report letting us
> > know we need to properly support these types of pages. And if not, then we reduce
> > KVM's complexity and I get to keep my precious WARN :-)
>
> Our current use case here is virtio-gpu blob resources [1]. Blob
> resources are useful because they avoid a guest shadow buffer and the
> associated memcpys, and as I understand it they are also required for
> virtualized Vulkan.
>
> One type of blob resource requires mapping dma-bufs allocated by the
> host directly into the guest. This works on Intel platforms and the
> ARM platforms I've tested. However, the amdgpu driver sometimes
> allocates higher-order, non-compound pages via ttm_pool_alloc_page().
Ah. In the future, please provide this type of information in the cover letter,
and in this case, a paragraph in patch 01 is also warranted. The context of _why_
is critical information, e.g. having something in the changelog explaining the use
case is very helpful for future developers wondering why on earth KVM supports
this type of odd behavior.
> These are the type of pages which KVM is currently rejecting. Is this
> something that KVM can support?
I'm not opposed to it. My complaint is that this series is incomplete in that it
allows mapping the memory into the guest, but doesn't support accessing the memory
from KVM itself. That means for things to work properly, KVM is relying on the
guest to use the memory in a limited capacity, e.g. not using the memory as
general purpose RAM. That's not problematic for your use case, because presumably
the memory is used only by the vGPU, but as-is, KVM can't enforce that behavior
in any way.
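To illustrate, here's a minimal sketch (not the exact code from this series,
and kvm_try_get_page() is a made-up name) of the reference grab that any
internal access path, e.g. kvm_vcpu_map(), ultimately needs to do, and which
fails for exactly these pages:

  #include <linux/mm.h>

  /* Sketch only: take a reference on @page iff the page is refcounted. */
  static bool kvm_try_get_page(struct page *page)
  {
          /*
           * Tail pages of non-compound higher-order allocations have a
           * zero refcount; get_page_unless_zero() refuses to take a
           * reference on them, which is what prevents the eventual
           * put_page() from underflowing the refcount.
           */
          return page && get_page_unless_zero(page);
  }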
The really gross part is that failures are not strictly punted to userspace;
the resulting error varies significantly depending on how the guest "illegally"
uses the memory.
My first choice would be to get the amdgpu driver "fixed", but that's likely an
unreasonable request since it sounds like the non-KVM behavior is working as intended.
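For the record, the problematic pattern is a plain higher-order allocation
without __GFP_COMP, which is roughly what ttm_pool_alloc_page() can end up
doing (illustrative only, not the actual TTM code):

  #include <linux/gfp.h>
  #include <linux/mm.h>

  struct page *head = alloc_pages(GFP_KERNEL, 2);  /* order-2, non-compound */
  struct page *tail = head + 1;

  /* Only the head page is refcounted... */
  WARN_ON(page_count(head) != 1);
  /* ...so a get_page()/put_page() pair on a tail page underflows. */
  WARN_ON(page_count(tail) != 0);

  __free_pages(head, 2);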
One thought would be to require userspace to opt-in to mapping this type of memory
by introducing a new memslot flag that explicitly states that the memslot cannot
be accessed directly by KVM, i.e. can only be mapped into the guest. That way,
KVM has an explicit ABI with respect to how it handles this type of memory, even
though the semantics of exactly what will happen if userspace/guest violates the
ABI are not well-defined. And internally, KVM would also have a clear touchpoint
where it deliberately allows mapping such memslots, as opposed to the more implicit
behavior of bypassing ensure_pfn_ref().
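In userspace terms, something like the below, where KVM_MEM_GUEST_ONLY is a
strawman name and the uAPI obviously doesn't exist (yet); @vm_fd, @slot, @gpa,
@size, and @hva are all up to the VMM:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static int set_guest_only_memslot(int vm_fd, __u32 slot, __u64 gpa,
                                    __u64 size, void *hva)
  {
          struct kvm_userspace_memory_region region = {
                  .slot = slot,
                  /* Mappable into the guest, never accessed by KVM. */
                  .flags = KVM_MEM_GUEST_ONLY,
                  .guest_phys_addr = gpa,
                  .memory_size = size,
                  .userspace_addr = (__u64)hva,  /* e.g. an mmap() of the dma-buf */
          };

          return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }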
If we're clever, we might even be able to share the flag with the "guest private
memory"[*] concept being pursued for confidential VMs.
[*] https://lore.kernel.org/all/20211223123011.41044-1-chao.p.peng@linux.intel.com