Message-ID: <Yk1spw4zIxR73VX8@google.com>
Date: Wed, 6 Apr 2022 10:34:15 +0000
From: Quentin Perret <qperret@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Andy Lutomirski <luto@...nel.org>,
Steven Price <steven.price@....com>,
Chao Peng <chao.p.peng@...ux.intel.com>,
kvm list <kvm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
Linux API <linux-api@...r.kernel.org>, qemu-devel@...gnu.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
the arch/x86 maintainers <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>, Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Vishal Annapurve <vannapurve@...gle.com>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>
Subject: Re: [PATCH v5 00/13] KVM: mm: fd-based approach for supporting KVM
guest private memory
On Tuesday 05 Apr 2022 at 18:03:21 (+0000), Sean Christopherson wrote:
> On Tue, Apr 05, 2022, Quentin Perret wrote:
> > On Monday 04 Apr 2022 at 15:04:17 (-0700), Andy Lutomirski wrote:
> > > >> - it can be very useful for protected VMs to do shared=>private
> > > >> conversions. Think of a VM receiving some data from the host in a
> > > >> shared buffer, and then it wants to operate on that buffer without
> > > >> risking a leak of confidential information in a transient state. In
> > > >> that case the most logical thing to do is to convert the buffer back
> > > >> to private, do whatever needs to be done on that buffer (decrypting a
> > > >> frame, ...), and then share it back for the host to consume;
> > > >
> > > > If performance is a motivation, why would the guest want to do two
> > > > conversions instead of just doing internal memcpy() to/from a private
> > > > page? I would be quite surprised if multiple exits and TLB shootdowns are
> > > > actually faster, especially at any kind of scale where zapping stage-2
> > > > PTEs will cause lock contention and IPIs.
> > >
> > > I don't know the numbers or all the details, but this is arm64, which is a
> > > rather better architecture than x86 in this regard. So maybe it's not so
> > > bad, at least in very simple cases, ignoring all implementation details.
> > > (But see below.) Also the systems in question tend to have fewer CPUs than
> > > some of the massive x86 systems out there.
> >
> > Yep. I can try to do some measurements if that's really necessary, but
> > I'm convinced that, for us, the cost of the TLBI for the
> > shared->private conversion is going to be significantly smaller than
> > the cost of memcpy'ing the buffer twice in the guest.
>
> It's not just the TLB shootdown, the VM-Exits aren't free.
Ack, but we can at least work on the rest (number of exits, locking, ...).
The cost of the memcpy and the TLBI, however, is irreducible.
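
To make the comparison concrete, here is a rough guest-side sketch of
the flow I described above. hyp_call(), the function IDs and
decrypt_frame() are all made up for illustration; this is not pKVM's
actual guest ABI:

#define HYP_MEM_UNSHARE		1	/* hypothetical function ID */
#define HYP_MEM_SHARE		2	/* hypothetical function ID */

static void process_frame(void *buf, size_t size)
{
	/* shared -> private: one exit, stage-2 update, TLBI at EL2 */
	hyp_call(HYP_MEM_UNSHARE, virt_to_phys(buf), size);

	/* operate on the buffer without risking a transient leak */
	decrypt_frame(buf, size);

	/* private -> shared: hand the result back to the host */
	hyp_call(HYP_MEM_SHARE, virt_to_phys(buf), size);
}

The alternative you're suggesting avoids both exits, but touches every
byte of the buffer twice:

	memcpy(priv, buf, size);
	decrypt_frame(priv, size);
	memcpy(buf, priv, size);
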
> And barring non-trivial
> improvements to KVM's MMU, e.g. sharding of mmu_lock, modifying the page tables will
> block all other updates and MMU operations. Taking mmu_lock for read, should arm64
> ever convert to a rwlock, is not an option because KVM needs to block other
> conversions to avoid races.
FWIW the host mmu_lock isn't all that useful for pKVM. The host doesn't
have _any_ control over guest page-tables, and the hypervisor can't
safely rely on the host for locking, so we have hypervisor-level
synchronization.
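
To give an idea (a rough sketch of the concept, not pKVM's actual
code), ownership transitions are serialized by a lock that lives at
EL2, out of the host's reach. hyp_spinlock_t is pKVM's EL2 spinlock
primitive; the page-state helpers below are hypothetical:

enum page_state { OWNED, SHARED_OWNED, SHARED_BORROWED };

static hyp_spinlock_t owner_lock;	/* EL2-private, host can't touch it */

static int do_share(u64 pfn)
{
	int ret;

	hyp_spin_lock(&owner_lock);
	ret = check_state(pfn, OWNED);			/* hypothetical */
	if (!ret)
		ret = set_state(pfn, SHARED_OWNED);	/* hypothetical */
	hyp_spin_unlock(&owner_lock);

	return ret;
}

So whatever the host does with its mmu_lock, the transition itself
stays safe.
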
> Hmm, though batching multiple pages into a single request would mitigate most of
> the overhead.
Yep, there are a few tricks we can play to make this fairly efficient in
the most common cases. And fine-grained locking at EL2 is really high up
on the todo list :-)
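
For instance, something along the lines of a ranged conversion
hypercall (purely illustrative, not an existing interface) would
amortize the exit and the TLBI over the whole buffer:

static int guest_unshare_range(phys_addr_t ipa, unsigned long nr_pages)
{
	/* one exit, one stage-2 walk, one ranged TLBI at EL2 */
	return hyp_call(HYP_MEM_UNSHARE_RANGE, ipa, nr_pages);
}
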
Thanks,
Quentin