Message-ID: <20220615091759.GB1823790@chaop.bj.intel.com>
Date: Wed, 15 Jun 2022 17:17:59 +0800
From: Chao Peng <chao.p.peng@...ux.intel.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vishal Annapurve <vannapurve@...gle.com>,
Marc Orr <marcorr@...gle.com>, kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-api@...r.kernel.org,
linux-doc@...r.kernel.org, qemu-devel@...gnu.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86 <x86@...nel.org>, "H . Peter Anvin" <hpa@...or.com>,
Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
Steven Price <steven.price@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Jun Nakajima <jun.nakajima@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, aarcange@...hat.com,
ddutile@...hat.com, dhildenb@...hat.com,
Quentin Perret <qperret@...gle.com>,
Michael Roth <michael.roth@....com>, mhocko@...e.com
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM
guest private memory
On Tue, Jun 14, 2022 at 01:59:41PM -0700, Andy Lutomirski wrote:
> On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > > On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <chao.p.peng@...ux.intel.com> wrote:
> > > >
> > > > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > > > >
> > > > > One argument is that userspace can simply rely on cgroups to detect misbehaving
> > > > > guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM
> > > > > kill from the host is typically considered a _host_ issue and will be treated as
> > > > > a missed SLO.
> > > > >
> > > > > An idea for handling this in the kernel without too much complexity would be to
> > > > > add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page faults from
> > > > > allocating pages, i.e. holes can only be filled by an explicit fallocate(). Minor
> > > > > faults, e.g. due to NUMA balancing stupidity, and major faults due to swap would
> > > > > still work, but writes to previously unreserved/unallocated memory would get a
> > > > > SIGSEGV even though the memory is mapped. That would allow the userspace VMM to prevent
> > > > > unintentional allocations without having to coordinate unmapping/remapping across
> > > > > multiple processes.
> > > >
> > > > Since this is mainly for shared memory and the motivation is catching
> > > > misbehaved accesses, can we use mprotect(PROT_NONE) for this? We can mark
> > > > those ranges backed by the private fd as PROT_NONE during the conversion so
> > > > subsequent misbehaved accesses will be blocked instead of silently causing
> > > > double allocation.
> >
> > PROT_NONE, a.k.a. mprotect(), has the same vma downsides as munmap().
Yes, right.
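
To spell out what I meant there, the conversion-time marking would be
just a one-liner; a sketch, where the address/length are placeholders
that would come from the memslot:

  #include <sys/mman.h>

  /*
   * On a shared->private conversion, revoke access to the shared
   * mapping so a stray touch faults instead of silently allocating
   * a second copy of the page.
   */
  static int mark_private(void *shared_va, size_t len)
  {
          return mprotect(shared_va, len, PROT_NONE);
  }

but as noted, this churns vmas just like munmap() does.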
> >
> > > This patch series is fairly close to implementing a rather more
> > > efficient solution. I'm not familiar enough with hypervisor userspace
> > > to really know if this would work, but:
> > >
> > > What if shared guest memory could also be file-backed, either in the
> > > same fd or with a second fd covering the shared portion of a memslot?
> > > This would allow changes to the backing store (punching holes, etc) to
> > > be done without mmap_lock or host-userspace TLB flushes? Depending on
> > > what the guest is doing with its shared memory, userspace might need
> > > the memory mapped or it might not.
> >
> > That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea. The issue,
> > unless I'm misreading code, is that punching a hole in the shared memory backing
> > store doesn't prevent reallocating that hole on fault, i.e. a helper process that
> > keeps a valid mapping of guest shared memory can silently fill the hole.
> >
> > What we're hoping to achieve is a way to prevent allocating memory without a very
> > explicit action from userspace, e.g. fallocate().
>
> Ah, I misunderstood. I thought your goal was to mmap it and prevent
> page faults from allocating.
I think we still need the mmap, but want to prevent allocating when
userspace touches a previously mmapped area where the page has never
been filled. I don't have a clear answer on whether other operations
like read/write should also be prevented (probably yes). Only after an
explicit fallocate() to allocate the page would these operations act
normally.
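
To illustrate the flow with the proposed seal (hypothetical:
F_SEAL_FAULT_ALLOCATIONS does not exist today, its value below is made
up, and error handling is omitted):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/mman.h>

  #define F_SEAL_FAULT_ALLOCATIONS 0x0020   /* hypothetical */
  #define LEN (2UL << 20)

  void flow(void)
  {
          int fd = memfd_create("guest-shared", MFD_ALLOW_SEALING);

          ftruncate(fd, LEN);
          fcntl(fd, F_ADD_SEALS, F_SEAL_FAULT_ALLOCATIONS);

          char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

          /*
           * p[0] = 1; would SIGSEGV here: the range is still a hole
           * and page faults may no longer allocate.
           */

          fallocate(fd, 0, 0, LEN);   /* explicit allocation... */
          p[0] = 1;                   /* ...now the access works  */
  }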
>
> It is indeed the case (and has been since before quite a few of us
> were born) that a hole in a sparse file is logically just a bunch of
> zeros. A way to make a file for which a hole is an actual hole seems
> like it would solve this problem nicely. It could also be solved more
> specifically for KVM by making sure that the private/shared mode that
> userspace programs is strict enough to prevent accidental allocations
> -- if a GPA is definitively private, shared, neither, or (potentially,
> on TDX only) both, then a page that *isn't* shared will never be
> accidentally allocated by KVM.
KVM is clever enough not to allocate since it knows whether a GPA is
shared or not. In this case it's the host userspace that can cause the
allocation, and it is too complex to check on every access from the
guest.
> If the shared backing is not mmapped,
> it also won't be accidentally allocated by host userspace on a stray
> or careless write.
As said above, mmap is still preferred; otherwise too many changes are
needed for the userspace VMM.
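
E.g. on a shared->private conversion the VMM could keep its mapping
and only punch a hole in the shared backing store (a sketch; fd,
offset and length are placeholders):

  #define _GNU_SOURCE
  #include <fcntl.h>

  static int convert_to_private(int shared_fd, off_t offset, off_t len)
  {
          /*
           * Frees the backing pages without touching the vma; with a
           * seal like the above, a later stray touch faults instead
           * of silently refilling the hole.
           */
          return fallocate(shared_fd,
                           FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                           offset, len);
  }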
Thanks,
Chao
>
>
> --Andy