Message-ID: <ZxfHNo1dUVcOLJYK@google.com>
Date: Tue, 22 Oct 2024 08:39:34 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>, Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org, stable@...r.kernel.org,
Hugh Dickins <hughd@...gle.com>, kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v2] mm: page_alloc: move mlocked flag clearance into free_pages_prepare()

On Tue, Oct 22, 2024, Yosry Ahmed wrote:
> On Mon, Oct 21, 2024 at 9:33 PM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
> >
> > On Tue, Oct 22, 2024 at 04:47:19AM +0100, Matthew Wilcox wrote:
> > > On Tue, Oct 22, 2024 at 02:14:39AM +0000, Roman Gushchin wrote:
> > > > On Mon, Oct 21, 2024 at 09:34:24PM +0100, Matthew Wilcox wrote:
> > > > > On Mon, Oct 21, 2024 at 05:34:55PM +0000, Roman Gushchin wrote:
> > > > > > Fix it by moving the mlocked flag clearance down to
> > > > > > free_pages_prepare().
> > > > >
> > > > > Urgh, I don't like this new reference to folio in free_pages_prepare().
> > > > > It feels like a layering violation. I'll think about where else we
> > > > > could put this.
> > > >
> > > > I agree, but it feels like it needs quite some work to do it in a nicer way,
> > > > and no way it can be backported to older kernels. As for this fix, I don't
> > > > have better ideas...
> > >
> > > Well, what is KVM doing that causes this page to get mapped to userspace?
> > > Don't tell me to look at the reproducer as it is 403 Forbidden. All I
> > > can tell is that it's freed with vfree().
> > >
> > > Is it from kvm_dirty_ring_get_page()? That looks like the obvious thing,
> > > but I'd hate to spend a lot of time on it and then discover I was looking
> > > at the wrong thing.
> >
> > One of the pages is vcpu->run, others belong to kvm->coalesced_mmio_ring.
>
> Looking at kvm_vcpu_fault(), it seems like after mmap'ing the fd
> returned by KVM_CREATE_VCPU we can access one of the following:
> - vcpu->run
> - vcpu->arch.pio_data
> - vcpu->kvm->coalesced_mmio_ring
> - a page returned by kvm_dirty_ring_get_page()
>
> It doesn't seem like any of these are reclaimable,

Correct, these are all kernel allocated pages that KVM exposes to userspace to
facilitate bidirectional sharing of large chunks of data.
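
(Roughly, the pattern is a ->mmap implementation whose ->fault callback hands
back the kernel-allocated page for a given file offset.  The sketch below is
illustrative only and simplified from memory: the real kvm_vcpu_fault() has
more cases and arch/config guards, and the function name here is made up.)

/*
 * Illustrative sketch, not the exact KVM code: the file's ->fault handler
 * picks a kernel-allocated page based on the page offset and hands it back
 * to the fault path, so userspace and the kernel share the same physical
 * page.
 */
static vm_fault_t vcpu_shared_page_fault(struct vm_fault *vmf)
{
        struct kvm_vcpu *vcpu = vmf->vma->vm_file->private_data;
        struct page *page;

        if (vmf->pgoff == 0)
                page = virt_to_page(vcpu->run);
        else if (vmf->pgoff == KVM_PIO_PAGE_OFFSET)
                page = virt_to_page(vcpu->arch.pio_data);
        else
                return VM_FAULT_SIGBUS;

        get_page(page);
        vmf->page = page;
        return 0;
}
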
> why is mlock()'ing them supported to begin with?

Because no one realized it would be problematic, and KVM would have had to go out
of its way to prevent mlock().
> Even if we don't want mlock() to err in this case, shouldn't we just do
> nothing?

Ideally, yes.
> I see a lot of checks at the beginning of mlock_fixup() to check
> whether we should operate on the vma, perhaps we should also check for
> these KVM vmas?

Definitely not. KVM may be doing something unexpected, but the VMA certainly
isn't unique enough to warrant mm/ needing dedicated handling.
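
(For reference, the gate being referred to sits at the top of mlock_fixup();
roughly, paraphrased from mm/mlock.c and subject to change across kernel
versions:)

        /*
         * Paraphrased from the top of mlock_fixup(): VMAs that are "special"
         * or otherwise unmlockable are silently skipped rather than failing
         * mlock().  VM_SPECIAL is
         * (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP).
         */
        if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
            is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
            vma_is_dax(vma) || vma_is_secretmem(vma))
                /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
                goto out;
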
Focusing on KVM is likely a waste of time. There are probably other subsystems
and/or drivers that .mmap() kernel allocated memory in the same way. Odds are
good KVM is just the messenger, because syzkaller knows how to beat on KVM. And
even if there aren't any other existing cases, nothing would prevent them from
coming along in the future.

> Or maybe try to set VM_SPECIAL in kvm_vcpu_mmap()? I am not
> sure tbh, but this doesn't seem right.

Agreed. VM_DONTEXPAND is the only VM_SPECIAL flag that is remotely appropriate,
but setting VM_DONTEXPAND could theoretically break userspace, and other than
preventing mlock(), there is no reason why the VMA can't be expanded. I doubt
any userspace VMM is actually remapping and expanding a vCPU mapping, but trying
to fudge around this outside of core mm/ feels kludgy and has the potential to
turn into a game of whack-a-mole.
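
(For completeness, the rejected idea would amount to a one-liner in KVM's mmap
handler, sketched below with the vm_flags_set() accessor from recent kernels;
illustration only, not a proposal.)

/*
 * Sketch of the approach discussed (and rejected) above: opt the vCPU
 * mapping out of expansion, which also makes it VM_SPECIAL and thus skipped
 * by mlock_fixup().
 */
static int kvm_vcpu_mmap(struct file *file, struct vm_area_struct *vma)
{
        vma->vm_ops = &kvm_vcpu_vm_ops;
        vm_flags_set(vma, VM_DONTEXPAND);
        return 0;
}
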
> FWIW, I think moving the mlock clearing from __page_cache_release()
> to free_pages_prepare() (or another common function in the page
> freeing path) may be the right thing to do in its own right. I am just
> wondering why we are not questioning the mlock() on the KVM vCPU
> mapping to begin with.
>
> Is there a use case for this that I am missing?

Not that I know of; I suspect mlock() is allowed simply because it's allowed by
default.
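
(For context, the hunk under discussion mirrors the mlock handling that
currently lives in __page_cache_release(); roughly the sketch below, including
the page-to-folio conversion that triggers Willy's layering complaint above.
Not the exact patch.)

        /*
         * Sketch, not the exact patch: clear the mlocked state when the page
         * is freed, mirroring __page_cache_release().  The folio conversion
         * inside free_pages_prepare() is the part Willy objects to above.
         */
        struct folio *folio = page_folio(page);

        if (unlikely(folio_test_mlocked(folio))) {
                long nr_pages = folio_nr_pages(folio);

                __folio_clear_mlocked(folio);
                zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
                count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
        }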