Message-ID: <ZG5uB4SiaS92YEWr@google.com>
Date:   Wed, 24 May 2023 13:05:27 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     David Stevens <stevensd@...omium.org>,
        Marc Zyngier <maz@...nel.org>,
        Oliver Upton <oliver.upton@...ux.dev>,
        Paolo Bonzini <pbonzini@...hat.com>,
        linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v6 1/4] KVM: mmu: introduce new gfn_to_pfn_noref functions

On Wed, May 24, 2023, Peter Xu wrote:
> On Wed, May 24, 2023 at 11:29:45AM -0700, Sean Christopherson wrote:
> > On Wed, May 24, 2023, Peter Xu wrote:
> > > On Wed, May 24, 2023 at 09:46:13AM -0700, Sean Christopherson wrote:
> > > > If we hack kvm_pfn_to_refcounted_page(), then all of those protections are lost
> > > > because KVM would drop its assertions and also skip dirtying pages, i.e. would
> > > > effectively suppress the latent detection by check_new_page_bad().
> > > 
> > > So it's probably that I totally have no idea what the attributes of those
> > > special pages are, so I don't understand well enough why we need to handle
> > > those pages differently from e.g. PFNMAP pages, or what the benefits are.
> > > 
> > > I think what I can tell is that they're pages that don't have
> > > PageCompound bits set on either head or tail, however it's still a
> > > higher-order large page.  Is there an example of how these pages are used
> > > and allocated?  Why would we need those pages, and do these pages need
> > > to be set dirty/accessed after all?
> > 
> > The use case David is interested in is where an AMD GPU driver kmallocs() a
> > chunk of memory, lets it be mmap()'d by userspace, and userspace then maps it
> > into the guest for a virtual (passthrough?) GPU.  For all intents and purposes,
> > it's normal memory, just not refcounted.
> 
> I'm not familiar enough with kmalloc, but I think kmalloc for large
> chunks will behave the same as alloc_pages, and I thought it should also be a
> compound page already.  If that needs to be mmap()ed to a userspace app then I
> assume it would mostly go through kmalloc_large().

Sorry, by "kmalloc()" I was handwaving at all of the variations of kernel allocated
memory.  From a separate thread[*], looks like the actual usage is a direct call to
alloc_pages() that deliberately doesn't set __GFP_COMP.  Note, I'm pretty sure the
comment about "mapping pages directly into userspace" being illegal really means
something like "don't allow these pages to be gup()'d or mapped via standard mmap()".
IIUC, ttm_pool_alloc() fills tt->pages and then ttm_bo_vm_fault_reserved() does
vmf_insert_pfn_prot() to shove the pfn into userspace.

  static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
					unsigned int order)
  {
	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
	struct ttm_pool_dma *dma;
	struct page *p;
	void *vaddr;

	/* Don't set the __GFP_COMP flag for higher order allocations.
	 * Mapping pages directly into an userspace process and calling
	 * put_page() on a TTM allocated page is illegal.
	 */
	if (order)
		gfp_flags |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN |
			__GFP_KSWAPD_RECLAIM;

	if (!pool->use_dma_alloc) {
		p = alloc_pages(gfp_flags, order);
		if (p)
			p->private = order;
		return p;
	}

[*] https://lore.kernel.org/all/20220815095423.11131-1-dmitry.osipenko@collabora.com
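
Roughly, the pfn-based path looks something like the below.  This is a hand-wavy
sketch, not code from TTM or from this thread; the function name and the use of
vma->vm_private_data are invented for illustration, the only "real" piece is
vmf_insert_pfn_prot().

  #include <linux/mm.h>

  /*
   * Illustrative fault handler: map a driver-owned, non-refcounted page into
   * userspace by raw pfn, so the core MM never takes a struct page reference
   * on behalf of the mapping (the VMA is assumed to be VM_PFNMAP/VM_MIXEDMAP).
   */
  static vm_fault_t sketch_vm_fault(struct vm_fault *vmf)
  {
	struct vm_area_struct *vma = vmf->vma;
	/* Page stashed by the driver at mmap() time, e.g. from tt->pages[]. */
	struct page *p = vma->vm_private_data;

	return vmf_insert_pfn_prot(vma, vmf->address, page_to_pfn(p),
				   vma->vm_page_prot);
  }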

> kmalloc -> kmalloc_large -> __kmalloc_large_node:
> 
> 	flags |= __GFP_COMP;
> 
> Then when the new page allocated and being prepared (prep_new_page):
> 
> 	if (order && (gfp_flags & __GFP_COMP))
> 		prep_compound_page(page, order);
> 
> I assume prep_compound_page() will make PageCompound return true for those
> pages returned.  So I know I'm still missing something, but I'm not sure
> where... because IIRC we're at least talking about !PageCompound pages.

Yeah, they're !PageCompound().
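
To make that concrete, here's a purely illustrative, kernel-module style sketch (not
from the driver in question, function name made up) of what an order > 0 allocation
without __GFP_COMP looks like from the struct page side:

  #include <linux/gfp.h>
  #include <linux/mm.h>

  static void show_non_compound_high_order(void)
  {
	/* order-2 block, deliberately *without* __GFP_COMP. */
	struct page *head = alloc_pages(GFP_KERNEL, 2);

	if (!head)
		return;

	/*
	 * prep_compound_page() never ran, so PageCompound() is false across
	 * the block and only the first page holds a reference; the trailing
	 * pages report page_count() == 0 even though they're very much in use.
	 */
	WARN_ON(PageCompound(head));
	WARN_ON(page_count(head) != 1);
	WARN_ON(page_count(head + 1) != 0);

	__free_pages(head, 2);
  }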

> > > >  static bool kvm_is_ad_tracked_page(struct page *page)
> > > >  {
> > > > +       /*
> > > > +        * Assert that KVM isn't attempting to mark a freed page as Accessed or
> > > > +        * Dirty, i.e. that KVM's MMU doesn't have a use-after-free bug.  KVM
> > > > +        * (typically) doesn't pin pages that are mapped in KVM's MMU, and
> > > > +        * instead relies on mmu_notifiers to know when a mapping needs to be
> > > > +        * zapped/invalidated.  Unmapping from KVM's MMU must happen _before_
> > > > +        * KVM returns from its mmu_notifier, i.e. the page should have an
> > > > +        * elevated refcount at this point even though KVM doesn't hold a
> > > > +        * reference of its own.
> > > > +        */
> > > > +       if (WARN_ON_ONCE(!page_count(page)))
> > > > +               return false;
> > > > +
> > > >         /*
> > > >          * Per page-flags.h, pages tagged PG_reserved "should in general not be
> > > >          * touched (e.g. set dirty) except by its owner".
> > > > 
> > > 
> > > This looks like a good thing to have, indeed.  But again it doesn't seem
> > > like anything specific to the pages we're discussing here, say, !Compound &&
> > > refcount==0 ones.
> > 
> > The problem is that if KVM ignores refcount==0 pages, then KVM can't distinguish
> > between the legitimate[*] refcount==0 AMD GPU case and a buggy refcount==0
> > use-after-free scenario.  I don't want to make that sacrifice as the legitimate
> > !refcounted use case is a very specific use case, whereas consuming refcounted
> > memory is ubiquitous (outside of maybe AWS).
> > 
> > [*] Consuming !refcounted pages is safe only for flows that are tied into the
> >     mmu_notifiers.  The current proposal/plan is to add an off-by-default module
> >     param that lets userspace opt in to kmap() use of !refcounted memory, e.g.
> >     this case and PFNMAP memory.
> 
> I see.
> 
> I think you mentioned that we can use one special bit in the shadow pte to
> mark such special pages.  Does it mean that your above patch will still
> cover what you wanted to protect even if we use the trick?  Because then
> kvm_is_ad_tracked_page() should only be called when we're sure the special
> bit is not set.  IOW, we can still rule out these pages already, and the
> page_count()==0 check here can still be helpful for tracking KVM bugs?

Yep, exactly.  FWIW, I was thinking that the SPTE bit would flag refcounted pages,
not these "special" pages, but either way would work.  All that matters is that
KVM tracks whether or not the page was refcounted when KVM installed the SPTE.
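
E.g. something along these lines.  Purely hypothetical: the bit position and the
names below are invented for illustration, nothing has been committed to:

  #include <linux/bits.h>
  #include <linux/types.h>

  /* Hypothetical software-available SPTE bit: "backing page was refcounted". */
  #define SPTE_MMU_PAGE_REFCOUNTED	BIT_ULL(59)

  static inline bool is_refcounted_page_spte(u64 spte)
  {
	return spte & SPTE_MMU_PAGE_REFCOUNTED;
  }

  /*
   * The zap/aging paths would then call kvm_is_ad_tracked_page(), and so hit
   * the page_count() assertion above, only for SPTEs that were created for
   * refcounted pages; "special" !refcounted pages get skipped entirely.
   */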
