Date:   Sat, 11 May 2019 21:11:42 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     Kees Cook <keescook@...omium.org>
Cc:     Laura Abbott <labbott@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Eric Biggers <ebiggers@...nel.org>,
        Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > This feature continues to cause more problems than it solves[1]. Its
> > > intention was to check the bounds of page-allocator allocations by using
> > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > markings. This work has been on hold and there is an argument[2]
> > > that such markings are not even the correct signal for checking for
> > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > it entirely. It can be trivially reverted if/when a better solution for
> > > tracking page allocator sizes is found.
> > > 
> > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
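
For anyone skimming past the __GFP_COMP detail: the point is that only a
compound allocation lets you recover the head page from an arbitrary
address inside the span.  A tiny illustration (mine, not from the patch;
error handling omitted):

	/*
	 * Illustration only: with __GFP_COMP the tail pages point back at
	 * the head, so virt_to_head_page() gives the same answer anywhere
	 * inside the allocation.  Without it, every page looks like an
	 * independent order-0 allocation and the span check has nothing
	 * to go on.
	 */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);
	void *base = page_address(page);

	/* Same head page for any address in the 4-page allocation. */
	WARN_ON(virt_to_head_page(base + 3 * PAGE_SIZE) != page);
	__free_pages(page, 2);
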
> > 
> > I agree the page spanning is broken but is it worth keeping the
> > checks against __rodata __bss etc.?
> 
> They're all just white-listing later checks (except RODATA which is
> doing a cheap RO test which is redundant on any architecture with actual
> rodata...) so they don't have any value in staying without the rest of
> the page allocator logic.
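
(For reference, the whitelist checks being talked about here are roughly
the ones below; quoted from memory of check_page_span(), so treat this as
a sketch rather than the exact tree state.)

	/* Allow reads of kernel rodata region (if not marked as Reserved). */
	if (ptr >= (const void *)__start_rodata &&
	    end <= (const void *)__end_rodata) {
		if (!to_user)
			usercopy_abort("rodata", NULL, to_user, 0, n);
		return;
	}

	/* Allow kernel data region (if not marked as Reserved). */
	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
		return;

	/* Allow kernel bss region (if not marked as Reserved). */
	if (ptr >= (const void *)__bss_start &&
	    end <= (const void *)__bss_stop)
		return;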
> 
> > > -	/* Is the object wholly within one base page? */
> > > -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > -		return;
> > > -
> > > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > -	endpage = virt_to_head_page(end);
> > > -	if (likely(endpage == page))
> > > -		return;
> 
> We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> to, but that's such a corner case of a corner case, I'm not sure it's
> worth doing the virt_to_head_page() across the whole span to figure
> it out.
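
(For completeness, that CMA/reserved walk is roughly the below, again from
memory of the code being removed; a sketch only.)

	bool is_reserved = PageReserved(page);
	bool is_cma = is_migrate_cma_page(page);

	if (!is_reserved && !is_cma)
		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);

	/* This is the part that needs virt_to_head_page() on every page. */
	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
		page = virt_to_head_page(ptr);
		if (is_reserved && !PageReserved(page))
			usercopy_abort("spans Reserved and non-Reserved pages",
				       NULL, to_user, 0, n);
		if (is_cma && !is_migrate_cma_page(page))
			usercopy_abort("spans CMA and non-CMA pages", NULL,
				       to_user, 0, n);
	}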

I'd delete that first check (the base-page one), because it's a subset of
the second check:

	/* Is the object wholly within a single (base or compound) page? */
	endpage = virt_to_head_page(end);
	if (likely(endpage == page))
		return;

	/*
	 * If the object is larger than any single page-allocator
	 * allocation can be, the start and end must come from separate
	 * allocations.
	 */
	if (n >= (PAGE_SIZE << MAX_ORDER))
		usercopy_abort("spans too many pages", NULL, to_user, 0, n);

	/*
	 * If neither page is compound, we can't tell if the object is
	 * within a single allocation or not
	 */
	if (!PageCompound(page) && !PageCompound(endpage))
		return;
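
Presumably the fall-through for the remaining case (at least one side is
compound but the head pages differ) is then the hard failure; untested:

	/*
	 * At least one end is part of a compound allocation and they do
	 * not share a head page, so the object straddles two allocations.
	 */
	usercopy_abort("spans multiple allocations", NULL, to_user, 0, n);
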

> I really wish we had size of allocation reliably held somewhere. We'll
> need it for doing memory tagging of the page allocator too...

I think we'll need to store those allocations in a separate data structure
on the side.  As far as the rest of the kernel is concerned, those struct
pages belong to whoever allocated them once the page allocator has handed
them out.
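
Something like an xarray keyed by pfn could hold it; a rough sketch
(function names are made up, and locking plus the don't-recurse-into-the-
allocator problem are hand-waved):

	#include <linux/mm.h>
	#include <linux/xarray.h>

	/* pfn -> allocation order; the name is just for illustration. */
	static DEFINE_XARRAY(alloc_size_map);

	/* Hypothetical hook run after a successful page allocation. */
	static void record_page_alloc(struct page *page, unsigned int order)
	{
		unsigned long pfn = page_to_pfn(page);

		/* Multi-index entry so a lookup on any interior pfn works. */
		xa_store_range(&alloc_size_map, pfn, pfn + (1UL << order) - 1,
			       xa_mk_value(order), GFP_ATOMIC);
	}

	/* How many bytes does the allocation containing @page span? */
	static unsigned long alloc_size_of(struct page *page)
	{
		void *entry = xa_load(&alloc_size_map, page_to_pfn(page));

		/* Unknown pages are treated as plain order-0 allocations. */
		return PAGE_SIZE << (entry ? xa_to_value(entry) : 0);
	}

The free path would have to erase the range again, and the bookkeeping
allocations themselves must not recurse back into the page allocator,
which is the awkward part.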
