Message-ID: <201905111657.76FE0BDCEC@keescook>
Date:   Sat, 11 May 2019 17:03:08 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Laura Abbott <labbott@...hat.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Eric Biggers <ebiggers@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> On 5/10/19 3:43 PM, Kees Cook wrote:
> > This feature continues to cause more problems than it solves[1]. Its
> > intention was to check the bounds of page-allocator allocations by using
> > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > markings. This work has been on hold and there is an argument[2]
> > that such markings are not even the correct signal for checking for
> > same-allocation pages. Instead of depending on BROKEN, this just removes
> > it entirely. It can be trivially reverted if/when a better solution for
> > tracking page allocator sizes is found.
> > 
> > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> > 
> > Suggested-by: Eric Biggers <ebiggers@...nel.org>
> > Signed-off-by: Kees Cook <keescook@...omium.org>
> > ---
> >   mm/usercopy.c    | 67 ------------------------------------------------
> >   security/Kconfig | 11 --------
> >   2 files changed, 78 deletions(-)
> > 
> > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > index 14faadcedd06..15dc1bf03303 100644
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
> >   		usercopy_abort("null address", NULL, to_user, ptr, n);
> >   }
> > -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> > -static inline void check_page_span(const void *ptr, unsigned long n,
> > -				   struct page *page, bool to_user)
> > -{
> > -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> > -	const void *end = ptr + n - 1;
> > -	struct page *endpage;
> > -	bool is_reserved, is_cma;
> > -
> > -	/*
> > -	 * Sometimes the kernel data regions are not marked Reserved (see
> > -	 * check below). And sometimes [_sdata,_edata) does not cover
> > -	 * rodata and/or bss, so check each range explicitly.
> > -	 */
> > -
> > -	/* Allow reads of kernel rodata region (if not marked as Reserved). */
> > -	if (ptr >= (const void *)__start_rodata &&
> > -	    end <= (const void *)__end_rodata) {
> > -		if (!to_user)
> > -			usercopy_abort("rodata", NULL, to_user, 0, n);
> > -		return;
> > -	}
> > -
> > -	/* Allow kernel data region (if not marked as Reserved). */
> > -	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> > -		return;
> > -
> > -	/* Allow kernel bss region (if not marked as Reserved). */
> > -	if (ptr >= (const void *)__bss_start &&
> > -	    end <= (const void *)__bss_stop)
> > -		return;
> > -
> 
> 
> I agree the page spanning is broken, but is it worth keeping the
> checks against __rodata, __bss, etc.?

They're all just whitelists for the later checks (except the rodata one,
which does a cheap read-only test that's redundant on any architecture
with actual rodata...), so they have no value in staying without the
rest of the page allocator logic.
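
For context, a rough sketch of the __check_object_size() flow that
survives this removal (paraphrased from mm/usercopy.c of this era, not
a verbatim quote): kernel-text exposure is still caught separately,
which is part of why the whitelists lose their purpose once the
page-span logic goes.

void __check_object_size(const void *ptr, unsigned long n, bool to_user)
{
	/* Skip all tests if size is zero. */
	if (!n)
		return;

	/* Reject NULL and wrapping address ranges. */
	check_bogus_address((const unsigned long)ptr, n, to_user);

	/* Reject objects that span or escape stack frames. */
	switch (check_stack_object(ptr, n)) {
	case NOT_STACK:
		break;			/* not on stack; check the heap below */
	case GOOD_FRAME:
	case GOOD_STACK:
		return;
	default:
		usercopy_abort("process stack", NULL, to_user, 0, n);
	}

	/* Reject slab objects that overflow their allocation. */
	check_heap_object(ptr, n, to_user);

	/* Reject copies that would expose kernel text. */
	check_kernel_text_object((const unsigned long)ptr, n, to_user);
}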

> 
> > -	/* Is the object wholly within one base page? */
> > -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > -		return;
> > -
> > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > -	endpage = virt_to_head_page(end);
> > -	if (likely(endpage == page))
> > -		return;
> > -
> > -	/*
> > -	 * Reject if range is entirely either Reserved (i.e. special or
> > -	 * device memory), or CMA. Otherwise, reject since the object spans
> > -	 * several independently allocated pages.
> > -	 */
> > -	is_reserved = PageReserved(page);
> > -	is_cma = is_migrate_cma_page(page);
> > -	if (!is_reserved && !is_cma)
> > -		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> > -
> > -	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> > -		page = virt_to_head_page(ptr);
> > -		if (is_reserved && !PageReserved(page))
> > -			usercopy_abort("spans Reserved and non-Reserved pages",
> > -				       NULL, to_user, 0, n);
> > -		if (is_cma && !is_migrate_cma_page(page))
> > -			usercopy_abort("spans CMA and non-CMA pages", NULL,
> > -				       to_user, 0, n);
> > -	}

We _could_ keep the mixed CMA/Reserved/neither check if we really wanted
to, but that's such a corner case of a corner case that I'm not sure
it's worth doing the virt_to_head_page() walk across the whole span to
figure it out.
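
(Aside, for readers following the reasoning: the span walk above leans
on __GFP_COMP because virt_to_head_page() only resolves every page of
an allocation to a single head page when the allocation is compound.
A minimal illustrative sketch, not taken from the patch:)

	/*
	 * Illustrative only: with __GFP_COMP, every tail page of a
	 * high-order allocation points back at its head page, so any
	 * address inside the buffer resolves to the same struct page.
	 * Without it, each page stands alone and "same allocation"
	 * cannot be inferred from the pages themselves.
	 */
	struct page *pages = alloc_pages(GFP_KERNEL | __GFP_COMP, 2); /* 4 pages */
	void *buf = page_address(pages);

	/* All four pages map back to the same head page... */
	WARN_ON(virt_to_head_page(buf + 3 * PAGE_SIZE) != pages);

	/* ...which is exactly the check_page_span() "endpage == page" test. */
	__free_pages(pages, 2);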

I really wish we had the allocation size reliably recorded somewhere.
We'll need it for memory tagging of the page allocator too...
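
(A loudly hypothetical sketch of the kind of helper that wish implies,
assuming __GFP_COMP were reliably set on all multi-page allocations;
compound_head()/compound_order() are real kernel APIs, the helper
itself is not:)

	/*
	 * Hypothetical helper, NOT existing kernel API: recover the size
	 * of a page-allocator allocation. Only works if the allocation
	 * was made with __GFP_COMP; for a plain high-order allocation the
	 * tail pages carry no back-pointer and the true size is
	 * unrecoverable, which is exactly the gap lamented above.
	 */
	static inline size_t page_alloc_size(struct page *page)
	{
		struct page *head = compound_head(page);

		if (!PageCompound(page))
			return PAGE_SIZE;	/* order-0, or an untracked high-order page */

		return PAGE_SIZE << compound_order(head);
	}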

-- 
Kees Cook
