Message-ID: <201907021009.3CF9EBB4@keescook>
Date: Tue, 2 Jul 2019 10:11:58 -0700
From: Kees Cook <keescook@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Eric Biggers <ebiggers@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Laura Abbott <labbott@...hat.com>,
Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
On Fri, May 10, 2019 at 01:43:36PM -0700, Kees Cook wrote:
> This feature continues to cause more problems than it solves[1]. Its
> intention was to check the bounds of page-allocator allocations by using
> __GFP_COMP, for which we would need to find all missing __GFP_COMP
> markings. This work has been on hold and there is an argument[2]
> that such markings are not even the correct signal for checking for
> same-allocation pages. Instead of depending on BROKEN, this just removes
> it entirely. It can be trivially reverted if/when a better solution for
> tracking page allocator sizes is found.
>
> [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
>
> Suggested-by: Eric Biggers <ebiggers@...nel.org>
> Signed-off-by: Kees Cook <keescook@...omium.org>
So, after looking at this more, I think I'm going to keep this patch,
and we can add new sanity checks later on a per-page-flag basis. (See below.)
Andrew, can you apply this to -mm please?
> ---
> mm/usercopy.c | 67 ------------------------------------------------
> security/Kconfig | 11 --------
> 2 files changed, 78 deletions(-)
>
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 14faadcedd06..15dc1bf03303 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
> usercopy_abort("null address", NULL, to_user, ptr, n);
> }
>
> -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> -static inline void check_page_span(const void *ptr, unsigned long n,
> - struct page *page, bool to_user)
> -{
> -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> - const void *end = ptr + n - 1;
> - struct page *endpage;
> - bool is_reserved, is_cma;
> -
> - /*
> - * Sometimes the kernel data regions are not marked Reserved (see
> - * check below). And sometimes [_sdata,_edata) does not cover
> - * rodata and/or bss, so check each range explicitly.
> - */
> -
> - /* Allow reads of kernel rodata region (if not marked as Reserved). */
> - if (ptr >= (const void *)__start_rodata &&
> - end <= (const void *)__end_rodata) {
> - if (!to_user)
> - usercopy_abort("rodata", NULL, to_user, 0, n);
> - return;
> - }
> -
> - /* Allow kernel data region (if not marked as Reserved). */
> - if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> - return;
> -
> - /* Allow kernel bss region (if not marked as Reserved). */
> - if (ptr >= (const void *)__bss_start &&
> - end <= (const void *)__bss_stop)
> - return;
> -
> - /* Is the object wholly within one base page? */
> - if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> - ((unsigned long)end & (unsigned long)PAGE_MASK)))
> - return;
> -
> - /* Allow if fully inside the same compound (__GFP_COMP) page. */
> - endpage = virt_to_head_page(end);
> - if (likely(endpage == page))
> - return;
> -
> - /*
> - * Reject if range is entirely either Reserved (i.e. special or
> - * device memory), or CMA. Otherwise, reject since the object spans
> - * several independently allocated pages.
> - */
> - is_reserved = PageReserved(page);
> - is_cma = is_migrate_cma_page(page);
> - if (!is_reserved && !is_cma)
> - usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> -
> - for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> - page = virt_to_head_page(ptr);
> - if (is_reserved && !PageReserved(page))
> - usercopy_abort("spans Reserved and non-Reserved pages",
> - NULL, to_user, 0, n);
> - if (is_cma && !is_migrate_cma_page(page))
> - usercopy_abort("spans CMA and non-CMA pages", NULL,
> - to_user, 0, n);
> - }
> -#endif
> -}
> -
> static inline void check_heap_object(const void *ptr, unsigned long n,
> bool to_user)
> {
> @@ -236,9 +172,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
> if (PageSlab(page)) {
> /* Check slab allocator for flags and size. */
> __check_heap_object(ptr, n, page, to_user);
> - } else {
> - /* Verify object does not incorrectly span multiple pages. */
> - check_page_span(ptr, n, page, to_user);
> }
In the future, instead of this catch-all "else", we can add things like:
} else if (PageCompound(page)) {
... do some check for compound pages ...
} else if (PageReserved(page))
... etc ...
}
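For example, just as a hypothetical sketch (not part of this patch; the
helper name and placement are illustrative), a compound-page check could
bound the copy by the size of the compound allocation:

static inline void check_compound_page(const void *ptr, unsigned long n,
				       struct page *page, bool to_user)
{
	/* page is the head page (from virt_to_head_page() above). */
	unsigned long size = PAGE_SIZE << compound_order(page);
	unsigned long offset = (unsigned long)ptr -
			       (unsigned long)page_address(page);

	/* Reject copies that walk off the end of the compound allocation. */
	if (offset + n > size)
		usercopy_abort("compound page overflow", NULL, to_user,
			       offset, n);
}

That would catch overflows out of a __GFP_COMP allocation without
needing the old page-span heuristics.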
But for 5.3, I think we need to just entirely drop the PAGESPAN thing.
-Kees
> }
>
> diff --git a/security/Kconfig b/security/Kconfig
> index 353cfef71d4e..8392647f5a4c 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -176,17 +176,6 @@ config HARDENED_USERCOPY_FALLBACK
> Booting with "slab_common.usercopy_fallback=Y/N" can change
> this setting.
>
> -config HARDENED_USERCOPY_PAGESPAN
> - bool "Refuse to copy allocations that span multiple pages"
> - depends on HARDENED_USERCOPY
> - depends on EXPERT
> - help
> - When a multi-page allocation is done without __GFP_COMP,
> - hardened usercopy will reject attempts to copy it. There are,
> - however, several cases of this in the kernel that have not all
> - been removed. This config is intended to be used only while
> - trying to find such users.
> -
> config FORTIFY_SOURCE
> bool "Harden common str/mem functions against buffer overflows"
> depends on ARCH_HAS_FORTIFY_SOURCE
> --
> 2.17.1
>
>
> --
> Kees Cook
--
Kees Cook