Message-ID: <cd768a99-5afa-999c-989a-efee66fa0ddb@redhat.com>
Date: Mon, 29 Oct 2018 07:45:14 -0400
From: Chris von Recklinghausen <crecklin@...hat.com>
To: Igor Stoppa <igor.stoppa@...il.com>,
Mimi Zohar <zohar@...ux.vnet.ibm.com>,
Kees Cook <keescook@...omium.org>,
Matthew Wilcox <willy@...radead.org>,
Dave Chinner <david@...morbit.com>,
James Morris <jmorris@...ei.org>,
Michal Hocko <mhocko@...nel.org>,
kernel-hardening@...ts.openwall.com,
linux-integrity@...r.kernel.org,
linux-security-module@...r.kernel.org
Cc: igor.stoppa@...wei.com, Dave Hansen <dave.hansen@...ux.intel.com>,
Jonathan Corbet <corbet@....net>,
Laura Abbott <labbott@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 09/17] prmem: hardened usercopy
On 10/23/2018 05:34 PM, Igor Stoppa wrote:
> Prevent leaks of protected memory to userspace.
> Protection against overwrites from userspace is already available once
> the memory is write protected.
>
> Signed-off-by: Igor Stoppa <igor.stoppa@...wei.com>
> CC: Kees Cook <keescook@...omium.org>
> CC: Chris von Recklinghausen <crecklin@...hat.com>
> CC: linux-mm@...ck.org
> CC: linux-kernel@...r.kernel.org
> ---
> include/linux/prmem.h | 24 ++++++++++++++++++++++++
> mm/usercopy.c | 5 +++++
> 2 files changed, 29 insertions(+)
>
> diff --git a/include/linux/prmem.h b/include/linux/prmem.h
> index cf713fc1c8bb..919d853ddc15 100644
> --- a/include/linux/prmem.h
> +++ b/include/linux/prmem.h
> @@ -273,6 +273,30 @@ struct pmalloc_pool {
> uint8_t mode;
> };
>
> +void __noreturn usercopy_abort(const char *name, const char *detail,
> + bool to_user, unsigned long offset,
> + unsigned long len);
> +
> +/**
> + * check_pmalloc_object() - helper for hardened usercopy
> + * @ptr: the beginning of the memory to check
> + * @n: the size of the memory to check
> + * @to_user: copy to userspace or from userspace
> + *
> + * If the check passes, the function falls through; otherwise it aborts.
> + * The function is inlined to minimize the performance impact of the
> + * extra check that can end up on a hot path.
> + * Non-exhaustive micro-benchmarking with QEMU x86_64 shows a 60% reduction
> + * in the time spent in this fragment when inlined.
> + */
> +static inline
> +void check_pmalloc_object(const void *ptr, unsigned long n, bool to_user)
> +{
> + if (unlikely(__is_wr_after_init(ptr, n) || __is_wr_pool(ptr, n)))
> + usercopy_abort("pmalloc", "accessing pmalloc obj", to_user,
> + (const unsigned long)ptr, n);
> +}
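
Just to confirm I'm reading the intent correctly: with CONFIG_HARDENED_USERCOPY
enabled, something like the fragment below should now die in usercopy_abort()
instead of leaking the value to userspace. (This is my own untested sketch, not
from this series; the __wr_after_init annotation comes from an earlier patch
here, so the exact spelling may be off.)

#include <linux/uaccess.h>
#include <linux/prmem.h>

/* Assumed from earlier patches in this series: places the variable in the
 * write-rare-after-init section covered by __is_wr_after_init(). */
static int secret __wr_after_init = 42;

/* Keep the length non-constant so the runtime usercopy check isn't elided. */
static volatile size_t unconst;

static long leak_secret(void __user *ubuf)
{
	/*
	 * copy_to_user() -> check_object_size() -> __check_object_size()
	 * -> check_pmalloc_object() -> usercopy_abort(), because &secret
	 * falls in the wr_after_init range.
	 */
	return copy_to_user(ubuf, &secret, sizeof(secret) + unconst);
}
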
> +
> /*
> * The write rare functionality is fully implemented as __always_inline,
> * to prevent having an internal function call that is capable of modifying
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 852eb4e53f06..a080dd37b684 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -22,8 +22,10 @@
> #include <linux/thread_info.h>
> #include <linux/atomic.h>
> #include <linux/jump_label.h>
> +#include <linux/prmem.h>
> #include <asm/sections.h>
>
> +
> /*
> * Checks if a given pointer and length is contained by the current
> * stack frame (if possible).
> @@ -284,6 +286,9 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
>
> /* Check for object in kernel to avoid text exposure. */
> check_kernel_text_object((const unsigned long)ptr, n, to_user);
> +
> + /* Check if object is from a pmalloc chunk. */
> + check_pmalloc_object(ptr, n, to_user);
> }
> EXPORT_SYMBOL(__check_object_size);
>
Could you add code somewhere (the lkdtm driver, if possible) to demonstrate
the issue and verify the code change?
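
Something along the lines of the sketch below is what I had in mind. It is
untested and written from memory of the existing lkdtm usercopy tests; the
LKDTM_USERCOPY_PMALLOC name, the pmalloc_create_pool()/pmalloc()/
pmalloc_protect_pool()/pmalloc_destroy_pool() calls and PMALLOC_MODE_RO are my
guesses at the API from earlier patches in this series, so the names may need
adjusting, and it would still need the usual lkdtm.h declaration and
CRASHTYPE() hookup in core.c:

/*
 * Untested sketch; pmalloc API names and PMALLOC_MODE_RO are assumed
 * from earlier patches in this series and may not match exactly.
 */
#include "lkdtm.h"
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/uaccess.h>
#include <linux/prmem.h>

/* Non-constant length so the runtime hardened-usercopy check isn't elided. */
static volatile size_t unconst;

void lkdtm_USERCOPY_PMALLOC(void)
{
	struct pmalloc_pool *pool;
	unsigned long user_addr;
	int *val;

	pool = pmalloc_create_pool(PMALLOC_MODE_RO);
	if (!pool)
		return;
	val = pmalloc(pool, sizeof(*val));
	if (!val)
		goto free_pool;
	*val = 0x5a;
	pmalloc_protect_pool(pool);

	user_addr = vm_mmap(NULL, 0, PAGE_SIZE,
			    PROT_READ | PROT_WRITE,
			    MAP_ANONYMOUS | MAP_PRIVATE, 0);
	if (user_addr >= TASK_SIZE) {
		pr_warn("Failed to allocate user memory\n");
		goto free_pool;
	}

	/* Expected to trip check_pmalloc_object() and usercopy_abort(). */
	pr_info("attempting copy_to_user() from a protected pmalloc pool\n");
	if (copy_to_user((void __user *)user_addr, val,
			 sizeof(*val) + unconst))
		pr_warn("copy_to_user() failed, but no usercopy abort\n");

	vm_munmap(user_addr, PAGE_SIZE);
free_pool:
	pmalloc_destroy_pool(pool);
}
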
Thanks,
Chris