Message-ID: <CAGXu5jLKg5pg7ZviKA7P6z5Wik4E8cqhRwFNAxQ6ZYoMftvUfw@mail.gmail.com>
Date:	Sat, 20 Dec 2014 13:13:22 -0800
From:	Kees Cook <keescook@...omium.org>
To:	Andy Lutomirski <luto@...capital.net>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	"x86@...nel.org" <x86@...nel.org>,
	"# 3.4.x" <stable@...r.kernel.org>
Subject: Re: [PATCH v2] x86_64, vdso: Fix the vdso address randomization algorithm

On Sat, Dec 20, 2014 at 9:40 AM, Andy Lutomirski <luto@...capital.net> wrote:
> The theory behind vdso randomization is that it's mapped at a random
> offset above the top of the stack.  To avoid wasting a page of
> memory for an extra page table, the vdso isn't supposed to extend
> past the lowest PMD into which it can fit.  Other than that, the
> address should be a uniformly distributed address that meets all of
> the alignment requirements.
>
> The current algorithm is buggy: the vdso has about a 50% probability
> of being at the very end of a PMD.  The current algorithm also has a
> decent chance of failing outright due to incorrect handling of the
> case where the top of the stack is near the top of its PMD.
>
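That clamping failure is easy to reproduce outside the kernel.  Here is
a minimal userspace sketch of the old mask-and-clamp selection (not
from the patch; 4 KiB pages, 2 MiB PMDs, an 8 KiB vdso, and rand()
standing in for get_random_int() are all assumptions here):

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SHIFT   12
    #define PMD_SIZE     (2UL << 20)
    #define PMD_MASK     (~(PMD_SIZE - 1))
    #define PTRS_PER_PTE 512

    int main(void)
    {
            /* Example stack top, halfway into its PMD. */
            unsigned long start = 0x7fffe0100000UL;
            unsigned long len = 2UL << PAGE_SHIFT;   /* 8 KiB vdso (assumed) */
            unsigned long end = (start + PMD_SIZE - 1) & PMD_MASK;
            unsigned long clamped = 0, n = 100000;

            end -= len;
            for (unsigned long i = 0; i < n; i++) {
                    /* Old algorithm: mask to a page offset, then clamp. */
                    unsigned long off = rand() & (PTRS_PER_PTE - 1);
                    unsigned long addr = start + (off << PAGE_SHIFT);

                    if (addr >= end)
                            addr = end;
                    if (addr == end)
                            clamped++;
            }
            printf("%.1f%% of samples clamped to end\n",
                   100.0 * clamped / n);
            return 0;
    }

For a stack top halfway into its PMD this prints roughly 50%, matching
the figure above: every masked offset that lands at or past end
collapses onto the single address end.
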
> This fixes the implementation.  The paxtest estimate of vdso
> "randomisation" improves from 11 bits to 18 bits.  (Disclaimer: I
> don't know what the paxtest code is actually calculating.)
>
> It's worth noting that this algorithm is inherently biased: the vdso
> is more likely to end up near the end of its PMD than near the
> beginning.  Ideally we would either nix the PMD sharing requirement
> or jointly randomize the vdso and the stack to reduce the bias.
>
> In the meantime, this is a considerable improvement with basically
> no risk of compatibility issues, since the allowed outputs of the
> algorithm are unchanged.
>
> As an easy test, doing this:
>
> for i in `seq 10000`
>   do grep -P vdso /proc/self/maps |cut -d- -f1
> done |sort |uniq -d
>
> used to produce lots of output (1445 lines on my most recent run).
> A tiny subset looks like this:
>
> 7fffdfffe000
> 7fffe01fe000
> 7fffe05fe000
> 7fffe07fe000
> 7fffe09fe000
> 7fffe0bfe000
> 7fffe0dfe000
>
> Note the suspicious fe000 endings.  With the fix, I get a much more
> palatable 76 repeated addresses.
>
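Those endings are just arithmetic: each address is a 2 MiB (PMD)
boundary minus the vdso length (8 KiB / 2 pages assumed here), i.e. the
clamped addr = end case:

      0x7fffe0000000   next PMD boundary above the stack top
    -         0x2000   vdso length (2 pages, assumed)
    = 0x7fffdfffe000   hence the repeated ...fe000 endings
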
> Cc: stable@...r.kernel.org
> Signed-off-by: Andy Lutomirski <luto@...capital.net>

Thanks for fixing this! :)

Reviewed-by: Kees Cook <keescook@...omium.org>

-Kees

> ---
>  arch/x86/vdso/vma.c | 45 +++++++++++++++++++++++++++++----------------
>  1 file changed, 29 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
> index 009495b9ab4b..1c9f750c3859 100644
> --- a/arch/x86/vdso/vma.c
> +++ b/arch/x86/vdso/vma.c
> @@ -41,12 +41,17 @@ void __init init_vdso_image(const struct vdso_image *image)
>
>  struct linux_binprm;
>
> -/* Put the vdso above the (randomized) stack with another randomized offset.
> -   This way there is no hole in the middle of address space.
> -   To save memory make sure it is still in the same PTE as the stack top.
> -   This doesn't give that many random bits.
> -
> -   Only used for the 64-bit and x32 vdsos. */
> +/*
> + * Put the vdso above the (randomized) stack with another randomized
> + * offset.  This way there is no hole in the middle of address space.
> + * To save memory make sure it is still in the same PTE as the stack
> + * top.  This doesn't give that many random bits.
> + *
> + * Note that this algorithm is imperfect: the distribution of the vdso
> + * start address within a PMD is biased toward the end.
> + *
> + * Only used for the 64-bit and x32 vdsos.
> + */
>  static unsigned long vdso_addr(unsigned long start, unsigned len)
>  {
>  #ifdef CONFIG_X86_32
> @@ -54,22 +59,30 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
>  #else
>         unsigned long addr, end;
>         unsigned offset;
> -       end = (start + PMD_SIZE - 1) & PMD_MASK;
> +
> +       /*
> +        * Round up the start address.  It can start out unaligned as a result
> +        * of stack start randomization.
> +        */
> +       start = PAGE_ALIGN(start);
> +
> +       /* Round the lowest possible end address up to a PMD boundary. */
> +       end = (start + len + PMD_SIZE - 1) & PMD_MASK;
>         if (end >= TASK_SIZE_MAX)
>                 end = TASK_SIZE_MAX;
>         end -= len;
> -       /* This loses some more bits than a modulo, but is cheaper */
> -       offset = get_random_int() & (PTRS_PER_PTE - 1);
> -       addr = start + (offset << PAGE_SHIFT);
> -       if (addr >= end)
> -               addr = end;
> +
> +       if (end > start) {
> +               offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
> +               addr = start + (offset << PAGE_SHIFT);
> +       } else {
> +               addr = start;
> +       }
>
>         /*
> -        * page-align it here so that get_unmapped_area doesn't
> -        * align it wrongfully again to the next page. addr can come in 4K
> -        * unaligned here as a result of stack start randomization.
> +        * Forcibly align the final address in case we have a hardware
> +        * issue that requires alignment for performance reasons.
>          */
> -       addr = PAGE_ALIGN(addr);
>         addr = align_vdso_addr(addr);
>
>         return addr;
> --
> 2.1.0
>
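
For anyone who wants to poke at the new distribution without booting a
patched kernel, the selection logic lifts cleanly into userspace.  A
sketch (again with assumed constants -- 4 KiB pages, 2 MiB PMDs, an
8 KiB vdso -- rand() in place of get_random_int(), and the
TASK_SIZE_MAX check dropped):

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PMD_SIZE   (2UL << 20)
    #define PMD_MASK   (~(PMD_SIZE - 1))

    /* New algorithm: uniform draw over every page in [start, end]. */
    static unsigned long pick(unsigned long start, unsigned long len)
    {
            unsigned long end, addr;
            unsigned offset;

            /* PAGE_ALIGN: start may be unaligned after stack randomization. */
            start = (start + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);

            /* Round the lowest possible end address up to a PMD boundary. */
            end = (start + len + PMD_SIZE - 1) & PMD_MASK;
            end -= len;

            if (end > start) {
                    offset = rand() % (((end - start) >> PAGE_SHIFT) + 1);
                    addr = start + ((unsigned long)offset << PAGE_SHIFT);
            } else {
                    addr = start;
            }
            return addr;
    }

    int main(void)
    {
            /* Unaligned stack top, as left by stack randomization. */
            unsigned long start = 0x7fffe0100800UL;

            for (int i = 0; i < 5; i++)
                    printf("%lx\n", pick(start, 2 * PAGE_SIZE));
            return 0;
    }

The modulo does introduce a bias of its own, but with at most a few
hundred slots against get_random_int()'s 2^32 range it is far too small
to matter, and unlike the mask-and-clamp it never piles probability
onto a single address.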



-- 
Kees Cook
Chrome OS Security
