Date:   Sat, 23 Dec 2017 17:48:49 -0800
From:   Andy Lutomirski <luto@...nel.org>
To:     Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc:     Jakub Kicinski <kubakici@...pl>, Andy Lutomirski <luto@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: linux/master crashes on boot with KASAN=y

On Sat, Dec 23, 2017 at 4:41 AM, Andrey Ryabinin
<aryabinin@...tuozzo.com> wrote:
> On 12/23/2017 11:01 AM, Jakub Kicinski wrote:
>> Hi!
>>
>> I bisected a crash on boot to this:
>>
>> commit 21506525fb8ddb0342f2a2370812d47f6a1f3833 (HEAD, refs/bisect/bad)
>> Author: Andy Lutomirski <luto@...nel.org>
>> Date:   Mon Dec 4 15:07:16 2017 +0100
>>
>>     x86/kasan/64: Teach KASAN about the cpu_entry_area
>
>
> Thanks.
> There is nothing wrong with this patch, it just uncovered an older bug.
> The actual problem comes from f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size"),
> which made kasan_mem_to_shadow(MODULES_END) potentially unaligned to a page boundary.
> And when we feed an unaligned address to kasan_populate_zero_shadow(), it doesn't do the right thing.
>
> Could you tell me if the fix below works for you?
>
> ---
>  arch/x86/include/asm/kasan.h            | 8 ++++++++
>  arch/x86/include/asm/pgtable_64_types.h | 4 +++-
>  2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index b577dd0916aa..0c580e4b2ccc 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -5,6 +5,14 @@
>  #include <linux/const.h>
>  #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>
> +#ifndef KASAN_SHADOW_SCALE_SHIFT
> +# ifdef CONFIG_KASAN
> +#  define KASAN_SHADOW_SCALE_SHIFT 3
> +# else
> +#  define KASAN_SHADOW_SCALE_SHIFT 0
> +# endif
> +#endif
> +
>  /*
>   * Compiler uses shadow offset assuming that addresses start
>   * from 0. Kernel addresses don't start from 0, so shadow
> diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> index 6d5f45dcd4a1..d34a90d6c374 100644
> --- a/arch/x86/include/asm/pgtable_64_types.h
> +++ b/arch/x86/include/asm/pgtable_64_types.h
> @@ -6,6 +6,7 @@
>
>  #ifndef __ASSEMBLY__
>  #include <linux/types.h>
> +#include <asm/kasan.h>
>  #include <asm/kaslr.h>
>
>  /*
> @@ -96,7 +97,8 @@ typedef struct { pteval_t pte; } pte_t;
>  #define VMALLOC_END    (VMALLOC_START + _AC((VMALLOC_SIZE_TB << 40) - 1, UL))
>  #define MODULES_VADDR    (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
>  /* The module sections ends with the start of the fixmap */
> -#define MODULES_END   __fix_to_virt(__end_of_fixed_addresses + 1)
> +#define MODULES_END   (__fix_to_virt(__end_of_fixed_addresses + 1) & \
> +                       ~((PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) - 1))
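
For reference, the shadow mapping that bites here is (roughly, from
include/linux/kasan.h as I remember it, so treat it as a sketch):

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
            return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                    + KASAN_SHADOW_OFFSET;
    }

    /*
     * Every 8 bytes of memory map to 1 shadow byte, so
     * kasan_mem_to_shadow(MODULES_END) is only page-aligned when
     * MODULES_END is aligned to PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT,
     * i.e. 8 pages; that is what the mask in the hunk above enforces.
     */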

Could this be #define MODULES_END KASAN_ROUND_DOWN(__fix_to_virt(...)) instead?
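
I.e. something like this (untested sketch; KASAN_ROUND_DOWN is just a
name I'm making up, and it would presumably live in asm/kasan.h next to
KASAN_SHADOW_SCALE_SHIFT):

    /*
     * Round addr down so that kasan_mem_to_shadow(addr) stays
     * page-aligned.  With KASAN disabled, KASAN_SHADOW_SCALE_SHIFT is 0
     * per the patch above, so this degenerates to plain page alignment.
     */
    #define KASAN_ROUND_DOWN(addr) \
            ((addr) & ~((PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) - 1))

    #define MODULES_END \
            KASAN_ROUND_DOWN(__fix_to_virt(__end_of_fixed_addresses + 1))

That would at least keep the shadow-scale arithmetic out of
pgtable_64_types.h.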
