Message-ID: <CA+fCnZd3sP1_x2c5FvztA6LzsBY3Fq3cD5cJ6FQ+FAnmawe06Q@mail.gmail.com>
Date: Thu, 6 Feb 2025 00:43:46 +0100
From: Andrey Konovalov <andreyknvl@...il.com>
To: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
Cc: luto@...nel.org, xin@...or.com, kirill.shutemov@...ux.intel.com, 
	palmer@...belt.com, tj@...nel.org, brgerst@...il.com, ardb@...nel.org, 
	dave.hansen@...ux.intel.com, jgross@...e.com, will@...nel.org, 
	akpm@...ux-foundation.org, arnd@...db.de, corbet@....net, dvyukov@...gle.com, 
	richard.weiyang@...il.com, ytcoode@...il.com, tglx@...utronix.de, 
	hpa@...or.com, seanjc@...gle.com, paul.walmsley@...ive.com, 
	aou@...s.berkeley.edu, justinstitt@...gle.com, jason.andryuk@....com, 
	glider@...gle.com, ubizjak@...il.com, jannh@...gle.com, bhe@...hat.com, 
	vincenzo.frascino@....com, rafael.j.wysocki@...el.com, 
	ndesaulniers@...gle.com, mingo@...hat.com, catalin.marinas@....com, 
	junichi.nomura@....com, nathan@...nel.org, ryabinin.a.a@...il.com, 
	dennis@...nel.org, bp@...en8.de, kevinloughlin@...gle.com, morbo@...gle.com, 
	dan.j.williams@...el.com, julian.stecklina@...erus-technology.de, 
	peterz@...radead.org, cl@...ux.com, kees@...nel.org, 
	kasan-dev@...glegroups.com, x86@...nel.org, 
	linux-arm-kernel@...ts.infradead.org, linux-riscv@...ts.infradead.org, 
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, llvm@...ts.linux.dev, 
	linux-doc@...r.kernel.org
Subject: Re: [PATCH 01/15] kasan: Allocation enhancement for dense tag-based mode

On Tue, Feb 4, 2025 at 6:34 PM Maciej Wieczor-Retman
<maciej.wieczor-retman@...el.com> wrote:
>
> Tag-based KASAN (on arm64) works by generating a random 8-bit tag and
> putting it both into the top byte of the pointer that points to the
> allocated memory and into all bytes of shadow memory that correspond to
> the chunk of allocated regular memory. Each byte of shadow memory covers
> a 16-byte chunk of allocated memory - a size called the KASAN granularity.
> This means that out-of-bounds memory accesses that happen within those
> 16 bytes can't be caught.
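
(For context: the per-access check in the software tag-based mode conceptually
reduces to the sketch below. Illustrative only - the helper name is made up
and this is not the actual kernel code.)

	/* Illustrative: does the pointer's tag match the tag in shadow memory? */
	static bool ptr_tag_matches(const void *ptr)
	{
		/* On arm64, SW_TAGS keeps the tag in the pointer's top byte. */
		u8 ptr_tag = (u64)ptr >> 56;
		/* One shadow byte stores the tag for each 16-byte granule. */
		u8 mem_tag = *(u8 *)kasan_mem_to_shadow(kasan_reset_tag(ptr));

		return ptr_tag == mem_tag;
	}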
>
> The dense mode reduces the tag width from 8 to 4 bits and stores two
> tags in one byte of shadow memory - one in the upper 4 bits of the byte
> and one in the lower 4. This way one byte of shadow memory
> can cover 32 bytes of allocated memory while still keeping the "16 bytes
> per one tag" granularity. The lower 4 bits of each shadow byte map bytes
> of memory with offsets 0-15 and the upper 4 bits map offsets 16-31.
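
(The nibble layout described above amounts to roughly the following; an
illustrative sketch only, the helper name is made up:)

	/* Illustrative: fetch the 4-bit tag covering a given offset into an
	 * object, given its shadow memory. One shadow byte covers 32 bytes.
	 */
	static u8 dense_tag_for_offset(const u8 *shadow, unsigned long offset)
	{
		u8 byte = shadow[offset / 32];
		/* Offsets 0-15 use the lower nibble, 16-31 the upper one. */
		unsigned int shift = (offset & 16) ? 4 : 0;

		return (byte >> shift) & 0xf;
	}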
>
> Example:
> The example below shows what the shadow memory looks like after
> allocating 48 bytes of memory in both normal tag-based mode and the
> dense mode. The contents of shadow memory are overlaid onto address
> offsets that they relate to in the allocated kernel memory. Each cell
> |    | symbolizes one byte of shadow memory.
>
> = The regular tag based mode:
> - Randomly generated 8-bit tag equals 0xAB.
> - 0xFE is the tag that symbolizes unallocated memory.
>
> Shadow memory contents:           |  0xAB  |  0xAB  |  0xAB  |  0xFE  |
> Shadow memory address offsets:    0        1        2        3        4
> Allocated memory address offsets: 0        16       32       48       64
>
> = The dense tag based mode:
> - Randomly generated 4-bit tag equals 0xC.
> - 0xE is the tag that symbolizes unallocated memory.
>
> Shadow memory contents:           |0xC 0xC |0xC 0xE |0xE 0xE |0xE 0xE |
> Shadow memory address offsets:    0        1        2        3        4
> Allocated memory address offsets: 0        32       64       96       128
>
> Add a new config option and defines that can override the standard
> system of one tag per one shadow byte.
>
> Add an alternative version of kasan_poison() that deals with tags not
> being byte-aligned in shadow memory.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
> ---
>  include/linux/kasan.h | 18 ++++++++++++++++++
>  lib/Kconfig.kasan     | 21 +++++++++++++++++++++
>  mm/kasan/kasan.h      |  4 +---
>  mm/kasan/shadow.c     | 33 ++++++++++++++++++++++++++++++---
>  4 files changed, 70 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 03b440658817..ea0f5acd875b 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -35,6 +35,24 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
>
>  /* Software KASAN implementations use shadow memory. */
>
> +#ifdef CONFIG_KASAN_SW_TAGS_DENSE
> +#define KASAN_GRANULE_SHIFT    (KASAN_SHADOW_SCALE_SHIFT - 1)
> +#define KASAN_SHADOW_SCALE_SIZE        (1UL << KASAN_SHADOW_SCALE_SHIFT)
> +static inline u8 kasan_dense_tag(u8 tag)
> +{
> +       return (tag << KASAN_TAG_WIDTH | tag);
> +}
> +#else
> +#define KASAN_GRANULE_SHIFT    KASAN_SHADOW_SCALE_SHIFT
> +#define KASAN_SHADOW_SCALE_SIZE        (1UL << KASAN_GRANULE_SHIFT)
> +static inline u8 kasan_dense_tag(u8 tag)
> +{
> +       return tag;
> +}
> +#endif
> +
> +#define KASAN_GRANULE_SIZE     (1UL << KASAN_GRANULE_SHIFT)
> +

Is there a reason these definitions are added to
include/linux/kasan.h? At least within this patch, they are only used
within mm/kasan, so let's keep them in mm/kasan/kasan.h.

>  #ifdef CONFIG_KASAN_SW_TAGS
>  /* This matches KASAN_TAG_INVALID. */
>  #define KASAN_SHADOW_INIT 0xFE
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 98016e137b7f..d08b4e9bf477 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -19,6 +19,13 @@ config ARCH_DISABLE_KASAN_INLINE
>           Disables both inline and stack instrumentation. Selected by
>           architectures that do not support these instrumentation types.
>
> +config ARCH_HAS_KASAN_SW_TAGS_DENSE
> +       bool
> +       help
> +         Enables the option to compile tag-based KASAN with densely packed tags -
> +         two 4-bit tags per one byte of shadow memory. Set on architectures
> +         that have 4-bit tag macros.
> +
>  config CC_HAS_KASAN_GENERIC
>         def_bool $(cc-option, -fsanitize=kernel-address)
>
> @@ -223,4 +230,18 @@ config KASAN_EXTRA_INFO
>           boot parameter, it will add 8 * stack_ring_size bytes of additional
>           memory consumption.
>
> +config KASAN_SW_TAGS_DENSE
> +       bool "Two 4-bit tags in one shadow memory byte"
> +       depends on KASAN_SW_TAGS
> +       depends on ARCH_HAS_KASAN_SW_TAGS_DENSE

I think this should also depend on KASAN_OUTLINE: Clang/GCC aren't
aware of the dense mode.

> +       help
> +         Enables packing two tags into one shadow byte to halve the memory usage
> +         compared to normal tag-based mode.

But adds some performance impact?

> +
> +         After setting this option, the tag width macro is set to 4 and the
> +         size macros are adjusted based on the KASAN_SHADOW_SCALE_SHIFT in use.

I think this paragraph is an implementation detail and we can drop it.

> +
> +         ARCH_HAS_KASAN_SW_TAGS_DENSE is needed for this option since the
> +         special tag macros need to be properly set for 4-bit wide tags.
> +
>  endif # KASAN
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 72da5ddcceaa..0e04c5e2c405 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -128,9 +128,7 @@ static inline bool kasan_requires_meta(void)
>
>  #endif /* CONFIG_KASAN_GENERIC */
>
> -#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> -#define KASAN_GRANULE_SIZE     (1UL << KASAN_SHADOW_SCALE_SHIFT)
> -#else
> +#ifdef CONFIG_KASAN_HW_TAGS
>  #include <asm/mte-kasan.h>
>  #define KASAN_GRANULE_SIZE     MTE_GRANULE_SIZE
>  #endif
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index d6210ca48dda..368503f54b87 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -123,7 +123,8 @@ EXPORT_SYMBOL(__hwasan_memcpy);
>
>  void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>  {
> -       void *shadow_start, *shadow_end;
> +       u8 *shadow_start, *shadow_end, *shadow_start_aligned, *shadow_end_aligned, tag;
> +       u64 addr64, addr_start_aligned, addr_end_aligned;
>
>         if (!kasan_arch_is_ready())
>                 return;
> @@ -134,16 +135,42 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>          * addresses to this function.
>          */
>         addr = kasan_reset_tag(addr);
> +       addr64 = (u64)addr;
>
> -       if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
> +       if (WARN_ON(addr64 & KASAN_GRANULE_MASK))
>                 return;
>         if (WARN_ON(size & KASAN_GRANULE_MASK))
>                 return;
>
>         shadow_start = kasan_mem_to_shadow(addr);
>         shadow_end = kasan_mem_to_shadow(addr + size);
> +       addr_start_aligned = round_up(addr64, KASAN_SHADOW_SCALE_SIZE);
> +       addr_end_aligned = round_down(addr64 + size, KASAN_SHADOW_SCALE_SIZE);
> +       shadow_start_aligned = kasan_mem_to_shadow((void *)addr_start_aligned);
> +       shadow_end_aligned = kasan_mem_to_shadow((void *)addr_end_aligned);
> +
> +       /* If size is empty just return. */
> +       if (!size)
> +               return;
>
> -       __memset(shadow_start, value, shadow_end - shadow_start);
> +       /* Memset the first unaligned tag in shadow memory. */
> +       if (addr64 % KASAN_SHADOW_SCALE_SIZE) {

So this is required, because KASAN_SHADOW_SCALE_SIZE is 32 but the minimal
slab alignment is still KASAN_GRANULE_SIZE == 16... We should at least
hide this check under IS_ENABLED(KASAN_SW_TAGS_DENSE).
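
Something like this (untested sketch):

	/* Only dense mode can see a start address that is KASAN_GRANULE_SIZE-
	 * but not KASAN_SHADOW_SCALE_SIZE-aligned; skip the fixup otherwise.
	 */
	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS_DENSE) &&
	    (addr64 % KASAN_SHADOW_SCALE_SIZE)) {
		tag = *shadow_start & KASAN_TAG_MASK;
		tag |= value << KASAN_TAG_WIDTH;
		*shadow_start = tag;
	}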

> +               tag = *shadow_start & KASAN_TAG_MASK;
> +               tag |= value << KASAN_TAG_WIDTH;
> +               *shadow_start = tag;
> +       }
> +
> +       /* Memset the middle aligned part in shadow memory. */
> +       tag = kasan_dense_tag(value);
> +       __memset(shadow_start_aligned, tag, shadow_end_aligned - shadow_start_aligned);
> +
> +       /* Memset the last unaligned tag in shadow memory. */
> +       if ((addr64 + size) % KASAN_SHADOW_SCALE_SIZE) {

Would it be possible to move this part to kasan_poison_last_granule()?
That function seems to serve a similar purpose but for the
Generic mode.

It might also be cleaner to add a kasan_poison_first_granule() that
contains the if (addr64 % KASAN_SHADOW_SCALE_SIZE) check.
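
E.g. something along these lines (untested sketch, mirroring the structure of
the existing kasan_poison_last_granule()):

	static void kasan_poison_first_granule(const void *addr, u8 value)
	{
		u8 *shadow = kasan_mem_to_shadow(addr);

		if (!IS_ENABLED(CONFIG_KASAN_SW_TAGS_DENSE))
			return;

		/* Unaligned start: keep the lower nibble (the preceding
		 * granule's tag) and write the new tag into the upper one.
		 */
		if ((u64)addr % KASAN_SHADOW_SCALE_SIZE)
			*shadow = (*shadow & KASAN_TAG_MASK) | (value << KASAN_TAG_WIDTH);
	}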

> +               tag = KASAN_TAG_MASK << KASAN_TAG_WIDTH;
> +               tag &= *shadow_end;
> +               tag |= value;
> +               *shadow_end = tag;
> +       }
>  }
>  EXPORT_SYMBOL_GPL(kasan_poison);
>
> --
> 2.47.1
>
