Message-ID: <CAMj1kXG_Z=7B_eDAk3vhtDjfcnka3AoSKNzvFQDzpvYY2EyVfg@mail.gmail.com>
Date: Fri, 13 Sep 2024 17:00:42 +0200
From: Ard Biesheuvel <ardb@...nel.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Andreas Larsson <andreas@...sler.com>, 
	Andy Lutomirski <luto@...nel.org>, Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>, 
	Brian Cain <bcain@...cinc.com>, Catalin Marinas <catalin.marinas@....com>, 
	Christoph Hellwig <hch@...radead.org>, Christophe Leroy <christophe.leroy@...roup.eu>, 
	Dave Hansen <dave.hansen@...ux.intel.com>, Dinh Nguyen <dinguyen@...nel.org>, 
	Geert Uytterhoeven <geert@...ux-m68k.org>, Guo Ren <guoren@...nel.org>, Helge Deller <deller@....de>, 
	Huacai Chen <chenhuacai@...nel.org>, Ingo Molnar <mingo@...hat.com>, 
	Johannes Berg <johannes@...solutions.net>, 
	John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>, Kent Overstreet <kent.overstreet@...ux.dev>, 
	"Liam R. Howlett" <Liam.Howlett@...cle.com>, Luis Chamberlain <mcgrof@...nel.org>, 
	Mark Rutland <mark.rutland@....com>, Masami Hiramatsu <mhiramat@...nel.org>, 
	Matt Turner <mattst88@...il.com>, Max Filippov <jcmvbkbc@...il.com>, 
	Michael Ellerman <mpe@...erman.id.au>, Michal Simek <monstr@...str.eu>, Oleg Nesterov <oleg@...hat.com>, 
	Palmer Dabbelt <palmer@...belt.com>, Peter Zijlstra <peterz@...radead.org>, 
	Richard Weinberger <richard@....at>, Russell King <linux@...linux.org.uk>, Song Liu <song@...nel.org>, 
	Stafford Horne <shorne@...il.com>, Steven Rostedt <rostedt@...dmis.org>, 
	Thomas Bogendoerfer <tsbogend@...ha.franken.de>, Thomas Gleixner <tglx@...utronix.de>, 
	Uladzislau Rezki <urezki@...il.com>, Vineet Gupta <vgupta@...nel.org>, Will Deacon <will@...nel.org>, 
	bpf@...r.kernel.org, linux-alpha@...r.kernel.org, linux-arch@...r.kernel.org, 
	linux-arm-kernel@...ts.infradead.org, linux-csky@...r.kernel.org, 
	linux-hexagon@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-m68k@...ts.linux-m68k.org, linux-mips@...r.kernel.org, 
	linux-mm@...ck.org, linux-modules@...r.kernel.org, 
	linux-openrisc@...r.kernel.org, linux-parisc@...r.kernel.org, 
	linux-riscv@...ts.infradead.org, linux-sh@...r.kernel.org, 
	linux-snps-arc@...ts.infradead.org, linux-trace-kernel@...r.kernel.org, 
	linux-um@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org, 
	loongarch@...ts.linux.dev, sparclinux@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH v3 7/8] execmem: add support for cache of large ROX pages

Hi Mike,

On Mon, 9 Sept 2024 at 08:51, Mike Rapoport <rppt@...nel.org> wrote:
>
> From: "Mike Rapoport (Microsoft)" <rppt@...nel.org>
>
> Using large pages to map text areas reduces iTLB pressure and improves
> performance.
>
> Extend execmem_alloc() with the ability to use huge pages with ROX
> permissions as a cache for smaller allocations.
>
> To populate the cache, a writable large page is allocated from vmalloc with
> VM_ALLOW_HUGE_VMAP, filled with invalid instructions and then remapped as
> ROX.
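
In rough form, the population path described above amounts to something
like the sketch below. This is a condensed illustration only, not the patch
code itself; it assumes PMD_SIZE huge pages and uses set_memory_rox() as
the remap primitive.

/*
 * Sketch only: allocate a writable huge mapping, poison it with trapping
 * instructions, then flip the permissions to ROX.  Relies on
 * <linux/vmalloc.h> and <linux/set_memory.h>.
 */
static void *execmem_cache_populate_sketch(struct execmem_range *range,
					   size_t size)
{
	void *p;

	p = __vmalloc_node_range(size, PMD_SIZE, range->start, range->end,
				 GFP_KERNEL, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
				 NUMA_NO_NODE, __builtin_return_address(0));
	if (!p)
		return NULL;

	/* fill with trapping instructions while the mapping is still writable */
	execmem_fill_trapping_insns(p, size, /* writable */ true);

	/* remap read-only + executable; chunks are later handed out as-is */
	if (set_memory_rox((unsigned long)p, size >> PAGE_SHIFT)) {
		vfree(p);
		return NULL;
	}

	return p;
}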
>
> Portions of that large page are handed out to execmem_alloc() callers
> without any changes to the permissions.
>
> When the memory is freed with execmem_free() it is invalidated again so
> that it won't contain stale instructions.
>
> The cache is enabled when an architecture sets the EXECMEM_ROX_CACHE flag
> in the definition of an execmem_range.
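
Illustratively, an architecture would opt in from its execmem_arch_setup()
hook. The sketch below uses placeholder addresses and pgprot values, not
values from any real architecture, and the fill_trapping_insns callback is
only needed where zero bytes are not a trapping instruction.

/* Sketch of an architecture enabling the ROX cache. */
static void arch_fill_trapping_insns(void *ptr, size_t size, bool writable)
{
	/* write the architecture's trap/illegal opcode over the range */
	memset(ptr, 0, size);	/* placeholder */
}

struct execmem_info __init *execmem_arch_setup(void)
{
	static struct execmem_info info;

	info = (struct execmem_info){
		.fill_trapping_insns = arch_fill_trapping_insns,
		.ranges = {
			[EXECMEM_DEFAULT] = {
				.start		= MODULES_VADDR,
				.end		= MODULES_END,
				.pgprot		= PAGE_KERNEL_ROX,
				.alignment	= PMD_SIZE,
				.flags		= EXECMEM_ROX_CACHE,
			},
		},
	};

	return &info;
}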
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
> ---
>  include/linux/execmem.h |   2 +
>  mm/execmem.c            | 289 +++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 286 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/execmem.h b/include/linux/execmem.h
> index dfdf19f8a5e8..7436aa547818 100644
> --- a/include/linux/execmem.h
> +++ b/include/linux/execmem.h
> @@ -77,12 +77,14 @@ struct execmem_range {
>
>  /**
>   * struct execmem_info - architecture parameters for code allocations
> + * @fill_trapping_insns: set memory to contain instructions that will trap
>   * @ranges: array of parameter sets defining architecture specific
>   * parameters for executable memory allocations. The ranges that are not
>   * explicitly initialized by an architecture use parameters defined for
>   * @EXECMEM_DEFAULT.
>   */
>  struct execmem_info {
> +       void (*fill_trapping_insns)(void *ptr, size_t size, bool writable);
>         struct execmem_range    ranges[EXECMEM_TYPE_MAX];
>  };
>
> diff --git a/mm/execmem.c b/mm/execmem.c
> index 0f6691e9ffe6..f547c1f3c93d 100644
> --- a/mm/execmem.c
> +++ b/mm/execmem.c
> @@ -7,28 +7,88 @@
>   */
>
>  #include <linux/mm.h>
> +#include <linux/mutex.h>
>  #include <linux/vmalloc.h>
>  #include <linux/execmem.h>
> +#include <linux/maple_tree.h>
>  #include <linux/moduleloader.h>
>  #include <linux/text-patching.h>
>
> +#include <asm/tlbflush.h>
> +
> +#include "internal.h"
> +
>  static struct execmem_info *execmem_info __ro_after_init;
>  static struct execmem_info default_execmem_info __ro_after_init;
>
> -static void *__execmem_alloc(struct execmem_range *range, size_t size)
> +#ifdef CONFIG_MMU
> +struct execmem_cache {
> +       struct mutex mutex;
> +       struct maple_tree busy_areas;
> +       struct maple_tree free_areas;
> +};
> +
> +static struct execmem_cache execmem_cache = {
> +       .mutex = __MUTEX_INITIALIZER(execmem_cache.mutex),
> +       .busy_areas = MTREE_INIT_EXT(busy_areas, MT_FLAGS_LOCK_EXTERN,
> +                                    execmem_cache.mutex),
> +       .free_areas = MTREE_INIT_EXT(free_areas, MT_FLAGS_LOCK_EXTERN,
> +                                    execmem_cache.mutex),
> +};
> +
> +static void execmem_cache_clean(struct work_struct *work)
> +{
> +       struct maple_tree *free_areas = &execmem_cache.free_areas;
> +       struct mutex *mutex = &execmem_cache.mutex;
> +       MA_STATE(mas, free_areas, 0, ULONG_MAX);
> +       void *area;
> +
> +       mutex_lock(mutex);
> +       mas_for_each(&mas, area, ULONG_MAX) {
> +               size_t size;
> +
> +               if (!xa_is_value(area))
> +                       continue;
> +
> +               size = xa_to_value(area);
> +
> +               if (IS_ALIGNED(size, PMD_SIZE) &&
> +                   IS_ALIGNED(mas.index, PMD_SIZE)) {
> +                       void *ptr = (void *)mas.index;
> +
> +                       mas_erase(&mas);
> +                       vfree(ptr);
> +               }
> +       }
> +       mutex_unlock(mutex);
> +}
> +
> +static DECLARE_WORK(execmem_cache_clean_work, execmem_cache_clean);
> +
> +static void execmem_fill_trapping_insns(void *ptr, size_t size, bool writable)
> +{
> +       if (execmem_info->fill_trapping_insns)
> +               execmem_info->fill_trapping_insns(ptr, size, writable);
> +       else
> +               memset(ptr, 0, size);

Does this really have to be a function pointer with a runtime check?

This could just be a __weak definition, with the arch providing an
override if the memset() is not appropriate.
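
Roughly, that would look something like the following. The x86 override is
only an illustration here, assuming int3 as the trapping byte and
text_poke_set() for the not-writable case:

/* mm/execmem.c: default, overridable by the architecture */
void __weak execmem_fill_trapping_insns(void *ptr, size_t size, bool writable)
{
	memset(ptr, 0, size);
}

/* e.g. in arch/x86: the strong definition overrides the __weak one */
void execmem_fill_trapping_insns(void *ptr, size_t size, bool writable)
{
	if (writable)
		memset(ptr, INT3_INSN_OPCODE, size);	/* 0xcc, int3 */
	else
		text_poke_set(ptr, INT3_INSN_OPCODE, size);
}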
