Message-ID: <20231026085800.GK2824@kernel.org>
Date: Thu, 26 Oct 2023 11:58:00 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Will Deacon <will@...nel.org>
Cc: linux-kernel@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
Björn Töpel <bjorn@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
"David S. Miller" <davem@...emloft.net>,
Dinh Nguyen <dinguyen@...nel.org>,
Heiko Carstens <hca@...ux.ibm.com>, Helge Deller <deller@....de>,
Huacai Chen <chenhuacai@...nel.org>,
Kent Overstreet <kent.overstreet@...ux.dev>,
Luis Chamberlain <mcgrof@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Michael Ellerman <mpe@...erman.id.au>,
Nadav Amit <nadav.amit@...il.com>,
"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
Palmer Dabbelt <palmer@...belt.com>,
Puranjay Mohan <puranjay12@...il.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Russell King <linux@...linux.org.uk>, Song Liu <song@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Thomas Gleixner <tglx@...utronix.de>, bpf@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-mips@...r.kernel.org,
linux-mm@...ck.org, linux-modules@...r.kernel.org,
linux-parisc@...r.kernel.org, linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, loongarch@...ts.linux.dev,
netdev@...r.kernel.org, sparclinux@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH v3 04/13] mm/execmem, arch: convert remaining overrides
of module_alloc to execmem

Hi Will,

On Mon, Oct 23, 2023 at 06:14:20PM +0100, Will Deacon wrote:
> Hi Mike,
>
> On Mon, Sep 18, 2023 at 10:29:46AM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (IBM)" <rppt@...nel.org>
> >
> > Extend execmem parameters to accommodate more complex overrides of
> > module_alloc() by architectures.
> >
> > This includes specification of a fallback range required by arm, arm64
> > and powerpc and support for allocation of KASAN shadow required by
> > arm64, s390 and x86.
> >
> > The core implementation of execmem_alloc() takes care of suppressing
> > warnings when the initial allocation fails but there is a fallback range
> > defined.
> >
> > Signed-off-by: Mike Rapoport (IBM) <rppt@...nel.org>
> > ---
> > arch/arm/kernel/module.c | 38 ++++++++++++---------
> > arch/arm64/kernel/module.c | 57 ++++++++++++++------------------
> > arch/powerpc/kernel/module.c | 52 ++++++++++++++---------------
> > arch/s390/kernel/module.c | 52 +++++++++++------------------
> > arch/x86/kernel/module.c | 64 +++++++++++-------------------------
> > include/linux/execmem.h | 14 ++++++++
> > mm/execmem.c | 43 ++++++++++++++++++++++--
> > 7 files changed, 167 insertions(+), 153 deletions(-)
>
> [...]
>
> > diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
> > index dd851297596e..cd6320de1c54 100644
> > --- a/arch/arm64/kernel/module.c
> > +++ b/arch/arm64/kernel/module.c
> > @@ -20,6 +20,7 @@
> > #include <linux/random.h>
> > #include <linux/scs.h>
> > #include <linux/vmalloc.h>
> > +#include <linux/execmem.h>
> >
> > #include <asm/alternative.h>
> > #include <asm/insn.h>
> > @@ -108,46 +109,38 @@ static int __init module_init_limits(void)
> >
> > return 0;
> > }
> > -subsys_initcall(module_init_limits);
> >
> > -void *module_alloc(unsigned long size)
> > +static struct execmem_params execmem_params __ro_after_init = {
> > + .ranges = {
> > + [EXECMEM_DEFAULT] = {
> > + .flags = EXECMEM_KASAN_SHADOW,
> > + .alignment = MODULE_ALIGN,
> > + },
> > + },
> > +};
> > +
> > +struct execmem_params __init *execmem_arch_params(void)
> > {
> > - void *p = NULL;
> > + struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT];
> >
> > - /*
> > - * Where possible, prefer to allocate within direct branch range of the
> > - * kernel such that no PLTs are necessary.
> > - */
>
> Why are you removing this comment? I think you could just move it next
> to the part where we set a 128MiB range.

Oops, my bad. Will add it back.

> > - if (module_direct_base) {
> > - p = __vmalloc_node_range(size, MODULE_ALIGN,
> > - module_direct_base,
> > - module_direct_base + SZ_128M,
> > - GFP_KERNEL | __GFP_NOWARN,
> > - PAGE_KERNEL, 0, NUMA_NO_NODE,
> > - __builtin_return_address(0));
> > - }
> > + module_init_limits();
>
> Hmm, this used to be run from subsys_initcall(), but now you're running
> it _really_ early, before random_init(), so randomization of the module
> space is no longer going to be very random if we don't have early entropy
> from the firmware or the CPU, which is likely to be the case on most SoCs.

Well, it will be as random as KASLR. Won't that be enough?

> > diff --git a/mm/execmem.c b/mm/execmem.c
> > index f25a5e064886..a8c2f44d0133 100644
> > --- a/mm/execmem.c
> > +++ b/mm/execmem.c
> > @@ -11,12 +11,46 @@ static void *execmem_alloc(size_t size, struct execmem_range *range)
> > {
> > unsigned long start = range->start;
> > unsigned long end = range->end;
> > + unsigned long fallback_start = range->fallback_start;
> > + unsigned long fallback_end = range->fallback_end;
> > unsigned int align = range->alignment;
> > pgprot_t pgprot = range->pgprot;
> > + bool kasan = range->flags & EXECMEM_KASAN_SHADOW;
> > + unsigned long vm_flags = VM_FLUSH_RESET_PERMS;
> > + bool fallback = !!fallback_start;
> > + gfp_t gfp_flags = GFP_KERNEL;
> > + void *p;
> >
> > - return __vmalloc_node_range(size, align, start, end,
> > - GFP_KERNEL, pgprot, VM_FLUSH_RESET_PERMS,
> > - NUMA_NO_NODE, __builtin_return_address(0));
> > + if (PAGE_ALIGN(size) > (end - start))
> > + return NULL;
> > +
> > + if (kasan)
> > + vm_flags |= VM_DEFER_KMEMLEAK;
>
> Hmm, I don't think we passed this before on arm64, should we have done?

It was there on arm64 before commit 8339f7d8e178 ("arm64: module: remove
old !KASAN_VMALLOC logic").

There's no need to pass VM_DEFER_KMEMLEAK when KASAN_VMALLOC is enabled,
and arm64 always selects KASAN_VMALLOC along with KASAN.

And for the generic case, I should have made the condition check for
KASAN_VMALLOC as well.
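Something along the lines of the sketch below is what I have in mind. This
is a standalone model of the condition, not kernel code: the VM_* values
are stand-ins and execmem_vm_flags() is a hypothetical helper, but the
logic matches the fix described above.

```c
#include <assert.h>

/* Stand-in bit values; the real VM_* flags live in include/linux/vmalloc.h. */
#define VM_FLUSH_RESET_PERMS	0x1UL
#define VM_DEFER_KMEMLEAK	0x2UL

/*
 * Hypothetical helper modelling the corrected vm_flags computation:
 * request deferred kmemleak registration only when the range asks for a
 * KASAN shadow *and* KASAN_VMALLOC is not enabled, since, as noted above,
 * VM_DEFER_KMEMLEAK is unnecessary when KASAN_VMALLOC is enabled.
 */
static unsigned long execmem_vm_flags(int want_kasan_shadow, int kasan_vmalloc)
{
	unsigned long vm_flags = VM_FLUSH_RESET_PERMS;

	if (want_kasan_shadow && !kasan_vmalloc)
		vm_flags |= VM_DEFER_KMEMLEAK;

	return vm_flags;
}
```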
> Will

--
Sincerely yours,
Mike.