Message-ID: <20240501105455.42b78a0b@gandalf.local.home>
Date: Wed, 1 May 2024 10:54:55 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: "Guilherme G. Piccoli" <gpiccoli@...lia.com>, "Luck, Tony"
<tony.luck@...el.com>, Kees Cook <keescook@...omium.org>, Joel Fernandes
<joel@...lfernandes.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-trace-kernel@...r.kernel.org"
<linux-trace-kernel@...r.kernel.org>, Masami Hiramatsu
<mhiramat@...nel.org>, Mark Rutland <mark.rutland@....com>, Mathieu
Desnoyers <mathieu.desnoyers@...icios.com>, Andrew Morton
<akpm@...ux-foundation.org>, "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>, Lorenzo Stoakes <lstoakes@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, Thomas Gleixner
<tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov
<bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, "x86@...nel.org"
<x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>, Peter Zijlstra
<peterz@...radead.org>, "linux-hardening@...r.kernel.org"
<linux-hardening@...r.kernel.org>, Guenter Roeck <linux@...ck-us.net>, Ross
Zwisler <zwisler@...gle.com>, "wklin@...gle.com" <wklin@...gle.com>,
Vineeth Remanan Pillai <vineeth@...byteword.org>, Suleiman Souhlal
<suleiman@...gle.com>, Linus Torvalds <torvalds@...uxfoundation.org>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>
Subject: Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map
pstore consistently
On Wed, 1 May 2024 17:45:49 +0300
Mike Rapoport <rppt@...nel.org> wrote:
> > +static void __init memmap_copy(void)
> > +{
> > + if (!early_mmap_size)
> > + return;
> > +
> > + mmap_list = kcalloc(early_mmap_size + 1, sizeof(*mmap_list), GFP_KERNEL);
>
> We can keep early_mmap_size after boot and then we don't need to allocate
> an extra element in the mmap_list. No strong feeling here, though.
>
> > + if (!mmap_list)
> > + return;
> > +
> > + for (int i = 0; i < early_mmap_size; i++)
> > + mmap_list[i] = early_mmap_list[i];
> > +}
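For illustration, a variant along those lines might look like the below
(sketch only; it assumes early_mmap_size is kept past boot, e.g. by
dropping its __initdata annotation, and a hypothetical mmap_size holding
the element count):

	/* Element count kept after boot instead of a sentinel entry. */
	static int mmap_size;

	static void __init memmap_copy(void)
	{
		if (!early_mmap_size)
			return;

		/* No "+ 1": callers iterate with mmap_size, not a sentinel. */
		mmap_list = kcalloc(early_mmap_size, sizeof(*mmap_list),
				    GFP_KERNEL);
		if (!mmap_list)
			return;

		for (int i = 0; i < early_mmap_size; i++)
			mmap_list[i] = early_mmap_list[i];
		mmap_size = early_mmap_size;
	}

memmap_named() would then loop over "i < mmap_size" instead of testing
mmap_list[i].name[0].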
>
> With something like this
>
> /*
> * Parse early_reserve_mem=nn:align:name
> */
> static int __init early_reserve_mem(char *p)
> {
> phys_addr_t start, size, align;
> char *oldp;
> int err;
>
> if (!p)
> return -EINVAL;
>
> oldp = p;
> size = memparse(p, &p);
> if (p == oldp)
> return -EINVAL;
>
> if (*p != ':')
> return -EINVAL;
>
> align = memparse(p+1, &p);
> if (*p != ':')
> return -EINVAL;
>
> start = memblock_phys_alloc(size, align);
So this will allocate the same physical location on every boot, provided the
same kernel is booted and the physical memory layout is the same?
-- Steve
> if (!start)
> return -ENOMEM;
>
> p++;
> err = memmap_add(start, size, p);
> if (err) {
> memblock_phys_free(start, size);
> return err;
> }
>
> p += strlen(p);
>
> 	return *p == '\0' ? 0 : -EINVAL;
> }
> __setup("early_reserve_mem=", early_reserve_mem);
>
> you don't need to touch e820 and it will work the same for all
> architectures.
>
> We'd need better naming, but I couldn't think of anything better yet.
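For reference, a matching command line entry would then look something like
this (the size/align values and the "oops" name are only examples):

	early_reserve_mem=4M:4096:oops

i.e. reserve 4M, aligned to 4096 bytes, under the name "oops".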
>
> > +
> > +/**
> > + * memmap_named - Find a wildcard region with a given name
> > + * @name: The name that is attached to a wildcard region
> > + * @start: If found, holds the start address
> > + * @size: If found, holds the size of the region.
> > + *
> > + * Returns: 1 if found or 0 if not found.
> > + */
> > +int memmap_named(const char *name, u64 *start, u64 *size)
> > +{
> > + struct mmap_map *map;
> > +
> > + if (!mmap_list)
> > + return 0;
> > +
> > + for (int i = 0; mmap_list[i].name[0]; i++) {
> > + map = &mmap_list[i];
> > + if (!map->size)
> > + continue;
> > + if (strcmp(name, map->name) == 0) {
> > + *start = map->start;
> > + *size = map->size;
> > + return 1;
> > + }
> > + }
> > + return 0;
> > +}
> > +
> > struct kobject *mm_kobj;
> >
> > #ifdef CONFIG_SMP
> > @@ -2793,4 +2864,5 @@ void __init mm_core_init(void)
> > pti_init();
> > kmsan_init_runtime();
> > mm_cache_init();
> > + memmap_copy();
> > }
>
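For completeness, a consumer would then look its region up by name along
these lines (illustrative sketch only; the "oops" name and the ramoops
hookup are assumptions, not part of the patch):

	u64 start, size;

	if (memmap_named("oops", &start, &size)) {
		/*
		 * Same physical range as the previous boot, as long as
		 * the kernel and memory layout didn't change; hand it
		 * to ramoops/pstore.
		 */
		pr_info("oops region: start=0x%llx size=%llu\n", start, size);
	}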