Message-ID: <ZjJgIIOvvEdnisNA@kernel.org>
Date: Wed, 1 May 2024 18:30:40 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: "Guilherme G. Piccoli" <gpiccoli@...lia.com>,
"Luck, Tony" <tony.luck@...el.com>,
Kees Cook <keescook@...omium.org>,
Joel Fernandes <joel@...lfernandes.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-trace-kernel@...r.kernel.org" <linux-trace-kernel@...r.kernel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Lorenzo Stoakes <lstoakes@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"x86@...nel.org" <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
"linux-hardening@...r.kernel.org" <linux-hardening@...r.kernel.org>,
Guenter Roeck <linux@...ck-us.net>,
Ross Zwisler <zwisler@...gle.com>,
"wklin@...gle.com" <wklin@...gle.com>,
Vineeth Remanan Pillai <vineeth@...byteword.org>,
Suleiman Souhlal <suleiman@...gle.com>,
Linus Torvalds <torvalds@...uxfoundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>
Subject: Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
On Wed, May 01, 2024 at 10:54:55AM -0400, Steven Rostedt wrote:
> On Wed, 1 May 2024 17:45:49 +0300
> Mike Rapoport <rppt@...nel.org> wrote:
>
> > > +static void __init memmap_copy(void)
> > > +{
> > > +	if (!early_mmap_size)
> > > +		return;
> > > +
> > > +	mmap_list = kcalloc(early_mmap_size + 1, sizeof(*mmap_list), GFP_KERNEL);
> >
> > We can keep early_mmap_size after boot and then we don't need to allocate
> > an extra element in the mmap_list. No strong feeling here, though.
> >
> > > +	if (!mmap_list)
> > > +		return;
> > > +
> > > +	for (int i = 0; i < early_mmap_size; i++)
> > > +		mmap_list[i] = early_mmap_list[i];
> > > +}
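
To spell out what I mean above: if early_mmap_size stays valid after boot
(assuming it is currently __initdata, that annotation would have to go), the
extra element goes away and the copy becomes simply the following. This is a
rough sketch, untested, reusing the struct mmap_map, early_mmap_list and
mmap_list declarations from your patch:

static void __init memmap_copy(void)
{
	if (!early_mmap_size)
		return;

	/* no sentinel element: lookups can iterate up to early_mmap_size */
	mmap_list = kcalloc(early_mmap_size, sizeof(*mmap_list), GFP_KERNEL);
	if (!mmap_list)
		return;

	for (int i = 0; i < early_mmap_size; i++)
		mmap_list[i] = early_mmap_list[i];
}

with memmap_named() then looping on i < early_mmap_size instead of checking
for an empty name.
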
> >
> > With something like this
> >
> > /*
> >  * Parse early_reserve_mem=nn:align:name
> >  */
> > static int __init early_reserve_mem(char *p)
> > {
> > 	phys_addr_t start, size, align;
> > 	char *oldp;
> > 	int err;
> >
> > 	if (!p)
> > 		return -EINVAL;
> >
> > 	oldp = p;
> > 	size = memparse(p, &p);
> > 	if (p == oldp)
> > 		return -EINVAL;
> >
> > 	if (*p != ':')
> > 		return -EINVAL;
> >
> > 	align = memparse(p+1, &p);
> > 	if (*p != ':')
> > 		return -EINVAL;
> >
> > 	start = memblock_phys_alloc(size, align);
>
> So this will allocate the same physical location for every boot, if booting
> the same kernel and having the same physical memory layout?

Up to KASLR, which might pick that location for the kernel image. But it's
no different from allocating from e820 after KASLR.

And, TBH, I don't have good ideas for how to ensure the same physical
location once the physical address of the kernel image is randomized.
> -- Steve
>
>
> > 	if (!start)
> > 		return -ENOMEM;
> >
> > 	p++;
> > 	err = memmap_add(start, size, p);
> > 	if (err) {
> > 		memblock_phys_free(start, size);
> > 		return err;
> > 	}
> >
> > 	p += strlen(p);
> >
> > 	return *p == '\0' ? 0 : -EINVAL;
> > }
> > __setup("early_reserve_mem=", early_reserve_mem);
> >
> > you don't need to touch e820 and it will work the same for all
> > architectures.
> >
> > We'd need a better naming, but I couldn't think of something better yet.
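
For the record, usage on the command line would then look like e.g.

	early_reserve_mem=16M:4M:oops

i.e. reserve 16M aligned to 4M and register it under the name "oops" (the
name is only an example here; memparse() accepts the usual K/M/G suffixes
for both the size and the alignment).
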
> >
> > > +
> > > +/**
> > > + * memmap_named - Find a wildcard region with a given name
> > > + * @name: The name that is attached to a wildcard region
> > > + * @start: If found, holds the start address
> > > + * @size: If found, holds the size of the region.
> > > + *
> > > + * Returns: 1 if found or 0 if not found.
> > > + */
> > > +int memmap_named(const char *name, u64 *start, u64 *size)
> > > +{
> > > +	struct mmap_map *map;
> > > +
> > > +	if (!mmap_list)
> > > +		return 0;
> > > +
> > > +	for (int i = 0; mmap_list[i].name[0]; i++) {
> > > +		map = &mmap_list[i];
> > > +		if (!map->size)
> > > +			continue;
> > > +		if (strcmp(name, map->name) == 0) {
> > > +			*start = map->start;
> > > +			*size = map->size;
> > > +			return 1;
> > > +		}
> > > +	}
> > > +	return 0;
> > > +}
> > > +
> > >  struct kobject *mm_kobj;
> > >
> > >  #ifdef CONFIG_SMP
> > > @@ -2793,4 +2864,5 @@ void __init mm_core_init(void)
> > >  	pti_init();
> > >  	kmsan_init_runtime();
> > >  	mm_cache_init();
> > > +	memmap_copy();
> > >  }
> >
>
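
And on the consumer side, pstore/ramoops (or anything else) would only need
something like the below to pick the region up. This is hypothetical consumer
code, just to illustrate the memmap_named() API from the hunk above; the
function name and the "oops" region name are made up:

static int __init my_driver_reserve(void)	/* hypothetical consumer */
{
	u64 start, size;

	if (!memmap_named("oops", &start, &size))
		return -ENODEV;

	/*
	 * start/size now describe the region reserved during early boot,
	 * e.g. to be handed to ramoops as mem_address/mem_size.
	 */
	return 0;
}
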
--
Sincerely yours,
Mike.