Message-ID: <Z-wFszhJ_9o4dc8O@kernel.org>
Date: Tue, 1 Apr 2025 18:26:43 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vincent Donnefort <vdonnefort@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, Kees Cook <kees@...nel.org>,
Tony Luck <tony.luck@...el.com>,
"Guilherme G. Piccoli" <gpiccoli@...lia.com>,
linux-hardening@...r.kernel.org
Subject: Re: [PATCH v2 1/2] tracing: ring-buffer: Have the ring buffer code
do the vmap of physical memory
On Tue, Apr 01, 2025 at 11:11:59AM -0400, Steven Rostedt wrote:
> On Tue, 1 Apr 2025 12:56:31 +0300
> Mike Rapoport <rppt@...nel.org> wrote:
>
> > > For example, using "mem=" on the kernel command line will literally
> > > limit the amount of RAM the kernel will use, and in doing so will
> > > limit the page allocations too.
> >
> > And using memmap=m$n on x86 creates a hole in System RAM that has neither
> > struct pages nor kernel mappings, and it is never considered RAM anywhere
> > in mm.
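(For reference, the syntax is memmap=nn[KMG]$ss[KMG], e.g. memmap=16M$0x100000000
punches a 16M hole at the 4G boundary; the values here are only illustrative, and
the '$' may need escaping in the bootloader configuration.)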
>
> Hmm, when that is used, then we had better not "free" the buffer.
>
> >
> > > IOW, all of these kernel command line things are *subtle*.
> > >
> > > Don't mis-use them by then making assumptions about how they work
> > > today (or how they will work tomorrow).
> >
> > I'd say it's better not to use them at all. They cause weirdness in the
> > memory layout, and architectures are inconsistent in how they implement them.
> >
> > > > Mike can correct me if I'm wrong, but the memory that was stolen was actual
> > > > memory returned by the system (E820 on x86). reserve_mem reserves it before
> > > > the memory allocator is initialized. So what reserve_mem returns is valid
> > > > memory that could be used by the memory allocator, but is currently just
> > > > "reserved", which means the allocator is prevented from using it.
> > >
> > > That may indeed be true of reserve_mem.
> >
> > Memory from reserve_mem behaves like any other early allocation: it has
> > proper struct pages (marked PG_reserved) and it is mapped in the direct
> > map, so phys_to_virt() will work on it.
> >
> > As for mapping it to userspace, vm_iomap_memory() seems like the best API
> > to use. It has all the alignment checks, will refuse to map ranges that
> > are not properly aligned, and will use the vma information to create the
> > right mappings.
> >
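For what it's worth, a minimal sketch of how that could look from an
->mmap() handler (the identifiers below are made up for illustration, this
is not the actual ring-buffer code):

	static int rb_reserved_mmap(struct file *file, struct vm_area_struct *vma)
	{
		phys_addr_t start, size;

		/* hypothetical lookup of a region set aside with reserve_mem= */
		if (!reserve_mem_find_by_name("trace", &start, &size))
			return -ENODEV;

		/* checks vma size/alignment and creates the user mapping */
		return vm_iomap_memory(vma, start, size);
	}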
>
> When using vmap() to get the virtual addresses (via a kmalloc_array() of
> struct page pointers), vunmap() gives the memory back to the memory allocator:
>
> ~# free
>               total        used        free      shared  buff/cache   available
> Mem:        8185928      296676     7840576         920      148280     7889252
> Swap:       7812092           0     7812092
> ~# rmdir /sys/kernel/tracing/instances/boot_mapped/
> ~# free
>               total        used        free      shared  buff/cache   available
> Mem:        8206404      290868     7866772         920      148384     7915536
> Swap:       7812092           0     7812092
>
> With no issues.
>
> But if I use vmap_page_range(), how do I give that back to the memory allocator?
But you need neither vmap() nor vmap_page_range() to have kernel page
tables for the memory you get from reserve_mem. It's already mapped, and
plain phys_to_virt() gives you a virtual address you can use.
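E.g., roughly (the names here are only to show the idea):

	phys_addr_t start, size;
	void *vaddr;

	/* region set aside earlier with reserve_mem=...:trace */
	if (reserve_mem_find_by_name("trace", &start, &size))
		vaddr = phys_to_virt(start);	/* already in the direct map */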
> Calling vunmap() on that memory gives me:
>
> [ 1779.832484] ------------[ cut here ]------------
> [ 1779.834076] Trying to vunmap() nonexistent vm area (000000027c000000)
> [ 1779.835941] WARNING: CPU: 6 PID: 956 at mm/vmalloc.c:3413 vunmap+0x5a/0x60
>
> What's the proper way to say: "I no longer need this physical memory I
> reserved, the kernel can now use it"?
free_reserved_area()
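i.e. something along these lines when the instance is torn down (untested
sketch, start/size being what reserve_mem handed out):

	/* clears PG_reserved and gives the pages back to the page allocator */
	free_reserved_area(phys_to_virt(start), phys_to_virt(start + size),
			   -1, "ring buffer");

(-1 here means "don't poison the freed pages".)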
> -- Steve
--
Sincerely yours,
Mike.