Message-ID: <20250401215115.602501043@goodmis.org>
Date: Tue, 01 Apr 2025 17:51:15 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vincent Donnefort <vdonnefort@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...nel.org>,
Jann Horn <jannh@...gle.com>
Subject: [PATCH v4 0/4] tracing: Clean up persistent ring buffer code (was: ring-buffer: Allow persistent memory to be user space mmapped)
[
Note, this does not include the update to allow the persistent
ring buffer to be memory mapped to user space. This contains only
the 4 patches that I feel make the code more correct, one of which
also includes a bug fix.
]
Now that I have learned that the memory passed back from reserve_mem is
part of the memory allocator's pages and is just "reserved", and that
this memory is already virtually mapped, the code can simply use
phys_to_virt() on the returned physical address to get the virtual
mapping for that memory! (Thanks Mike!)
That makes things much easier, especially since it means that the memory
returned by reserve_mem is no different from memory returned by the
page allocator. This allows that memory to be memory mapped to user
space no differently than the normal buffer is mapped.
This new series does the following:
- Enforce that the memory mapping is page aligned (both the address and
  the size). If not, error out.
- Use phys_to_virt() to get the virtual memory from the addresses
  returned by reserve_mem. Also use free_reserved_area() to give the
  memory back to the buddy allocator when it is freed.
- Treat a buffer allocated via memmap differently. It still needs to be
  virtually mapped (phys_to_virt() cannot be used) and it must not be
  freed nor memory mapped to user space. A new flag is added when a
  buffer is created this way to prevent it from ever being memory mapped
  to user space, and the reference count is incremented so that it can
  never be freed.
- Use vmap_page_range() instead of using kmalloc_array() to create an
  array of struct pages for vmap().
- Use flush_kernel_vmap_range() instead of flush_dcache_folio().
Changes since v3: https://lore.kernel.org/linux-trace-kernel/20250401202549.409271454@goodmis.org/
- Fix whitespace in patch 3
- Updated change log of patch 4 to include bug fix comment
- Removed the patch that enables the persistent ring buffer to be
  memory mapped to user space. That will be done in the next merge window.
Steven Rostedt (4):
tracing: Enforce the persistent ring buffer to be page aligned
tracing: Have reserve_mem use phys_to_virt() and separate from memmap buffer
tracing: Use vmap_page_range() to map memmap ring buffer
ring-buffer: Use flush_kernel_vmap_range() over flush_dcache_folio()
----
Documentation/admin-guide/kernel-parameters.txt | 2 +
Documentation/trace/debugging.rst | 2 +
kernel/trace/ring_buffer.c | 5 +-
kernel/trace/trace.c | 73 +++++++++++++++++--------
kernel/trace/trace.h | 1 +
5 files changed, 58 insertions(+), 25 deletions(-)