Message-ID: <20250331215051.21d77cab@gandalf.local.home>
Date: Mon, 31 Mar 2025 21:50:51 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Jann Horn <jannh@...gle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
 linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org, Masami
 Hiramatsu <mhiramat@...nel.org>, Mark Rutland <mark.rutland@....com>,
 Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Andrew Morton
 <akpm@...ux-foundation.org>, Vincent Donnefort <vdonnefort@...gle.com>,
 Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>, Kees
 Cook <kees@...nel.org>, Tony Luck <tony.luck@...el.com>, "Guilherme G.
 Piccoli" <gpiccoli@...lia.com>, linux-hardening@...r.kernel.org, Matthew
 Wilcox <willy@...radead.org>
Subject: Re: [PATCH v2 1/2] tracing: ring-buffer: Have the ring buffer code
 do the vmap of physical memory

On Tue, 1 Apr 2025 03:28:20 +0200
Jann Horn <jannh@...gle.com> wrote:

> I think you probably need flushes on both sides, since you might have
> to first flush out the dirty cacheline you wrote through the kernel
> mapping, then discard the stale clean cacheline for the user mapping,
> or something like that? (Unless these VIVT cache architectures provide
> stronger guarantees on cache state than I thought.) But when you're
> adding data to the tracing buffers, I guess maybe you only want to
> flush the kernel mapping from the kernel, and leave flushing of the
> user mapping to userspace? I think if you're running in some random
> kernel context, you probably can't even reliably flush the right
> userspace context - see how for example vivt_flush_cache_range() does
> nothing if the MM being flushed is not running on the current CPU.

I'm assuming I need to flush both the kernel mapping (to get the updates out
to memory) and the user-space mapping (so user space can read those updates).
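
Something like this is what I was picturing (a rough sketch, not the actual
patch; the helper name and how the vma gets here are made up):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical helper, just to show the two flushes. kaddr is the
 * kernel vmap alias of the reader page, uaddr/vma the user mapping
 * of the same physical page. The two flush calls are existing kernel
 * APIs; everything around them is assumed for the sketch.
 */
static void reader_page_flush(struct vm_area_struct *vma,
			      void *kaddr, unsigned long uaddr)
{
	/* Write back dirty lines from the kernel-side alias. */
	flush_kernel_vmap_range(kaddr, PAGE_SIZE);

	/* Invalidate any stale lines in the user-side alias. */
	flush_cache_range(vma, uaddr, uaddr + PAGE_SIZE);
}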

The paths are all done via system calls from user space, so the flushes
should happen on the same CPU as the writes. User space will do an ioctl() on
the buffer file descriptor asking for an update, the kernel will populate the
page with that update, and then user space will read the update after the
ioctl() returns. All very synchronous. Thus, we don't need to worry about an
update made on one CPU being read on another.
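
In outline (pseudo-code; the ioctl command and helper names are invented for
illustration, not the actual ring-buffer code):

static long rb_map_ioctl(struct file *file, unsigned int cmd,
			 unsigned long arg)
{
	struct rb_user_map *map = file->private_data;	/* assumed */

	if (cmd != RB_IOC_UPDATE_READER)	/* hypothetical command */
		return -ENOTTY;

	/*
	 * All of this runs synchronously in the caller's context, on
	 * the same CPU that user space made the syscall from.
	 */
	rb_populate_reader_page(map);	/* kernel writes the update */
	rb_flush_reader_page(map);	/* flush before returning */

	return 0;	/* user space reads the page after this */
}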

The same holds when user space wants to read the buffer. The ioctl() will
swap the old reader page with one of the write pages, making it the new
"reader" page, where no more updates will happen. The flush happens after
that swap and before going back to user space.
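
I.e. the read-side ordering is (again pseudo-code, invented names except for
flush_dcache_page(), which is the existing API):

static void rb_update_reader(struct rb_user_map *map)
{
	struct page *reader;

	/*
	 * Swap the old reader page with one of the write pages. From
	 * here on, no writer touches this page, so flushing it cannot
	 * race with new updates.
	 */
	reader = rb_swap_reader_page(map);	/* hypothetical */

	/* Resolve any cache aliases before returning to user space. */
	flush_dcache_page(reader);
}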

-- Steve
