Message-ID: <20250123040533.e7guez5drz7mk6es@jpoimboe>
Date: Wed, 22 Jan 2025 20:05:33 -0800
From: Josh Poimboeuf <jpoimboe@...nel.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: x86@...nel.org, Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
linux-kernel@...r.kernel.org, Indu Bhagat <indu.bhagat@...cle.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
linux-perf-users@...r.kernel.org, Mark Brown <broonie@...nel.org>,
linux-toolchains@...r.kernel.org, Jordan Rome <jordalgo@...a.com>,
Sam James <sam@...too.org>, linux-trace-kernel@...r.kernel.org,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
Jens Remus <jremus@...ux.ibm.com>,
Florian Weimer <fweimer@...hat.com>,
Andy Lutomirski <luto@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Weinan Liu <wnliu@...gle.com>
Subject: Re: [PATCH v4 28/39] unwind_user/deferred: Add deferred unwinding interface

On Wed, Jan 22, 2025 at 03:13:10PM -0500, Mathieu Desnoyers wrote:
> > +struct unwind_work {
> > +	struct callback_head work;
> > +	unwind_callback_t func;
> > +	int pending;
> > +};
>
> This is a lot of information to keep around per instance.
>
> I'm not sure it would be OK to have a single unwind_work per perf-event
> for perf. I suspect it may need to be per perf-event X per-task if a
> perf-event can be associated to more than a single task (not sure ?).
For "perf record -g <command>", it seems to be one event per task.
Incidentally this is the mode where I did my perf testing :-/
But looking at it now, a global "perf record -g" appears to use one
event per CPU. So if a task requests an unwind and then schedules out
before returning to user space, any subsequent tasks trying to unwind on
that CPU would be blocked until the original task returned to user. So
yeah, that's definitely a problem.
Actually a per-CPU unwind_work descriptor could conceivably work if we
able to unwind at schedule() time.
But Steve pointed out that wouldn't work so well if the task isn't in
RUNNING state.
However... would it be a horrible idea for 'next' to unwind 'prev' after
the context switch???
> For LTTng, we'd have to consider something similar because of multi-session
> support. Either we'd have one unwind_work per-session X per-task, or we'd
> need to multiplex this internally within LTTng-modules. None of this is
> ideal in terms of memory footprint.
>
> We should look at what part of this information can be made static/global
> and what part is task-local, so we minimize the amount of redundant data
> per-task (memory footprint).
>
> AFAIU, most of that unwind_work information is global:
>
> - work,
> - func,
>
> And could be registered dynamically by the tracer when it enables
> tracing with an interest on stack walking.
>
> At registration, we can allocate a descriptor ID (with a limited bounded
> max number, configurable). This would associate a work+func to a given
> ID, and keep track of this in a global table (indexed by ID).
>
> I suspect that the only thing we really want to keep track of per-task
> is the pending bit, and what is the ID of the unwind_work associated.
> This could be kept, per-task, in either:
>
> - a bitmap of pending bits, indexed by ID, or
> - an array of pending IDs.
That's basically what I was doing before.  The per-task state also had:

- 'struct callback_head work' for doing the task work.  A single work
  function was used to multiplex the callbacks, as opposed to the
  current patches where each descriptor gets its own separate
  task_work.

- 'void *privs[UNWIND_MAX_CALLBACKS]' opaque data pointers.  Maybe
  some callbacks don't need that, but perf needed it for the 'event'
  pointer.  For 32 max callbacks that's 256 bytes per task.

- 'u64 last_cookies[UNWIND_MAX_CALLBACKS]' to prevent a callback from
  getting called twice.  But actually that may have been overkill; it
  should be fine to call the callback again with the cached stack
  trace.  The tracer could instead have its own policy for how to
  handle dupes.

- 'unsigned int work_pending' to designate whether the task_work is
  pending.  Also probably not necessary, since the pending bits could
  serve the same purpose.
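
Put together, the old per-task blob looked roughly like the struct
below.  This is a from-memory sketch with approximate names, not the
actual code from the old patches:

#include <linux/types.h>	/* struct callback_head, u64, DECLARE_BITMAP() */

/* sketch only, field names approximate */
#define UNWIND_MAX_CALLBACKS	32

struct unwind_task_state {
	/* single task_work, multiplexing all registered callbacks */
	struct callback_head	work;

	/* opaque data pointer for each registered callback */
	void			*privs[UNWIND_MAX_CALLBACKS];

	/* last cookie seen by each callback, to avoid duplicate calls */
	u64			last_cookies[UNWIND_MAX_CALLBACKS];

	/* per-callback "unwind requested" bits */
	DECLARE_BITMAP(pending, UNWIND_MAX_CALLBACKS);

	/* whether the task_work has already been queued */
	unsigned int		work_pending;
};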

So it had more concurrency to deal with, to handle the extra per-task
state.

It also had a global array of callbacks, which used a mutex and SRCU to
coordinate between the register/unregister and the task work.

Another major issue was that it wasn't NMI-safe due to all the shared
state.  So a tracer in NMI would have to schedule an IRQ to call
unwind_deferred_request().  Not only is that a pain for the tracers,
it's problematic in other ways:

- If the NMI occurred in schedule() with IRQs disabled, the IRQ would
  actually interrupt the 'next' task.  So the caller would have to
  stash a 'task' pointer for the IRQ handler to read and pass to
  unwind_deferred_request().  (similar to the task_work bug I found)

- Thus the deferred unwind interface would need to handle requests
  from non-current, introducing a new set of concurrency issues.

- Also, while a tracer in NMI can unwind the kernel stack and send
  that to a ring buffer immediately, it can't store the cookie along
  with it, which means yet more headaches for the tracer.

Once I changed the interface to get rid of the global nastiness, all
those problems went away.

Of course that now introduces the new problem that each tracer (or
tracing event) needs some kind of per-task state.  But otherwise this
new interface really simplifies things a *lot*.

Anyway, I don't have a good answer at the moment.  Will marinate on it.

Maybe we could do something like allocate the unwind_work (or some
equivalent) on demand at the time of the unwind request, using
GFP_NOWAIT or GFP_ATOMIC or some such, then free it during the task
work?
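
Something along these lines, maybe (hypothetical helper names, just to
illustrate the lifetime; not an actual proposal for the internals of
unwind_deferred_request()):

#include <linux/slab.h>
#include <linux/task_work.h>

/* sketch: allocate the descriptor at request time ... */
static struct unwind_work *unwind_work_alloc(unwind_callback_t func)
{
	struct unwind_work *work;

	/* no-sleep allocation, can fail under memory pressure */
	work = kzalloc(sizeof(*work), GFP_NOWAIT);
	if (!work)
		return NULL;

	work->func = func;
	return work;
}

/* ... and free it once the task work has run */
static void unwind_work_func(struct callback_head *head)
{
	struct unwind_work *work = container_of(head, struct unwind_work, work);

	/* do the deferred user unwind and call work->func() here */

	kfree(work);
}

The obvious wrinkle being that the request could then fail when the
allocation fails, so callers would need to tolerate that.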
> Unregistration of unwind_work could iterate on all tasks and clear the
> pending bit or ID associated with the unregistered work, to make sure
> we don't trigger unrelated work after a re-use.
What the old unregister code did was to remove it from the global
callbacks array (with the careful use of mutex+SRCU to coordinate with
the task work). Then synchronize_srcu() before returning.
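
Roughly the usual mutex+SRCU pattern, i.e. something like the sketch
below (made-up names, not the actual old code):

#include <linux/mutex.h>
#include <linux/srcu.h>

#define UNWIND_MAX_CALLBACKS	32

struct unwind_callback {
	void (*func)(struct unwind_callback *cb);
};

static DEFINE_MUTEX(callbacks_mutex);
DEFINE_STATIC_SRCU(callbacks_srcu);
static struct unwind_callback __rcu *callbacks[UNWIND_MAX_CALLBACKS];

/* task work side: walk the array under SRCU */
static void unwind_task_work(struct callback_head *head)
{
	int i, idx;

	idx = srcu_read_lock(&callbacks_srcu);
	for (i = 0; i < UNWIND_MAX_CALLBACKS; i++) {
		struct unwind_callback *cb;

		cb = srcu_dereference(callbacks[i], &callbacks_srcu);
		if (cb)
			cb->func(cb);
	}
	srcu_read_unlock(&callbacks_srcu, idx);
}

/* unregister side: clear the slot under the mutex, then wait for readers */
static void unwind_callback_unregister(int id)
{
	mutex_lock(&callbacks_mutex);
	rcu_assign_pointer(callbacks[id], NULL);
	mutex_unlock(&callbacks_mutex);

	/* guarantee no task work still sees the old callback on return */
	synchronize_srcu(&callbacks_srcu);
}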
> > +/*
> > + * The context cookie is a unique identifier which allows post-processing to
> > + * correlate kernel trace(s) with user unwinds. The high 12 bits are the CPU
> > + * id; the lower 48 bits are a per-CPU entry counter.
> > + */
> > +static u64 ctx_to_cookie(u64 cpu, u64 ctx)
> > +{
> > +	BUILD_BUG_ON(NR_CPUS > 65535);
>
> 2^12 = 4k, not 64k. Perhaps you mean to reserve 16 bits
> for cpu numbers ?
Yeah, here the code is right but the comment is wrong. It actually does
use 16 bits.
> > +	return (ctx & ((1UL << 48) - 1)) | (cpu << 48);
>
> Perhaps use ilog2(NR_CPUS) instead for the number of bits to use
> rather than hard code 12 ?
I'm thinking I'd rather keep it simple by hard-coding the # of bits, so
as to avoid any surprises caused by edge cases.
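
So the fix would just be to the comment text, i.e. (same code as the
quoted hunk, only the comment changed):

/*
 * The context cookie is a unique identifier which allows post-processing to
 * correlate kernel trace(s) with user unwinds.  The upper 16 bits are the
 * CPU id; the lower 48 bits are a per-CPU entry counter.
 */
static u64 ctx_to_cookie(u64 cpu, u64 ctx)
{
	BUILD_BUG_ON(NR_CPUS > 65535);

	return (ctx & ((1UL << 48) - 1)) | (cpu << 48);
}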
--
Josh