Message-ID: <20241024044159.3156646-1-andrii@kernel.org>
Date: Wed, 23 Oct 2024 21:41:57 -0700
From: Andrii Nakryiko <andrii@...nel.org>
To: linux-trace-kernel@...r.kernel.org,
peterz@...radead.org,
oleg@...hat.com
Cc: rostedt@...dmis.org,
mhiramat@...nel.org,
mingo@...nel.org,
bpf@...r.kernel.org,
linux-kernel@...r.kernel.org,
jolsa@...nel.org,
paulmck@...nel.org,
Andrii Nakryiko <andrii@...nel.org>
Subject: [PATCH v3 tip/perf/core 0/2] SRCU-protected uretprobes hot path
Recently landed changes made the uprobe entry hot code path use RCU Tasks
Trace to avoid touching the uprobe refcount, which at high frequencies of
uprobe triggering led to excessive cache line bouncing and limited scalability
as the number of CPUs simultaneously executing uprobe handlers increased.
This patch set adds the return uprobe (uretprobe) side of this, this time
utilizing SRCU for the same reasons. Given that the time between entry uprobe
activation (at which point uretprobe code hijacks the user-space stack to get
activated on user function return) and uretprobe activation can be arbitrarily
long and is completely under the control of user code, we need to protect
ourselves from overly long or unbounded SRCU grace periods.
To that end we keep SRCU protection only for a limited time, and if user-space
code takes longer to return, pending uretprobe instances are "downgraded" to
refcounted ones. This gives us the best scalability and performance for
high-frequency uretprobes, and keeps an upper bound on SRCU grace period
duration for low-frequency uretprobes.
There are a bunch of synchronization issues between the timer callback running
in IRQ context and the current thread executing uretprobe handlers, all of
which are abstracted away behind a "hybrid lifetime uprobe" (hprobe) wrapper
around the uprobe instance itself.
There is now a speculative try_get_uprobe() and, possibly, a compensating
put_uprobe() being done from the timer callback (softirq), so we need to make
sure that put_uprobe() works well from any context. This is what patch #1
does, employing a deferred work callback and shifting all the locking into it.
v2->v3:
- rebased onto peterz/queue.git's perf/core on top of Jiri's changes;
- simplify hprobe states by utilizing HPROBE_GONE for NULL uprobe (Peter);
- hprobe_expire() can return uprobe with refcount, if requested (Peter);
- keep hprobe_init_leased() and hprobe_init_stable() to a) avoid srcu_idx
bikeshedding dependency and b) leased constructor shouldn't accept NULL
uprobe, so it's nice to be able to easily express and enforce that;
- patch #1 stays the same, we'll work on uprobe_delayed_lock separately;
v1->v2:
- dropped the single-stepped uprobe changes to make this change a bit more
palatable to Oleg and get some good will from him :)
- fixed the bug with not calling __srcu_read_unlock when "expiring" leased
uprobe, but failing to get refcount;
- switched hprobe implementation to an explicit state machine, which seems
to make logic more straightforward, evidenced by this allowing me to spot
the above subtle LEASED -> GONE transition bug;
- re-ran uprobe-stress many, many times; it was instrumental in gaining
confidence in the implementation and spotting subtle bugs (including the
above one, once I modified the timer logic to run at a fixed interval to
increase the probability of races with the normal uretprobe consumer code);
rfc->v1:
- made put_uprobe() work in any context, not just user context (Oleg);
- changed to unconditional mod_timer() usage to avoid races (Oleg).
- I kept the single-stepped uprobe changes, as they make simple use of all
the hprobe functionality developed in patch #1.
Andrii Nakryiko (2):
uprobes: allow put_uprobe() from non-sleepable softirq context
uprobes: SRCU-protect uretprobe lifetime (with timeout)
include/linux/uprobes.h | 54 ++++++-
kernel/events/uprobes.c | 309 +++++++++++++++++++++++++++++++++++-----
2 files changed, 322 insertions(+), 41 deletions(-)
--
2.43.5