Message-ID: <20241008002556.2332835-1-andrii@kernel.org>
Date: Mon,  7 Oct 2024 17:25:54 -0700
From: Andrii Nakryiko <andrii@...nel.org>
To: linux-trace-kernel@...r.kernel.org,
	peterz@...radead.org,
	oleg@...hat.com
Cc: rostedt@...dmis.org,
	mhiramat@...nel.org,
	mingo@...nel.org,
	bpf@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	jolsa@...nel.org,
	paulmck@...nel.org,
	Andrii Nakryiko <andrii@...nel.org>
Subject: [PATCH v2 tip/perf/core 0/2] SRCU-protected uretprobes hot path

Recently landed changes made the uprobe entry hot code path use RCU
Tasks Trace to avoid touching the uprobe refcount, which, at a high
frequency of uprobe triggering, leads to excessive cache line bouncing
and limits scalability as the number of CPUs simultaneously executing
uprobe handlers grows.

This patch set adds the return uprobe (uretprobe) side of this, this
time utilizing SRCU for the same reasons. Given that the time between
entry uprobe activation (at which point the uretprobe code hijacks the
user-space stack so it gets activated on user function return) and
uretprobe activation can be arbitrarily long and is completely under
the control of user code, we need to protect ourselves from overly
long or unbounded SRCU grace periods.

To that end, we keep SRCU protection only for a limited time, and if
the user-space code takes longer to return, pending uretprobe
instances are "downgraded" to refcounted ones. This gives us the best
scalability and performance for high-frequency uretprobes, and keeps
an upper bound on SRCU grace period duration for low-frequency
uretprobes.
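
Very roughly, and only as an illustration (hprobe_lease(),
hprobe_timer_fn(), uretprobes_srcu, and the timeout value below are
placeholder names, not necessarily what patch #2 uses; only
try_get_uprobe() and __srcu_read_unlock() are referenced elsewhere in
this cover letter), the lifecycle looks something like this, with all
timer-vs-task races glossed over:

/*
 * Purely illustrative sketch (as if in kernel/events/uprobes.c), not
 * the actual patch code.
 */
DEFINE_STATIC_SRCU(uretprobes_srcu);	/* placeholder SRCU domain */

struct hprobe {
	struct uprobe		*uprobe;
	int			srcu_idx;	/* from __srcu_read_lock() */
	struct timer_list	timer;
};

/* timer callback: user space is taking too long, downgrade the lease */
static void hprobe_timer_fn(struct timer_list *t)
{
	struct hprobe *hp = from_timer(hp, t, timer);

	/* pin the uprobe with a refcount instead, if it is still alive */
	if (!try_get_uprobe(hp->uprobe))
		hp->uprobe = NULL;	/* uprobe is already going away */
	/* end the SRCU lease whether or not the refcount was obtained */
	__srcu_read_unlock(&uretprobes_srcu, hp->srcu_idx);
}

/* called when the return instance is set up at entry uprobe time */
static void hprobe_lease(struct hprobe *hp, struct uprobe *uprobe)
{
	hp->uprobe = uprobe;
	/* refcount-free fast path: pin the uprobe with an SRCU lease */
	hp->srcu_idx = __srcu_read_lock(&uretprobes_srcu);
	/* bound the lease so SRCU grace periods can't grow unboundedly */
	timer_setup(&hp->timer, hprobe_timer_fn, 0);
	mod_timer(&hp->timer, jiffies + HZ / 10 /* example timeout */);
}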

There are a bunch of synchronization issues between the timer callback
running in softirq context and the current thread executing uretprobe
handlers, which are abstracted away behind a "hybrid lifetime uprobe"
(hprobe) wrapper around the uprobe instance itself.
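
To give a flavor of that (purely hypothetical names again; imagine the
struct hprobe from the sketch above growing an 'enum hprobe_state
state' member), the task-side step that claims the uprobe before
running the handlers could look like:

enum hprobe_state {
	HPROBE_LEASED,		/* pinned by the SRCU lease, timer armed */
	HPROBE_STABLE,		/* timer fired, pinned by a refcount */
	HPROBE_GONE,		/* underlying uprobe is gone, don't touch */
	HPROBE_CONSUMED,	/* uretprobe handlers already ran */
};

/* task side: claim the uprobe right before running uretprobe handlers */
static struct uprobe *hprobe_consume(struct hprobe *hp)
{
	switch (xchg(&hp->state, HPROBE_CONSUMED)) {
	case HPROBE_LEASED:	/* timer hasn't fired; still under SRCU */
	case HPROBE_STABLE:	/* timer downgraded us to a refcount */
		return hp->uprobe;
	case HPROBE_GONE:	/* uprobe was freed while in user space */
		return NULL;
	default:
		WARN_ON_ONCE(1);	/* double consume would be a bug */
		return NULL;
	}
}

In such a scheme the timer side would similarly try_cmpxchg() the state
from LEASED to STABLE or GONE and back off if the task has already
consumed the instance; that is exactly the kind of race the hprobe
abstraction is there to hide.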

There is now a speculative try_get_uprobe() and, possibly, a
compensating put_uprobe() being done from the timer callback
(softirq), so we need to make sure that put_uprobe() works correctly
from any context. This is what patch #1 does, employing a deferred
work callback and shifting all the locking into it.
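
As a rough sketch of that idea (uprobe_free_deferred() and the 'work'
member are made-up names here, and the real cleanup steps are elided):

static void uprobe_free_deferred(struct work_struct *work)
{
	struct uprobe *uprobe = container_of(work, struct uprobe, work);

	/* the actual tree removal, locking and freeing happen here */
}

static void put_uprobe(struct uprobe *uprobe)
{
	if (!refcount_dec_and_test(&uprobe->ref))
		return;

	/*
	 * We may be in a timer callback (softirq), where taking
	 * sleepable locks isn't allowed, so defer the teardown to a
	 * workqueue instead of doing it inline.
	 */
	INIT_WORK(&uprobe->work, uprobe_free_deferred);
	schedule_work(&uprobe->work);
}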

v1->v2:
  - dropped the single-stepped uprobes changes to make this change a bit
    more palatable to Oleg and get some good will from him :)
  - fixed a bug where __srcu_read_unlock() was not called when
    "expiring" a leased uprobe but failing to get a refcount;
  - switched the hprobe implementation to an explicit state machine,
    which seems to make the logic more straightforward, as evidenced by
    it allowing me to spot the subtle LEASED -> GONE transition bug
    mentioned above;
  - re-ran uprobe-stress many, many times; it was instrumental in
    gaining confidence in the implementation and spotting subtle bugs
    (including the above one, once I modified the timer logic to run at
    a fixed interval to increase the probability of races with the
    normal uretprobe consumer code);
rfc->v1:
  - made put_uprobe() work in any context, not just user context (Oleg);
  - changed to unconditional mod_timer() usage to avoid races (Oleg);
  - I kept the single-stepped uprobe changes, as they make simple use of
    all the hprobe functionality developed in patch #1.


Andrii Nakryiko (2):
  uprobes: allow put_uprobe() from non-sleepable softirq context
  uprobes: SRCU-protect uretprobe lifetime (with timeout)

 include/linux/uprobes.h |  54 ++++++-
 kernel/events/uprobes.c | 312 ++++++++++++++++++++++++++++++++++------
 2 files changed, 321 insertions(+), 45 deletions(-)

-- 
2.43.5

