Message-Id: <20220704150514.48816-1-elver@google.com>
Date: Mon, 4 Jul 2022 17:05:00 +0200
From: Marco Elver <elver@...gle.com>
To: elver@...gle.com, Peter Zijlstra <peterz@...radead.org>,
Frederic Weisbecker <frederic@...nel.org>,
Ingo Molnar <mingo@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
Michael Ellerman <mpe@...erman.id.au>,
linuxppc-dev@...ts.ozlabs.org, linux-perf-users@...r.kernel.org,
x86@...nel.org, linux-sh@...r.kernel.org,
kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org
Subject: [PATCH v3 00/14] perf/hw_breakpoint: Optimize for thousands of tasks
The hw_breakpoint subsystem's code has seen little change in over 10
years. In that time, systems with hundreds of CPUs have become common,
along with improvements to the perf subsystem: using breakpoints on
thousands of concurrent tasks should be a supported use case.
The breakpoint constraints accounting algorithm is the major bottleneck
in doing so:
1. toggle_bp_slot() and fetch_bp_busy_slots() are O(#cpus * #tasks):
both iterate through all CPUs and call task_bp_pinned(), which is
itself O(#tasks); see the sketch after this list.
2. Everything is serialized on a global mutex, 'nr_bp_mutex'.
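To make the loop shape concrete, below is a minimal sketch of the
pre-series accounting code (simplified from kernel/events/hw_breakpoint.c;
locking and other details elided):

  /*
   * Pre-series shape (simplified): every slot query for a task-bound
   * breakpoint walks the global list of *all* task-bound breakpoints,
   * and callers invoke it once per CPU -- O(#cpus * #tasks) in total.
   */
  static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type)
  {
          struct task_struct *tsk = bp->hw.target;
          struct perf_event *iter;
          int count = 0;

          list_for_each_entry(iter, &bp_task_head, hw.bp_list) {  /* O(#tasks) */
                  if (iter->hw.target == tsk &&
                      find_slot_idx(iter->attr.bp_type) == type &&
                      (iter->cpu < 0 || cpu == iter->cpu))
                          count += hw_breakpoint_weight(iter);
          }

          return count;
  }

  /* ...which fetch_bp_busy_slots() and toggle_bp_slot() invoke like: */
  for_each_cpu(cpu, cpumask)                                      /* O(#cpus) */
          ... task_bp_pinned(cpu, bp, type) ...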
The series starts with the simpler optimizations and finishes with the
more complex ones:
1. We first optimize task_bp_pinned() to take only O(1) on average
(sketched after this list).
2. Rework synchronization to allow concurrency when checking and
updating breakpoint constraints for tasks.
3. Eliminate the O(#cpus) loops in the CPU-independent case.
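For illustration, step 1 replaces the global list walk with a hash table
keyed by the breakpoint's target task, roughly as follows (a simplified
sketch of the approach in patch 4; the rhashtable parameters,
task_bps_ht_params, are elided):

  /*
   * Task-bound breakpoints now live in a resizable hash table keyed by
   * bp->hw.target, so only the *same* task's breakpoints are iterated:
   * O(1) on average, assuming a bounded number of breakpoints per task.
   */
  static struct rhltable task_bps_ht;

  static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type)
  {
          struct rhlist_head *head, *pos;
          struct perf_event *iter;
          int count = 0;

          rcu_read_lock();
          head = rhltable_lookup(&task_bps_ht, &bp->hw.target, task_bps_ht_params);
          if (!head)
                  goto out;

          rhl_for_each_entry_rcu(iter, pos, head, hw.bp_list) {
                  if (find_slot_idx(iter->attr.bp_type) == type &&
                      (iter->cpu < 0 || cpu == iter->cpu))
                          count += hw_breakpoint_weight(iter);
          }
  out:
          rcu_read_unlock();
          return count;
  }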
Along the way, smaller micro-optimizations and cleanups are included
where they seemed obvious when reading the code (though likely
insignificant on their own).
The result is (on a system with 256 CPUs) that we go from:
| $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64
[ ^ more aggressive benchmark parameters took too long ]
| # Running 'breakpoint/thread' benchmark:
| # Created/joined 30 threads with 4 breakpoints and 64 parallelism
| Total time: 236.418 [sec]
|
| 123134.794271 usecs/op
| 7880626.833333 usecs/op/cpu
... to the following with all optimizations:
| $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64
| # Running 'breakpoint/thread' benchmark:
| # Created/joined 30 threads with 4 breakpoints and 64 parallelism
| Total time: 0.067 [sec]
|
| 35.292187 usecs/op
| 2258.700000 usecs/op/cpu
On the test system used, that is an effective speedup of ~3490x per op.
This is on par with the theoretical ideal performance achievable through
optimizations in hw_breakpoint.c alone (i.e. with constraints accounting
disabled), and only 12% slower than having no breakpoints at all.
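(For reference, the figure follows directly from the numbers above:
123134.794271 usecs/op / 35.292187 usecs/op ~= 3489, hence ~3490x.)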
Changelog
---------
v3:
* Fix typos.
* Introduce hw_breakpoint_is_used() for the test.
* Add WARN_ON in bp_slots_histogram_add().
* Don't use raw_smp_processor_id() in test.
* Apply Acked-by/Reviewed-by given in v2 for mostly unchanged patches.
v2: https://lkml.kernel.org/r/20220628095833.2579903-1-elver@google.com
* Add KUnit test suite.
* Remove struct bp_busy_slots and simplify functions.
* Add "powerpc/hw_breakpoint: Avoid relying on caller synchronization".
* Add "locking/percpu-rwsem: Add percpu_is_write_locked() and percpu_is_read_locked()".
* Use percpu-rwsem instead of rwlock.
* Use task_struct::perf_event_mutex instead of sharded mutex.
* Drop v1 "perf/hw_breakpoint: Optimize task_bp_pinned() if CPU-independent".
* Add "perf/hw_breakpoint: Introduce bp_slots_histogram".
* Add "perf/hw_breakpoint: Optimize max_bp_pinned_slots() for CPU-independent task targets".
* Add "perf/hw_breakpoint: Optimize toggle_bp_slot() for CPU-independent task targets".
* Apply Acked-by/Reviewed-by given in v1 for unchanged patches.
==> Speedup of ~3490x (vs. ~3315x in v1).
v1: https://lore.kernel.org/all/20220609113046.780504-1-elver@google.com/
Marco Elver (14):
perf/hw_breakpoint: Add KUnit test for constraints accounting
perf/hw_breakpoint: Provide hw_breakpoint_is_used() and use in test
perf/hw_breakpoint: Clean up headers
perf/hw_breakpoint: Optimize list of per-task breakpoints
perf/hw_breakpoint: Mark data __ro_after_init
perf/hw_breakpoint: Optimize constant number of breakpoint slots
perf/hw_breakpoint: Make hw_breakpoint_weight() inlinable
perf/hw_breakpoint: Remove useless code related to flexible
breakpoints
powerpc/hw_breakpoint: Avoid relying on caller synchronization
locking/percpu-rwsem: Add percpu_is_write_locked() and
percpu_is_read_locked()
perf/hw_breakpoint: Reduce contention with large number of tasks
perf/hw_breakpoint: Introduce bp_slots_histogram
perf/hw_breakpoint: Optimize max_bp_pinned_slots() for CPU-independent
task targets
perf/hw_breakpoint: Optimize toggle_bp_slot() for CPU-independent task
targets
arch/powerpc/kernel/hw_breakpoint.c | 53 ++-
arch/sh/include/asm/hw_breakpoint.h | 5 +-
arch/x86/include/asm/hw_breakpoint.h | 5 +-
include/linux/hw_breakpoint.h | 4 +-
include/linux/percpu-rwsem.h | 6 +
include/linux/perf_event.h | 3 +-
kernel/events/Makefile | 1 +
kernel/events/hw_breakpoint.c | 638 ++++++++++++++++++++-------
kernel/events/hw_breakpoint_test.c | 333 ++++++++++++++
kernel/locking/percpu-rwsem.c | 6 +
lib/Kconfig.debug | 10 +
11 files changed, 885 insertions(+), 179 deletions(-)
create mode 100644 kernel/events/hw_breakpoint_test.c
--
2.37.0.rc0.161.g10f37bed90-goog