Message-ID: <CANpmjNNtV_6kgoLv=VX3z_oM6ZEvWJNAOj9z4ADcymqmhc+crw@mail.gmail.com>
Date: Thu, 9 Jun 2022 16:55:46 +0200
From: Marco Elver <elver@...gle.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Frederic Weisbecker <frederic@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
linux-perf-users@...r.kernel.org, x86@...nel.org,
linux-sh@...r.kernel.org, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/8] perf/hw_breakpoint: Optimize list of per-task breakpoints
On Thu, 9 Jun 2022 at 16:29, Dmitry Vyukov <dvyukov@...gle.com> wrote:
>
> On Thu, 9 Jun 2022 at 13:31, Marco Elver <elver@...gle.com> wrote:
> >
> > On a machine with 256 CPUs, running the recently added perf breakpoint
> > benchmark results in:
> >
> > | $> perf bench -r 30 breakpoint thread -b 4 -p 64 -t 64
> > | # Running 'breakpoint/thread' benchmark:
> > | # Created/joined 30 threads with 4 breakpoints and 64 parallelism
> > | Total time: 236.418 [sec]
> > |
> > | 123134.794271 usecs/op
> > | 7880626.833333 usecs/op/cpu
> >
> > The benchmark tests inherited breakpoint perf events across many
> > threads.
> >
> > Looking at a perf profile, we can see that the majority of the time is
> > spent in various hw_breakpoint.c functions, which execute within the
> > 'nr_bp_mutex' critical sections; this in turn results in contention on
> > that mutex as well:
> >
> > 37.27% [kernel] [k] osq_lock
> > 34.92% [kernel] [k] mutex_spin_on_owner
> > 12.15% [kernel] [k] toggle_bp_slot
> > 11.90% [kernel] [k] __reserve_bp_slot
> >
> > The culprit here is task_bp_pinned(), which has a runtime complexity of
> > O(#tasks) due to storing all task breakpoints in the same list and
> > iterating through that list looking for a matching task. Clearly, this
> > does not scale to thousands of tasks.
> >
> > While one option would be to make task_struct a breakpoint list node,
> > this would only further bloat task_struct for infrequently used data.
>
> task_struct already has:
>
> #ifdef CONFIG_PERF_EVENTS
> struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
> struct mutex perf_event_mutex;
> struct list_head perf_event_list;
> #endif
>
> Wonder if it's possible to use perf_event_mutex instead of the task_sharded_mtx?
> And possibly perf_event_list instead of task_bps_ht? It will contain
> other perf_event types, so we will need to test type as well, but on
> the positive side, we don't need any management of the separate
> container.
Hmm, yes, I looked at that but then decided against messing with the
perf/core internals. The main issue I have with using perf_event_mutex
is that we might run afoul of perf/core's locking rules, as well as
interfere with other concurrent perf event additions. Using
perf_event_list is very likely a no-go because it requires reworking
perf/core as well.
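
Just to make concrete where the interference would come from, here is a
very rough, untested sketch of such a walk (assuming the events on
perf_event_list are linked via owner_entry, and ignoring the
per-type/per-CPU counting the real code needs):

static int task_bp_pinned_via_owner_list(struct task_struct *tsk)
{
	struct perf_event *event;
	int count = 0;

	/*
	 * Taking perf_event_mutex from hw_breakpoint paths is exactly
	 * where we would start interacting with perf/core's locking rules.
	 */
	mutex_lock(&tsk->perf_event_mutex);
	list_for_each_entry(event, &tsk->perf_event_list, owner_entry) {
		/* perf_event_list holds all event types, so filter. */
		if (event->attr.type == PERF_TYPE_BREAKPOINT)
			count++;
	}
	mutex_unlock(&tsk->perf_event_mutex);

	return count;
}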
I can already hear Peter shouting, but maybe I'm wrong. :-)
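
(For reference, the pattern the patch gets rid of, heavily simplified
and not the verbatim hw_breakpoint.c code: all task-bound breakpoints
currently sit on one shared list, so counting one task's pinned
breakpoints has to walk every task's breakpoints.)

static int task_bp_pinned_today(struct perf_event *bp)
{
	struct perf_event *iter;
	int count = 0;

	/* bp_task_head holds *all* task-bound breakpoints, system-wide. */
	list_for_each_entry(iter, &bp_task_head, hw.bp_list) {
		if (iter->hw.target == bp->hw.target)	/* same task? */
			count += hw_breakpoint_weight(iter);
	}

	return count;
}

The patch replaces that shared list with task_bps_ht, keyed by task, so
a lookup only has to consider that task's breakpoints.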