Message-ID: <FAD07921-FB10-4FDD-9A81-48300EE24F20@fb.com>
Date: Tue, 5 Nov 2019 23:51:42 +0000
From: Song Liu <songliubraving@...com>
To: open list <linux-kernel@...r.kernel.org>
CC: Kernel Team <Kernel-team@...com>,
"acme@...nel.org" <acme@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Alexey Budankov <alexey.budankov@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>, "Tejun Heo" <tj@...nel.org>
Subject: Re: [PATCH v6] perf: Sharing PMU counters across compatible events
More details on where I am heading...
> On Sep 18, 2019, at 10:23 PM, Song Liu <songliubraving@...com> wrote:
>
> This patch tries to enable PMU sharing. To make perf event scheduling
> fast, we use special data structures.
>
> An array of "struct perf_event_dup" is added to the perf_event_context,
> to remember all the duplicated events under this ctx. All the events
> under this ctx have a "dup_id" pointing to their perf_event_dup. Compatible
> events under the same ctx share the same perf_event_dup. The following
> figure shows a simplified version of the data structure.
>
> ctx -> perf_event_dup -> master
>                ^
>                |
> perf_event ---/|
>                |
> perf_event ----/
>
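To make the figure concrete, the lookup is conceptually just the following
(a rough sketch; how "not sharing" is encoded is illustrative, not
necessarily what the patch does):

/* sketch: find the master event that a given event duplicates */
static struct perf_event *
event_dup_master(struct perf_event_context *ctx, struct perf_event *event)
{
	if (event->dup_id < 0)		/* illustrative "not sharing" sentinel */
		return NULL;
	return ctx->dup_events[event->dup_id].master;
}
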
> Connections between perf_event and perf_event_dup are built when events are
> added to or removed from the ctx, so this work is not on the critical path of
> schedule or perf_rotate_context().
>
> On the critical paths (add, del, read), sharing PMU counters doesn't
> increase the complexity. Helper functions event_pmu_[add|del|read]() are
> introduced to cover these cases. All these functions have O(1) time
> complexity.
>
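For example, the add path only needs a constant amount of bookkeeping on top
of the normal pmu->add(). A rough sketch of the idea (not the exact helper
from the patch; error paths and child counts are omitted):

static int event_pmu_add(struct perf_event *event,
			 struct perf_event_context *ctx)
{
	struct perf_event_dup *dup;
	int ret;

	if (event->dup_id < 0)			/* not sharing a counter */
		return event->pmu->add(event, PERF_EF_START);

	dup = &ctx->dup_events[event->dup_id];	/* O(1) lookup */

	if (!dup->active_event_count) {		/* first active user */
		ret = dup->master->pmu->add(dup->master, PERF_EF_START);
		if (ret)
			return ret;
	}
	dup->active_event_count++;

	/* snapshot the master's count, so this event's delta can be
	 * computed later by event_pmu_read()/event_pmu_del() */
	dup->master->pmu->read(dup->master);
	event->dup_base_count = local64_read(&dup->master->count);
	return 0;
}
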
> We allocate a separate perf_event for perf_event_dup->master. This needs
> extra attention, because perf_event_alloc() may sleep. To allocate the
> master event properly, a new pointer, tmp_master, is added to perf_event.
> tmp_master carries a separate perf_event into list_[add|del]_event().
> The master event has valid ->ctx and holds ctx->refcount.
If we allow GFP_ATOMIC in perf_event_alloc(), maybe via an extra option, we
don't need the tmp_master hack: we would only allocate the master when we are
actually going to use it.
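i.e. something like the following at the point where we find a compatible
event. perf_event_alloc_gfp() is made up here, just to show the shape of the
extra option:

/* hypothetical: allocate the master only when a compatible event is found;
 * needs an option so the allocation does not sleep, because
 * list_add_event() cannot sleep */
static struct perf_event *
alloc_dup_master(struct perf_event *event)
{
	struct perf_event *master;

	/* same attr/cpu as the event it will be shared with */
	master = perf_event_alloc_gfp(&event->attr, event->cpu, GFP_ATOMIC);
	if (IS_ERR(master))
		return NULL;	/* no sharing for this event, not an error */
	return master;
}
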
>
> Details about the handling of the master event are added to
> include/linux/perf_event.h, before struct perf_event_dup.
>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
> Cc: Jiri Olsa <jolsa@...nel.org>
> Cc: Alexey Budankov <alexey.budankov@...ux.intel.com>
> Cc: Namhyung Kim <namhyung@...nel.org>
> Cc: Tejun Heo <tj@...nel.org>
> Signed-off-by: Song Liu <songliubraving@...com>
> ---
> include/linux/perf_event.h | 61 ++++++++
> kernel/events/core.c | 294 ++++++++++++++++++++++++++++++++++---
> 2 files changed, 332 insertions(+), 23 deletions(-)
>
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 61448c19a132..a694e5eee80a 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -722,6 +722,12 @@ struct perf_event {
> #endif
>
> struct list_head sb_list;
> +
> + /* for PMU sharing */
> + struct perf_event *tmp_master;
> + int dup_id;
I guess we can get rid of dup_id here, and just have
struct perf_event *dup_master (see the sketch after this hunk).
> + u64 dup_base_count;
> + u64 dup_base_child_count;
> #endif /* CONFIG_PERF_EVENTS */
> };
>
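The struct perf_event side would then look something like this instead
(just a sketch):

	/* for PMU sharing */
	struct perf_event	*dup_master;	/* NULL when not sharing,
						 * otherwise the event that
						 * pmu->add/del/read are
						 * called on */
	u64			dup_base_count;
	u64			dup_base_child_count;
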
> @@ -731,6 +737,58 @@ struct perf_event_groups {
> u64 index;
> };
>
> +/*
> + * Sharing PMU across compatible events
> + *
> + * If two perf_events in the same perf_event_context are counting the same
> + * hardware events (instructions, cycles, etc.), they could share the
> + * hardware PMU counter.
> + *
> + * When a perf_event is added to the ctx (list_add_event), it is compared
> + * against other events in the ctx. If they can share the PMU counter,
> + * a perf_event_dup is allocated to represent the sharing.
> + *
> + * Each perf_event_dup has a virtual master event, which is the event
> + * passed to pmu->add() and pmu->del(). We cannot call perf_event_alloc() in
> + * list_add_event(), so it is allocated and carried by event->tmp_master
> + * into list_add_event().
> + *
> + * Virtual master in different cases/paths:
> + *
> + * < I > perf_event_open() -> close() path:
> + *
> + * 1. Allocated by perf_event_alloc() in sys_perf_event_open();
> + * 2. event->tmp_master->ctx assigned in perf_install_in_context();
> + * 3.a. if used by ctx->dup_events, freed in perf_event_release_kernel();
> + * 3.b. if not used by ctx->dup_events, freed in perf_event_open().
> + *
> + * < II > inherit_event() path:
> + *
> + * 1. Allocated by perf_event_alloc() in inherit_event();
> + * 2. tmp_master->ctx assigned in inherit_event();
> + * 3.a. if used by ctx->dup_events, freed in perf_event_release_kernel();
> + * 3.b. if not used by ctx->dup_events, freed in inherit_event().
> + *
> + * < III > perf_pmu_migrate_context() path:
> + * all dup_events removed during migration (no sharing after the move).
> + *
> + * < IV > perf_event_create_kernel_counter() path:
> + * not supported yet.
> + */
> +struct perf_event_dup {
> + /*
> + * master event passed to pmu->add() and pmu->del().
> + * This event is allocated with perf_event_alloc(). When
> + * attached to a ctx, this event should hold ctx->refcount.
> + */
> + struct perf_event *master;
> + /* number of events in the ctx that share the master */
> + int total_event_count;
> + /* number of active events of the master */
> + int active_event_count;
> +};
And hopefully get rid of this struct entirely.
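i.e. keep the bookkeeping on the master event itself, roughly (field names
are placeholders, not from the patch):

	/* only meaningful on a master event */
	int			dup_total_count;	/* events in this ctx
							 * sharing this master */
	int			dup_active_count;	/* how many of them are
							 * currently active */

event_pmu_add/del/read() would then test event->dup_master instead of dup_id,
still O(1), and there would be nothing extra to allocate or free on the ctx
side.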
Please let me know if this doesn't work.
Thanks,
Song