Message-ID: <CF654118-59C1-46AA-B9DB-CA14D9FFACF7@fb.com>
Date: Fri, 10 Jan 2020 17:37:45 +0000
From: Song Liu <songliubraving@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: open list <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Alexey Budankov <alexey.budankov@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v9] perf: Sharing PMU counters across compatible events
Hi Peter,
Thanks for your review!
> On Jan 10, 2020, at 4:59 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Dec 17, 2019 at 09:59:48AM -0800, Song Liu wrote:
>
> This is starting to look good; find a few comments below.
>
>> include/linux/perf_event.h | 13 +-
>> kernel/events/core.c | 363 ++++++++++++++++++++++++++++++++-----
>> 2 files changed, 332 insertions(+), 44 deletions(-)
>>
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 6d4c22aee384..45a346ee33d2 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -547,7 +547,9 @@ enum perf_event_state {
>> PERF_EVENT_STATE_ERROR = -2,
>> PERF_EVENT_STATE_OFF = -1,
>> PERF_EVENT_STATE_INACTIVE = 0,
>> - PERF_EVENT_STATE_ACTIVE = 1,
>> + /* the hw PMC is enabled, but this event is not counting */
>> + PERF_EVENT_STATE_ENABLED = 1,
>> + PERF_EVENT_STATE_ACTIVE = 2,
>> };
>
> It's probably best to extend the comment above instead of adding a
> comment for one of the states.
Will update.
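For reference, a sketch of what the extended comment could look like (the
wording is mine, and the DEAD/EXIT entries are from the current tree, not
this hunk):

enum perf_event_state {
	PERF_EVENT_STATE_DEAD		= -4,
	PERF_EVENT_STATE_EXIT		= -3,
	PERF_EVENT_STATE_ERROR		= -2,
	PERF_EVENT_STATE_OFF		= -1,
	PERF_EVENT_STATE_INACTIVE	=  0,
	/*
	 * ENABLED: the hw PMC backing this event is counting (the event
	 * is a dup master), but the event itself is not counting.
	 * ACTIVE:  both the hw PMC and the event itself are counting.
	 */
	PERF_EVENT_STATE_ENABLED	=  1,
	PERF_EVENT_STATE_ACTIVE		=  2,
};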
>
>>
>> struct file;
>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 4ff86d57f9e5..7d4b6ac46de5 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -1657,6 +1657,181 @@ perf_event_groups_next(struct perf_event *event)
>> event = rb_entry_safe(rb_next(&event->group_node), \
>> typeof(*event), group_node))
>>
>> +static inline bool perf_event_can_share(struct perf_event *event)
>> +{
>> + /* only share hardware counting events */
>> + return !is_sampling_event(event);
>> + return !is_software_event(event) && !is_sampling_event(event);
>
> One of those return statements is too many; I'm thinking you meant to
> only have the second.
Exactly! The first return is a leftover from my VM tests. Sorry for the confusion.
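For the record, the intended function with that leftover dropped:

static inline bool perf_event_can_share(struct perf_event *event)
{
	/* only share hardware counting events */
	return !is_software_event(event) && !is_sampling_event(event);
}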
>
>> +}
>> +
[...]
>> + active_count = event->dup_active_count;
>> + perf_event_exit_dup_master(event);
>> +
>> + if (!count)
>> + return;
>> +
>> + if (count == 1) {
>> + /* no more sharing */
>> + new_master->dup_master = NULL;
>> + } else {
>> + perf_event_init_dup_master(new_master);
>> + new_master->dup_active_count = active_count;
>> + }
>> +
>> + if (active_count) {
>
> Would it make sense to do something like:
>
> new_master->hw.idx = event->hw.idx;
>
> That should ensure x86_schedule_events() can take the fast path;
> after all, we're adding back the 'same' event. If we do this, it wants
> a comment though.
I think this makes sense for x86, but maybe not as much for other architectures.
For example, it is most likely a no-op for RISC-V. Maybe we can add a new API
to struct pmu, like "void copy_hw_config(struct perf_event *, struct perf_event *)".
For x86, it would look like:
void x86_copy_hw_config(struct perf_event *from, struct perf_event *to)
{
	to->hw.idx = from->hw.idx;
}
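A rough sketch of the wiring (the callback name and its exact call site are
hypothetical):

/* optional callback in struct pmu */
void (*copy_hw_config)(struct perf_event *from, struct perf_event *to);

/* at the point where new_master is re-added, something like: */
if (event->pmu->copy_hw_config)
	event->pmu->copy_hw_config(event, new_master);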
>
>> + WARN_ON_ONCE(event->pmu->add(new_master, PERF_EF_START));
>
> For consistency that probably ought to be:
>
> new_master->pmu->add(new_master, PERF_EF_START);
Will fix.
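Presumably keeping the WARN_ON_ONCE() around it, i.e.:

	WARN_ON_ONCE(new_master->pmu->add(new_master, PERF_EF_START));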
>
>> + if (new_master->state == PERF_EVENT_STATE_INACTIVE)
>> + new_master->state = PERF_EVENT_STATE_ENABLED;
>
> If this really should not be perf_event_set_state(), we need a comment
> explaining why -- I think I see, but it's still early and I've not had
> nearly enough tea to wake me up.
Will add comment.
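Something along these lines, maybe (the rationale is my reading of the
patch, not final wording):

	/*
	 * The new master's hw PMC is now counting, but the event itself
	 * still is not, so it becomes ENABLED rather than ACTIVE.  Do not
	 * use perf_event_set_state() here: this is internal bookkeeping
	 * and should not touch the event's timestamps.
	 */
	if (new_master->state == PERF_EVENT_STATE_INACTIVE)
		new_master->state = PERF_EVENT_STATE_ENABLED;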
[...]
>>
>> @@ -2242,9 +2494,9 @@ static void __perf_event_disable(struct perf_event *event,
>> }
>>
>> if (event == event->group_leader)
>> - group_sched_out(event, cpuctx, ctx);
>> + group_sched_out(event, cpuctx, ctx, true);
>> else
>> - event_sched_out(event, cpuctx, ctx);
>> + event_sched_out(event, cpuctx, ctx, true);
>>
>> perf_event_set_state(event, PERF_EVENT_STATE_OFF);
>> }
>
> So the above event_sched_out(.remove_dup) is very inconsistent with the
> below ctx_resched(.event_add_dup).
[...]
>> @@ -2810,7 +3069,7 @@ static void __perf_event_enable(struct perf_event *event,
>> if (ctx->task)
>> WARN_ON_ONCE(task_ctx != ctx);
>>
>> - ctx_resched(cpuctx, task_ctx, get_event_type(event));
>> + ctx_resched(cpuctx, task_ctx, get_event_type(event), event);
>> }
>>
>> /*
>
> We basically need:
>
> * perf_event_setup_dup() after add_event_to_ctx(), but before *sched_in()
> - perf_install_in_context()
> - perf_event_enable()
> - inherit_event()
>
> * perf_event_remove_dup() after *sched_out(), but before list_del_event()
> - perf_remove_from_context()
> - perf_event_disable()
>
> AFAICT we can do that without changing *sched_out() and ctx_resched(),
> with probably less lines changed over all.
We currently need these changes to sched_out() and ctx_resched() because we
only do setup_dup() and remove_dup() when the whole ctx is scheduled out.
Maybe this is not really necessary? I am not sure whether the simpler code
would need more reschedules. Let me take a closer look...
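For concreteness, the ordering you describe would be roughly (sketch only;
the dup helper signatures are assumed):

	/* install side: */
	add_event_to_ctx(event, ctx);
	perf_event_setup_dup(event, ctx);	/* before *sched_in() */

	/* removal side: */
	event_sched_out(event, cpuctx, ctx);
	perf_event_remove_dup(event, ctx);	/* before list_del_event() */
	list_del_event(event, ctx);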
>
>> @@ -4051,6 +4310,9 @@ static void __perf_event_read(void *info)
>>
>> static inline u64 perf_event_count(struct perf_event *event)
>> {
>> + if (event->dup_master == event)
>> + return local64_read(&event->master_count) +
>> + atomic64_read(&event->master_child_count);
>
> Wants {}
Will fix.
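i.e. presumably:

static inline u64 perf_event_count(struct perf_event *event)
{
	if (event->dup_master == event) {
		return local64_read(&event->master_count) +
		       atomic64_read(&event->master_child_count);
	}
	return local64_read(&event->count) +
	       atomic64_read(&event->child_count);
}

(The tail is the existing upstream body of the function.)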
Thanks again,
Song