Message-ID: <951F0EE1-5DCC-46CA-8891-39A891512CEE@fb.com>
Date: Fri, 22 Nov 2019 19:50:06 +0000
From: Song Liu <songliubraving@...com>
To: Jiri Olsa <jolsa@...hat.com>
CC: open list <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
David Carrillo Cisneros <davidca@...com>,
"Peter Zijlstra" <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Alexey Budankov <alexey.budankov@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>, "Tejun Heo" <tj@...nel.org>
Subject: Re: [PATCH v7] perf: Sharing PMU counters across compatible events
> On Nov 22, 2019, at 11:33 AM, Jiri Olsa <jolsa@...hat.com> wrote:
>
> On Fri, Nov 15, 2019 at 03:55:04PM -0800, Song Liu wrote:
>> This patch tries to enable PMU sharing. When multiple perf_events are
>> counting the same metric, they can share the hardware PMU counter. We
>> call these events "compatible events".
>>
>> PMU sharing is limited to events within the same perf_event_context
>> (ctx). When an event is installed or enabled, the ctx is searched for
>> compatible events; this is implemented in perf_event_setup_dup(). One of
>> these compatible events is picked as the master (stored in
>> event->dup_master).
>> Similarly, when the event is removed or disabled, perf_event_remove_dup()
>> is used to clean up sharing.
>>
>> A new state PERF_EVENT_STATE_ENABLED is introduced for the master event.
>> This state is used when the slave event is ACTIVE, but the master event
>> is not.
>>
>> On the critical paths (add, del, read), sharing PMU counters doesn't
>> increase the complexity. Helper functions event_pmu_[add|del|read]() are
>> introduced to cover these cases. All these functions have O(1) time
>> complexity.
>>
>> Cc: Peter Zijlstra <peterz@...radead.org>
>> Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
>> Cc: Jiri Olsa <jolsa@...nel.org>
>> Cc: Alexey Budankov <alexey.budankov@...ux.intel.com>
>> Cc: Namhyung Kim <namhyung@...nel.org>
>> Cc: Tejun Heo <tj@...nel.org>
>> Signed-off-by: Song Liu <songliubraving@...com>
>>
>> ---
>> Changes in v7:
>> Major rewrite to avoid allocating extra master event.
>
> hi,
> what is this based on? I can't apply it on tip/master:
>
> Applying: perf: Sharing PMU counters across compatible events
> error: patch failed: include/linux/perf_event.h:722
> error: include/linux/perf_event.h: patch does not apply
> Patch failed at 0001 perf: Sharing PMU counters across compatible events
> hint: Use 'git am --show-current-patch' to see the failed patch
> When you have resolved this problem, run "git am --continue".
> If you prefer to skip this patch, run "git am --skip" instead.
> To restore the original branch and stop patching, run "git am --abort".
I was using Linus's master branch. This one is specifically based on

commit 96b95eff4a591dbac582c2590d067e356a18aacb
Merge: 4e84608c7836 80591e61a0f7
Author: Linus Torvalds <torvalds@...ux-foundation.org>
Date:   8 days ago

    Merge tag 'kbuild-fixes-v5.4-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

    Pull Kbuild fixes from Masahiro Yamada:

     - fix build error when compiling SPARC VDSO with CONFIG_COMPAT=y

     - pass correct --arch option to Sparse

    * tag 'kbuild-fixes-v5.4-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
      kbuild: tell sparse about the $ARCH
      sparc: vdso: fix build error of vdso32
>
> also I'm getting this when trying to see/apply plain text patch:
>
> [jolsa@...l-r440-01 linux-perf]$ git am --show-current-patch | tail
> =09=09for_each_sibling_event(sibling, group_leader) {
> =09=09=09perf_remove_from_context(sibling, 0);
> =09=09=09put_ctx(gctx);
> +=09=09=09WARN_ON_ONCE(sibling->dup_master);
> =09=09}
> =20
> =09=09/*
> --=20
I also get these =09 / =20 issues, and I am not sure how to fix them.
Attaching the patch here to see whether that helps.
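For reference, here is a minimal user-space sketch (illustrative only, not
part of the patch) of the case the patch optimizes: one task opening two
counting events with identical type/config. With the patch below applied,
the two events are "compatible" and can share a single hardware counter
instead of occupying two PMCs. Error handling is kept to a minimum.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
			       int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count[2];
	volatile long j;
	int fd[2], i;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;

	/* two compatible (same type/config) counting events on this task */
	for (i = 0; i < 2; i++) {
		fd[i] = sys_perf_event_open(&attr, 0, -1, -1, 0);
		if (fd[i] < 0) {
			perror("perf_event_open");
			return 1;
		}
	}

	/* some work to count */
	for (j = 0; j < 10000000; j++)
		;

	for (i = 0; i < 2; i++) {
		if (read(fd[i], &count[i], sizeof(count[i])) != sizeof(count[i]))
			count[i] = 0;
		printf("event %d: %llu cycles\n", i,
		       (unsigned long long)count[i]);
		close(fd[i]);
	}
	return 0;
}

Both events are pure counting events (no sampling), so with the patch both
reads are served from the shared counter via the dup_master accounting.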
Thanks!
Song
From bb83b28deca52a2376d8b9d0b1d54e7fec797aa9 Mon Sep 17 00:00:00 2001
From: Song Liu <songliubraving@...com>
Date: Wed, 6 Jun 2018 23:24:10 -0700
Subject: [PATCH v7] perf: Sharing PMU counters across compatible events
This patch tries to enable PMU sharing. When multiple perf_events are
counting the same metric, they can share the hardware PMU counter. We
call these events "compatible events".
PMU sharing is limited to events within the same perf_event_context
(ctx). When an event is installed or enabled, the ctx is searched for
compatible events; this is implemented in perf_event_setup_dup(). One of
these compatible events is picked as the master (stored in
event->dup_master).
Similarly, when the event is removed or disabled, perf_event_remove_dup()
is used to clean up sharing.
A new state PERF_EVENT_STATE_ENABLED is introduced for the master event.
This state is used when the slave event is ACTIVE, but the master event
is not.
On the critical paths (add, del, read), sharing PMU counters doesn't
increase the complexity. Helper functions event_pmu_[add|del|read]() are
introduced to cover these cases. All these functions have O(1) time
complexity.
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: Jiri Olsa <jolsa@...nel.org>
Cc: Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc: Namhyung Kim <namhyung@...nel.org>
Cc: Tejun Heo <tj@...nel.org>
Signed-off-by: Song Liu <songliubraving@...com>
---
Changes in v7:
Major rewrite to avoid allocating extra master event.
---
include/linux/perf_event.h | 14 +-
kernel/events/core.c | 319 ++++++++++++++++++++++++++++++++++---
2 files changed, 309 insertions(+), 24 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 68ccc5b1913b..bb05b178841d 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -522,7 +522,9 @@ enum perf_event_state {
PERF_EVENT_STATE_ERROR = -2,
PERF_EVENT_STATE_OFF = -1,
PERF_EVENT_STATE_INACTIVE = 0,
- PERF_EVENT_STATE_ACTIVE = 1,
+ /* the hw PMC is enabled, but this event is not counting */
+ PERF_EVENT_STATE_ENABLED = 1,
+ PERF_EVENT_STATE_ACTIVE = 2,
};
struct file;
@@ -722,6 +724,16 @@ struct perf_event {
#endif
struct list_head sb_list;
+
+ /* for PMU sharing */
+ struct perf_event *dup_master;
+ /* check event_sync_dup_count() for the use of dup_base_* */
+ u64 dup_base_count;
+ u64 dup_base_child_count;
+ /* when this event is master, read from master*count */
+ local64_t master_count;
+ atomic64_t master_child_count;
+ int dup_active_count;
#endif /* CONFIG_PERF_EVENTS */
};
diff --git a/kernel/events/core.c b/kernel/events/core.c
index aec8dba2bea4..00b1e19e70fd 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1657,6 +1657,139 @@ perf_event_groups_next(struct perf_event *event)
event = rb_entry_safe(rb_next(&event->group_node), \
typeof(*event), group_node))
+static inline bool perf_event_can_share(struct perf_event *event)
+{
+ /* only share hardware counting events */
+ return !is_software_event(event) && !is_sampling_event(event);
+}
+
+/*
+ * Returns whether the two events can share a PMU counter.
+ *
+ * Note: This function does NOT check perf_event_can_share() for
+ * the two events, they should be checked before this function
+ */
+static inline bool perf_event_compatible(struct perf_event *event_a,
+ struct perf_event *event_b)
+{
+ return event_a->attr.type == event_b->attr.type &&
+ event_a->attr.config == event_b->attr.config &&
+ event_a->attr.config1 == event_b->attr.config1 &&
+ event_a->attr.config2 == event_b->attr.config2;
+}
+
+/* prepare the dup_master, this event is its own dup_master */
+static void perf_event_init_dup_master(struct perf_event *event)
+{
+ event->dup_master = event;
+ /*
+ * dup_master->count is used by the hw PMC, and shared with other
+ * events, so we have to read from dup_master->master_count. Copy
+ * event->count to event->master_count.
+ *
+ * Same logic for child_count and master_child_count.
+ */
+ local64_set(&event->master_count, local64_read(&event->count));
+ atomic64_set(&event->master_child_count,
+ atomic64_read(&event->child_count));
+
+ event->dup_active_count = 0;
+}
+
+/* tear down dup_master, no more sharing for this event */
+static void perf_event_exit_dup_master(struct perf_event *event)
+{
+ WARN_ON_ONCE(event->dup_active_count);
+
+ event->dup_master = NULL;
+ /* restore event->count and event->child_count */
+ local64_set(&event->count, local64_read(&event->master_count));
+ atomic64_set(&event->child_count,
+ atomic64_read(&event->master_child_count));
+}
+
+/* After adding an event to the ctx, try to find compatible event(s). */
+static void perf_event_setup_dup(struct perf_event *event,
+ struct perf_event_context *ctx)
+
+{
+ struct perf_event *tmp;
+
+ if (event->dup_master ||
+ event->state != PERF_EVENT_STATE_INACTIVE ||
+ !perf_event_can_share(event))
+ return;
+
+ /* look for dup with other events */
+ list_for_each_entry(tmp, &ctx->event_list, event_entry) {
+ WARN_ON_ONCE(tmp->state > PERF_EVENT_STATE_INACTIVE);
+
+ if (tmp == event ||
+ tmp->state != PERF_EVENT_STATE_INACTIVE ||
+ !perf_event_can_share(tmp) ||
+ !perf_event_compatible(event, tmp))
+ continue;
+
+ /* first dup, pick tmp as the master */
+ if (!tmp->dup_master)
+ perf_event_init_dup_master(tmp);
+
+ event->dup_master = tmp->dup_master;
+ break;
+ }
+}
+
+/* Remove dup_master for the event */
+static void perf_event_remove_dup(struct perf_event *event,
+ struct perf_event_context *ctx)
+
+{
+ struct perf_event *tmp, *new_master;
+ int count;
+
+ /* no sharing */
+ if (!event->dup_master)
+ return;
+
+ WARN_ON_ONCE(event->state != PERF_EVENT_STATE_INACTIVE &&
+ event->state != PERF_EVENT_STATE_OFF);
+
+ /* this event is not the master */
+ if (event->dup_master != event) {
+ event->dup_master = NULL;
+ return;
+ }
+
+ /* this event is the master */
+ perf_event_exit_dup_master(event);
+ count = 0;
+ new_master = NULL;
+ list_for_each_entry(tmp, &ctx->event_list, event_entry) {
+ WARN_ON_ONCE(tmp->state > PERF_EVENT_STATE_INACTIVE);
+ if (tmp->dup_master == event) {
+ count++;
+ if (!new_master)
+ new_master = tmp;
+ }
+ }
+
+ if (!count)
+ return;
+
+ if (count == 1) {
+ /* no more sharing */
+ new_master->dup_master = NULL;
+ return;
+ }
+
+ perf_event_init_dup_master(new_master);
+
+ /* switch to new_master */
+ list_for_each_entry(tmp, &ctx->event_list, event_entry)
+ if (tmp->dup_master == event)
+ tmp->dup_master = new_master;
+}
+
/*
* Add an event from the lists for its context.
* Must be called with ctx->mutex and ctx->lock held.
@@ -1689,6 +1822,7 @@ list_add_event(struct perf_event *event, struct perf_event_context *ctx)
ctx->nr_stat++;
ctx->generation++;
+ perf_event_setup_dup(event, ctx);
}
/*
@@ -1861,6 +1995,7 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx)
if (!(event->attach_state & PERF_ATTACH_CONTEXT))
return;
+ perf_event_remove_dup(event, ctx);
event->attach_state &= ~PERF_ATTACH_CONTEXT;
list_update_cgroup_event(event, ctx, false);
@@ -2069,6 +2204,98 @@ event_filter_match(struct perf_event *event)
perf_cgroup_match(event) && pmu_filter_match(event);
}
+/* PMU sharing aware version of event->pmu->add() */
+static int event_pmu_add(struct perf_event *event,
+ struct perf_event_context *ctx)
+{
+ struct perf_event *master;
+ int ret;
+
+ /* no sharing, just do event->pmu->add() */
+ if (!event->dup_master)
+ return event->pmu->add(event, PERF_EF_START);
+
+ master = event->dup_master;
+
+ if (!master->dup_active_count) {
+ ret = event->pmu->add(master, PERF_EF_START);
+ if (ret)
+ return ret;
+
+ if (master != event)
+ perf_event_set_state(master, PERF_EVENT_STATE_ENABLED);
+ }
+
+ master->dup_active_count++;
+ master->pmu->read(master);
+ event->dup_base_count = local64_read(&master->count);
+ event->dup_base_child_count = atomic64_read(&master->child_count);
+ return 0;
+}
+
+/*
+ * sync counts from the dup_master to the event, called on event_pmu_read()
+ * and event_pmu_del()
+ */
+static void event_sync_dup_count(struct perf_event *event,
+ struct perf_event *master)
+{
+ u64 new_count;
+ u64 new_child_count;
+
+ WARN_ON_ONCE(event->state != PERF_EVENT_STATE_ACTIVE);
+
+ event->pmu->read(master);
+ new_count = local64_read(&master->count);
+ new_child_count = atomic64_read(&master->child_count);
+
+ if (event == master) {
+ local64_add(new_count - event->dup_base_count,
+ &event->master_count);
+ atomic64_add(new_child_count - event->dup_base_child_count,
+ &event->master_child_count);
+ } else {
+ local64_add(new_count - event->dup_base_count, &event->count);
+ atomic64_add(new_child_count - event->dup_base_child_count,
+ &event->child_count);
+ }
+
+ /* save dup_base_* for next sync */
+ event->dup_base_count = new_count;
+ event->dup_base_child_count = new_child_count;
+}
+
+/* PMU sharing aware version of event->pmu->del() */
+static void event_pmu_del(struct perf_event *event,
+ struct perf_event_context *ctx)
+{
+ struct perf_event *master;
+
+ if (event->dup_master == NULL) {
+ event->pmu->del(event, 0);
+ return;
+ }
+
+ master = event->dup_master;
+ event_sync_dup_count(event, master);
+ if (--master->dup_active_count == 0) {
+ event->pmu->del(master, 0);
+ perf_event_set_state(master, PERF_EVENT_STATE_INACTIVE);
+ } else if (master == event) {
+ perf_event_set_state(master, PERF_EVENT_STATE_ENABLED);
+ }
+}
+
+/* PMU sharing aware version of event->pmu->read() */
+static void event_pmu_read(struct perf_event *event)
+{
+ if (event->dup_master == NULL) {
+ event->pmu->read(event);
+ return;
+ }
+ event_sync_dup_count(event, event->dup_master);
+}
+
static void
event_sched_out(struct perf_event *event,
struct perf_cpu_context *cpuctx,
@@ -2091,7 +2318,7 @@ event_sched_out(struct perf_event *event,
perf_pmu_disable(event->pmu);
- event->pmu->del(event, 0);
+ event_pmu_del(event, ctx);
event->oncpu = -1;
if (READ_ONCE(event->pending_disable) >= 0) {
@@ -2140,6 +2367,14 @@ group_sched_out(struct perf_event *group_event,
#define DETACH_GROUP 0x01UL
+static void ctx_sched_out(struct perf_event_context *ctx,
+ struct perf_cpu_context *cpuctx,
+ enum event_type_t event_type);
+
+static void ctx_resched(struct perf_cpu_context *cpuctx,
+ struct perf_event_context *task_ctx,
+ enum event_type_t event_type);
+
/*
* Cross CPU call to remove a performance event
*
@@ -2153,13 +2388,17 @@ __perf_remove_from_context(struct perf_event *event,
void *info)
{
unsigned long flags = (unsigned long)info;
+ bool resched = (event->dup_master == event);
if (ctx->is_active & EVENT_TIME) {
update_context_time(ctx);
update_cgrp_time_from_cpuctx(cpuctx);
}
- event_sched_out(event, cpuctx, ctx);
+ if (resched)
+ ctx_sched_out(ctx, cpuctx, EVENT_ALL);
+ else
+ event_sched_out(event, cpuctx, ctx);
if (flags & DETACH_GROUP)
perf_group_detach(event);
list_del_event(event, ctx);
@@ -2171,6 +2410,9 @@ __perf_remove_from_context(struct perf_event *event,
cpuctx->task_ctx = NULL;
}
}
+ if (resched)
+ ctx_resched(cpuctx, cpuctx->task_ctx,
+ EVENT_ALL | (ctx->task ? 0 : EVENT_CPU));
}
/*
@@ -2226,6 +2468,16 @@ static void __perf_event_disable(struct perf_event *event,
update_cgrp_time_from_event(event);
}
+ if (event->dup_master == event) {
+ /* disabling master, resched all */
+ ctx_sched_out(ctx, cpuctx, EVENT_ALL);
+ perf_event_remove_dup(event, ctx);
+ perf_event_set_state(event, PERF_EVENT_STATE_OFF);
+ ctx_resched(cpuctx, cpuctx->task_ctx,
+ EVENT_ALL | (ctx->task ? 0 : EVENT_CPU));
+ return;
+ }
+
if (event == event->group_leader)
group_sched_out(event, cpuctx, ctx);
else
@@ -2364,7 +2616,7 @@ event_sched_in(struct perf_event *event,
perf_log_itrace_start(event);
- if (event->pmu->add(event, PERF_EF_START)) {
+ if (event_pmu_add(event, ctx)) {
perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE);
event->oncpu = -1;
ret = -EAGAIN;
@@ -2478,9 +2730,6 @@ static void add_event_to_ctx(struct perf_event *event,
perf_group_attach(event);
}
-static void ctx_sched_out(struct perf_event_context *ctx,
- struct perf_cpu_context *cpuctx,
- enum event_type_t event_type);
static void
ctx_sched_in(struct perf_event_context *ctx,
struct perf_cpu_context *cpuctx,
@@ -2625,9 +2874,13 @@ static int __perf_install_in_context(void *info)
#endif
if (reprogram) {
- ctx_sched_out(ctx, cpuctx, EVENT_TIME);
+ int event_type = perf_event_can_share(event) ? EVENT_ALL : 0;
+
+ /* if perf_event_can_share() resched EVENT_ALL */
+ ctx_sched_out(ctx, cpuctx, event_type);
add_event_to_ctx(event, ctx);
- ctx_resched(cpuctx, task_ctx, get_event_type(event));
+ ctx_resched(cpuctx, task_ctx,
+ event_type | (ctx->task ? 0 : EVENT_CPU));
} else {
add_event_to_ctx(event, ctx);
}
@@ -2745,21 +2998,26 @@ static void __perf_event_enable(struct perf_event *event,
{
struct perf_event *leader = event->group_leader;
struct perf_event_context *task_ctx;
+ int was_active;
+ int event_type;
if (event->state >= PERF_EVENT_STATE_INACTIVE ||
event->state <= PERF_EVENT_STATE_ERROR)
return;
+ event_type = perf_event_can_share(event) ? EVENT_ALL : EVENT_TIME;
+ was_active = ctx->is_active;
if (ctx->is_active)
- ctx_sched_out(ctx, cpuctx, EVENT_TIME);
+ ctx_sched_out(ctx, cpuctx, event_type);
perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE);
+ perf_event_setup_dup(event, ctx);
- if (!ctx->is_active)
+ if (!was_active)
return;
if (!event_filter_match(event)) {
- ctx_sched_in(ctx, cpuctx, EVENT_TIME, current);
+ ctx_sched_in(ctx, cpuctx, event_type, current);
return;
}
@@ -2767,8 +3025,8 @@ static void __perf_event_enable(struct perf_event *event,
* If the event is in a group and isn't the group leader,
* then don't put it on unless the group is on.
*/
- if (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE) {
- ctx_sched_in(ctx, cpuctx, EVENT_TIME, current);
+ if (leader != event && leader->state <= PERF_EVENT_STATE_INACTIVE) {
+ ctx_sched_in(ctx, cpuctx, event_type, current);
return;
}
@@ -2776,7 +3034,8 @@ static void __perf_event_enable(struct perf_event *event,
if (ctx->task)
WARN_ON_ONCE(task_ctx != ctx);
- ctx_resched(cpuctx, task_ctx, get_event_type(event));
+ /* if perf_event_can_share() resched EVENT_ALL */
+ ctx_resched(cpuctx, task_ctx, get_event_type(event) | event_type);
}
/*
@@ -3115,7 +3374,7 @@ static void __perf_event_sync_stat(struct perf_event *event,
* don't need to use it.
*/
if (event->state == PERF_EVENT_STATE_ACTIVE)
- event->pmu->read(event);
+ event_pmu_read(event);
perf_event_update_time(event);
@@ -3979,14 +4238,14 @@ static void __perf_event_read(void *info)
goto unlock;
if (!data->group) {
- pmu->read(event);
+ event_pmu_read(event);
data->ret = 0;
goto unlock;
}
pmu->start_txn(pmu, PERF_PMU_TXN_READ);
- pmu->read(event);
+ event_pmu_read(event);
for_each_sibling_event(sub, event) {
if (sub->state == PERF_EVENT_STATE_ACTIVE) {
@@ -3994,7 +4253,7 @@ static void __perf_event_read(void *info)
* Use sibling's PMU rather than @event's since
* sibling could be on different (eg: software) PMU.
*/
- sub->pmu->read(sub);
+ event_pmu_read(sub);
}
}
@@ -4006,6 +4265,9 @@ static void __perf_event_read(void *info)
static inline u64 perf_event_count(struct perf_event *event)
{
+ if (event->dup_master == event)
+ return local64_read(&event->master_count) +
+ atomic64_read(&event->master_child_count);
return local64_read(&event->count) + atomic64_read(&event->child_count);
}
@@ -4064,9 +4326,12 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
* oncpu == -1).
*/
if (event->oncpu == smp_processor_id())
- event->pmu->read(event);
+ event_pmu_read(event);
- *value = local64_read(&event->count);
+ if (event->dup_master == event)
+ *value = local64_read(&event->master_count);
+ else
+ *value = local64_read(&event->count);
if (enabled || running) {
u64 now = event->shadow_ctx_time + perf_clock();
u64 __enabled, __running;
@@ -6288,7 +6553,7 @@ static void perf_output_read_group(struct perf_output_handle *handle,
if ((leader != event) &&
(leader->state == PERF_EVENT_STATE_ACTIVE))
- leader->pmu->read(leader);
+ event_pmu_read(leader);
values[n++] = perf_event_count(leader);
if (read_format & PERF_FORMAT_ID)
@@ -6301,7 +6566,7 @@ static void perf_output_read_group(struct perf_output_handle *handle,
if ((sub != event) &&
(sub->state == PERF_EVENT_STATE_ACTIVE))
- sub->pmu->read(sub);
+ event_pmu_read(sub);
values[n++] = perf_event_count(sub);
if (read_format & PERF_FORMAT_ID)
@@ -9566,7 +9831,7 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)
if (event->state != PERF_EVENT_STATE_ACTIVE)
return HRTIMER_NORESTART;
- event->pmu->read(event);
+ event_pmu_read(event);
perf_sample_data_init(&data, 0, event->hw.last_period);
regs = get_irq_regs();
@@ -11202,9 +11467,17 @@ SYSCALL_DEFINE5(perf_event_open,
perf_remove_from_context(group_leader, 0);
put_ctx(gctx);
+ /*
+ * move_group only happens for sw events, moving from a sw ctx to a hw
+ * ctx. Such sw events should not have a valid dup_master, so it is
+ * not necessary to handle dup events here.
+ */
+ WARN_ON_ONCE(group_leader->dup_master);
+
for_each_sibling_event(sibling, group_leader) {
perf_remove_from_context(sibling, 0);
put_ctx(gctx);
+ WARN_ON_ONCE(sibling->dup_master);
}
/*
--
2.17.1
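(Illustration only, not part of the patch: suppose two compatible events
share one PMC through a common dup_master, and the shared counter read via
the master moves from 1000 to 1800 between two syncs of a given event.
event_sync_dup_count() then adds the delta of 800 to that event's own sum,
master_count for the master itself or count for a slave, and stores 1800 in
the event's dup_base_count, so each event only accumulates the deltas
observed while it was scheduled in.)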