Date:   Thu, 12 Dec 2019 18:01:41 +0000
From:   Song Liu <songliubraving@...com>
To:     Peter Zijlstra <peterz@...radead.org>
CC:     open list <linux-kernel@...r.kernel.org>,
        Kernel Team <Kernel-team@...com>,
        Arnaldo Carvalho de Melo <acme@...hat.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Alexey Budankov <alexey.budankov@...ux.intel.com>,
        Namhyung Kim <namhyung@...nel.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v8] perf: Sharing PMU counters across compatible events

Hi Peter,

> On Dec 12, 2019, at 8:00 AM, Song Liu <songliubraving@...com> wrote:
> 
> 
> 
>> On Dec 12, 2019, at 7:45 AM, Song Liu <songliubraving@...com> wrote:
>> 
>> 
>> 
>>> On Dec 12, 2019, at 7:39 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>>> 
>>> On Fri, Dec 06, 2019 at 04:24:47PM -0800, Song Liu wrote:
>>> 
>>>> @@ -2174,6 +2410,14 @@ __perf_remove_from_context(struct perf_event *event,
>>>> 		update_cgrp_time_from_cpuctx(cpuctx);
>>>> 	}
>>>> 
>>>> +	if (event->dup_master == event) {
>>>> +		if (ctx->is_active)
>>>> +			ctx_resched(cpuctx, cpuctx->task_ctx,
>>>> +				    get_event_type(event), NULL, event);
>>>> +		else
>>>> +			perf_event_remove_dup(event, ctx);
>>>> +	}
>>>> +
>>>> 	event_sched_out(event, cpuctx, ctx);
>>>> 	if (flags & DETACH_GROUP)
>>>> 		perf_group_detach(event);
>>>> @@ -2241,6 +2485,14 @@ static void __perf_event_disable(struct perf_event *event,
>>>> 		update_cgrp_time_from_event(event);
>>>> 	}
>>>> 
>>>> +	if (event->dup_master == event) {
>>>> +		if (ctx->is_active)
>>>> +			ctx_resched(cpuctx, cpuctx->task_ctx,
>>>> +				    get_event_type(event), NULL, event);
>>>> +		else
>>>> +			perf_event_remove_dup(event, ctx);
>>>> +	}
>>>> +
>>>> 	if (event == event->group_leader)
>>>> 		group_sched_out(event, cpuctx, ctx);
>>>> 	else
>>> 
>>>> @@ -2544,7 +2793,9 @@ static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
>>>> */
>>>> static void ctx_resched(struct perf_cpu_context *cpuctx,
>>>> 			struct perf_event_context *task_ctx,
>>>> -			enum event_type_t event_type)
>>>> +			enum event_type_t event_type,
>>>> +			struct perf_event *event_add_dup,
>>>> +			struct perf_event *event_del_dup)
>>>> {
>>>> 	enum event_type_t ctx_event_type;
>>>> 	bool cpu_event = !!(event_type & EVENT_CPU);
>>>> @@ -2574,6 +2825,18 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
>>>> 	else if (ctx_event_type & EVENT_PINNED)
>>>> 		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
>>>> 
>>>> +	if (event_add_dup) {
>>>> +		if (event_add_dup->ctx->is_active)
>>>> +			ctx_sched_out(event_add_dup->ctx, cpuctx, EVENT_ALL);
>>>> +		perf_event_setup_dup(event_add_dup, event_add_dup->ctx);
>>>> +	}
>>>> +
>>>> +	if (event_del_dup) {
>>>> +		if (event_del_dup->ctx->is_active)
>>>> +			ctx_sched_out(event_del_dup->ctx, cpuctx, EVENT_ALL);
>>>> +		perf_event_remove_dup(event_del_dup, event_del_dup->ctx);
>>>> +	}
>>>> +
>>>> 	perf_event_sched_in(cpuctx, task_ctx, current);
>>>> 	perf_pmu_enable(cpuctx->ctx.pmu);
>>>> }
>>> 
>>> Yuck!
>>> 
>>> Why do you do a full reschedule when you take out a master?
>> 
>> If there are active slaves using this master, we need to schedule them
>> out before removing the master.
>> 
>> We can improve the check, though: we only need to do it if the master
>> is in state PERF_EVENT_STATE_ENABLED.
>> 
>> Or we can add a different function to schedule out only the slaves.
> 
> It is tricky to schedule out only the slaves, because a slave could be in
> a group. If we don't reschedule all events, we need to make sure that
> "swapping master" always succeeds.

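To make that concrete: a slave-only schedule-out would have to pull each
active slave's whole group out, roughly like the hypothetical helper below
(the helper name is made up and this is untested; it only illustrates the
group problem):

/*
 * Hypothetical sketch, not part of the patch: schedule out only the
 * slaves of @master.  A slave may sit inside a group, so its whole
 * group has to come out, and "swapping master" must then not fail.
 */
static void dup_sched_out_slaves(struct perf_event *master,
				 struct perf_cpu_context *cpuctx,
				 struct perf_event_context *ctx)
{
	struct perf_event *event;

	list_for_each_entry(event, &ctx->event_list, event_entry) {
		if (event == master || event->dup_master != master)
			continue;
		if (event->state == PERF_EVENT_STATE_ACTIVE)
			group_sched_out(event->group_leader, cpuctx, ctx);
	}
}
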
What would you suggest for this one? Maybe we can keep this as-is and 
optimize later? 
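
For reference, the PERF_EVENT_STATE_ENABLED check would look roughly like
this in __perf_remove_from_context() (untested sketch; whether it is safe
to skip the reschedule while the ctx is active is exactly the open
question):

	/*
	 * Untested sketch: only pay for a full reschedule when the master
	 * may still be feeding active slaves (PERF_EVENT_STATE_ENABLED in
	 * this patch); otherwise drop the dup setup directly.
	 */
	if (event->dup_master == event) {
		if (ctx->is_active &&
		    event->state == PERF_EVENT_STATE_ENABLED)
			ctx_resched(cpuctx, cpuctx->task_ctx,
				    get_event_type(event), NULL, event);
		else
			perf_event_remove_dup(event, ctx);
	}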

Thanks,
Song
