Open Source and information security mailing list archives
 
Date:   Tue, 8 Oct 2019 14:03:15 +0000
From:   Song Liu <songliubraving@...com>
To:     Peter Zijlstra <peterz@...radead.org>
CC:     open list <linux-kernel@...r.kernel.org>,
        Kernel Team <Kernel-team@...com>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] perf/core: fix corner case in perf_rotate_context()



> On Oct 8, 2019, at 2:35 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> 
> On Wed, Oct 02, 2019 at 11:43:17PM -0700, Song Liu wrote:
>> This is a rare corner case, but it does happen:
>> 
>> In perf_rotate_context(), when the first cpu flexible event fails to
>> schedule, cpu_rotate is 1 while cpu_event is NULL.
> 
> You're failing to explain how we get here in the first place.

On Intel CPUs, it can be triggered by something like:

perf stat -e ref-cycles:D,ref-cycles,cycles -C 5 -I 1000

ref-cycles can only be counted on the fixed counter, so the pinned
ref-cycles:D occupies it and the flexible ref-cycles never schedules.

I will add this to v2.

> 
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 4655adbbae10..50021735f367 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3775,6 +3775,13 @@ static void rotate_ctx(struct perf_event_context *ctx, struct perf_event *event)
>> 	if (ctx->rotate_disable)
>> 		return;
>> 
>> +	/* if no event specified, try to rotate the first event */
>> +	if (!event)
>> +		event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
>> +				      typeof(*event), group_node);
>> +	if (!event)
>> +		return;
>> +
>> 	perf_event_groups_delete(&ctx->flexible_groups, event);
>> 	perf_event_groups_insert(&ctx->flexible_groups, event);
>> }
> 
> I don't think that is a very nice solution; would not something simpler
> like this be sufficient? (although possibly we should rename that
> function now).

Yes, this is much cleaner. Thanks!

Let me submit v2 based on this. 

Thanks,
Song

> 
> And it needs more comments.
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 4655adbbae10..772a9854eb3a 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3782,8 +3797,17 @@ static void rotate_ctx(struct perf_event_context *ctx, struct perf_event *event)
> static inline struct perf_event *
> ctx_first_active(struct perf_event_context *ctx)
> {
> -	return list_first_entry_or_null(&ctx->flexible_active,
> -					struct perf_event, active_list);
> +	struct perf_event *event;
> +
> +	event = list_first_entry_or_null(&ctx->flexible_active,
> +					 struct perf_event, active_list);
> +
> +	if (!event) {
> +		event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
> +				      typeof(*event), group_node);
> +	}
> +
> +	return event;
> }
> 
> static bool perf_rotate_context(struct perf_cpu_context *cpuctx)

