Date:   Wed, 23 Aug 2017 20:23:22 +0300
From:   Alexey Budankov <alexey.budankov@...ux.intel.com>
To:     Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>
Cc:     Andi Kleen <ak@...ux.intel.com>, Kan Liang <kan.liang@...el.com>,
        Dmitri Prokhorov <Dmitry.Prohorov@...el.com>,
        Valery Cherepennikov <valery.cherepennikov@...el.com>,
        Mark Rutland <mark.rutland@....com>,
        Stephane Eranian <eranian@...gle.com>,
        David Carrillo-Cisneros <davidcc@...gle.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v7 1/2] perf/core: use rb trees for pinned/flexible groups

On 23.08.2017 14:17, Alexander Shishkin wrote:
> Alexey Budankov <alexey.budankov@...ux.intel.com> writes:
> 
>> @@ -3091,61 +3231,55 @@ static void cpu_ctx_sched_out(struct perf_cpu_context *cpuctx,
>>  }
>>  
>>  static void
>> -ctx_pinned_sched_in(struct perf_event_context *ctx,
>> -		    struct perf_cpu_context *cpuctx)
>> +ctx_pinned_sched_in(struct perf_event *event,
>> +		   struct perf_cpu_context *cpuctx,
>> +		   struct perf_event_context *ctx)
> 
> If you're doing this, you also need to rename the function, because it
> now schedules in one event and not a context. But it's better to just
> keep it as is.
> 
>>  {
>> -	struct perf_event *event;
>> -
>> -	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {
> 
> Why not put your new iterator here in place of the old one, instead of
> moving things around? Because what follows is hard to read and is also
> completely unnecessary:
> 
>> -		if (event->state <= PERF_EVENT_STATE_OFF)
>> -			continue;
>> -		if (!event_filter_match(event))
>> -			continue;
>> +	if (event->state <= PERF_EVENT_STATE_OFF)
>> +		return;
>> +	if (!event_filter_match(event))
>> +		return;
> 
> like this,
> 
>>  
>> -		/* may need to reset tstamp_enabled */
>> -		if (is_cgroup_event(event))
>> -			perf_cgroup_mark_enabled(event, ctx);
>> +	/* may need to reset tstamp_enabled */
>> +	if (is_cgroup_event(event))
>> +		perf_cgroup_mark_enabled(event, ctx);
> 
> or this
> 
>>  
>> -		if (group_can_go_on(event, cpuctx, 1))
>> -			group_sched_in(event, cpuctx, ctx);
>> +	if (group_can_go_on(event, cpuctx, 1))
>> +		group_sched_in(event, cpuctx, ctx);
> 
> etc, etc.
> 
>> @@ -3156,7 +3290,8 @@ ctx_sched_in(struct perf_event_context *ctx,
>>  	     struct task_struct *task)
>>  {
>>  	int is_active = ctx->is_active;
>> -	u64 now;
> 
> Why?

Shortened the scope/lifetime of this variable: it is now declared and
initialized closer to its point of use.
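
For illustration only, a minimal sketch of the idea (the actual change
is in the hunk below):

	if (is_active & EVENT_TIME) {
		/* 'now' exists only in the block where it is used */
		u64 now = perf_clock();

		ctx->timestamp = now;
		perf_cgroup_set_timestamp(task, ctx);
	}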

> 
>> +	struct perf_event *event;
>> +	struct rb_node *node;
>>  
>>  	lockdep_assert_held(&ctx->lock);
>>  
>> @@ -3175,7 +3310,7 @@ ctx_sched_in(struct perf_event_context *ctx,
>>  
>>  	if (is_active & EVENT_TIME) {
>>  		/* start ctx time */
>> -		now = perf_clock();
>> +		u64 now = perf_clock();
> 
> Why?
> 
>>  		ctx->timestamp = now;
>>  		perf_cgroup_set_timestamp(task, ctx);
>>  	}
>> @@ -3185,11 +3320,19 @@ ctx_sched_in(struct perf_event_context *ctx,
>>  	 * in order to give them the best chance of going on.
>>  	 */
>>  	if (is_active & EVENT_PINNED)
>> -		ctx_pinned_sched_in(ctx, cpuctx);
>> +		perf_event_groups_for_each(event, node,
>> +				&ctx->pinned_groups, group_node,
>> +				group_list, group_entry)
>> +			ctx_pinned_sched_in(event, cpuctx, ctx);
> 
> So this perf_event_groups_for_each() can just move into
> ctx_*_sched_in(), can't it?

Yes. Makes sense. Addressed it in v8. Thanks!
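
For reference, a rough sketch of the v8 shape (assuming the iterator
macro signature from the hunk above; not the literal v8 code):

	static void
	ctx_pinned_sched_in(struct perf_event_context *ctx,
			    struct perf_cpu_context *cpuctx)
	{
		struct perf_event *event;
		struct rb_node *node;

		/* Iterate the pinned groups tree inside the function,
		 * so the caller stays a single
		 * ctx_pinned_sched_in(ctx, cpuctx) call and the function
		 * keeps scheduling in a whole context, as suggested. */
		perf_event_groups_for_each(event, node, &ctx->pinned_groups,
				group_node, group_list, group_entry) {
			if (event->state <= PERF_EVENT_STATE_OFF)
				continue;
			if (!event_filter_match(event))
				continue;

			/* may need to reset tstamp_enabled */
			if (is_cgroup_event(event))
				perf_cgroup_mark_enabled(event, ctx);

			if (group_can_go_on(event, cpuctx, 1))
				group_sched_in(event, cpuctx, ctx);
		}
	}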

> 
> Regards,
> --
> Alex
> 
