Message-ID: <20130701124407.GB21726@dyad.programming.kicks-ass.net>
Date:	Mon, 1 Jul 2013 14:44:07 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Lei Wen <leiwen@...vell.com>
Cc:	Paul Turner <pjt@...gle.com>, Alex Shi <alex.shi@...el.com>,
	Ingo Molnar <mingo@...e.hu>, mingo@...hat.com,
	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [V2 1/2] sched: add trace events for task and rq usage tracking

On Mon, Jul 01, 2013 at 08:33:21PM +0800, Lei Wen wrote:
> Since we can now track tasks at the entity level, we may want to
> investigate tasks' running status by recording trace info, so that
> we can do some tuning if needed.

Why would I want to merge this?


> +	trace_sched_task_weighted_load(task_of(se), se->avg.load_avg_contrib, se->load.weight);
> +	trace_sched_task_weighted_load(task_of(se), se->avg.load_avg_contrib, se->load.weight);

> +		trace_sched_cfs_rq_runnable_load(cpu_of(rq_of(cfs_rq)),
> +				cfs_rq->runnable_load_avg, cfs_rq->load.weight);

> +		trace_sched_cfs_rq_blocked_load(cpu_of(rq_of(cfs_rq)),
> +				cfs_rq->blocked_load_avg,
> +				cfs_rq->blocked_load_avg + cfs_rq->runnable_load_avg);

> +	trace_sched_cfs_rq_blocked_load(cpu_of(rq_of(cfs_rq)),
> +			cfs_rq->blocked_load_avg,
> +			cfs_rq->blocked_load_avg + cfs_rq->runnable_load_avg);

> +		trace_sched_cfs_rq_blocked_load(cpu_of(rq_of(cfs_rq)),
> +				cfs_rq->blocked_load_avg,
> +				cfs_rq->blocked_load_avg + cfs_rq->runnable_load_avg);

> +	trace_sched_cfs_rq_runnable_load(cpu_of(rq_of(cfs_rq)),
> +			cfs_rq->runnable_load_avg, cfs_rq->load.weight);

> +		trace_sched_cfs_rq_blocked_load(cpu_of(rq_of(cfs_rq)),
> +				cfs_rq->blocked_load_avg,
> +				cfs_rq->blocked_load_avg + cfs_rq->runnable_load_avg);

> +		trace_sched_cfs_rq_blocked_load(cpu_of(rq_of(cfs_rq)),
> +				cfs_rq->blocked_load_avg,
> +				cfs_rq->blocked_load_avg + cfs_rq->runnable_load_avg);

> +		trace_sched_cfs_rq_blocked_load(cpu_of(rq_of(cfs_rq)),
> +				cfs_rq->blocked_load_avg,
> +				cfs_rq->blocked_load_avg + cfs_rq->runnable_load_avg);

You're not nearly lazy enough; you seem to delight in endless repetition :/

How about you first convince me we actually want to merge this? Big hint:
there's a significant lack of tracepoints in the entire balancer.

Secondly, why the hell didn't you do:

  trace_sched_task_weighted_load(se);
  trace_sched_cfs_rq_runnable_load(cfs_rq);
  trace_sched_cfs_rq_blocked_load(cfs_rq);

The tracepoints themselves could very well extract whatever they want from
that; there's no need to spell it all out at every call site.
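
(Purely as a sketch, not part of the patch: the event definition could then
do the extraction itself in TP_fast_assign. The field names and types below
are guessed from the quoted call sites, and container_of() is used directly
because task_of() is local to kernel/sched/fair.c.)

  TRACE_EVENT(sched_task_weighted_load,

  	TP_PROTO(struct sched_entity *se),

  	TP_ARGS(se),

  	TP_STRUCT__entry(
  		__field(pid_t,         pid)
  		__field(unsigned long, load_avg_contrib)
  		__field(unsigned long, weight)
  	),

  	TP_fast_assign(
  		/* same as task_of(se)->pid, resolved here instead of at
  		 * the call site */
  		__entry->pid = container_of(se, struct task_struct, se)->pid;
  		__entry->load_avg_contrib = se->avg.load_avg_contrib;
  		__entry->weight = se->load.weight;
  	),

  	TP_printk("pid=%d load_avg_contrib=%lu weight=%lu",
  		  __entry->pid, __entry->load_avg_contrib, __entry->weight)
  );

The cfs_rq variants would look much the same, taking only the cfs_rq
pointer, and every call site then collapses to the one-argument form above.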
