Message-ID: <1354785141.4593.109.camel@marge.simpson.net>
Date: Thu, 06 Dec 2012 10:12:21 +0100
From: Mike Galbraith <bitbucket@...ine.de>
To: Alex Shi <alex.shi@...el.com>
Cc: Alex Shi <lkml.alex@...il.com>, Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>, Paul Turner <pjt@...gle.com>,
	lkml <linux-kernel@...r.kernel.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arjan van de Ven <arjan@...ux.intel.com>, Tejun Heo <tj@...nel.org>
Subject: Re: weakness of runnable load tracking?

On Thu, 2012-12-06 at 16:06 +0800, Alex Shi wrote:
> >>
> >> Hi Paul & Ingo:
> >>
> >> In short, the issue is this: burst forking/waking tasks have had no
> >> time to accumulate a load contribution, so their runnable load is
> >> taken as zero. That makes select_task_rq make a wrong decision about
> >> which group is idlest.
> >
> > As you pointed out above, new tasks can (and imho should) be born with
> > full weight. Tasks _may_ become thin, but they're all born hungry.
>
> Thanks for the comments. I think so. :)
>
> >> There are still 3 kinds of solutions that could help with this issue:
> >>
> >> a, set a nonzero minimum value for long-sleeping tasks. But that
> >> seems unfair to other tasks that just slept a short while.
> >>
> >> b, just use the runnable load contribution in load balancing, while
> >> still using nr_running to judge the idlest group in
> >> select_task_rq_fair. But that may cause a few more migrations in
> >> future load balancing.
> >>
> >> c, consider both runnable load and nr_running in the group: if, in
> >> the search domain, nr_running increases by a certain number, like
> >> double the domain span, within a certain time, we conclude that a
> >> burst of forking/waking has happened, and then just count nr_running
> >> as the idlest-group criterion.
> >>
> >> IMHO, I like the 3rd one a bit more. As for the time window used to
> >> judge whether a burst has happened: since we calculate the runnable
> >> avg at every tick, if nr_running increases beyond sd->span_weight
> >> within 2 ticks, that means a burst is happening. What's your opinion
> >> of this?
> >>
> >> Any comments are appreciated!
> >
> > IMHO, for fork and bursty wake balancing, the only thing meaningful is
> > the here-and-now state of the runqueues tasks are being dumped into.
> >
> > Just because tasks are historically short running, you don't
> > necessarily want to take a gaggle and wedge them into a too-small
> > group just to even out load averages. If there was a hole available
> > that you passed up by using average load, you lose utilization. I can
> > see how this load tracking stuff can average out to a win on a
> > ~heavily loaded box, but for bursty stuff I don't see how it can do
> > anything but harm, so imho the user should choose which is best for
> > his box, instantaneous or history.
>
> Do you mean the system administrator needs to make this choice?

That's my gut feeling, just from pondering potential pitfalls.

> It may be a hard decision. :)

Yup, very hard.

> Any suggestions for a decision basis?

Same as most buttons.. poke it and <cringe> see what happens :)

> > WRT burst detection: any window you define can be longer than the
> > burst.
>
> Maybe we can define 2 wakings on the same cpu in 1 tick as a burst,
> and if the cpu has already taken a waking task, we'd better skip that
> cpu. :) Anyway, the hard point is that we cannot predict the future.

No matter what the metric, you'll be reacting after the fact. Somebody
needs to code up that darn omniscience algorithm.
In a pinch, a simple "undo the past" will suffice :)

	-Mike
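
[For readers following the thread, below is a minimal standalone sketch
of what option (c) above could look like. It is an assumption-laden
illustration, not the kernel's implementation: every name in it
(toy_domain, toy_tick, toy_burst_detected, BURST_WINDOW_TICKS) is
hypothetical, and the real per-entity load tracking lives in
kernel/sched/fair.c with different structures. The sketch only captures
the proposed heuristic: snapshot nr_running every two ticks, flag a
burst when the increase exceeds sd->span_weight, and fall back to raw
nr_running as the idlest-group criterion while the burst lasts.]

/*
 * Toy sketch of option (c) -- an assumed shape for the heuristic,
 * NOT the kernel's actual code.  All names here are hypothetical.
 */
#include <stdbool.h>

struct toy_domain {
	unsigned int span_weight;	/* number of CPUs the domain spans */
	unsigned int nr_running;	/* runnable tasks in the domain now */
	unsigned int nr_running_snap;	/* snapshot from the window start */
	unsigned long snap_tick;	/* tick at which snapshot was taken */
};

#define BURST_WINDOW_TICKS 2UL

/* Refresh the snapshot; call this from the per-tick update path. */
static void toy_tick(struct toy_domain *sd, unsigned long now)
{
	if (now - sd->snap_tick >= BURST_WINDOW_TICKS) {
		sd->nr_running_snap = sd->nr_running;
		sd->snap_tick = now;
	}
}

/*
 * The burst condition from the mail: nr_running grew by more than
 * sd->span_weight within the two-tick window.
 */
static bool toy_burst_detected(const struct toy_domain *sd)
{
	return sd->nr_running > sd->nr_running_snap + sd->span_weight;
}

/*
 * Idlest-group criterion: tracked runnable load normally; raw
 * nr_running during a burst, when freshly forked/woken tasks still
 * carry a near-zero tracked load and would make the group look idle.
 */
static unsigned long toy_group_load(const struct toy_domain *sd,
				    unsigned long runnable_load)
{
	return toy_burst_detected(sd) ? sd->nr_running : runnable_load;
}

[Note that the trade-off Mike raises stays visible in the sketch: the
burst flag only flips after nr_running has already jumped, so any such
metric reacts after the fact.]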