Message-ID: <1465922407.3626.21.camel@gmail.com>
Date:	Tue, 14 Jun 2016 18:40:07 +0200
From:	Mike Galbraith <umgwanakikbuti@...il.com>
To:	Dietmar Eggemann <dietmar.eggemann@....com>,
	Peter Zijlstra <peterz@...radead.org>
Cc:	Yuyang Du <yuyang.du@...el.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [rfc patch] sched/fair: Use instantaneous load for fork/exec
 balancing

On Tue, 2016-06-14 at 15:14 +0100, Dietmar Eggemann wrote:

> IMHO, the hackbench performance "boost" w/o 0905f04eb21f is due to the
> fact that a new task gets all its load decayed (making it a small task)
> in the __update_load_avg() call in remove_entity_load_avg(), because its
> se->avg.last_update_time value is 0, which creates a huge time difference
> compared to cfs_rq->avg.last_update_time.  The patch 0905f04eb21f
> avoids this, and thus the task stays big (se->avg.load_avg = 1024).
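
For anyone following along, here's a minimal sketch of the effect
Dietmar describes.  This is simplified arithmetic with made-up
constants, not the real __update_load_avg()/remove_entity_load_avg()
code: the point is just that a last_update_time of 0 makes the delta
enormous, so the new task decays to ~nothing.

/*
 * Illustrative sketch only, not kernel code.  PELT roughly halves
 * load_avg every 32ms of elapsed time; names and values here are
 * simplified assumptions.
 */
#include <stdio.h>
#include <stdint.h>

#define LOAD_AVG_MAX		1024ULL	/* a fully loaded task */
#define PELT_HALFLIFE_MS	32	/* load halves every ~32ms */

static uint64_t decay_load(uint64_t load, uint64_t delta_ms)
{
	/* crude approximation: one halving per 32ms of "missing" time */
	for (uint64_t periods = delta_ms / PELT_HALFLIFE_MS;
	     periods && load; periods--)
		load >>= 1;
	return load;
}

int main(void)
{
	uint64_t cfs_rq_now = 5000;	/* cfs_rq->avg.last_update_time (ms) */
	uint64_t se_last    = 0;	/* new task: never updated */

	/* the delta is huge because last_update_time is 0 ... */
	printf("decayed load_avg: %llu\n",
	       (unsigned long long)decay_load(LOAD_AVG_MAX,
					      cfs_rq_now - se_last));

	/* ... whereas with 0905f04eb21f the bogus decay is skipped and
	 * the task keeps se->avg.load_avg = 1024. */
	return 0;
}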

I don't care much about the hackbench "regression" in its own right,
or about what causes it.  For me, the bottom line is that there are
cases we need to be able to resolve, and can't, simply because we're
looking at a fuzzy (rippling) reflection of the load.

In general, the fuzz helps us to not be so spastic.  I'm not sure we
really need to care all that much, because I strongly suspect it's
only gonna make a difference in corner cases, but some of those
corners are real world cases that matter.  I know for a fact that
schbench (facebook's benchmark, which is at least based on a real
world load) fails early because we stack tasks thanks to that fuzzy
view of reality.  In that case, it's because the fuzz consists of a
high amplitude aging sawtooth: find_idlest*() sees what is effectively
a collection of pseudo-random numbers, the fates pick "idlest" via
lottery, and they get it wrong often enough that a big box _never_
reaches full utilization before we stack tasks, putting an end to the
latency game.  For generic loads the smoothing works, but in some
corners it blows chunks.  Fork/exec seemed like a spot where you
really can't go wrong by looking at clear, unadulterated reality.
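
FWIW, the gist of the RFC, as a rough sketch rather than the actual
patch (the helper name and flag values below are made up for
illustration; the real decision sits in the find_idlest_group() path):
for fork/exec balancing, consult the instantaneous runqueue weight
instead of the PELT-smoothed average.

/*
 * Sketch of the idea only, not the actual patch.  Which load signal
 * gets consulted for which balance type.
 */
#include <stdio.h>

#define SD_BALANCE_EXEC	0x1	/* placeholder values for this sketch */
#define SD_BALANCE_FORK	0x2

struct cfs_rq_view {
	unsigned long load_avg;	/* PELT-smoothed, "rippling" view */
	unsigned long weight;	/* instantaneous sum of runnable weights */
};

/* Hypothetical helper: pick the signal a given balance type looks at. */
static unsigned long balance_load(const struct cfs_rq_view *cfs, int sd_flag)
{
	/*
	 * A brand new task has no history worth smoothing over, so for
	 * fork/exec look at the instantaneous weight; for everything
	 * else keep the smoothed view so we don't get twitchy.
	 */
	if (sd_flag & (SD_BALANCE_FORK | SD_BALANCE_EXEC))
		return cfs->weight;
	return cfs->load_avg;
}

int main(void)
{
	struct cfs_rq_view rq = { .load_avg = 37, .weight = 2048 };

	printf("fork/exec sees %lu, periodic balance sees %lu\n",
	       balance_load(&rq, SD_BALANCE_FORK),
	       balance_load(&rq, 0));
	return 0;
}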

	-Mike
