Date:   Wed, 26 Apr 2017 17:30:20 -0700
From:   Tejun Heo <tj@...nel.org>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
        Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH 2/2] sched/fair: Always propagate runnable_load_avg

Hello, Vincent.

On Wed, Apr 26, 2017 at 12:21:52PM +0200, Vincent Guittot wrote:
> > This is from the follow-up patch.  I was confused.  Because we don't
> > propagate decays, we still should decay the runnable_load_avg;
> > otherwise, we end up accumulating errors in the counter.  I'll drop
> > the last patch.
> 
> Ok, the runnable_load_avg goes back to 0 when I drop patch 3. But I
> see runnable_load_avg sometimes significantly higher than load_avg,
> which is normally not possible since load_avg = runnable_load_avg +
> the sleeping tasks' load_avg.

So, while load_avg would eventually converge on runnable_load_avg +
blocked load_avg given a stable enough workload for long enough,
runnable_load_avg temporarily jumping above load_avg is expected,
AFAICS.  That's the whole point of it: a sum that closely tracks
what's currently on the cpu so that we can pick the cpu which has the
most on it right now.  It doesn't make sense to try to pick threads
off of a cpu which is generally loaded but doesn't have much going on
right now, after all.
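
To make the two sums concrete, here's a minimal userspace sketch of
the accounting pattern being discussed (the names, e.g.
cfs_rq_sketch, are made up for illustration; this is not the actual
fair.c code, and the real PELT decay machinery is omitted):

#include <stdio.h>

/* Sketch of the two per-cfs_rq load sums under discussion. */
struct cfs_rq_sketch {
        unsigned long load_avg;          /* runnable + blocked load */
        unsigned long runnable_load_avg; /* currently-runnable load only */
};

/* A waking task contributes to both sums. */
static void sketch_enqueue(struct cfs_rq_sketch *cfs_rq, unsigned long load)
{
        cfs_rq->load_avg += load;
        cfs_rq->runnable_load_avg += load;
}

/* A sleeping task leaves runnable_load_avg immediately; its
 * contribution stays in load_avg and only decays away over time.
 * Because the two sums get decayed/synced at different points, short
 * windows where runnable_load_avg exceeds load_avg can occur. */
static void sketch_dequeue_sleep(struct cfs_rq_sketch *cfs_rq,
                                 unsigned long load)
{
        cfs_rq->runnable_load_avg -= load;
}

int main(void)
{
        struct cfs_rq_sketch cfs_rq = { 0, 0 };

        sketch_enqueue(&cfs_rq, 1024);        /* task A wakes */
        sketch_enqueue(&cfs_rq, 1024);        /* task B wakes */
        sketch_dequeue_sleep(&cfs_rq, 1024);  /* task B sleeps */

        /* load_avg still carries B's blocked contribution. */
        printf("load_avg=%lu runnable_load_avg=%lu\n",
               cfs_rq.load_avg, cfs_rq.runnable_load_avg);
        return 0;
}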

> Then, I see just the opposite behavior on my platform: an increase
> in p99 latency with your patches.
> My platform is a hikey: a 2x4-core ARM board. I used schbench -m 2
> -t 4 -s 10000 -c 15000 -r 30, so I have 1 worker thread per CPU,
> which is similar to what you are doing on your platform.
>
> With v4.11-rc8, I ran the test 10 times and got consistent results:
...
> *99.0000th: 539
...
> With your patches I see an increase in p99 latency. I ran the test
> 10 times as well:
> *99.0000th: 2034

I see.  This is surprising given that the whole purpose of the patch
is restoring cgroup behavior to match the !cgroup one.  I could have
totally messed it up, though.  Hmm... there are several ways forward,
I guess.

* Can you please double-check that the higher latencies w/ the patch
  are reliably reproducible?  The test machines that I use have
  variable management load.  They never dominate the machine but are
  enough to disturb the results, so drawing out a reliable pattern
  takes a lot of repeated runs.  I'd really appreciate it if you
  could double-check that the pattern holds with different run
  patterns (ie. interleaved runs instead of 10 consecutive runs after
  another).

* Is the board something easily obtainable?  It'd be easiest for me
  to set up the same environment and reproduce the problem.  I looked
  up hikey boards on amazon but couldn't easily find 2x4-core ones.
  If there's something I can easily buy, please point me to it.  If
  there's something I can loan, that'd be great too.

* If not, I'll try to clean up the debug patches I have and send them
  your way to get more visibility, but given that these things tend
  to be very iterative, it might take quite a few rounds of back and
  forth.

Thanks!

-- 
tejun
