Date:   Sun, 04 Sep 2016 20:46:51 +0200
From:   Giovanni Gherdovich <ggherdovich@...e.cz>
To:     Peter Zijlstra <peterz@...radead.org>,
        Stanislaw Gruszka <sgruszka@...hat.com>
Cc:     linux-kernel@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Mike Galbraith <mgalbraith@...e.de>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Rik van Riel <riel@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 1/3] sched/cputime: Improve scalability of
 times()/clock_gettime() on 32 bit cpus

On Thu, 2016-09-01 at 12:29 +0200, Peter Zijlstra wrote:
> On Thu, Sep 01, 2016 at 12:07:34PM +0200, Stanislaw Gruszka wrote:
> > 
> > On Thu, Sep 01, 2016 at 11:49:06AM +0200, Peter Zijlstra wrote:
> > > 
> > > You're now making rather hot paths slower to benefit a rather
> > > slow path; that too is backwards.
> > 
> > Ok, you're right, I made update_curr() slower (a bit, I think,
> > since this new seqcount primitive should be in the same cache line
> > as other things).
> 
> seqcount adds 2 smp_wmb(), which, on ARM, are not free (it is
> possible to do with just 1, FWIW).
> 
> > 
> > But don't we care about inconsistent accesses to 64 bit
> > variables on 32 bit processors (see patch 3)? I know such an
> > inconsistency is an unlikely scenario, but I assume it's still
> > possible, or not?
> 
> It's actually quite possible. We've observed it a fair few
> times. 64bit variables are 2 32bit stores/loads and getting
> interleaved data is quite possible.
> 
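
To make that concrete for anyone following along at home, here is a
userspace toy (my own sketch, not kernel code) where a plain 64bit
counter is written by one thread and read by another. On a 32bit
build each access is split into two 32bit halves, so the reader can
observe a value that was never stored; the seqcount (or the rq lock)
exists to close exactly that window.

/*
 * Torn-read demo; build with "gcc -m32 -pthread torn.c" or run it on
 * a 32bit box.  On 64bit the aligned load/store is a single access
 * and no torn values show up.  The race is deliberate; that is the
 * whole point of the demo.
 */
#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static volatile uint64_t counter;       /* stand-in for sum_exec_runtime */

static void *writer(void *arg)
{
        /* Alternate between two values whose 32bit halves all differ. */
        for (;;) {
                counter = 0x00000001ffffffffULL;
                counter = 0x0000000200000000ULL;
        }
        return NULL;
}

int main(void)
{
        pthread_t t;
        long torn = 0;

        pthread_create(&t, NULL, writer, NULL);

        for (long i = 0; i < 100000000L; i++) {
                uint64_t v = counter;   /* unprotected 64bit read */

                /* A mix of the two halves was never written by anyone. */
                if (v != 0x00000001ffffffffULL && v != 0x0000000200000000ULL)
                        torn++;
        }
        printf("torn reads: %ld\n", torn);
        return 0;                       /* exiting main() also ends the writer */
}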

I think leaving the 32bit benchmark numbers where they are, in the
interest of not perturbing the update_curr() path, is the right call
here. task_rq_lock() may hurt the thread_group_cputime() path, but the
alternate seqcount strategy could impact other scheduler-related
workloads.
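
For reference, my reading of the patch-1 approach boils down to the
sketch below (a paraphrase, not the code from the series): on 64bit
the read stays a plain load, on 32bit the consistency problem is
hidden behind the rq lock, so the cost lands in the O(nr_threads)
walk rather than in update_curr().

#ifdef CONFIG_64BIT
static inline u64 read_sum_exec_runtime(struct task_struct *t)
{
        /* A 64bit load is atomic here, nothing extra is needed. */
        return t->se.sum_exec_runtime;
}
#else
static u64 read_sum_exec_runtime(struct task_struct *t)
{
        struct rq_flags rf;
        struct rq *rq;
        u64 ns;

        /*
         * Pin the task's rq so the two 32bit halves of
         * sum_exec_runtime cannot change under us; the price is paid
         * here, in the thread_group_cputime() walk, not in the
         * update_curr() hot path.
         */
        rq = task_rq_lock(t, &rf);
        ns = t->se.sum_exec_runtime;
        task_rq_unlock(rq, t, &rf);

        return ns;
}
#endif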

> > 
> > If not, I can get rid of read_sum_exec_runtime() and just read
> > sum_exec_runtime without task_rq_lock() protection in
> > thread_group_cputime(). That would make the benchmark happy.
> 
> I think this benchmark is misguided. Just accept that O(nr_threads)
> is expensive, same with process-wide itimers; just don't use them
> when you care about performance.

As you say, the results of the "poundtime" benchmark have to be taken
with a grain of salt, and I should probably put them in perspective.
In a sentence: a low number of threads obviously represents real-world
scenarios more faithfully. We run the benchmark in a framework (Mel
Gorman's MMTests) which stresses the box with anywhere from 2 to
4*num_cpus threads, as it does with many other workloads where the
thread count is a parameter.

We're spraying all over the input space just to see if anything
interesting happens. If we see a regression in some obscure corner
case, that's not necessarily a bug -- sometimes it's just not
interesting, or fixing it isn't worth the trade-offs.

"poundtime" first appeared on LKML in 2009 as test case for a
functional bug where a process' time wasn't monotonic; it was then
reused by Rik van Riel in 2014 as a performance workload, see
https://marc.info/?i=1408133138-22048-1-git-send-email-riel@redhat.com

The slightly edited version we use at SUSE in MMTests is in the
changelog of commit 6075620b0590 ("sched/cputime: Mitigate performance
regression in times()/clock_gettime()").
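
For anyone who doesn't want to dig up the changelog, the benchmark
boils down to roughly the following (my reconstruction, not the exact
program from that commit): every thread hammers
clock_gettime(CLOCK_PROCESS_CPUTIME_ID), which has to walk the whole
thread group, and MMTests re-runs it with the thread count swept from
2 to 4*num_cpus.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERATIONS 100000

static void *pound(void *unused)
{
        struct timespec ts;

        /* Each call sums cputime over all threads in the group. */
        for (int i = 0; i < ITERATIONS; i++)
                clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
        return NULL;
}

int main(int argc, char **argv)
{
        int nthreads = argc > 1 ? atoi(argv[1]) : 2;
        pthread_t *tid = calloc(nthreads, sizeof(*tid));
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < nthreads; i++)
                pthread_create(&tid[i], NULL, pound, NULL);
        for (int i = 0; i < nthreads; i++)
                pthread_join(tid[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("%d threads: %.3f s\n", nthreads,
               (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9);
        free(tid);
        return 0;
}

Build with -pthread; the interesting number is how the per-call cost
grows as the thread count goes up, since each call is O(nr_threads).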


Giovanni
