Date:   Mon, 31 Oct 2022 15:37:21 -0700
From:   Andrei Vagin <avagin@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        linux-kernel@...r.kernel.org, Andrei Vagin <avagin@...il.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH] sched: consider WF_SYNC to find idle siblings

On Mon, Oct 31, 2022 at 5:57 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Thu, Oct 27, 2022 at 01:26:03PM -0700, Andrei Vagin wrote:
> > From: Andrei Vagin <avagin@...il.com>
> >
> > WF_SYNC means that the waker goes to sleep after wakeup, so the current
> > cpu can be considered idle if the waker is the only process that is
> > running on it.
> >
> > The perf pipe benchmark shows that this change reduces the average time
> > per operation from 8.8 usecs/op to 3.7 usecs/op.
> >
> > Before:
> >  $ ./tools/perf/perf bench sched pipe
> >  # Running 'sched/pipe' benchmark:
> >  # Executed 1000000 pipe operations between two processes
> >
> >      Total time: 8.813 [sec]
> >
> >        8.813985 usecs/op
> >          113456 ops/sec
> >
> > After:
> >  $ ./tools/perf/perf bench sched pipe
> >  # Running 'sched/pipe' benchmark:
> >  # Executed 1000000 pipe operations between two processes
> >
> >      Total time: 3.743 [sec]
> >
> >        3.743971 usecs/op
> >          267096 ops/sec
>
> But what, if anything, does it do for the myriad of other benchmarks we
> run?
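
For context, the quoted changelog boils down to roughly the following
check in the wakeup path. This is a simplified sketch, not the actual
diff: select_idle_sibling(), cpu_rq() and nr_running are the upstream
names, but the exact condition and how the sync flag reaches this
function are elided here.

	/*
	 * Sketch: WF_SYNC means the waker goes to sleep right after the
	 * wakeup, so if it is the only runnable task on the target CPU,
	 * that CPU is effectively idle and is a good place for the wakee.
	 */
	static int select_idle_sibling(struct task_struct *p, int prev, int target)
	{
		...
		if (sync && cpu_rq(target)->nr_running == 1)
			return target;	/* waker's CPU is about to go idle */
		...
	}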

I've run this set of benchmarks:
* perf bench sched messaging
* perf bench epoll all
* perf bench futex all
* schbench
* tbench
* kernel compilation

Results look the same with and without this change for all benchmarks except
tbench, which shows improvements when the number of processes is less than
the number of CPUs.

Here are the results from my test host with 8 CPUs.

$ tbench_srv & tbench -t 15 1 127.0.0.1
Before: Throughput 260.498 MB/sec  1 clients  1 procs  max_latency=1.301 ms
After:  Throughput 462.047 MB/sec  1 clients  1 procs  max_latency=1.066 ms

$ tbench_srv & tbench -t 15 4 127.0.0.1
Before: Throughput 733.44 MB/sec  4 clients  4 procs  max_latency=0.935 ms
After:  Throughput 1778.94 MB/sec  4 clients  4 procs  max_latency=0.882 ms

$ tbench_srv & tbench -t 15 8 127.0.0.1
Before: Throughput 1965.41 MB/sec  8 clients  8 procs  max_latency=2.145 ms
After:  Throughput 2002.96 MB/sec  8 clients  8 procs  max_latency=1.881 ms

$ tbench_srv & tbench -t 15 32 127.0.0.1
Before: Throughput 1881.79 MB/sec  32 clients  32 procs  max_latency=16.365 ms
After:  Throughput 1891.87 MB/sec  32 clients  32 procs  max_latency=4.050 ms

Let me know if you want to see results for any other specific benchmark.
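
In case it helps to reproduce the pipe numbers above without a perf
build: 'perf bench sched pipe' is essentially a two-process ping-pong
over a pair of pipes. A rough standalone equivalent (my own sketch, not
the perf source) is below; built with gcc -O2, its usecs/op should be
roughly comparable to perf's number.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define LOOPS 1000000

int main(void)
{
	int ping[2], pong[2];	/* parent->child and child->parent pipes */
	char buf = 0;
	struct timeval start, end;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	switch (fork()) {
	case -1:
		perror("fork");
		return 1;
	case 0:	/* child: echo every byte back */
		for (int i = 0; i < LOOPS; i++) {
			if (read(ping[0], &buf, 1) != 1 ||
			    write(pong[1], &buf, 1) != 1)
				exit(1);
		}
		exit(0);
	}

	gettimeofday(&start, NULL);
	for (int i = 0; i < LOOPS; i++) {	/* one op = one round trip */
		if (write(ping[1], &buf, 1) != 1 ||
		    read(pong[0], &buf, 1) != 1)
			return 1;
	}
	gettimeofday(&end, NULL);
	wait(NULL);

	double usecs = (end.tv_sec - start.tv_sec) * 1e6 +
		       (end.tv_usec - start.tv_usec);
	printf("%f usecs/op\n", usecs / LOOPS);
	return 0;
}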

Thanks,
Andrei
