Message-ID: <87va9dyl8y.fsf@concordia.ellerman.id.au>
Date:   Wed, 18 Jul 2018 00:45:01 +1000
From:   Michael Ellerman <mpe@...erman.id.au>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>,
        Alan Stern <stern@...land.harvard.edu>,
        andrea.parri@...rulasolutions.com,
        Will Deacon <will.deacon@....com>,
        Akira Yokosawa <akiyks@...il.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Daniel Lustig <dlustig@...dia.com>,
        David Howells <dhowells@...hat.com>,
        Jade Alglave <j.alglave@....ac.uk>,
        Luc Maranget <luc.maranget@...ia.fr>,
        Nick Piggin <npiggin@...il.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire

Linus Torvalds <torvalds@...ux-foundation.org> writes:
> On Mon, Jul 16, 2018 at 7:40 AM Michael Ellerman <mpe@...erman.id.au> wrote:
...
>> I guess arguably it's not a very macro benchmark, but we have a
>> context_switch benchmark in the tree[1] which we often use to tune
>> things, and it degrades badly. It just spins up two threads and has them
>> ping-pong using yield.
>
> I hacked that up to run on x86, and it only is about 5% locking
> overhead in my profiles. It's about 18% __switch_to, and a lot of
> system call entry/exit, but not a lot of locking.
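
For reference, the yield ping-pong in our context_switch benchmark boils down
to roughly the below (a simplified userspace sketch, not the actual benchmark
code):

/*
 * Minimal yield ping-pong: two threads pinned to the same CPU, each
 * calling sched_yield() in a loop, so every yield forces a context
 * switch to the other thread.  Simplified sketch only.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static unsigned long iterations;

static void *yielder(void *arg)
{
	cpu_set_t set;

	/* pin both threads to CPU 0 so they must switch with each other */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	while (1) {
		sched_yield();
		__atomic_add_fetch(&iterations, 1, __ATOMIC_RELAXED);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, yielder, NULL);
	pthread_create(&b, NULL, yielder, NULL);

	/* print the context switch rate once a second */
	while (1) {
		unsigned long start = __atomic_load_n(&iterations, __ATOMIC_RELAXED);
		sleep(1);
		printf("%lu switches/sec\n",
		       __atomic_load_n(&iterations, __ATOMIC_RELAXED) - start);
	}
	return 0;
}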

Interesting. I don't see anything as high as 18%; it's more spread out:

     7.81%  context_switch  [kernel.kallsyms]  [k] cgroup_rstat_updated
     7.60%  context_switch  [kernel.kallsyms]  [k] system_call_exit
     5.91%  context_switch  [kernel.kallsyms]  [k] __switch_to
     5.69%  context_switch  [kernel.kallsyms]  [k] __sched_text_start
     5.61%  context_switch  [kernel.kallsyms]  [k] _raw_spin_lock
     4.15%  context_switch  [kernel.kallsyms]  [k] system_call
     3.76%  context_switch  [kernel.kallsyms]  [k] finish_task_switch

And it doesn't change much before/after the spinlock change.

(I should work out how to turn that cgroup stuff off.)

I tried uninlining spin_unlock() and that makes it a bit clearer.

Before:
     9.67%  context_switch  [kernel.kallsyms]  [k] _raw_spin_lock
     7.74%  context_switch  [kernel.kallsyms]  [k] cgroup_rstat_updated
     7.39%  context_switch  [kernel.kallsyms]  [k] system_call_exit
     5.84%  context_switch  [kernel.kallsyms]  [k] __sched_text_start
     4.83%  context_switch  [kernel.kallsyms]  [k] __switch_to
     4.08%  context_switch  [kernel.kallsyms]  [k] system_call
     <snip 16 lines>
     1.24%  context_switch  [kernel.kallsyms]  [k] arch_spin_unlock	<--

After:
     8.69%  context_switch  [kernel.kallsyms]  [k] _raw_spin_lock
     7.01%  context_switch  [kernel.kallsyms]  [k] cgroup_rstat_updated
     6.76%  context_switch  [kernel.kallsyms]  [k] system_call_exit
     5.59%  context_switch  [kernel.kallsyms]  [k] arch_spin_unlock	<--
     5.10%  context_switch  [kernel.kallsyms]  [k] __sched_text_start
     4.36%  context_switch  [kernel.kallsyms]  [k] __switch_to
     3.80%  context_switch  [kernel.kallsyms]  [k] system_call


I was worried spectre/meltdown mitigations might be confusing things, but not
really: updated numbers with them off are higher, but the delta is about the
same in percentage terms:

	  | lwsync/lwsync | lwsync/sync | Change     | Change %
	  +---------------+-------------+------------+----------
Average   |    47,938,888 |  43,655,184 | -4,283,703 |   -9.00%
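
(For anyone not following the powerpc side: the columns name the barriers in
the lock/unlock pair, and the delta being measured is essentially lwsync vs a
full sync.  Very roughly, and not the kernel's actual macros:)

/* Illustration only.  lwsync is PowerPC's lightweight barrier: it orders
 * load/load, load/store and store/store, but not store/load, which is
 * enough for plain release/acquire locking.  sync (hwsync) is the full
 * barrier that also orders store/load - the extra ordering the patch
 * wants for locks, and the extra cost in the table above. */
#define BARRIER_LWSYNC()	__asm__ __volatile__("lwsync" : : : "memory")
#define BARRIER_SYNC()		__asm__ __volatile__("sync" : : : "memory")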


> I'm actually surprised it is even that much locking, since it seems to
> be single-cpu, so there should be no contention and the lock (which
> seems to be
>
>         rq = this_rq();
>         rq_lock(rq, &rf);
>
> in do_sched_yield()) should stay local to the cpu.
>
> And for you the locking is apparently even _more_ noticeable.

> But yes, a 10% regression on that context switch thing is huge. You
> shouldn't do ping-pong stuff, but people kind of do.

Yeah.

There also seem to be folks who have optimised the rest of their stack pretty
hard, and therefore care about context switch performance because it's pure
overhead and they're searching for every cycle.

So although this test is not a real workload, it's a proxy for something
people do complain to us about.

cheers
