[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <55AC8CC4.1020802@monom.org>
Date: Mon, 20 Jul 2015 07:53:08 +0200
From: Daniel Wagner <wagi@...om.org>
To: Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Ingo Molnar <mingo@...nel.org>, Oleg Nesterov <oleg@...hat.com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
der.herr@...r.at, Davidlohr Bueso <dave@...olabs.net>,
Rik van Riel <riel@...hat.com>,
Al Viro <viro@...iv.linux.org.uk>,
Jeff Layton <jlayton@...chiereds.net>
Subject: Re: [RFC][PATCH 00/13] percpu rwsem -v2
On 07/02/2015 11:41 AM, Peter Zijlstra wrote:
> On Wed, Jul 01, 2015 at 02:54:59PM -0700, Linus Torvalds wrote:
>> On Tue, Jun 30, 2015 at 10:57 PM, Daniel Wagner <wagi@...om.org> wrote:
>>>
>>> And an attempt at visualization:
>>>
>>> http://monom.org/posix01/sweep-4.1.0-02756-ge3d06bd.png
>>> http://monom.org/posix01/sweep-4.1.0-02769-g6ce2591.png
>>
>> Ugh. The old numbers look (mostly) fairly tight, and then the new ones
>> are all over the map, and usually much worse.
>>
>> We've seen this behavior before when switching from a non-sleeping
>> lock to a sleeping one. The sleeping locks have absolutely horrible
>> behavior when they get contended, and spend tons of CPU time on the
>> sleep/wakeup management,
>
> Right, I'm just not seeing how any of that would happen here :/ The read
> side would only ever block on reading /proc/$something and I'm fairly
> sure that benchmark doesn't actually touch that file.
>
> In any case, I will look into this, I've just not had time yet..
I did some more testing and found out that the slow path of percpu_down_read()
is never taken (as expected). The only change left is the exchange from a
percpu arch_spinlock_t spinlocks to percpu spinlock_t spinlocks.
Turning them back into arch_spinlock_t gives almost the same numbers as
with spinlock_t.
Then Peter suggested to change the code to
preempt_disable();
spin_unlock();
preempt_enable_no_resched();
to verify if arch_spin_lock() is buggy and does not disable preemption
and we see a lock holder preemption on non virt setups.
Here all the numbers and plots:
- base line
http://monom.org/posix01-4/tip-4.1.0-02756-ge3d06bd.png
http://monom.org/posix01-4/tip-4.1.0-02756-ge3d06bd.txt
- arch_spinlock_t
http://monom.org/posix01-4/arch_spintlock_t-4.1.0-02769-g6ce2591-dirty.png
http://monom.org/posix01-4/arch_spintlock_t-4.1.0-02769-g6ce2591-dirty.txt
http://monom.org/posix01-4/arch_spintlock_t-4.1.0-02769-g6ce2591-dirty.patch
- no resched
http://monom.org/posix01-4/no_resched-4.1.0-02770-g4d518cf.png
http://monom.org/posix01-4/no_resched-4.1.0-02770-g4d518cf.txt
http://monom.org/posix01-4/no_resched-4.1.0-02770-g4d518cf.patch
cheers,
daniel
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists