Message-ID: <54D9CFC7.5020007@linux.vnet.ibm.com>
Date: Tue, 10 Feb 2015 15:00:47 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Sasha Levin <sasha.levin@...cle.com>,
Davidlohr Bueso <dave@...olabs.net>
CC: Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Peter Anvin <hpa@...or.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Waiman Long <waiman.long@...com>,
Dave Jones <davej@...hat.com>, Oleg Nesterov <oleg@...hat.com>,
the arch/x86 maintainers <x86@...nel.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Andi Kleen <ak@...ux.intel.com>,
Jason Wang <jasowang@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
virtualization <virtualization@...ts.linux-foundation.org>,
xen-devel@...ts.xenproject.org, Rik van Riel <riel@...hat.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrey Ryabinin <a.ryabinin@...sung.com>
Subject: Re: [PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/10/2015 06:23 AM, Linus Torvalds wrote:
> On Mon, Feb 9, 2015 at 4:02 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>> On Mon, Feb 09, 2015 at 03:04:22PM +0530, Raghavendra K T wrote:
>>> So we have 3 choices,
>>> 1. xadd
>>> 2. continue with current approach.
>>> 3. a read before unlock and also after that.
>>
>> For the truly paranoid we have probe_kernel_address(), suppose the lock
>> was in module space and the module just got unloaded under us.
>
> That's much too expensive.
>
> The xadd shouldn't be noticeably more expensive than the current
> "add_smp()". Yes, "lock xadd" used to be several cycles slower than
> just "lock add" on some early cores, but I think these days it's down
> to a single-cycle difference, which is not really different from doing
> a separate load after the add.
>
> The real problem with xadd used to be that we always had to do magic
> special-casing for i386, but that's one of the reasons we dropped
> support for original 80386.
>
> So I think Raghavendra's last version (which hopefully fixes the
> lockup problem that Sasha reported) together with changing that
V2 did pass the stress test, but getting confirmation from Sasha would help.
> add_smp(&lock->tickets.head, TICKET_LOCK_INC);
> if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) ..
>
> into something like
>
> val = xadd(&lock->tickets.head_tail, TICKET_LOCK_INC << TICKET_SHIFT);
> if (unlikely(val & TICKET_SLOWPATH_FLAG)) ...
>
> would be the right thing to do. Somebody should just check that I got
> that shift right, and that the tail is in the high bytes (head really
> needs to be high to work, if it's in the low byte(s) the xadd would
> overflow from head into tail which would be wrong).
Unfortunately, head is in the low bytes and tail in the high bytes, so the
xadd could overflow from head into tail.
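For reference, a sketch of the layout in question, mirroring the little-endian
x86 definitions in arch/x86/include/asm/spinlock_types.h (only an illustration,
not the full header):

	/* Sketch of the x86 ticket lock layout (see asm/spinlock_types.h). */
	#if (CONFIG_NR_CPUS < 256)
	typedef u8  __ticket_t;
	typedef u16 __ticketpair_t;
	#else
	typedef u16 __ticket_t;
	typedef u32 __ticketpair_t;
	#endif

	#define TICKET_SHIFT	(sizeof(__ticket_t) * 8)

	typedef struct arch_spinlock {
		union {
			__ticketpair_t head_tail;
			struct __raw_tickets {
				__ticket_t head;	/* low bytes of head_tail  */
				__ticket_t tail;	/* high bytes of head_tail */
			} tickets;
		};
	} arch_spinlock_t;

	/*
	 * So xadd(&lock->head_tail, TICKET_LOCK_INC) bumps head, and a carry
	 * out of head spills into tail; TICKET_LOCK_INC << TICKET_SHIFT would
	 * bump tail, not head.
	 */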
The other option is a cmpxchg loop, which I believe is bad.
Any suggestions?
(I have V3 ready with Oleg's suggestion and the performance numbers, but
without this getting resolved it will just be one more unnecessary iteration.)
How about getting rid of TICKET_SLOWPATH_FLAG in the spinlock itself (i.e.
keeping it only as a hint for paravirt), and doing unlock_kick whenever we
see (tail - head) > TICKET_LOCK_INC? (This may still need a cmpxchg loop in
unlock, but it lets us get rid of the clear-slowpath logic; a rough sketch
follows below.)
The only problem is that we may do unnecessary kicks even under 1x load.
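A rough, hypothetical sketch of that idea (assuming the existing
arch_spinlock_t fields, TICKET_LOCK_INC and the __ticket_unlock_kick()
paravirt hook; arch_spin_unlock_sketch is just a made-up name for the
illustration, not a proposed patch):

	/*
	 * Hypothetical sketch of "kick whenever (tail - head) > TICKET_LOCK_INC".
	 * A cmpxchg loop releases the lock, so head and tail are read together
	 * and the lock word is never touched again after the releasing store.
	 */
	static __always_inline void arch_spin_unlock_sketch(arch_spinlock_t *lock)
	{
		arch_spinlock_t old, new;

		old.head_tail = READ_ONCE(lock->head_tail);
		for (;;) {
			__ticketpair_t prev;

			new.head_tail = old.head_tail;
			new.tickets.head += TICKET_LOCK_INC;	/* release the lock */

			prev = cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
			if (prev == old.head_tail)
				break;
			old.head_tail = prev;
		}

		/* Someone was already queued behind us: kick the new owner's ticket. */
		if ((__ticket_t)(old.tickets.tail - old.tickets.head) > TICKET_LOCK_INC)
			__ticket_unlock_kick(lock, new.tickets.head);
	}

Since the kick decision uses only the local copies, nothing dereferences the
lock word after the releasing cmpxchg, but the loop can of course retry under
contention.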