Message-ID: <20150212141819.GA11633@redhat.com>
Date: Thu, 12 Feb 2015 15:18:19 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Sasha Levin <sasha.levin@...cle.com>,
Davidlohr Bueso <dave@...olabs.net>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Peter Anvin <hpa@...or.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Waiman Long <waiman.long@...com>,
Dave Jones <davej@...hat.com>,
the arch/x86 maintainers <x86@...nel.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Andi Kleen <ak@...ux.intel.com>,
Jason Wang <jasowang@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
virtualization <virtualization@...ts.linux-foundation.org>,
xen-devel@...ts.xenproject.org, Rik van Riel <riel@...hat.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrey Ryabinin <a.ryabinin@...sung.com>
Subject: Re: [PATCH] x86 spinlock: Fix memory corruption on completing
completions
On 02/11, Jeremy Fitzhardinge wrote:
>
> On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> > I agree, and I have to admit I am not sure I fully understand why
> > unlock uses the locked add. Except we need a barrier to avoid the race
> > with the enter_slowpath() users, of course. Perhaps this is the only
> > reason?
>
> Right now it needs to be a locked operation to prevent read-reordering.
> x86 memory ordering rules state that all writes are seen in a globally
> consistent order, and are globally ordered wrt reads *on the same
> addresses*, but reads to different addresses can be reordered wrt writes.
>
> So, if the unlocking add were not a locked operation:
>
> __add(&lock->tickets.head, TICKET_LOCK_INC); /* not locked */
>
> if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
> __ticket_unlock_slowpath(lock, prev);
>
> Then the read of lock->tickets.tail can be reordered before the unlock,
> which introduces a race:
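(The interleaving itself was trimmed from the quote above; spelling it out as
I read it, with a rough sketch of the layout under discussion -- this models
the arch_spinlock_t of that era but is simplified, not the actual kernel
source:)

#include <stdint.h>

typedef uint8_t  __ticket_t;
typedef uint16_t __ticketpair_t;

#define TICKET_LOCK_INC       2	/* tickets step by 2 ...          */
#define TICKET_SLOWPATH_FLAG  1	/* ... leaving bit 0 for the flag */

typedef struct arch_spinlock {
	union {
		__ticketpair_t head_tail;
		struct __raw_tickets {
			__ticket_t head, tail;
		} tickets;
	};
} arch_spinlock_t;

/*
 * Unlock fast path with a plain (non-LOCKed) add, as in the quoted
 * fragment.  The CPU may satisfy the load of tickets.tail *before* the
 * store to tickets.head becomes globally visible, because the load is
 * to a different address.  Reordered execution:
 *
 *   unlocker                         waiter in the slow path
 *   --------                         -----------------------
 *   load tail  -> SLOWPATH clear
 *                                    set SLOWPATH flag in tail
 *                                    load head  -> still "locked"
 *                                    block, waiting for a kick
 *   store head (lock now free)
 *   flag looked clear -> no kick     ... sleeps forever
 */
static void sketch_unlock(arch_spinlock_t *lock)
{
	lock->tickets.head += TICKET_LOCK_INC;			/* plain store */

	if (lock->tickets.tail & TICKET_SLOWPATH_FLAG) {	/* plain load */
		/* __ticket_unlock_slowpath(lock, ...) would kick the waiter here */
	}
}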
Yes, yes, thanks, but this is what I meant: we need a barrier here, even if
"every store is a release" as Linus mentioned.
> This *might* be OK, but I think it's on dubious ground:
>
> __add(&lock->tickets.head, TICKET_LOCK_INC); /* not locked */
>
> /* read overlaps write, and so is ordered */
> if (unlikely(lock->head_tail & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT)))
> __ticket_unlock_slowpath(lock, prev);
>
> because I think Intel and AMD differed in interpretation about how
> overlapping but different-sized reads & writes are ordered (or it simply
> isn't architecturally defined).
I can't comment, I simply do not know how the hardware works.
> If the slowpath flag is moved to head, then it would always have to be
> locked anyway, because it needs to be atomic against other CPU's RMW
> operations setting the flag.
Yes, this is true.
But again, if we want to avoid the read-after-unlock, we need to update the
lock and read SLOWPATH atomically, so it seems we can't avoid the locked insn.
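(Illustrative only -- hypothetical names, not the kernel implementation: if
the flag lived in the same word the unlock increments, a single LOCKed xadd
would both release the lock and hand back the old value, so the SLOWPATH test
needs no separate read at all and cannot be reordered ahead of the release.)

#include <stdatomic.h>
#include <stdint.h>

#define MY_LOCK_INC       2	/* tickets step by 2 ...          */
#define MY_SLOWPATH_FLAG  1	/* ... leaving bit 0 for the flag */

struct my_lock {
	_Atomic uint16_t head;	/* flag assumed to live in head here */
	uint16_t tail;
};

static void my_unlock_kick(struct my_lock *lock, uint16_t next)
{
	/* wake whoever holds ticket 'next'; clearing the flag and the
	   actual hypervisor kick are omitted */
	(void)lock; (void)next;
}

static void my_spin_unlock(struct my_lock *lock)
{
	/* one locked RMW: bumps head and returns the old value, so the
	   flag is read atomically with the release */
	uint16_t old = atomic_fetch_add(&lock->head, MY_LOCK_INC);

	if (old & MY_SLOWPATH_FLAG)
		my_unlock_kick(lock, (uint16_t)((old & ~MY_SLOWPATH_FLAG) + MY_LOCK_INC));
}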
Oleg.