Message-ID: <20151112135024.GQ3972@linux.vnet.ibm.com>
Date: Thu, 12 Nov 2015 05:50:24 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org,
linux-kernel@...r.kernel.org, corbet@....net, mhocko@...nel.org,
dhowells@...hat.com, torvalds@...ux-foundation.org,
will.deacon@....com
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
On Wed, Nov 11, 2015 at 06:34:56PM +0800, Boqun Feng wrote:
> On Wed, Nov 11, 2015 at 05:39:40PM +0800, Boqun Feng wrote:
> > Hi Oleg,
> >
> > On Tue, Nov 03, 2015 at 06:59:58PM +0100, Oleg Nesterov wrote:
> > [snip]
> > >
> > > Unfortunately this doesn't look exactly right...
> > >
> > > spin_unlock_wait() is not equivalent to "while (locked) relax"; the
> > > latter is live-lockable or at least sub-optimal: we do not really need to spin
> >
> > Just being curious: should spin_unlock_wait() semantically be an ACQUIRE?
>
> Hmm.. I guess I was wrong; it doesn't need to be an ACQUIRE, it only
> needs to use the control dependency to order the load of the lock state
> and the stores following it.
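For illustration, here is a minimal sketch of the control-dependency
pattern being described.  The toy_spinlock/toy_unlock_wait names are
made up for the example; this is not the real implementation:

        struct toy_spinlock {
                int locked;             /* 0 == free, 1 == held */
        };

        /*
         * Spin until the lock is observed free.  The conditional branch
         * on the value loaded from ->locked forms a control dependency,
         * so stores issued after the call cannot be reordered before
         * that load.  Subsequent *loads* are not ordered by the control
         * dependency alone.
         */
        static inline void toy_unlock_wait(struct toy_spinlock *lock)
        {
                while (READ_ONCE(lock->locked))
                        cpu_relax();
        }
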
I must say that spin_unlock_wait() and friends are a bit tricky.
There are only a few uses in drivers/ata/libata-eh.c, ipc/sem.c,
kernel/exit.c, kernel/sched/completion.c, and kernel/task_work.c.
Almost all of these seem to assume that spin_unlock_wait() provides
no ordering. We might have just barely enough uses to produce useful
abstractions, but my guess is that it would not hurt to wait.
> > Because spin_unlock_wait() is used to wait for a certain lock to be
> > RELEASEd, so that we can do something *after* we observe the RELEASE.
> > Consider the following example:
> >
> >         CPU 0                           CPU 1
> >         ============================    ===========================
> >         { X = 0 }
> >         WRITE_ONCE(X, 1);
> >         spin_unlock(&lock);
> >                                         spin_unlock_wait(&lock)
> >                                         r1 = READ_ONCE(X);
> >
> > If spin_unlock_wait() is not an ACQUIRE, r1 can be 0 in this case,
> > right? Am I missing something subtle here? Or does spin_unlock_wait()
> > itself not have ACQUIRE semantics, and should it instead always come
> > with an smp_mb() *following* it to achieve ACQUIRE semantics? However,
> > in do_exit(), an smp_mb() *precedes* raw_spin_unlock_wait() rather than
> > following it, which confuses me... could you explain that? Thank you
> > ;-)
> >
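To make the question concrete, the variant being asked about would look
roughly like this on CPU 1 (a sketch of the proposed pattern, not a
statement about how any architecture implements it):

        spin_unlock_wait(&lock);   /* spin until CPU 0's RELEASE is seen  */
        smp_mb();                  /* order the lock-state load above     */
                                   /* before the load of X below          */
        r1 = READ_ONCE(X);         /* the question: is r1 == 1 now        */
                                   /* guaranteed?                         */
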
>
> But still, there is one suspicious use of smp_mb() in do_exit():
>
>         /*
>          * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
>          * when the following two conditions become true.
>          *   - There is race condition of mmap_sem (It is acquired by
>          *     exit_mm()), and
>          *   - SMI occurs before setting TASK_RUNNING.
>          *     (or hypervisor of virtual machine switches to other guest)
>          * As a result, we may become TASK_RUNNING after becoming TASK_DEAD
>          *
>          * To avoid it, we have to wait for releasing tsk->pi_lock which
>          * is held by try_to_wake_up()
>          */
>         smp_mb();
>         raw_spin_unlock_wait(&tsk->pi_lock);
>
>         /* causes final put_task_struct in finish_task_switch(). */
>         tsk->state = TASK_DEAD;
>         tsk->flags |= PF_NOFREEZE;      /* tell freezer to ignore us */
>         schedule();
>
> Seems like the smp_mb() isn't needed here? The control dependency
> already orders the load of tsk->pi_lock against the store to tsk->state,
> and this control-dependency ordering guarantee pairs with the
> spin_unlock(->pi_lock) in try_to_wake_up() to avoid a data race on
> ->state.
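In the abstract, the pairing being argued for looks like the following
(a generic sketch of the pattern, not the actual try_to_wake_up() or
do_exit() code):

        spinlock_t l;                   /* plays the role of ->pi_lock */
        int s;                          /* plays the role of ->state   */

        /* Waker side: store inside the critical section, then RELEASE. */
        void waker(void)
        {
                spin_lock(&l);
                WRITE_ONCE(s, 1);       /* "TASK_RUNNING"              */
                spin_unlock(&l);        /* RELEASE                     */
        }

        /* Exiting side: the load of the lock state inside              */
        /* spin_unlock_wait() plus the branch on it form a control      */
        /* dependency, so the store below cannot be reordered before    */
        /* the loop observes the release above.                         */
        void exiter(void)
        {
                spin_unlock_wait(&l);
                WRITE_ONCE(s, 2);       /* "TASK_DEAD"                 */
        }
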
The exit() path is pretty heavyweight, so I suspect that an extra smp_mb()
is down in the noise. Or are you saying that this is somehow unsafe?
Thanx, Paul
> Regards,
> Boqun
>
> > Regards,
> > Boqun
> >
> > > until we observe !spin_is_locked(), we only need to synchronize with the
> > > current owner of this lock. Once it drops the lock we can proceed, we
> > > do not care if another thread takes the same lock right after that.
> > >
> > > Oleg.
> > >
>
>
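
On Oleg's point above that we need not spin until !spin_is_locked() but
only need to synchronize with the current owner: for a ticket-style
lock, one way to picture that is waiting only for the owner that was
active when we were called.  A hypothetical toy sketch (toy_ticket_lock
and its field names are invented for the example):

        struct toy_ticket_lock {
                unsigned short owner;   /* now-serving ticket      */
                unsigned short next;    /* next ticket to hand out */
        };

        void toy_ticket_unlock_wait(struct toy_ticket_lock *lock)
        {
                unsigned short owner = READ_ONCE(lock->owner);

                if (owner == READ_ONCE(lock->next))
                        return;         /* not held right now */

                /*
                 * Wait only for that particular owner to release.  We do
                 * not care if someone else takes the lock immediately
                 * afterwards, so this cannot live-lock under contention.
                 */
                while (READ_ONCE(lock->owner) == owner)
                        cpu_relax();
        }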