Message-ID: <20160721194735.GA1881@linux-80c1.suse>
Date: Thu, 21 Jul 2016 12:47:35 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>, tglx@...utronix.de,
Manfred Spraul <manfred@...orfullife.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v4] ipc/msg: Implement lockless pipelined wakeups

On Thu, 21 Jul 2016, Sebastian Andrzej Siewior wrote:
>* Davidlohr Bueso | 2016-07-20 17:16:12 [-0700]:
>
>>Just as with expunge_all and the E2BIG case, could you remove that explicit
>>barrier (B) and just rely on wake_q_add?
>
>Just did. So we have just an smp_rmb() on the reader side, and the
>comment talks about smp_wmb(), and at the spot where we should have the
>smp_wmb() we have a comment why we don't have one :)
>For my understanding: we need that smp_rmb() to ensure that everything
>past that cmpxchg() is visible on all other CPUs, so that we don't get the
>wakeup before our read of r_msg returns != -EAGAIN, right?

Hmm, I'm having second thoughts about the need for barrier (A). As you know,
we originally had it to prevent races with do_exit() from the receiver thread:
if r_msg was set before doing the wakeup, we could face a use-after-free
scenario.
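
For reference, this is roughly what the pre-wake_q ordering looked like
(paraphrased from memory, not the exact hunks); the NULL initialization plus
the receiver-side spin is what barriers (A)/(B) were protecting:

	/* waker (pipelined_send()/expunge_all()), old scheme */
	msr->r_msg = NULL;              /* initialize pipelined ordering */
	wake_up_process(msr->r_tsk);
	smp_wmb();                      /* barrier (B) */
	msr->r_msg = msg;               /* or an ERR_PTR() error */

	/* receiver (do_msgrcv()), old scheme: spin until the waker is past
	 * wake_up_process(), so that our on-stack msg_receiver (msr_d)
	 * cannot be touched after we return and potentially exit */
	msg = (struct msg_msg *)msr_d.r_msg;
	while (msg == NULL) {
		cpu_relax();
		msg = (struct msg_msg *)msr_d.r_msg;
	}
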
Now, by delaying the wakeup, the receiver task should always see whatever r_msg
is set to by the waker, even if we get reordered with wake_q_add(), because the
actual wake_up_process() does not occur yet and hence the receiver is still
blocked while this is going on -- iow, we avoid entirely the need to explicitly
wait until pipelined_send/expunge_all are done. Similarly, "barrier (B)" simply
serves to pair with wake_up_q() such that we don't miss wakeups, but that's
always handled by the wake_q machinery anyway.
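
Concretely, with wake_q the two sides boil down to something like this
(simplified, untested sketch; names are from ipc/msg.c but the bodies are
heavily trimmed):

	/* waker (pipelined_send()/expunge_all()), queue lock held */
	wake_q_add(wake_q, msr->r_tsk);   /* only records the task for later */
	WRITE_ONCE(msr->r_msg, msg);      /* or an ERR_PTR() error; even if
					   * reordered with wake_q_add(), the
					   * receiver cannot run yet */

	/* caller, after dropping the queue lock */
	wake_up_q(&wake_q);               /* the actual wake_up_process() */

	/* receiver (do_msgrcv()), after coming back from schedule() */
	msg = READ_ONCE(msr_d.r_msg);
	if (msg != ERR_PTR(-EAGAIN))
		goto out_unlock1;         /* waker already published r_msg */
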
So if this is the case (which is how you had it in some previous version of this
patch), we can get rid of the barrier pairing diagram altogether as well. Manfred,
Peter, does all this make sense to you guys?

Thanks,
Davidlohr