Message-Id: <1234866453.4744.58.camel@laptop>
Date: Tue, 17 Feb 2009 11:27:33 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Nick Piggin <npiggin@...e.de>
Cc: Oleg Nesterov <oleg@...hat.com>,
Jens Axboe <jens.axboe@...cle.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Rusty Russell <rusty@...tcorp.com.au>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org
Subject: Re: Q: smp.c && barriers (Was: [PATCH 1/4] generic-smp: remove
single ipi fallback for smp_call_function_many())
On Tue, 2009-02-17 at 11:11 +0100, Nick Piggin wrote:
> But in that case, cpu0 should see list_empty and send another IPI,
> because our load of list_empty has moved before the unlock of the
> lock, so there can't be another item concurrently put on the list.
Suppose a first smp_call_function_single() is issued, targeting cpu1.

So cpu0 does:
	spin_lock(&dst->lock);
	ipi = list_empty(&dst->list);
	list_add_tail(&data->list, &dst->list);
	spin_unlock(&dst->lock);

	if (ipi) /* true */
		send_single_ipi(cpu);
then cpu1 does:
	while (!list_empty(&q->list))
and observes no entries, quits the ipi handler, and stuff is stuck.
cpu0, on any subsequent call, will observe a non-empty queue and will not
raise another ipi; cpu1 got the ipi, but observed no work and hence will
never remove the queued entry.
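For reference, the receiver path under discussion looks roughly like the
sketch below (reconstructed from memory, not the literal kernel/smp.c of
this kernel; the struct layout and the processing loop body are
approximated):

	#include <linux/list.h>
	#include <linux/percpu.h>
	#include <linux/spinlock.h>

	/* Per-cpu queue of pending single-call requests (sketch). */
	struct call_single_queue {
		struct list_head	list;
		spinlock_t		lock;
	};
	static DEFINE_PER_CPU(struct call_single_queue, call_single_queue);

	/*
	 * IPI handler running on cpu1 (sketch).  The hazard described
	 * above: nothing explicitly orders the lockless list_empty()
	 * load after cpu0's list_add_tail() store, so the handler can
	 * observe an empty list, return, and leave the queued entry
	 * behind.
	 */
	void generic_smp_call_function_single_interrupt(void)
	{
		struct call_single_queue *q = &__get_cpu_var(call_single_queue);
		LIST_HEAD(list);

		while (!list_empty(&q->list)) {		/* lockless peek */
			spin_lock(&q->lock);
			list_replace_init(&q->list, &list);
			spin_unlock(&q->lock);

			/* ... pop entries and run data->func(data->info) ... */
		}
	}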
> But hmm, why even bother with all this complexity? Why not just
> remove the outer loop completely? Do the lock and the list_replace_init
> unconditionally. It would turn tricky lockless code into simple locked
> code... we've already taken an interrupt anyway, so chances are pretty
> high that we have work here to do, right?
Well, that's a practical suggestion, and I agree.
It was just fun arguing with Oleg ;-)
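Concretely, that simplification would be something along the lines of the
sketch below (rough and untested; the wait/flags bookkeeping on each
call_single_data entry is omitted):

	/*
	 * Sketch of Nick's suggestion: no lockless list_empty() peek at
	 * all.  Unconditionally take the lock and splice the whole queue;
	 * the lock acquisition orders us against the sender's unlock, and
	 * if the IPI raced with an already-drained queue the spliced list
	 * is simply empty and we do nothing.
	 */
	void generic_smp_call_function_single_interrupt(void)
	{
		struct call_single_queue *q = &__get_cpu_var(call_single_queue);
		struct call_single_data *data, *next;
		LIST_HEAD(list);

		spin_lock(&q->lock);
		list_replace_init(&q->list, &list);
		spin_unlock(&q->lock);

		list_for_each_entry_safe(data, next, &list, list) {
			list_del(&data->list);
			data->func(data->info);
		}
	}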