Message-ID: <20141020152144.GF24595@windriver.com>
Date: Mon, 20 Oct 2014 11:21:44 -0400
From: Paul Gortmaker <paul.gortmaker@...driver.com>
To: Steven Rostedt <rostedt@...dmis.org>
CC: Peter Zijlstra <peterz@...radead.org>,
<linux-rt-users@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH 3/7] wait.[ch]: Introduce the simple waitqueue (swait) implementation

On 18/10/2014 (Sat 19:05) Steven Rostedt wrote:
> On Sat, 2014-10-18 at 23:34 +0200, Peter Zijlstra wrote:
>
> > Same comment as before, that is an unbounded loop in a non preemptible
> > section and therefore violates RT design principles.
> >
> > We actually did talk about ways of fixing that.
>
> Right, and we should slap Paul for not showing up for it ;-)
And miss turkey day? ;-)
>
> The decision we came up with was to splice the current list onto a
> local list variable, and then go into a loop, releasing the lock and
> grabbing it again each time around, popping one waiter off the list
> and doing the work of only one task at a time. This prevents doing
> large amounts of wake ups under a spinlock. The splice is required so
> that we only wake up those that are on the list when the wake up is
> called. This prevents waking up a task twice because it woke up,
> removed itself, and then added itself again. We must keep the
> semantics that a wake up only wakes up a task once.
OK, amusingly enough, when we were actively discussing this some time
ago, I'd played with something similar -- I'd created a shadow list and
then abstracted out the lock/unlock so that we could call a
synchronize_wait on the unlock operations. What I didn't do was try to
use the same lock for the shadow list and the main one, and lockdep
never let me live that down, so as tglx would say, I shoved it all in
the horror closet.
I'd like to hear more details on what you had in mind here, so I don't
go chasing down the wrong road. So the local list head gets all the
items (via list_cut or moves?) and then that local list is spliced onto
the (now temporarily empty) main list head? (presumably all under lock)
What would need to be done as an unwind at the end of processing the
local list head before it disappears from existence? Anything?
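
Just so we are talking about the same thing, here is a rough sketch of
how I'm picturing the splice-based wake-all path -- purely illustrative,
with made-up names (swait_head_t, struct swaiter, swake_up_all) rather
than anything from the actual patch:

  #include <linux/list.h>
  #include <linux/sched.h>
  #include <linux/spinlock.h>

  /* Illustrative types only, not the patch's actual definitions. */
  typedef struct swait_head {
          raw_spinlock_t          lock;
          struct list_head        task_list;
  } swait_head_t;

  struct swaiter {
          struct task_struct      *task;
          struct list_head        node;
  };

  static void swake_up_all(swait_head_t *q)
  {
          struct swaiter *w;
          unsigned long flags;
          LIST_HEAD(tmp);

          raw_spin_lock_irqsave(&q->lock, flags);
          /*
           * Detach everything queued right now.  A waiter that wakes,
           * removes itself, and re-adds itself lands back on
           * q->task_list, not on tmp, so this invocation can never
           * wake the same task twice.
           */
          list_splice_init(&q->task_list, &tmp);

          while (!list_empty(&tmp)) {
                  w = list_first_entry(&tmp, struct swaiter, node);
                  list_del_init(&w->node);
                  wake_up_state(w->task, TASK_NORMAL);
                  /*
                   * Drop and retake the lock between each wakeup so
                   * the time spent non-preemptible stays bounded, no
                   * matter how many waiters were queued.
                   */
                  raw_spin_unlock_irqrestore(&q->lock, flags);
                  raw_spin_lock_irqsave(&q->lock, flags);
          }
          raw_spin_unlock_irqrestore(&q->lock, flags);
  }

If that is roughly it, then tmp is already empty by the time it goes
out of scope, and I don't see anything obvious left to unwind -- but
maybe I'm missing something.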
>
> >
> > Also, I'm not entirely sure we want to do the cwait thing, it looks
> > painful.
>
> Yeah, I have to think about that some more too. I'm currently sitting
> in the airport waiting for the final leg of my flight. After 18 hours
> of travel, it is probably not too wise to review this work in my
> current state ;-)
Aligning with and paralleling the existing mainline wait code seemed
like the consensus ages ago when this was being discussed on IRC, but
if that has since changed, then I can adapt or abandon as required. I
long ago learned that the time spent on something has no correlation to
its fitness or probability of being ready for addition to mainline. :-)
Thanks,
Paul.
>
> -- Steve