Message-ID: <20090126214516.GA22142@elte.hu>
Date: Mon, 26 Jan 2009 22:45:16 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: oleg@...hat.com, a.p.zijlstra@...llo.nl, rusty@...tcorp.com.au,
travis@....com, mingo@...hat.com, davej@...hat.com,
cpufreq@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.

* Andrew Morton <akpm@...ux-foundation.org> wrote:
> On Mon, 26 Jan 2009 22:27:27 +0100
> Ingo Molnar <mingo@...e.hu> wrote:
>
> >
> > * Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > > > So if it's generic it ought to be implemented in a generic way - not a
> > > > "dont use from any codepath that has a lock held that might
> > > > occasionally also be held in a keventd worklet". (which is a totally
> > > > unmaintainable proposition and which would just cause repeat bugs
> > > > again and again.)
> > >
> > > That's different. The core fault here lies in the keventd workqueue
> > > handling code. If we're flushing work A then we shouldn't go and
> > > block behind unrelated work B.
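
(For illustration, a minimal sketch of the hazard being discussed here -
the lock, the worklet and the CPU number are made up, not taken from any
real code path:)

#include <linux/workqueue.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);		/* hypothetical lock */

/* some unrelated worklet that was queued on keventd earlier */
static void earlier_work_fn(struct work_struct *unused)
{
	mutex_lock(&example_lock);
	/* ... */
	mutex_unlock(&example_lock);
}

static long cross_cpu_fn(void *arg)
{
	return 0;
}

static void caller(void)
{
	mutex_lock(&example_lock);
	/*
	 * work_on_cpu() queues its worklet on keventd and flushes it.
	 * keventd is stuck in earlier_work_fn(), which waits for
	 * example_lock held right here: deadlock.
	 */
	work_on_cpu(0, cross_cpu_fn, NULL);
	mutex_unlock(&example_lock);
}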
> >
> > the blocking is inherent in the concept of "a queue of worklets
> > handled by a single thread".
> >
> > If a worklet is blocked then all other work performed by that thread
> > is blocked as well. So by waiting on a piece of work in the queue, we
> > wait for all prior work queued up there as well.
> >
> > The only way to decouple that and to make them independent (and hence
> > independently flushable) is to create more parallel flows of
> > execution: i.e. by creating another thread (another workqueue).
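
(A rough sketch of that direction - essentially what the patch in the
Subject line is about, though the names and the init path here are
approximate, not the exact patch:)

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/init.h>
#include <linux/errno.h>

/* a private queue, so a flush never waits behind unrelated keventd work */
static struct workqueue_struct *work_on_cpu_wq;

struct work_for_cpu {
	struct work_struct work;
	long (*fn)(void *);
	void *arg;
	long ret;
};

static void do_work_for_cpu(struct work_struct *w)
{
	struct work_for_cpu *wfc = container_of(w, struct work_for_cpu, work);

	wfc->ret = wfc->fn(wfc->arg);
}

long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
{
	struct work_for_cpu wfc = { .fn = fn, .arg = arg };

	INIT_WORK(&wfc.work, do_work_for_cpu);	/* on-stack work item */
	queue_work_on(cpu, work_on_cpu_wq, &wfc.work);
	flush_work(&wfc.work);

	return wfc.ret;
}

static int __init work_on_cpu_init(void)
{
	work_on_cpu_wq = create_singlethread_workqueue("work_on_cpu");
	return work_on_cpu_wq ? 0 : -ENOMEM;
}
core_initcall(work_on_cpu_init);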
> >
>
> Nope. As I said, the caller of flush_work() can detach the work item
> and run it directly.
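
(Sketching that idea - try_to_steal_pending() below is hypothetical; a
real version would have to take the workqueue's internal lock and only
succeed while the item is still queued and not yet running:)

#include <linux/types.h>
#include <linux/workqueue.h>

static bool try_to_steal_pending(struct work_struct *work);	/* hypothetical */

static void flush_work_or_run(struct work_struct *work)
{
	if (try_to_steal_pending(work)) {
		/* not started yet: run it in the caller's context */
		work->func(work);
	} else {
		/* already running (or finished): wait for it the usual way */
		flush_work(work);
	}
}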

That would change the concept of execution, but it would indeed be
interesting to try. It's outside the scope of late -rcs I guess, but
worthwhile nevertheless.
Ingo