Message-ID: <20080611160815.GA150@tv-sign.ru>
Date: Wed, 11 Jun 2008 20:08:15 +0400
From: Oleg Nesterov <oleg@...sign.ru>
To: Max Krasnyansky <maxk@...lcomm.com>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...e.hu,
Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Paul Jackson <pj@....com>, menage@...gle.com,
linux-kernel@...r.kernel.org, Mark Hounschell <dmarkh@....rr.com>
Subject: Re: workqueue cpu affinity
On 06/10, Max Krasnyansky wrote:
>
> Here is some background on this. Full cpu isolation requires some tweaks to the
> workqueue handling. Either the workqueue threads need to be moved (which is my
> current approach), or work needs to be redirected when it's submitted.
_IF_ we have to do this, I think it is much better to move cwq->thread.
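
Something like the helper below, say (uncompiled, and it pokes into
cpu_workqueue_struct which is private to kernel/workqueue.c, so it would
have to live there):

	/*
	 * Repin the per-cpu workqueue threads so that none of them runs
	 * on a cpu outside @allowed (e.g. an isolated one).  Sketch only,
	 * against the current wq->cpu_wq per-cpu layout.
	 */
	static void move_cwq_threads(struct workqueue_struct *wq,
				     const cpumask_t *allowed)
	{
		int cpu;

		for_each_online_cpu(cpu) {
			struct cpu_workqueue_struct *cwq =
					per_cpu_ptr(wq->cpu_wq, cpu);

			if (cwq->thread && !cpu_isset(cpu, *allowed))
				set_cpus_allowed_ptr(cwq->thread, allowed);
		}
	}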
> Peter Zijlstra wrote:
> > The advantage of creating a more flexible or fine-grained flush is that
> > large machine also profit from it.
> I agree, our current workqueue flush scheme is expensive because it has to
> schedule on each online cpu. So yes, improving flush makes sense in general.
Yes, it is easy to implement flush_work(struct work_struct *work) which
waits only for that work, so it can't hang unless that work was enqueued
on an isolated cpu.
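
It is the same barrier trick flush_cpu_workqueue() uses internally:
insert a barrier right after that work and sleep until the barrier runs.
Roughly (the list insertion under cwq->lock is omitted):

	/* runs after everything queued before it, including the flushed work */
	struct wq_barrier {
		struct work_struct	work;
		struct completion	done;
	};

	static void wq_barrier_func(struct work_struct *work)
	{
		struct wq_barrier *barr =
				container_of(work, struct wq_barrier, work);

		complete(&barr->done);
	}

flush_work() queues such a barrier directly behind the pending work on
its cwq and does wait_for_completion(&barr.done), so no other cpu is
involved at all.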
But in most cases it is enough to just do

	if (cancel_work_sync(work))
		work->func(work);
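
cancel_work_sync() returns nonzero iff the work was pending, so running
->func() by hand keeps the "it has run by the time we return" guarantee
without waiting on anybody else's queue. For example, in a driver's
teardown path (mydev and ->work are made-up names):

	static void mydev_shutdown(struct mydev *dev)
	{
		/* drain our own work only, don't flush the whole queue */
		if (cancel_work_sync(&dev->work))
			dev->work.func(&dev->work);
	}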
Or we can add flush_workqueue_cpus(struct workqueue_struct *wq, cpumask_t *cpu_map).
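
This is trivial on top of the current code, something like (uncompiled;
flush_cpu_workqueue() is the existing static helper in kernel/workqueue.c):

	void flush_workqueue_cpus(struct workqueue_struct *wq, cpumask_t *cpu_map)
	{
		int cpu;

		might_sleep();
		/* the same loop as flush_workqueue(), minus the cpus we skip */
		for_each_cpu_mask(cpu, *cpu_map)
			flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, cpu));
	}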
But I don't think we should change the behaviour of flush_workqueue().
> This will require a bit of surgery across the entire tree. There is a lot of
> code that calls flush_scheduled_work().
Almost all of them should be changed to use cancel_work_sync().
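
In most cases the conversion is mechanical. For a driver which only ever
queues its own work (dev->reset_work is just an example name):

	-	flush_scheduled_work();
	+	cancel_work_sync(&dev->reset_work);

flush_scheduled_work() waits for every work on keventd_wq on every cpu,
while cancel_work_sync() waits for (or cancels) only the caller's own
work, which is what such code almost always actually needs.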
Oleg.