Message-ID: <14561.1239873018@redhat.com>
Date: Thu, 16 Apr 2009 10:10:18 +0100
From: David Howells <dhowells@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: dhowells@...hat.com, Oleg Nesterov <oleg@...hat.com>,
Trond.Myklebust@...app.com, serue@...ibm.com, steved@...hat.com,
viro@...iv.linux.org.uk, Daire.Byrne@...mestore.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] slow_work_thread() should do the exclusive wait
Andrew Morton <akpm@...ux-foundation.org> wrote:
> The patch itself is a little worrisome. The wake-all semantics are
> very good at covering up little race bugs. And switching to wake-once
> is a great way of exposing hitherto-unsuspected races.
It's something I'm intending to test, once I get MN10300 working again (which
for some reason it isn't).
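
To show what the change amounts to, here's a rough sketch (not the patch
itself; slow_work_thread_wq is the waitqueue name as I remember it):

	DEFINE_WAIT(wait);

	/* before: a plain wait - wake_up() rouses every idle thread */
	prepare_to_wait(&slow_work_thread_wq, &wait, TASK_INTERRUPTIBLE);

	/* after: an exclusive wait - wake_up() stops at the first waiter */
	prepare_to_wait_exclusive(&slow_work_thread_wq, &wait,
				  TASK_INTERRUPTIBLE);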
> I wonder if slow_work_cull_timeout() should have some sort of barrier,
> so the write is suitably visible to the woken thread.
That's an interesting question: should wake_up() imply a barrier of any sort?
__wake_up() does impose one, since it takes the waitqueue spinlock, but I'm
not sure that's sufficient.
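
If we did want to make the ordering explicit, I'd imagine something along
these lines in slow_work_cull_timeout() (sketch only; the flag and waitqueue
names are from memory):

	static void slow_work_cull_timeout(unsigned long data)
	{
		slow_work_cull = true;
		/*
		 * Make the store to slow_work_cull visible before any thread
		 * we wake tests it.  __wake_up() takes the waitqueue spinlock,
		 * which should cover a thread that actually goes to sleep, but
		 * an explicit barrier also covers one that's already runnable.
		 */
		smp_mb();
		wake_up(&slow_work_thread_wq);
	}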
> Bearing in mind that the thread might _already_ have been woken by someone
> else?
If the thread is woken by someone else, there must be work for it to do, in
which case it wouldn't be culled anyway.
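
The thread-side logic makes that clearer, roughly (an illustrative sketch;
slow_work_available(), slow_work_execute() and slow_work_cull_thread() are
the helpers as I remember them):

	for (;;) {
		prepare_to_wait_exclusive(&slow_work_thread_wq, &wait,
					  TASK_INTERRUPTIBLE);
		if (!slow_work_available() && !slow_work_cull)
			schedule();
		finish_wait(&slow_work_thread_wq, &wait);

		if (slow_work_available()) {
			slow_work_execute();		/* woken for work */
			continue;
		}
		if (slow_work_cull && slow_work_cull_thread())
			break;				/* no work: cull this thread */
	}

The work check comes first, so a wakeup for a queued item never turns into a
cull.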
> off-topic: afaict the code will cull a maximum of one thread per five
> seconds. But the rate of thread _creation_ is, afaict, unbounded. Are
> there scenarios in which we can get a runaway thread count?
The maximum number of threads is limited by slow_work_max_threads, so the
thread count can't run away.
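
For reference, the bound on creation looks roughly like this (a sketch; the
counter and helper names are illustrative):

	/* only start another worker if we're still under the cap */
	if (atomic_inc_return(&slow_work_thread_count) <= slow_work_max_threads)
		start_new_slow_work_thread();
	else
		atomic_dec(&slow_work_thread_count);

so however fast items are queued, the pool can't exceed slow_work_max_threads.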
David
--