Message-ID: <20070109093302.GE589@in.ibm.com>
Date: Tue, 9 Jan 2007 15:03:02 +0530
From: Srivatsa Vaddagiri <vatsa@...ibm.com>
To: Andrew Morton <akpm@...l.org>
Cc: Oleg Nesterov <oleg@...sign.ru>,
David Howells <dhowells@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...l.org>,
linux-kernel@...r.kernel.org, Gautham Shenoy <ego@...ibm.com>
Subject: Re: [PATCH] flush_cpu_workqueue: don't flush an empty ->worklist
On Mon, Jan 08, 2007 at 09:26:56PM -0800, Andrew Morton wrote:
> That's not correct. freeze_processes() will freeze *all* processes.
I am not arguing about whether all processes will be frozen. However, my
question was about the point at which they get frozen. Let me ask the question
with an example:
The rtasd thread (arch/powerpc/platforms/pseries/rtasd.c) essentially executes
this simple loop:
static int rtasd(void *unused)
{
	int i = first_cpu(cpu_online_map);

	while (1) {
		set_cpus_allowed(current, cpumask_of_cpu(i)); /* can block */
		/* we should now be running on cpu i */
		do_something_on_a_cpu(i);
		/* sleep for some time */
		i = next_cpu(i, cpu_online_map);
		if (i >= NR_CPUS)
			i = first_cpu(cpu_online_map);
	}
}
This thread makes absolutely -no- calls to try_to_freeze() in its lifetime.
1. Does this mean that the thread can't be frozen? (let's say that the
thread's PF_NOFREEZE flag is not set)
AFAICS it can still be frozen by sending it a signal and having the signal
delivery code call try_to_freeze() ..
2. If the thread can be frozen at any arbitrary point of its execution, then I
don't see what prevents cpu_online_map from changing under the feet of the
rtasd thread.
In other words, we would have failed to provide the ability to *block*
hotplug operations from happening concurrently (see the sketch just below).
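Just to illustrate what "blocking" means here (this is only a sketch against
the simplified loop above, not the real rtasd code; do_something_on_a_cpu() is
the same stand-in as before), the per-cpu section would have to be bracketed
by something like the existing lock_cpu_hotplug()/unlock_cpu_hotplug() so that
cpu i can't go away in the middle:

	while (1) {
		lock_cpu_hotplug();	/* block CPU hotplug over this section */
		set_cpus_allowed(current, cpumask_of_cpu(i));
		/* cpu i cannot be offlined while the lock is held */
		do_something_on_a_cpu(i);
		i = next_cpu(i, cpu_online_map);
		if (i >= NR_CPUS)
			i = first_cpu(cpu_online_map);
		unlock_cpu_hotplug();
		/* sleep for some time */
	}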
> All of them are forced to enter refrigerator().
                  ^^^^^^
*forced*, yes ..that's the point of concern ..
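To make the contrast concrete: a thread that wants to be frozen only at a
point of its own choosing would normally look something like the below (just
a sketch; freezer_aware_thread() and do_work() are made-up names):

	static int freezer_aware_thread(void *unused)
	{
		while (!kthread_should_stop()) {
			try_to_freeze();	/* freeze here, at a known-safe point */
			do_work();
			/* sleep for some time */
		}
		return 0;
	}

rtasd has no such point, so if it gets frozen at all it is frozen wherever
the signal happens to be noticed.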
Warm regards,
vatsa