Message-Id: <200810241404.35932.rusty@rustcorp.com.au>
Date: Fri, 24 Oct 2008 14:04:35 +1100
From: Rusty Russell <rusty@...tcorp.com.au>
To: ego@...ibm.com
Cc: Oleg Nesterov <oleg@...sign.ru>, linux-kernel@...r.kernel.org,
travis@....com, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 1/7] work_on_cpu: helper for doing task on a CPU.
On Friday 24 October 2008 01:36:05 Gautham R Shenoy wrote:
> OK, how about doing the following? That will solve the problem
> of deadlock you pointed out in patch 6.
>
> get_online_cpus();
> if (likely(per_cpu(cpu_state, cpuid) == CPU_ONLINE)) {
>         schedule_work_on(cpu, &wfc.work);
>         flush_work(&wfc.work);
> } else if (per_cpu(cpu_state, cpuid) != CPU_DEAD) {
>         /*
>          * We're the CPU-Hotplug thread. Call the
>          * function synchronously so that we don't
>          * deadlock with any pending work-item blocked
>          * on get_online_cpus()
>          */
>         cpumask_t original_mask = current->cpus_allowed;
>         set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
>         wfc.ret = fn(arg);
>         set_cpus_allowed_ptr(current, &original_mask);
> }
Hi Gautham, Oleg,
Unfortunately that's exactly what I'm trying to get away from: another cpumask
on the stack :(
The cpu hotplug thread is just whoever wrote 0 to "online" in sysfs. And in
fact it already plays with its cpumask, which should be fixed too.
I think we should BUG_ON(per_cpu(cpu_state, cpuid) != CPU_DEAD) in the offline
case, to ensure work_on_cpu() is never used from the cpu hotplug path. Then we
use smp_call_function() for that hard intel_cacheinfo case. Finally, we fix
the cpu hotplug path to use schedule_work_on() itself rather than playing
games with cpumasks.
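
Roughly, I imagine work_on_cpu() ending up something like the below. Untested
sketch only: the work_for_cpu struct and do_work_for_cpu() helper are my
paraphrase of the patch's plumbing, and the cpu_state/CPU_DEAD test reuses the
per-cpu state from your snippet (which of course only exists on some arches):

/*
 * Sketch only; would need <linux/workqueue.h>, <linux/cpu.h>,
 * <linux/percpu.h> and <linux/errno.h>.
 */
struct work_for_cpu {
        struct work_struct work;
        long (*fn)(void *);
        void *arg;
        long ret;
};

static void do_work_for_cpu(struct work_struct *w)
{
        struct work_for_cpu *wfc = container_of(w, struct work_for_cpu, work);

        wfc->ret = wfc->fn(wfc->arg);
}

long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
{
        struct work_for_cpu wfc = { .fn = fn, .arg = arg };

        INIT_WORK(&wfc.work, do_work_for_cpu);

        get_online_cpus();
        if (likely(per_cpu(cpu_state, cpu) == CPU_ONLINE)) {
                /* Normal case: run fn on the target cpu via keventd. */
                schedule_work_on(cpu, &wfc.work);
                flush_work(&wfc.work);
        } else {
                /*
                 * Anything other than a fully dead cpu here means we're
                 * being called from the hotplug path itself: don't.
                 */
                BUG_ON(per_cpu(cpu_state, cpu) != CPU_DEAD);
                wfc.ret = -EINVAL;
        }
        put_online_cpus();

        return wfc.ret;
}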
If you agree, I'll spin the patches...
Thanks for the brainpower,
Rusty.