Message-Id: <1204193987.6243.29.camel@lappy>
Date: Thu, 28 Feb 2008 11:19:47 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Max Krasnyanskiy <maxk@...lcomm.com>
Cc: Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
Oleg Nesterov <oleg@...sign.ru>,
Steven Rostedt <rostedt@...dmis.org>,
Paul Jackson <pj@....com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC/PATCH 0/4] CPUSET driven CPU isolation
On Wed, 2008-02-27 at 15:38 -0800, Max Krasnyanskiy wrote:
> Peter Zijlstra wrote:
> > My vision on the direction we should take wrt cpu isolation.
> General impressions:
> - "cpu_system_map" is %100 identical to the "~cpu_isolated_map" as in my
> patches. It's updated from different place but functionally wise it's very
> much the same. I guess you did not like the 'isolated' name ;-). As I
> mentioned before I'm not hung up on the name so it's cool :).
Ah, but you miss that cpu_system_map doesn't do the one thing
cpu_isolated_map ever did: prevent sched_domains from forming on those
cpus.
Which is a major point.
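For reference, this is roughly what the 2.6.24-era kernel/sched.c does
with cpu_isolated_map (abbreviated sketch, not a verbatim quote):

static int arch_init_sched_domains(const cpumask_t *cpu_map)
{
	cpumask_t cpu_default_map;

	/*
	 * Build scheduler domains only over cpu_map minus the isolated
	 * CPUs, so load balancing never touches an isolated CPU.
	 */
	cpus_andnot(cpu_default_map, *cpu_map, cpu_isolated_map);

	return build_sched_domains(&cpu_default_map);
}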
> - Updating cpu_system_map from cpusets
> There are a couple of things that I do not like about this approach:
> 1. We lose the ability to isolate CPUs at boot, which means slower boot
> times for me (i.e. before I can start my apps I need to create a cpuset,
> etc.). Not a big deal, I can live with it.
I'm sure those few lines in rc.local won't grow your boot time by a
significant amount.
That said, we should look into a replacement for the boot-time parameter
(if only to decide not to do it), because of backward compatibility.
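For context, the boot-time parameter in question is isolcpus=; its
handler in kernel/sched.c looks roughly like this (2.6.24-era,
abbreviated), and is what any replacement would be judged against:

static int __init isolated_cpu_setup(char *str)
{
	int ints[NR_CPUS], i;

	/* Parse "isolcpus=1,2,3" into an array of CPU numbers. */
	str = get_options(str, ARRAY_SIZE(ints), ints);
	cpus_clear(cpu_isolated_map);
	for (i = 1; i <= ints[0]; i++)
		if (ints[i] < NR_CPUS)
			cpu_set(ints[i], cpu_isolated_map);
	return 1;
}
__setup("isolcpus=", isolated_cpu_setup);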
> 2. We now need another notification mechanism to propagate updates to the
> cpu_system_map. That by itself is not a big deal. The big deal is that now
> we need to basically audit the kernel and make sure that everything
> affected has a proper notifier and reacts to the mask changes.
> For example, your current patch does not move the timers, and I do not
> think it makes sense to go and add a notifier for the timers. I think the
> better approach is to use CPU hotplug here, i.e. enforce the rule that
> cpu_system_map is updated only while the CPU is offline.
> By bringing the CPU down first we get a lot of things for free. All the
> kernel threads, timers, softirqs, HW irqs, workqueues, etc. are properly
> terminated/moved/canceled. This gives us a very clean state when we bring
> the CPU back online with the "system" bit cleared (or the "isolated" bit
> set, as in my patches). I do not see a good reason for reimplementing that
> functionality via system_map notifiers.
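Concretely, I read the proposed rule as something like the sketch below;
cpu_system_map is from the patches under discussion, and the helper
itself is hypothetical:

static int update_cpu_system_bit(unsigned int cpu, int system)
{
	int err;

	/*
	 * Taking the CPU offline migrates/cancels its timers, softirqs,
	 * workqueues, kernel threads and IRQs via the existing hotplug
	 * machinery.
	 */
	err = cpu_down(cpu);
	if (err)
		return err;

	if (system)
		cpu_set(cpu, cpu_system_map);
	else
		cpu_clear(cpu, cpu_system_map);

	/* Bring the CPU back online in its new role. */
	return cpu_up(cpu);
}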
I'm not convinced cpu hotplug notifiers are the right thing here. Sure,
we could easily iterate the old and new system maps and call the matching
cpu hotplug notifiers, but they seem overly complex to me.
The audit would be a good idea anyway. If we do indeed end up with a 1:1
mapping of whatever cpu hotplug does, then well, perhaps you're right.
> I'll comment more on the individual patches.
>
> > Next on the list would be figuring out a nice solution to the workqueue
> > flush issue.
> Do not forget "stop machine", or more specifically, module loading/unloading.
No, I mean the full stop machine thing; there are more interesting users
than module loading. But I'm not too interested in solving this particular
problem atm, I have too much on my plate as it is.
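For context, what makes stop machine interesting here is that the
current stop_machine_run() halts every online CPU while fn runs,
isolated or not. A minimal, purely illustrative caller:

#include <linux/stop_machine.h>

static int do_something_atomically(void *data)
{
	/* Runs while every other online CPU spins with IRQs disabled. */
	return 0;
}

static int example(void)
{
	/* NR_CPUS == "run fn on any CPU"; all others are stopped. */
	return stop_machine_run(do_something_atomically, NULL, NR_CPUS);
}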