Message-ID: <877id5v1fg.fsf@basil.nowhere.org>
Date: Wed, 04 Jun 2008 14:18:11 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Max Krasnyansky <maxk@...lcomm.com>
Cc: Paul Jackson <pj@....com>, ioe-lkml@...eria.de, sivanich@....com,
a.p.zijlstra@...llo.nl, linux-kernel@...r.kernel.org,
kernel@...ivas.org, dfults@....com, devik@....cz, dino@...ibm.com,
emmanuel.pacaud@...v-poitiers.fr, deweerdt@...e.fr, mingo@...e.hu,
colpatch@...ibm.com, nickpiggin@...oo.com.au, rostedt@...dmis.org,
oleg@...sign.ru, paulmck@...ibm.com, menage@...gle.com,
rddunlap@...l.org, suresh.b.siddha@...el.com, tglx@...utronix.de
Subject: Re: Inquiry: Should we remove the "isolcpus=" kernel boot option? (may have realtime uses)

Max Krasnyansky <maxk@...lcomm.com> writes:
> We've seen exactly two replies with usage examples. Dimitri's case is
> legit, but it can be handled much better (as in, it avoids not only
> timers but any other kernel work) with cpu hotplug and cpusets. Ingo's
> case is bogus because it does not actually do what he needs. There is a
> much better way to do exactly what he needs, which involves only cpu
> hotplug and has nothing to do with the scheduler and such.
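
A rough userspace sketch of what the cpu hotplug plus cpusets approach
looks like, assuming the cpuset filesystem is mounted at /dev/cpuset;
CPU 3, node 1 and the set name "isolated" are made up for illustration:

/* Sketch only: take a CPU through a hotplug cycle and carve out an
 * exclusive cpuset for it.  Assumes the cpuset filesystem is mounted
 * at /dev/cpuset; CPU/node numbers are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fprintf(f, "%s", val) < 0) {
		perror(path);
		exit(1);
	}
	fclose(f);
}

int main(void)
{
	char pid[16];

	/* Hotplug cycle: offlining CPU 3 migrates timers and other
	 * per-cpu work off it; bringing it back gives a quiesced CPU. */
	write_str("/sys/devices/system/cpu/cpu3/online", "0");
	write_str("/sys/devices/system/cpu/cpu3/online", "1");

	/* Exclusive cpuset holding only CPU 3 and memory node 1. */
	mkdir("/dev/cpuset/isolated", 0755);
	write_str("/dev/cpuset/isolated/cpus", "3");
	write_str("/dev/cpuset/isolated/mems", "1");
	write_str("/dev/cpuset/isolated/cpu_exclusive", "1");
	write_str("/dev/cpuset/isolated/mem_exclusive", "1");

	/* Move ourselves into the isolated set. */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_str("/dev/cpuset/isolated/tasks", pid);
	return 0;
}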

One example I've seen in the past is someone who wanted to isolate a node
completely from any memory traffic, to avoid performance disturbance of
memory-intensive workloads.

Right now, the system boot could put pages from some daemon on that node
before any cpusets are set up, and there's no easy way to get them off
again (short of running migratepages for all running pids, but that's
pretty ugly, won't cover kernel-level allocations, and can also mess up
locality).
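
Roughly what that workaround looks like, sketched with libnuma's
migrate_pages() wrapper; the two-node layout and node numbers are
assumptions, and as noted it leaves kernel allocations untouched:

/* Sketch of "migratepages for all running pids": push every task's
 * pages off node 1 onto node 0.  Node numbers are made up; kernel
 * allocations on the node are not affected by this. */
#include <ctype.h>
#include <dirent.h>
#include <numaif.h>		/* migrate_pages(), link with -lnuma */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned long from = 1UL << 1;	/* nodemask: node 1, to be emptied */
	unsigned long to   = 1UL << 0;	/* nodemask: node 0, destination   */
	struct dirent *d;
	DIR *proc = opendir("/proc");

	if (!proc) {
		perror("/proc");
		return 1;
	}
	while ((d = readdir(proc)) != NULL) {
		if (!isdigit((unsigned char)d->d_name[0]))
			continue;	/* skip non-pid entries */
		if (migrate_pages(atoi(d->d_name), 8 * sizeof(from),
				  &from, &to) < 0)
			perror(d->d_name);	/* exited or not permitted */
	}
	closedir(proc);
	return 0;
}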

Strictly speaking, that use case wants something more like an "isolnodes",
but given that there tends to be enough free memory at boot, "isolcpus"
tended to work.

-Andi