Message-ID: <0000013def390189-5508c590-e421-49e7-9eab-a3924489091a-000000@email.amazonses.com>
Date: Tue, 9 Apr 2013 14:35:23 +0000
From: Christoph Lameter <cl@...ux.com>
To: Paul Gortmaker <paul.gortmaker@...driver.com>
cc: Ingo Molnar <mingo@...nel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Chris Metcalf <cmetcalf@...era.com>,
Geoff Levand <geoff@...radead.org>,
Gilad Ben Yossef <gilad@...yossef.com>,
Hakan Akkan <hakanakkan@...il.com>,
Kevin Hilman <khilman@...aro.org>,
Li Zhong <zhong@...ux.vnet.ibm.com>,
Namhyung Kim <namhyung.kim@....com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 4/4] nohz: New option to force all CPUs in full dynticks range
On Tue, 9 Apr 2013, Paul Gortmaker wrote:
> > 2. Avoid the setting of cpus entirely? If full nohz mode is desired
> > then pick one cpu (f.e. the first one or the one that is used for xtime
> > updates) and then make all other cpus nohz. Set the affinity mask for the
> > rcuoXXX threads to that cpu.
>
> I can imagine people with multi socket systems wanting to have
> a system partitioned with one "normal" core per physical socket,
> for timekeeping, RCU threads, etc, but #2 would prevent that.
That is a good point. The kernel needs to run per-node threads for I/O and
reclaim, and those could have their home there. I just had some bad
experience with latency introduced by the block layer redirecting I/O to
the first processor of a node. Maybe we can adopt that convention and
relocate kernel processing as much as possible to the first processor of
each node?
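
Roughly, I am thinking of something like this untested sketch; the helper
names housekeeping_first_cpu_mask() and relocate_kthread() are made up for
illustration and are not anything in this series:

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/topology.h>
#include <linux/sched.h>

/*
 * Untested sketch: collect the first online CPU of each node into a
 * "housekeeping" mask.  Helper names are made up for illustration.
 */
static void housekeeping_first_cpu_mask(struct cpumask *mask)
{
	int node;

	cpumask_clear(mask);
	for_each_online_node(node) {
		/* The first online CPU of the node is its "normal" CPU. */
		int cpu = cpumask_first(cpumask_of_node(node));

		if (cpu < nr_cpu_ids)
			cpumask_set_cpu(cpu, mask);
	}
}

/*
 * Restrict a kernel thread (e.g. one of the rcuoXXX threads) to the
 * housekeeping CPUs computed above.
 */
static int relocate_kthread(struct task_struct *tsk, const struct cpumask *mask)
{
	return set_cpus_allowed_ptr(tsk, mask);
}

The per-node kthreads could then be bound to that mask at creation time,
leaving the remaining CPUs free for the full dynticks set.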