Message-ID: <alpine.DEB.1.10.0908260943090.9924@gentwo.org>
Date: Wed, 26 Aug 2009 09:47:15 -0400 (EDT)
From: Christoph Lameter <cl@...ux-foundation.org>
To: raz ben yehuda <raziebe@...il.com>
cc: Peter Zijlstra <peterz@...radead.org>,
Chris Friesen <cfriesen@...tel.com>,
Mike Galbraith <efault@....de>, riel@...hat.com, mingo@...e.hu,
Andrew Morton <akpm@...ux-foundation.org>,
wiseman@...s.biu.ac.il, lkml <linux-kernel@...r.kernel.org>,
linux-rt-users@...r.kernel.org
Subject: Re: RFC: THE OFFLINE SCHEDULER
On Wed, 26 Aug 2009, raz ben yehuda wrote:
> How is the kernel going to handle 32-processor machines? These
> numbers are no longer science fiction.
The kernel is already running on 4096-processor machines. Don't worry about
that.
> What I am suggesting is merely a different approach to handling
> multi-core systems. Instead of thinking in processes, threads, and so
> on, I am thinking in services. Why not take a processor and define this
> processor to do just firewalling? Encryption? Routing? Transmission?
> Video processing... and so on...
I think that is a valuable avenue to explore. What we do so far is
treat each processor equally. Dedicating a processor has benefits in
terms of cache hotness and limits OS noise.
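
As a minimal sketch (not something proposed in this thread), dedicating a
CPU to a service thread can already be approximated from userspace with
sched_setaffinity(); the CPU number and the service loop below are
placeholder examples:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(3, &set);       /* run this task only on CPU 3 (example) */
        if (sched_setaffinity(0, sizeof(set), &set)) {
                perror("sched_setaffinity");
                exit(1);
        }
        /* ... dedicated service loop (firewalling, encryption, ...) ... */
        return 0;
}

This only keeps the task off the other CPUs; as noted below, kernel
threads, interrupts, and timers still land on the dedicated CPU.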
Most of the large processor configurations already partition the system
using cpusets in order to limit the disturbance from OS processing. A set
of cpus is used for OS activities, and system daemons are put into that
set. But what can be done is limited because the OS threads, as well as
interrupt and timer processing etc., cannot currently be moved. The ideas
that you are proposing are particularly useful for applications that
require low latencies and cannot tolerate OS noise easily (InfiniBand
MPI-based jobs, for example).
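
To make the cpuset partitioning above concrete, here is a rough userspace
sketch assuming the legacy cpuset filesystem is mounted at /dev/cpuset;
the "system" set name and the CPU/memory ranges are examples only:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0) {
                perror(path);
                exit(1);
        }
        close(fd);
}

int main(void)
{
        char pid[16];

        /* Carve out a cpuset for OS activities and system daemons. */
        mkdir("/dev/cpuset/system", 0755);
        write_file("/dev/cpuset/system/cpus", "0-1"); /* CPUs 0-1 for OS */
        write_file("/dev/cpuset/system/mems", "0");   /* memory node 0 */

        /* Move the current task into the set; an init script would do
         * the same for each daemon's pid. */
        snprintf(pid, sizeof(pid), "%d", getpid());
        write_file("/dev/cpuset/system/tasks", pid);
        return 0;
}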