Message-ID: <Pine.LNX.4.64.0702201514470.17866@schroedinger.engr.sgi.com>
Date: Tue, 20 Feb 2007 15:19:19 -0800 (PST)
From: Christoph Lameter <clameter@....com>
To: Max Krasnyansky <maxk@...lcomm.com>
cc: linux-kernel@...r.kernel.org
Subject: Re: SLAB cache reaper on isolated cpus
On Tue, 20 Feb 2007, Max Krasnyansky wrote:
> Suppose I need to isolate a CPU. We already support this at the scheduler
> and irq levels (irq affinity). But I want to go a bit further and avoid
> doing kernel work on isolated cpus as much as possible. For example I
> would not want to schedule work queues and the like on them. Currently
> there are just a few users of schedule_delayed_work_on(): cpufreq
> (don't care for isolation purposes), oprofile (same here) and slab. For
> the slab it would be nice to run the reaper on some other CPU. But you're
> saying that the locking depends on CPU pinning. Is there any other option
> besides disabling cache reap? Is there a way, for example, to constrain
> the slabs on CPU X so that they do not exceed N megs?
There is no way to constrain the amount of slab work. In order to make the
above work we would have to disable the per cpu caches for a certain cpu.
Then there would be no need to run the cache reaper at all.
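The reaper is tied to its cpu by the way it is queued. Roughly, as a sketch
with made-up names (this is not the actual mm/slab.c code), per cpu delayed
work pinned with schedule_delayed_work_on() looks like this:

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/cpu.h>
#include <linux/smp.h>

#define DEMO_REAP_INTERVAL	(2 * HZ)

static DEFINE_PER_CPU(struct delayed_work, demo_reap_work);

/* Runs on the cpu it was queued on and rearms itself there,
 * the way cache_reap() rearms itself every few ticks. */
static void demo_reap(struct work_struct *w)
{
	int cpu = smp_processor_id();

	/* a real reaper would drain this cpu's slab arrays here */

	schedule_delayed_work_on(cpu, &per_cpu(demo_reap_work, cpu),
				 DEMO_REAP_INTERVAL);
}

static int __init demo_init(void)
{
	int cpu;

	/* One self-rearming work item per online cpu, pinned to that cpu. */
	for_each_online_cpu(cpu) {
		INIT_DELAYED_WORK(&per_cpu(demo_reap_work, cpu), demo_reap);
		schedule_delayed_work_on(cpu, &per_cpu(demo_reap_work, cpu),
					 DEMO_REAP_INTERVAL);
	}
	return 0;
}

static void __exit demo_exit(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		cancel_delayed_work_sync(&per_cpu(demo_reap_work, cpu));
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Since each item rearms itself on its own cpu, keeping an isolated cpu quiet
would come down to never queueing the item there in the first place.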
To some extent such functionality already exists. F.e. kmalloc_node()
already bypasses the per cpu caches (most of the time), so kmalloc_node
has to take a spinlock on a shared cacheline on each invocation. kmalloc()
only touches per cpu data during regular operation. Thus kmalloc() is much
faster than kmalloc_node(), and the cachelines kmalloc() touches can stay
in the per cpu cache.
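In code the difference is just which call you reach for. A hedged sketch
(the function and names are illustrative, not taken from any kernel file):

#include <linux/slab.h>
#include <linux/gfp.h>

/* Illustrative only: contrasts the two allocation paths described above. */
static void *demo_alloc(size_t size, int node)
{
	void *fast, *placed;

	/* Usually served from this cpu's array cache: per cpu data only,
	 * no shared cacheline touched on the fast path. */
	fast = kmalloc(size, GFP_KERNEL);

	/* Node-targeted: with the 2.6-era SLAB allocator this mostly
	 * bypasses the per cpu cache and takes a spinlock on a shared
	 * cacheline, which is why it is noticeably slower. */
	placed = kmalloc_node(size, GFP_KERNEL, node);

	kfree(fast);
	return placed;
}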
If we could disable all per cpu caches for certain cpus then you could
make this work. All slab OS interference would be off the processor.