Message-ID: <20071004195323.464f1c99@laptopd505.fenrus.org>
Date: Thu, 4 Oct 2007 19:53:23 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: Christoph Lameter <clameter@....com>
Cc: Matthew Wilcox <matthew@....cx>,
David Miller <davem@...emloft.net>, willy@...ux.intel.com,
nickpiggin@...oo.com.au, hch@....de, mel@...net.ie,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
dgc@....com, jens.axboe@...cle.com, suresh.b.siddha@...el.com
Subject: Re: SLUB performance regression vs SLAB
On Thu, 4 Oct 2007 19:43:58 -0700 (PDT)
Christoph Lameter <clameter@....com> wrote:
> So there could still be page struct contention left if multiple
> processors frequently and simultaneously free to the same slab and
> that slab is not the per cpu slab of a cpu. That could be addressed
> by optimizing the object free handling further to not touch the page
> struct even if we miss the per cpu slab.
>
> That get_partial* is so far up in the profile indicates contention on
> the list lock; that should be addressable either by increasing the
> slab size or by changing the object free handling to batch frees in
> some form.
>
> This is an SMP system, right? 2 cores with 4 cpus each? The main loop
> is always hitting on the same slabs? Which slabs would this be? Am I
> right in thinking that one process allocates objects and then lets
> multiple other processors do work and then the allocated object is
> freed from a cpu that did not allocate the object? If neighboring
> objects in one slab are allocated on one cpu and then are almost
> simultaneously freed from a set of different cpus then this may
> explain the situation.
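The fast path / slow path split being described works roughly like the
sketch below. This is a simplified illustration with made-up structure
and function names, not the actual mm/slub.c code:

#include <linux/spinlock.h>

/*
 * Hypothetical sketch of a SLUB-style free path.  Only the slow
 * path has to write the shared page struct; that write is the
 * source of the contention discussed above.
 */
struct slab_page {
        spinlock_t lock;        /* taken for frees to a non-active slab */
        void *freelist;         /* chain of free objects in this slab */
        int inuse;              /* objects still allocated */
};

struct cpu_slab {
        struct slab_page *page; /* this CPU's active ("per cpu") slab */
        void *freelist;         /* CPU-local freelist, no locking needed */
};

static void sketch_free(struct cpu_slab *c, struct slab_page *page,
                        void *object)
{
        if (page == c->page) {
                /*
                 * Fast path: the object belongs to the per cpu slab,
                 * so push it onto the CPU-local freelist without
                 * touching the shared page struct at all.
                 */
                *(void **)object = c->freelist;
                c->freelist = object;
                return;
        }

        /*
         * Slow path: the object belongs to some other slab.  Every
         * cpu that frees into this slab dirties the same page struct
         * cache line under the same lock; batching such frees, or
         * making slabs larger so more frees hit the fast path, are
         * the two remedies suggested above.
         */
        spin_lock(&page->lock);
        *(void **)object = page->freelist;
        page->freelist = object;
        page->inuse--;
        spin_unlock(&page->lock);
}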
One of the characteristics of the application in use is the following:
all cores submit IO (which means they allocate various scsi and block
structures on all cpus), but only one cpu will free them (the one the
IRQ is bound to). So it's allocate-on-one, free-on-another at a high
rate.

That is assuming this is the IO slab; that's a bit of an assumption
obviously (it's one of the slabs that are hot, but it's a complex
workload, so there could be others).
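To make that pattern concrete, here is a userspace toy that mimics the
shape of the workload. It is entirely hypothetical (the names, sizes
and counts are invented): several submitter threads allocate requests,
and a single completion thread, standing in for the cpu the IRQ is
bound to, does every free:

#include <pthread.h>
#include <stdlib.h>

/*
 * Toy model of the workload shape: NSUBMIT submitter threads
 * allocate request structures, one completion thread frees all
 * of them.  Purely illustrative; none of this is from the real
 * workload.
 */
#define QLEN    1024
#define NSUBMIT 4
#define NREQS   100000

static void *queue[QLEN];
static int head, tail;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void *submitter(void *arg)
{
        for (int i = 0; i < NREQS; i++) {
                void *req = malloc(256);  /* a "scsi/block structure" */
                int queued = 0;

                while (!queued) {         /* spin until there is room */
                        pthread_mutex_lock(&qlock);
                        if ((head + 1) % QLEN != tail) {
                                queue[head] = req;
                                head = (head + 1) % QLEN;
                                queued = 1;
                        }
                        pthread_mutex_unlock(&qlock);
                }
        }
        return NULL;
}

static void *completer(void *arg)
{
        for (;;) {
                void *req = NULL;

                pthread_mutex_lock(&qlock);
                if (tail != head) {
                        req = queue[tail];
                        tail = (tail + 1) % QLEN;
                }
                pthread_mutex_unlock(&qlock);
                free(req);  /* every free happens on this one thread */
        }
        return NULL;
}

int main(void)
{
        pthread_t sub[NSUBMIT], comp;

        pthread_create(&comp, NULL, completer, NULL);
        for (int i = 0; i < NSUBMIT; i++)
                pthread_create(&sub[i], NULL, submitter, NULL);
        for (int i = 0; i < NSUBMIT; i++)
                pthread_join(sub[i], NULL);
        return 0;  /* exiting main also ends the completer */
}

Under SLUB, nearly every free on the completion thread misses its per
cpu slab, so each one takes the slow path sketched earlier.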