Message-ID: <1323845054.2846.18.camel@edumazet-laptop>
Date: Wed, 14 Dec 2011 07:44:14 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: "Alex,Shi" <alex.shi@...el.com>
Cc: David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
"penberg@...nel.org" <penberg@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: RE: [PATCH 1/3] slub: set a criteria for slub node partial adding

On Wednesday, 14 December 2011 at 14:06 +0800, Alex,Shi wrote:
> On Wed, 2011-12-14 at 10:36 +0800, David Rientjes wrote:
> > On Tue, 13 Dec 2011, David Rientjes wrote:
> >
> > > > > {
> > > > > n->nr_partial++;
> > > > > - if (tail == DEACTIVATE_TO_TAIL)
> > > > > - list_add_tail(&page->lru, &n->partial);
> > > > > - else
> > > > > - list_add(&page->lru, &n->partial);
> > > > > + list_add_tail(&page->lru, &n->partial);
> > > > > }
> > > > >
> >
> > 2 machines (one netserver, one netperf) both with 16 cores, 64GB memory
> > with netperf-2.4.5 comparing Linus' -git with and without this patch:
> >
> > threads SLUB SLUB+patch
> > 16 116614 117213 (+0.5%)
> > 32 216436 215065 (-0.6%)
> > 48 299991 299399 (-0.2%)
> > 64 373753 374617 (+0.2%)
> > 80 435688 435765 (UNCH)
> > 96 494630 496590 (+0.4%)
> > 112 546766 546259 (-0.1%)
> >
> > This suggests the difference is within the noise, so this patch neither
> > helps nor hurts netperf on my setup, as expected.
>
> Thanks for the data. Real netperf traffic hardly puts enough pressure
> on SLUB; as I mentioned before, I also found no real performance change
> in my loopback netperf testing.
>
> I retested hackbench again. The roughly 1% performance increase is
> still there on my 2-socket SNB/WSM and 4-socket NHM machines, and there
> is no performance drop on the other machines.
>
> Christoph, what comments would you like to offer on these results or on
> this code change?
I believe a far more aggressive mechanism is needed to help these
workloads.

Please note that the COLD/HOT page concept is not very well used in the
kernel, because it's not really obvious that some decisions are always
good (or maybe this is just not well known).

We should try to batch things a bit, instead of doing a very small unit
of work in the slow path. We now have a very fast fast path, but an
inefficient slow path.

SLAB has a little cache per cpu; we could add one to SLUB for freed
objects that do not belong to the current slab. This could avoid all
this activate/deactivate overhead.
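
To make that concrete, here is a minimal sketch of what such a per-cpu
free cache might look like. Everything here is hypothetical (FREE_BATCH,
struct slub_free_cache, cache_remote_free(), flush_free_cache()); only
__slab_free() is the existing SLUB slow-path entry, and a real version
would need one cache per kmem_cache, like SLAB's struct array_cache:

/*
 * Illustrative sketch only -- none of these names exist in SLUB.
 * Idea: instead of entering the slow path for every object freed to
 * a slab that is not the cpu's active slab, stash the object in a
 * small per-cpu array and flush the array in one batch.
 */
#define FREE_BATCH 16			/* assumed batch size */

struct slub_free_cache {
	unsigned int nr;
	void *objects[FREE_BATCH];
};

/* Shared across caches only for brevity; a real implementation would
 * embed one of these per kmem_cache and flush on cpu hotplug too. */
static DEFINE_PER_CPU(struct slub_free_cache, slub_free_cache);

static void flush_free_cache(struct kmem_cache *s)
{
	struct slub_free_cache *fc = this_cpu_ptr(&slub_free_cache);
	unsigned int i;

	/* One pass over the batch amortizes the list_lock traffic and
	 * the activate/deactivate work across FREE_BATCH objects. */
	for (i = 0; i < fc->nr; i++)
		__slab_free(s, virt_to_head_page(fc->objects[i]),
			    fc->objects[i], _RET_IP_);
	fc->nr = 0;
}

/* Would be called from the free path, instead of __slab_free()
 * directly, when the object does not belong to the current slab. */
static void cache_remote_free(struct kmem_cache *s, void *object)
{
	struct slub_free_cache *fc;
	unsigned long flags;

	local_irq_save(flags);
	fc = this_cpu_ptr(&slub_free_cache);
	fc->objects[fc->nr++] = object;
	if (fc->nr == FREE_BATCH)
		flush_free_cache(s);
	local_irq_restore(flags);
}

The batch size trades memory held in flight against lock traffic, but
the point is that the slow path then runs once per FREE_BATCH objects
instead of once per freed object.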