Message-ID: <1323845812.16790.8307.camel@debian>
Date: Wed, 14 Dec 2011 14:56:52 +0800
From: "Alex,Shi" <alex.shi@...el.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
"penberg@...nel.org" <penberg@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: RE: [PATCH 1/3] slub: set a criteria for slub node partial adding
> > Thanks for the data. Real netperf hardly puts enough pressure on SLUB,
> > and as I mentioned before, I also saw no real performance change in my
> > loopback netperf testing.
> >
> > I retested hackbench: the roughly 1% performance increase still shows
> > up on my 2-socket SNB/WSM and 4-socket NHM machines, with no
> > performance drop on the others.
> >
> > Christoph, what comments would you like to offer on these results or
> > on this code change?
>
> I believe a far more aggressive mechanism is needed to help these
> workloads.
>
> Please note that the COLD/HOT page concept is not widely used in the
> kernel, because it is not really obvious that such decisions are always
> good (or maybe this is just not well known).
Hopefully Christoph knows everything about SLUB. :)
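
For reference, here is a rough userspace sketch of what the hot/cold
decision amounts to (all identifiers are invented for illustration; this
is not actual SLUB code): a page whose objects are likely still
cache-warm is queued at the head of the partial list so it is reused
first, while a cold page is queued at the tail.

/* Rough userspace sketch of hot/cold partial-list placement.
 * All identifiers (slab_page, partial, add_partial, ...) are
 * invented for illustration; this is not actual SLUB code. */
#include <stdio.h>

struct slab_page {
        struct slab_page *prev, *next;
        int cpu_hot;    /* just came off a cpu, objects likely cache-warm */
        int id;
};

static struct slab_page partial = { &partial, &partial, 0, 0 };

static void list_add_head(struct slab_page *p)
{
        p->next = partial.next;
        p->prev = &partial;
        partial.next->prev = p;
        partial.next = p;
}

static void list_add_tail(struct slab_page *p)
{
        p->prev = partial.prev;
        p->next = &partial;
        partial.prev->next = p;
        partial.prev = p;
}

/* A cache-hot page goes to the head so its objects are reallocated
 * first; a cold page goes to the tail, where it is also the natural
 * candidate for eventual release back to the page allocator. */
static void add_partial(struct slab_page *p)
{
        if (p->cpu_hot)
                list_add_head(p);
        else
                list_add_tail(p);
}

int main(void)
{
        struct slab_page hot = { 0, 0, 1, 1 }, cold = { 0, 0, 0, 2 };

        add_partial(&cold);
        add_partial(&hot);
        for (struct slab_page *p = partial.next; p != &partial; p = p->next)
                printf("page %d (%s)\n", p->id, p->cpu_hot ? "hot" : "cold");
        return 0;
}
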
>
> We should try to batch things a bit, instead of doing a very small unit
> of work in the slow path.
>
> We now have a very fast fastpath, but an inefficient slow path.
>
> SLAB has a little per-cpu cache; we could add one to SLUB for freed
> objects that do not belong to the current slab. This could avoid all
> this activate/deactivate overhead.
Maybe worth trying, or maybe Christoph has already studied this?
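
For concreteness, here is a rough userspace sketch of the idea (all
identifiers are invented for illustration; this is not actual SLUB or
SLAB code): freed objects that do not belong to the current slab are
parked in a small per-cpu array and only pushed back to their home slab
in batches, so the slow path runs once per batch instead of once per
object. SLAB tunes the analogous trade-off between cached memory and
slow-path frequency with its array_cache limit/batchcount.

/* Rough userspace sketch of a small per-cpu cache for "foreign"
 * frees, in the spirit of SLAB's array_cache.  All identifiers
 * (free_cache, cached_free, remote_free, ...) are invented for
 * illustration; this is not actual SLUB code. */
#include <stdlib.h>

#define CACHE_SIZE 16   /* objects buffered before a batched flush */

struct free_cache {
        void *objs[CACHE_SIZE];
        int nr;
};

/* one of these per cpu; a single instance stands in for that here */
static struct free_cache cache;

/* stand-in for the slow path that returns an object to its home
 * slab; in the real thing this is where the remote locking lives */
static void remote_free(void *obj)
{
        free(obj);
}

static void flush_cache(struct free_cache *fc)
{
        /* the point of batching: pay the slow-path cost (remote
         * locks, activate/deactivate) once per batch, not per object */
        for (int i = 0; i < fc->nr; i++)
                remote_free(fc->objs[i]);
        fc->nr = 0;
}

/* free an object that does not belong to this cpu's current slab */
static void cached_free(void *obj)
{
        if (cache.nr == CACHE_SIZE)
                flush_cache(&cache);
        cache.objs[cache.nr++] = obj;
}

int main(void)
{
        for (int i = 0; i < 100; i++)
                cached_free(malloc(32));
        flush_cache(&cache);    /* drain what is left */
        return 0;
}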