Message-ID: <alpine.DEB.1.00.0910161141280.21328@chino.kir.corp.google.com>
Date: Fri, 16 Oct 2009 11:43:23 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Christoph Lameter <cl@...ux-foundation.org>
cc: Mel Gorman <mel@....ul.ie>, Pekka Enberg <penberg@...helsinki.fi>,
Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Zhang Yanmin <yanmin_zhang@...ux.intel.com>
Subject: Re: [this_cpu_xx V6 7/7] this_cpu: slub aggressive use of this_cpu
operations in the hotpaths
On Fri, 16 Oct 2009, Christoph Lameter wrote:
> > TCP_STREAM stresses a few specific caches:
> >
> >                 ALLOC_FASTPATH  ALLOC_SLOWPATH  FREE_FASTPATH  FREE_SLOWPATH
> > kmalloc-256            3868530         3450592          95628        7223491
> > kmalloc-1024           2440434             429        2430825          10034
> > kmalloc-4096           3860625         1036723          85571        4811779
> >
> > This demonstrates that freeing to full (or partial) slabs hurts because
> > the free fastpath normally can't be used there; fixing that is probably
> > beyond the scope of this patchset.
> >
> > It's also different from the cpu slab thrashing issue I identified with
> > the TCP_RR benchmark and posted a patchset to partially address. The
> > criticism of that patchset was that it added an increment to a fastpath
> > counter in struct kmem_cache_cpu, which could probably now be made much
> > cheaper with these this_cpu optimizations.
>
> Can you redo the patch?
>
Sure, but it would be even cheaper if we could first figure out why the
irqless patch hangs my netserver machine within the first 60 seconds of
the TCP_RR benchmark. I guess nobody else has reproduced that yet. Rough
sketches of the free path decision behind the numbers above, and of what
the counter could look like on top of the this_cpu operations, are below.
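
For reference, here is a simplified, non-authoritative sketch of the free
path decision that the FREE_FASTPATH/FREE_SLOWPATH numbers above reflect.
Field names and helpers are approximations of the this_cpu-converted
mm/slub.c, not a verbatim copy:

/*
 * Simplified sketch (not the actual mm/slub.c code) of the decision the
 * FREE_FASTPATH/FREE_SLOWPATH counters reflect.  The fastpath only
 * applies when the object being freed sits in the slab page currently
 * owned by this CPU; frees into full or partial slabs owned elsewhere,
 * which dominate the TCP_STREAM run, fall through to __slab_free().
 */
static __always_inline void slab_free_sketch(struct kmem_cache *s,
					     struct page *page, void *x)
{
	struct kmem_cache_cpu *c;
	unsigned long flags;

	local_irq_save(flags);
	c = __this_cpu_ptr(s->cpu_slab);
	if (likely(page == c->page)) {
		/* FREE_FASTPATH: push the object onto the per-cpu freelist */
		set_freepointer(s, x, c->freelist);
		c->freelist = x;
	} else {
		/* FREE_SLOWPATH: object belongs to some other slab */
		__slab_free(s, page, x, _RET_IP_);
	}
	local_irq_restore(flags);
}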
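
And a purely hypothetical sketch of how the thrash counter from that
earlier patchset could be expressed with this_cpu operations. The struct
and field names here (kmem_cache_cpu_sketch, fastpath_frees) are invented
for illustration; only the this_cpu primitives themselves are real:

#include <linux/percpu.h>

/*
 * Hypothetical sketch, not the actual patch: a per-cpu fastpath counter
 * of the kind criticized above, expressed with this_cpu operations on a
 * percpu-allocated structure.
 */
struct kmem_cache_cpu_sketch {
	void **freelist;		/* objects available for fast alloc */
	struct page *page;		/* slab page owned by this CPU */
	unsigned int fastpath_frees;	/* hypothetical thrash-detection counter */
};

/* cpu_slab points at percpu-allocated memory (one instance per CPU) */
static inline void count_fastpath_free(struct kmem_cache_cpu_sketch *cpu_slab)
{
	/* single segment-prefixed increment on x86, no pointer math needed */
	this_cpu_inc(cpu_slab->fastpath_frees);
}

static inline unsigned int read_fastpath_frees(struct kmem_cache_cpu_sketch *cpu_slab)
{
	return this_cpu_read(cpu_slab->fastpath_frees);
}

With the increment reduced to one instruction and no explicit per-cpu
pointer calculation or preemption fiddling, the counter's cost in the
fastpath should be close to negligible.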