Message-ID: <20150928122444.15409.10498.stgit@canyon>
Date: Mon, 28 Sep 2015 14:26:04 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>
Cc: netdev@...r.kernel.org, Jesper Dangaard Brouer <brouer@...hat.com>,
Alexander Duyck <alexander.duyck@...il.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH 0/7] Further optimizing SLAB/SLUB bulking
The most important part of this patchset is the introduction of what I
call a detached freelist, which improves SLUB performance of object
freeing in the "slowpath" of kmem_cache_free_bulk().
Previous patchset V2 thread:
http://thread.gmane.org/gmane.linux.kernel.mm/137469
I'm not using a V3 tag, as the patch titles have changed and I've merged
some patches. This was joint work with Alexander Duyck while he was still
at Red Hat.
Notes for patches:
* First two patches (from Christoph) are already in AKPM MMOTS.
* Patch 3 is trivial
* Patch 4 is a repost; it implements bulking for the SLAB allocator.
- http://thread.gmane.org/gmane.linux.kernel.mm/138220
* Patch 5 and 6 are the important patches
- Patch 5 handles "freelists" in slab_free() and __slab_free().
- Patch 6 introduces detached freelists, giving a significant performance improvement (a conceptual sketch follows below)
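To illustrate the idea in patch 6, here is a simplified conceptual sketch
of a detached freelist (field names are illustrative; see the patch itself
for the real structure and the scanning loop that builds it):

/*
 * Sketch only: objects handed to kmem_cache_free_bulk() are grouped
 * per slab page into a freelist built up "detached" from the page,
 * and then spliced back onto the page in one go in __slab_free(),
 * instead of taking the slowpath once per object.
 */
struct detached_freelist {
	struct page *page;	/* slab page the grouped objects belong to */
	void *freelist;		/* head of the detached list of objects */
	void *tail;		/* tail object, used when splicing back */
	int cnt;		/* number of objects on the detached list */
};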
Patches should be ready for the MM-tree, as I'm now handling kmem
debug support.
Based on top of commit 519f526d39 in net-next, but I've tested that it
also applies on top of mmotm-2015-09-18-16-08.
The benchmarking tools are available here:
https://github.com/netoptimizer/prototype-kernel/tree/master/kernel/mm
See: slab_bulk_test0{1,2,3}.c
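Roughly, the test modules time loops of the following shape and report the
cost per object for varying bulk sizes (a simplified sketch, not the
actual module code):

static void bench_bulk(struct kmem_cache *cache, size_t bulk, int loops)
{
	void *objs[64];		/* sketch assumes bulk <= 64 */
	int i;

	/* Wrap the loop with cycle counters (e.g. get_cycles()) to
	 * derive the cost per object. */
	for (i = 0; i < loops; i++) {
		if (!kmem_cache_alloc_bulk(cache, GFP_KERNEL, bulk, objs))
			break;
		kmem_cache_free_bulk(cache, bulk, objs);
	}
}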
This patchset is part of my network stack use-case. I'll post the
network side of the patchset as soon as I've cleaned it up, rebased it
on net-next and re-run all the benchmarks.
---
Christoph Lameter (2):
slub: create new ___slab_alloc function that can be called with irqs disabled
slub: Avoid irqoff/on in bulk allocation
Jesper Dangaard Brouer (5):
slub: mark the dangling ifdef #else of CONFIG_SLUB_DEBUG
slab: implement bulking for SLAB allocator
slub: support for bulk free with SLUB freelists
slub: optimize bulk slowpath free by detached freelist
slub: do prefetching in kmem_cache_alloc_bulk()
mm/slab.c | 87 ++++++++++++++-----
mm/slub.c | 276 +++++++++++++++++++++++++++++++++++++++++++++----------------
2 files changed, 267 insertions(+), 96 deletions(-)
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer