Message-Id: <1456466484-3442-13-git-send-email-iamjoonsoo.kim@lge.com>
Date: Fri, 26 Feb 2016 15:01:19 +0900
From: js1304@...il.com
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH v2 12/17] mm/slab: do not change cache size if debug pagealloc isn't possible
From: Joonsoo Kim <iamjoonsoo.kim@....com>
Setting up an off-slab cache can fail under certain conditions. Even in
that case, debug pagealloc increases the cache size to PAGE_SIZE in
advance, and that increase is pure waste because debug pagealloc cannot
work on the cache unless it is actually off-slab. To avoid this, check
first whether the cache with the increased size is suitable for off-slab
management, and only increase the cache size when it is; this removes
the possible waste.
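As a concrete illustration (hypothetical numbers, assuming a 4096-byte
PAGE_SIZE and a 512-byte poisoned object that passes the other checks),
the waste the old code could leave behind looks like this:

	/* Illustrative sketch only, not part of the patch: old
	 * behaviour when the later off-slab setup fails.
	 */
	size = 512;		/* original object size */
	size = PAGE_SIZE;	/* old code inflated unconditionally */
	/*
	 * If the cache then cannot be managed off-slab, it stays
	 * on-slab, debug pagealloc cannot map/unmap its objects, and
	 * the PAGE_SIZE - 512 = 3584 bytes of padding per object are
	 * wasted. The hunk below probes off-slab suitability at the
	 * page-aligned size first, so the inflation only happens when
	 * it pays off.
	 */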
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
mm/slab.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 9b56685..21aad9d 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2206,10 +2206,17 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
*/
if (debug_pagealloc_enabled() && (flags & SLAB_POISON) &&
!slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
- size >= 256 && cachep->object_size > cache_line_size() &&
- size < PAGE_SIZE) {
- cachep->obj_offset += PAGE_SIZE - size;
- size = PAGE_SIZE;
+ size >= 256 && cachep->object_size > cache_line_size()) {
+ if (size < PAGE_SIZE || size % PAGE_SIZE == 0) {
+ size_t tmp_size = ALIGN(size, PAGE_SIZE);
+
+ if (set_off_slab_cache(cachep, tmp_size, flags)) {
+ flags |= CFLGS_OFF_SLAB;
+ cachep->obj_offset += tmp_size - size;
+ size = tmp_size;
+ goto done;
+ }
+ }
}
#endif
--
1.9.1