Date: Fri, 26 Feb 2016 15:01:21 +0900
From: js1304@...il.com
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH v2 14/17] mm/slab: factor out slab list fixup code

From: Joonsoo Kim <iamjoonsoo.kim@....com>

The slab list should be fixed up after an object is detached from the
slab, and this happens in two places that do exactly the same thing.
Both sites will be changed in the following patch, so, to reduce code
duplication, this patch factors the logic out into a common function.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
 mm/slab.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index ab43d9f..95e5d63 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2723,6 +2723,17 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp,
 #define cache_free_debugcheck(x,objp,z) (objp)
 #endif
 
+static inline void fixup_slab_list(struct kmem_cache *cachep,
+				struct kmem_cache_node *n, struct page *page)
+{
+	/* move slabp to correct slabp list: */
+	list_del(&page->lru);
+	if (page->active == cachep->num)
+		list_add(&page->lru, &n->slabs_full);
+	else
+		list_add(&page->lru, &n->slabs_partial);
+}
+
 static struct page *get_first_slab(struct kmem_cache_node *n)
 {
 	struct page *page;
@@ -2796,12 +2807,7 @@ retry:
 			ac_put_obj(cachep, ac, slab_get_obj(cachep, page));
 		}
 
-		/* move slabp to correct slabp list: */
-		list_del(&page->lru);
-		if (page->active == cachep->num)
-			list_add(&page->lru, &n->slabs_full);
-		else
-			list_add(&page->lru, &n->slabs_partial);
+		fixup_slab_list(cachep, n, page);
 	}
 
 must_grow:
@@ -3067,13 +3073,8 @@ retry:
 
 	obj = slab_get_obj(cachep, page);
 	n->free_objects--;
-	/* move slabp to correct slabp list: */
-	list_del(&page->lru);
 
-	if (page->active == cachep->num)
-		list_add(&page->lru, &n->slabs_full);
-	else
-		list_add(&page->lru, &n->slabs_partial);
+	fixup_slab_list(cachep, n, page);
 
 	spin_unlock(&n->list_lock);
 	goto done;
-- 
1.9.1
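
To see the factored-out logic in isolation, here is a minimal userspace
sketch of the fixup_slab_list() pattern. The struct page, struct
kmem_cache, struct kmem_cache_node and list helpers below are pared-down
stand-ins written for this illustration, not the kernel's definitions.

/*
 * Userspace sketch of the fixup_slab_list() pattern. The structures
 * and list helpers are simplified stand-ins, not kernel code.
 */
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* Insert e right after head. */
static void list_add(struct list_head *e, struct list_head *head)
{
	e->next = head->next;
	e->prev = head;
	head->next->prev = e;
	head->next = e;
}

/* Hypothetical, pared-down versions of the kernel structures. */
struct page {
	struct list_head lru;
	unsigned int active;	/* objects allocated from this slab */
};

struct kmem_cache {
	unsigned int num;	/* objects per slab */
};

struct kmem_cache_node {
	struct list_head slabs_full;
	struct list_head slabs_partial;
};

/* The factored-out helper: requeue the slab on the list matching its state. */
static void fixup_slab_list(struct kmem_cache *cachep,
			    struct kmem_cache_node *n, struct page *page)
{
	list_del(&page->lru);
	if (page->active == cachep->num)
		list_add(&page->lru, &n->slabs_full);
	else
		list_add(&page->lru, &n->slabs_partial);
}

int main(void)
{
	struct kmem_cache cache = { .num = 2 };
	struct kmem_cache_node node;
	struct page slab = { .active = 1 };

	INIT_LIST_HEAD(&node.slabs_full);
	INIT_LIST_HEAD(&node.slabs_partial);
	INIT_LIST_HEAD(&slab.lru);

	/* One of two objects allocated: slab belongs on the partial list. */
	fixup_slab_list(&cache, &node, &slab);
	printf("partial: %s\n",
	       node.slabs_partial.next == &slab.lru ? "yes" : "no");

	/* Allocate the last object: slab should move to the full list. */
	slab.active = cache.num;
	fixup_slab_list(&cache, &node, &slab);
	printf("full: %s\n",
	       node.slabs_full.next == &slab.lru ? "yes" : "no");
	return 0;
}

Compiling and running this should print "partial: yes" then "full: yes",
mirroring how a slab migrates from the partial list to the full list once
its last object is handed out, which is exactly the decision the two
duplicated sites in mm/slab.c were making before this patch.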