Message-ID: <e5805e2b-9076-0f0c-8d3f-5fdc1520861a@virtuozzo.com>
Date: Fri, 23 Mar 2018 18:49:53 +0300
From: Kirill Tkhai <ktkhai@...tuozzo.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: linux-mm@...ck.org, Matthew Wilcox <mawilcox@...rosoft.com>,
linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH 3/4] mm: Add free()
On 23.03.2018 18:14, Matthew Wilcox wrote:
> On Fri, Mar 23, 2018 at 04:33:24PM +0300, Kirill Tkhai wrote:
>>> + page = virt_to_head_page(ptr);
>>> + if (likely(PageSlab(page)))
>>> + return kmem_cache_free(page->slab_cache, (void *)ptr);
>>
>> It seems slab_cache is not generic across all slab allocators; SLOB never sets it:
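
For context, the lines quoted at the top are from the proposed free() in this
patch series. A minimal sketch of its dispatch (a reconstruction; the exact
guards in the real patch may differ) shows why an unset page->slab_cache is a
problem on SLOB:

	/* Reconstructed sketch, not the literal patch text. */
	void free(const void *ptr)
	{
		struct page *page;

		if (unlikely(ZERO_OR_NULL_PTR(ptr)))
			return;

		page = virt_to_head_page(ptr);
		if (likely(PageSlab(page)))
			/* SLOB never wrote page->slab_cache on its
			 * allocation path, so this passes an indeterminate
			 * pointer to kmem_cache_free(). */
			return kmem_cache_free(page->slab_cache, (void *)ptr);

		/* Large kmalloc() allocations come straight from the
		 * page allocator. */
		__free_pages(page, compound_order(page));
	}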
>
> Oof. I was sure I checked that. You're quite right that it doesn't ...
> this should fix that problem:
>
> diff --git a/mm/slob.c b/mm/slob.c
> index 623e8a5c46ce..96339420c6fc 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -266,7 +266,7 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align)
> /*
> * slob_alloc: entry point into the slob allocator.
> */
> -static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
> +static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, void *c)
> {
> struct page *sp;
> struct list_head *prev;
> @@ -324,6 +324,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
> sp->units = SLOB_UNITS(PAGE_SIZE);
> sp->freelist = b;
> INIT_LIST_HEAD(&sp->lru);
> + sp->slab_cache = c;
> set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
> set_slob_page_free(sp, slob_list);
> b = slob_page_alloc(sp, size, align);
> @@ -440,7 +441,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
> if (!size)
> return ZERO_SIZE_PTR;
>
> - m = slob_alloc(size + align, gfp, align, node);
> + m = slob_alloc(size + align, gfp, align, node, NULL);
>
> if (!m)
> return NULL;
> @@ -544,7 +545,7 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
> fs_reclaim_release(flags);
>
> if (c->size < PAGE_SIZE) {
> - b = slob_alloc(c->size, flags, c->align, node);
> + b = slob_alloc(c->size, flags, c->align, node, c);
> trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
> SLOB_UNITS(c->size) * SLOB_UNIT,
> flags, node);
> @@ -600,6 +601,8 @@ static void kmem_rcu_free(struct rcu_head *head)
>
> void kmem_cache_free(struct kmem_cache *c, void *b)
> {
> + if (!c)
> + return kfree(b);
> kmemleak_free_recursive(b, c->flags);
> if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
> struct slob_rcu *slob_rcu;
>
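
With the two hunks above, SLOB tags every slab page with either the owning
cache or NULL for kmalloc()'ed memory, and kmem_cache_free() falls back to
kfree() for the NULL case. A short usage sketch (struct foo and foo_cachep
are hypothetical names, assuming the cache was created earlier):

	/* Both objects can now go through the same free() entry point. */
	static void example(void)
	{
		struct foo *f = kmem_cache_alloc(foo_cachep, GFP_KERNEL);
		char *buf = kmalloc(64, GFP_KERNEL);

		free(f);   /* slab_cache == foo_cachep -> kmem_cache_free() */
		free(buf); /* on SLOB: slab_cache == NULL -> kfree() */
	}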
>> Also, using kmem_cache_free() for kmalloc()'ed memory tightly couples the two
>> paths, and this may be difficult to maintain in the future.
>
> I think the win from being able to delete all the little RCU callbacks
> that just do a kmem_cache_free() is big enough to outweigh the
> disadvantage of forcing slab allocators to support kmem_cache_free()
> working on kmalloced memory.
>
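
Concretely, the callbacks being referred to follow this pattern (struct and
cache names are hypothetical):

	/* Hypothetical example of the boilerplate a generic free() could
	 * remove: today each cache needs its own trivial RCU callback. */
	static void foo_free_rcu(struct rcu_head *head)
	{
		struct foo *f = container_of(head, struct foo, rcu);

		kmem_cache_free(foo_cachep, f);
	}

	/* ... and every free site does: */
	call_rcu(&f->rcu, foo_free_rcu);

With free() accepting any slab pointer, one generic callback could replace
all such per-cache callbacks.
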
>> One more thing: there are some KASAN checks on the main kfree() path, and
>> there is no guarantee they are mirrored identically in kmem_cache_free().
>
> Which function are you talking about here?
>
> slub calls slab_free() for both kfree() and kmem_cache_free().
> slab calls __cache_free() for both kfree() and kmem_cache_free().
> Each of them does its KASAN handling in the called function.
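
As a rough picture of those paths (for the kernels under discussion):

	SLUB:  kfree() ----------\
	                          +--> slab_free()    /* KASAN hooks here */
	       kmem_cache_free() -/

	SLAB:  kfree() ----------\
	                          +--> __cache_free() /* KASAN hooks here */
	       kmem_cache_free() -/
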
Maybe not KASAN; I've never dived deeply into sl[*]b. But they still look like
three different functions doing different things...
Kirill