Message-ID: <be7ee3a6-9b3c-b436-f042-82bd3c416acc@suse.cz>
Date:   Mon, 25 Oct 2021 11:36:53 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Rustam Kovhaev <rkovhaev@...il.com>, cl@...ux.com,
        penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, corbet@....net
Cc:     djwong@...nel.org, david@...morbit.com,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-doc@...r.kernel.org, gregkh@...uxfoundation.org,
        viro@...iv.linux.org.uk, dvyukov@...gle.com
Subject: Re: [PATCH v2] slob: add size header to all allocations

On 10/23/21 08:41, Rustam Kovhaev wrote:
> Let's prepend both kmalloc() and kmem_cache_alloc() allocations with the
> size header.
> It simplifies the slab API and guarantees that both kmem_cache_alloc()
> and kmalloc() memory could be freed by kfree().
> 
> meminfo right after the system boot, without the patch:
> Slab:              35456 kB
> 
> the same, with the patch:
> Slab:              36160 kB
> 
> Link: https://lore.kernel.org/lkml/20210929212347.1139666-1-rkovhaev@gmail.com
> Signed-off-by: Rustam Kovhaev <rkovhaev@...il.com>

Seems overall correct to me, thanks! I'll just suggest some improvements:

> ---
> v2:
>  - Allocate compound pages in slob_alloc_node()
>  - Use slob_free_pages() in kfree()
>  - Update documentation
> 
>  Documentation/core-api/memory-allocation.rst |   4 +-
>  mm/slob.c                                    | 114 +++++++++----------
>  2 files changed, 55 insertions(+), 63 deletions(-)
> 
> diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
> index 5954ddf6ee13..fea0ed11a7c5 100644
> --- a/Documentation/core-api/memory-allocation.rst
> +++ b/Documentation/core-api/memory-allocation.rst
> @@ -172,5 +172,5 @@ wrappers can allocate memory from that cache.
>  
>  When the allocated memory is no longer needed it must be freed. You can
>  use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
> -`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
> -don't forget to destroy the cache with kmem_cache_destroy().
> +`kvmalloc`. The slab caches can be freed with kmem_cache_free() or kvfree().
> +And don't forget to destroy the cache with kmem_cache_destroy().

I would phrase it like this (this also fixes the awkward wording "The slab
caches should be freed with..." that predates your patch):

When the allocated memory is no longer needed it must be freed. Objects
allocated by `kmalloc` can be freed by `kfree` or `kvfree`.
Objects allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`
or also by `kfree` or `kvfree`.
Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
Memory allocated by `kvmalloc` can be freed with `kvfree`.
Caches created by `kmem_cache_create` should be destroyed with
`kmem_cache_destroy`.
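
For illustration, the allowed pairings would then be (hypothetical snippet,
assuming a 'foo_cache' created with kmem_cache_create(); error handling
omitted, not part of the patch):

	struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
	void *buf = kmalloc(64, GFP_KERNEL);
	void *big = kvmalloc(PAGE_SIZE * 4, GFP_KERNEL);

	kfree(f);	/* or kmem_cache_free(foo_cache, f), or kvfree(f) */
	kvfree(buf);	/* or kfree(buf) */
	kvfree(big);	/* kvmalloc() may fall back to vmalloc(), so use kvfree() */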

> -static void slob_free_pages(void *b, int order)
> +static void slob_free_pages(struct page *sp, int order)
>  {
> -	struct page *sp = virt_to_page(b);
> -
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += 1 << order;
> +	if (PageSlab(sp)) {
> +		__ClearPageSlab(sp);
> +		page_mapcount_reset(sp);
> +		if (current->reclaim_state)
> +			current->reclaim_state->reclaimed_slab += 1 << order;
> +	}
>  
>  	mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>  			    -(PAGE_SIZE << order));
> @@ -247,9 +244,7 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align,
>  		/*
>  		 * 'aligned' will hold the address of the slob block so that the
>  		 * address 'aligned'+'align_offset' is aligned according to the
> -		 * 'align' parameter. This is for kmalloc() which prepends the
> -		 * allocated block with its size, so that the block itself is
> -		 * aligned when needed.
> +		 * 'align' parameter.
>  		 */
>  		if (align) {
>  			aligned = (slob_t *)
> @@ -373,25 +368,28 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
>  	}
>  	if (unlikely(gfp & __GFP_ZERO))
>  		memset(b, 0, size);
> +	/* Write size in the header */
> +	*(unsigned int *)b = size - align_offset;
> +	b = (void *)b + align_offset;
>  	return b;

I would just do "return (void *)b + align_offset;" here; no need to update 'b'.
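
That is, the tail of slob_alloc() would become (untested, just to show the
shape):

	if (unlikely(gfp & __GFP_ZERO))
		memset(b, 0, size);
	/* Write size in the header */
	*(unsigned int *)b = size - align_offset;
	return (void *)b + align_offset;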

>  }
>  
>  /*
>   * slob_free: entry point into the slob allocator.
>   */
> -static void slob_free(void *block, int size)
> +static void slob_free(void *block)
>  {
>  	struct page *sp;
> -	slob_t *prev, *next, *b = (slob_t *)block;
> +	int align_offset = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);

This patch adds a number of these to several functions, where previously only
__do_kmalloc_node() had one. It's a compile-time constant, so I would just
#define it somewhere at the top of slob.c, e.g. something like:

#if ARCH_KMALLOC_MINALIGN < ARCH_SLAB_MINALIGN
#define SLOB_HDR_SIZE ARCH_SLAB_MINALIGN
#else
#define SLOB_HDR_SIZE ARCH_KMALLOC_MINALIGN
#endif

> +	void *hdr = block - align_offset;
> +	unsigned int size = *(unsigned int *)hdr + align_offset;
> +	slob_t *prev, *next, *b = hdr;

IMHO this is too subtle to put in the declaration. I would move these
assignments below the declarations.

That way you can also ditch 'hdr' and just do 'block -= SLOB_HDR_SIZE;'.
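
Roughly like this (untested sketch, using the SLOB_HDR_SIZE define suggested
above):

static void slob_free(void *block)
{
	struct page *sp;
	slob_t *prev, *next, *b;
	unsigned int size;
	slobidx_t units;
	unsigned long flags;
	struct list_head *slob_list;

	/* Step back to the size header prepended to the object */
	block -= SLOB_HDR_SIZE;
	size = *(unsigned int *)block + SLOB_HDR_SIZE;
	b = block;

	BUG_ON(!size || size >= PAGE_SIZE);
	sp = virt_to_page(block);
	units = SLOB_UNITS(size);
	...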

>  	slobidx_t units;
>  	unsigned long flags;
>  	struct list_head *slob_list;
>  
> -	if (unlikely(ZERO_OR_NULL_PTR(block)))
> -		return;
> -	BUG_ON(!size);
> -
> -	sp = virt_to_page(block);
> +	BUG_ON(!size || size >= PAGE_SIZE);
> +	sp = virt_to_page(hdr);
>  	units = SLOB_UNITS(size);
>  
>  	spin_lock_irqsave(&slob_lock, flags);
