Date:   Fri, 22 Apr 2022 21:40:48 +0900
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     linux-mm@...ck.org, Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Marco Elver <elver@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v1 09/15] mm/slab: kmalloc: pass requests larger than
 order-1 page to page allocator

On Thu, Mar 24, 2022 at 07:08:27PM +0100, Vlastimil Babka wrote:
> On 3/8/22 12:41, Hyeonggon Yoo wrote:
> > There is not much benefit for serving large objects in kmalloc().
> > Let's pass large requests to page allocator like SLUB for better
> > maintenance of common code.
> > 
> > [ vbabka@...e.cz: Enable and disable irq around free_large_kmalloc().
> >   Do not lose NUMA locality in __do_kmalloc_node().
> >   Use folio_slab(folio)->slab_cache instead of virt_to_cache().
> >   Remove large sizes in __kmalloc_index(). ]

A bit late to reply but better late than never...

> 
> Thanks for the mention but that's generally only done like this if I took
> your patch and made those changes myself. But I just suggested them. Small
> suggested changes like this are usually just mentioned in e.g. v1->v2
> changelogs.

Ah, okay. I didn't know about the convention. Thanks for letting me
know!

> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> > ---
> >  include/linux/slab.h | 23 +++++-----------------
> >  mm/slab.c            | 45 ++++++++++++++++++++++++++++++--------------
> >  mm/slab.h            |  3 +++
> >  mm/slab_common.c     | 25 +++++++++++++++++-------
> >  mm/slub.c            | 19 -------------------
> >  5 files changed, 57 insertions(+), 58 deletions(-)
> > 
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index dfcc8301d969..9ced225a3ea3 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -226,27 +226,17 @@ void kmem_dump_obj(void *object);
> >  
> >  #ifdef CONFIG_SLAB
> >  /*
> > - * The largest kmalloc size supported by the SLAB allocators is
> > - * 32 megabyte (2^25) or the maximum allocatable page order if that is
> > - * less than 32 MB.
> > - *
> > - * WARNING: Its not easy to increase this value since the allocators have
> > - * to do various tricks to work around compiler limitations in order to
> > - * ensure proper constant folding.
> > + * SLAB and SLUB directly allocate requests fitting into an order-1 page
> > + * (PAGE_SIZE*2).  Larger requests are passed to the page allocator.
> >   */
> > -#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
> > -				(MAX_ORDER + PAGE_SHIFT - 1) : 25)
> > -#define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
> > +#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
> > +#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
> >  #ifndef KMALLOC_SHIFT_LOW
> >  #define KMALLOC_SHIFT_LOW	5
> >  #endif
> >  #endif
> >  
> >  #ifdef CONFIG_SLUB
> > -/*
> > - * SLUB directly allocates requests fitting in to an order-1 page
> > - * (PAGE_SIZE*2).  Larger requests are passed to the page allocator.
> > - */
> >  #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
> >  #define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
> >  #ifndef KMALLOC_SHIFT_LOW
> > @@ -398,10 +388,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
> >  	if (size <= 512 * 1024) return 19;
> >  	if (size <= 1024 * 1024) return 20;
> >  	if (size <=  2 * 1024 * 1024) return 21;
> > -	if (size <=  4 * 1024 * 1024) return 22;
> > -	if (size <=  8 * 1024 * 1024) return 23;
> > -	if (size <=  16 * 1024 * 1024) return 24;
> > -	if (size <=  32 * 1024 * 1024) return 25;
> >  
> >  	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && size_is_constant)
> >  		BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
> > @@ -411,6 +397,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
> >  	/* Will never be reached. Needed because the compiler may complain */
> >  	return -1;
> >  }
> > +static_assert(PAGE_SHIFT <= 20);
> >  #define kmalloc_index(s) __kmalloc_index(s, true)
> >  #endif /* !CONFIG_SLOB */
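
To make the new limits concrete, here is a standalone sketch of the
arithmetic, assuming 4 KiB pages and MAX_ORDER = 11 (common x86-64
defaults; the stand-in macros are hypothetical, for illustration only):

	#include <stdio.h>

	/* Hypothetical stand-ins for the kernel constants. */
	#define PAGE_SHIFT	12
	#define MAX_ORDER	11
	#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)		/* 13 */
	#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)	/* 22 */

	int main(void)
	{
		/* Largest request served from the kmalloc caches:
		 * an order-1 page, i.e. two pages. */
		printf("max cache size: %lu\n", 1UL << KMALLOC_SHIFT_HIGH);

		/* Largest request served in a single page allocation. */
		printf("max kmalloc size: %lu\n", 1UL << KMALLOC_SHIFT_MAX);
		return 0;
	}

This prints 8192 and 4194304. The static_assert(PAGE_SHIFT <= 20) above
keeps KMALLOC_SHIFT_HIGH within the kmalloc_index() table, whose largest
remaining entry is 2^21.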
> >  
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 6ebf509bf2de..f0041f0125ba 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3568,7 +3568,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
> >  	void *ret;
> >  
> >  	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
> > -		return NULL;
> > +		return kmalloc_large_node(size, flags, node);
> 
> Similar issue with caller not traced.
>

Actually I moved the tracepoint into kmalloc_large_node(), but I think
the problem was that I wrote the patches in a way that made them hard
to review.

In v2 I split some patches to be more reviewable. Thanks!!
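
For anyone following along, a minimal sketch of the dispatch being
discussed (a simplification with a hypothetical sketch_ prefix, not the
exact v2 code):

	/*
	 * Sketch: requests larger than KMALLOC_MAX_CACHE_SIZE bypass the
	 * kmalloc caches and go straight to the page allocator; per the
	 * note above, the tracepoint lives inside kmalloc_large_node().
	 */
	static void *sketch_do_kmalloc_node(size_t size, gfp_t flags,
					    int node, unsigned long caller)
	{
		if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
			return kmalloc_large_node(size, flags, node);

		/* ... otherwise fall through to the kmalloc caches ... */
		return NULL;	/* placeholder for the cache path */
	}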

> >  	cachep = kmalloc_slab(size, flags);
> >  	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
> >  		return cachep;
> > @@ -3642,15 +3642,25 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
> >  {
> >  	struct kmem_cache *s;
> >  	size_t i;
> > +	struct folio *folio;
> >  
> >  	local_irq_disable();
> >  	for (i = 0; i < size; i++) {
> >  		void *objp = p[i];
> >  
> > -		if (!orig_s) /* called via kfree_bulk */
> > -			s = virt_to_cache(objp);
> > -		else
> > +		if (!orig_s) {
> > +			folio = virt_to_folio(objp);
> > +			/* called via kfree_bulk */
> > +			if (!folio_test_slab(folio)) {
> > +				local_irq_enable();
> > +				free_large_kmalloc(folio, objp);
> > +				local_irq_disable();
> > +				continue;
> > +			}
> > +			s = folio_slab(folio)->slab_cache;
> > +		} else
> >  			s = cache_from_obj(orig_s, objp);
> > +
> >  		if (!s)
> >  			continue;
> >  
> > @@ -3679,20 +3689,25 @@ void kfree(const void *objp)
> >  {
> >  	struct kmem_cache *c;
> >  	unsigned long flags;
> > +	struct folio *folio;
> > +	void *x = (void *) objp;
> 
> I think you don't need to add 'x', just do the cast while calling
> free_large_kmalloc(), same as done for __cache_free().
>

In fact SLUB's kfree() also defines x, but your suggestion sounds
better. Anyway, I did it in v2. Thanks!
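
Concretely, the suggested shape of the kfree() hunk below, with the
cast done at the call site instead of through a separate 'x' (a
sketch):

	folio = virt_to_folio(objp);
	if (!folio_test_slab(folio)) {
		/* cast inline, same style as the __cache_free() call */
		free_large_kmalloc(folio, (void *)objp);
		return;
	}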

> >  
> >  	trace_kfree(_RET_IP_, objp);
> >  
> >  	if (unlikely(ZERO_OR_NULL_PTR(objp)))
> >  		return;
> > -	local_irq_save(flags);
> > -	kfree_debugcheck(objp);
> > -	c = virt_to_cache(objp);
> > -	if (!c) {
> > -		local_irq_restore(flags);
> > +
> > +	folio = virt_to_folio(objp);
> > +	if (!folio_test_slab(folio)) {
> > +		free_large_kmalloc(folio, x);
> >  		return;
> >  	}
> > -	debug_check_no_locks_freed(objp, c->object_size);
> >  
> > +	c = folio_slab(folio)->slab_cache;
> > +
> > +	local_irq_save(flags);
> > +	kfree_debugcheck(objp);
> > +	debug_check_no_locks_freed(objp, c->object_size);
> >  	debug_check_no_obj_freed(objp, c->object_size);
> >  	__cache_free(c, (void *)objp, _RET_IP_);
> >  	local_irq_restore(flags);

-- 
Thanks,
Hyeonggon
