Message-ID: <20220430115027.GC24925@ip-172-31-27-201.ap-northeast-1.compute.internal>
Date: Sat, 30 Apr 2022 11:50:28 +0000
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Marco Elver <elver@...gle.com>,
	Matthew Wilcox <willy@...radead.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 13/23] mm/slab: kmalloc: pass requests larger than
 order-1 page to page allocator

On Wed, Apr 27, 2022 at 10:10:00AM +0200, Vlastimil Babka wrote:
> On 4/14/22 10:57, Hyeonggon Yoo wrote:
> > There is not much benefit to serving large objects from slab caches
> > in kmalloc(). Let's pass large requests to the page allocator, like
> > SLUB does, for better maintenance of the common code.
> >
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
>
> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
>
> Some nits:
>
> > @@ -3607,15 +3607,25 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
> > {
> > struct kmem_cache *s;
> > size_t i;
> > + struct folio *folio;
> >
> > local_irq_disable();
> > for (i = 0; i < size; i++) {
> > void *objp = p[i];
>
> folio can be declared here
> could probably move 's' too, and 'i' to the for () thanks to gnu11
>
Right!
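
With gnu11 the declarations could move like this (just a sketch of the
declaration placement, the rest of the loop body elided):

	local_irq_disable();
	for (size_t i = 0; i < size; i++) {
		struct kmem_cache *s;
		struct folio *folio;
		void *objp = p[i];

		/* ... rest of the loop body as in the patch ... */
	}
	local_irq_enable();
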
> >
> > - if (!orig_s) /* called via kfree_bulk */
> > - s = virt_to_cache(objp);
> > - else
> > + if (!orig_s) {
> > + folio = virt_to_folio(objp);
> > + /* called via kfree_bulk */
> > + if (!folio_test_slab(folio)) {
> > + local_irq_enable();
> > + free_large_kmalloc(folio, objp);
> > + local_irq_disable();
> > + continue;
> > + }
> > + s = folio_slab(folio)->slab_cache;
> > + } else
> > s = cache_from_obj(orig_s, objp);
>
> This should now use { } brackets per kernel style.
>
Yes, will do both in v3.
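
Something like this, with folio and s declared in the loop as above
(just a sketch, not the final v3 hunk):

		if (!orig_s) {
			folio = virt_to_folio(objp);

			/* called via kfree_bulk */
			if (!folio_test_slab(folio)) {
				local_irq_enable();
				free_large_kmalloc(folio, objp);
				local_irq_disable();
				continue;
			}
			s = folio_slab(folio)->slab_cache;
		} else {
			s = cache_from_obj(orig_s, objp);
		}
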
> > +
> > if (!s)
> > continue;
> >