Message-ID: <Y39YR5nn6aUs2KRW@hyeyoo>
Date: Thu, 24 Nov 2022 20:40:55 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Pekka Enberg <penberg@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, patches@...ts.linux.dev,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 05/12] mm, slub: lower the default slub_max_order with
CONFIG_SLUB_TINY
On Mon, Nov 21, 2022 at 06:11:55PM +0100, Vlastimil Babka wrote:
> With CONFIG_SLUB_TINY we want to minimize memory overhead. By lowering
> the default slub_max_order we can make slab allocations use smaller
> pages. However, depending on object sizes, order-0 might not be the best
> due to increased fragmentation. When testing on an 8MB RAM k210 system by
> Damien Le Moal [1], slub_max_order=1 had the best results, so use that
> as the default for CONFIG_SLUB_TINY.
>
> [1] https://lore.kernel.org/all/6a1883c4-4c3f-545a-90e8-2cd805bcf4ae@opensource.wdc.com/
>
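(Not part of the patch, just illustrating the trade-off the changelog
describes: a rough userspace sketch of the fragmentation arithmetic,
assuming 4KB base pages and a hypothetical 700-byte object. The real
in-kernel calculation in calculate_order() also weighs slub_min_objects
and the acceptable leftover fraction, so this is only the core math.)

/*
 * Standalone sketch (not kernel code): for a given object size,
 * print how many objects fit in a slab of each order and how many
 * bytes per slab go unused, assuming 4KB base pages.
 */
#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void)
{
	unsigned int size = 700;	/* hypothetical object size in bytes */
	unsigned int order;

	for (order = 0; order <= 3; order++) {
		unsigned int slab = PAGE_SIZE << order;
		unsigned int objects = slab / size;
		unsigned int waste = slab - objects * size;

		printf("order %u: %u objects, %u bytes wasted (%.1f%%)\n",
		       order, objects, waste, 100.0 * waste / slab);
	}
	return 0;
}

For a 700-byte object this prints roughly 14.5% waste at order-0 but
only about 6% at order-1, which fits the k210 result that order-1 was
the sweet spot. And users who need something else can still override
the default at boot with the slub_max_order= parameter.
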
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/slub.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 917b79278bad..bf726dd00f7d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3888,7 +3888,8 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
> * take the list_lock.
> */
> static unsigned int slub_min_order;
> -static unsigned int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
> +static unsigned int slub_max_order =
> + IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
> static unsigned int slub_min_objects;
>
> /*
> --
> 2.38.1
>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
--
Thanks,
Hyeonggon