Message-ID: <ZQqatka0zPkk0B44@feng-clx>
Date: Wed, 20 Sep 2023 15:09:42 +0800
From: Feng Tang <feng.tang@...el.com>
To: Vlastimil Babka <vbabka@...e.cz>
CC: David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Jay Patel <jaypatel@...ux.ibm.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"patches@...ts.linux.dev" <patches@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/4] mm/slub: simplify the last resort slab order
calculation
On Wed, Sep 20, 2023 at 08:38:05AM +0200, Vlastimil Babka wrote:
> On 9/19/23 09:56, Feng Tang wrote:
> > Hi Vlastimil,
> >
> > On Fri, Sep 08, 2023 at 10:53:04PM +0800, Vlastimil Babka wrote:
> >> If calculate_order() can't fit even a single large object within
> >> slub_max_order, it will try using the smallest necessary order that may
> >> exceed slub_max_order but not MAX_ORDER.
> >>
> >> Currently this is done with a call to calc_slab_order() which is
> >> unnecessary. We can simply use get_order(size). No functional change.
> >>
> >> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> >> ---
> >> mm/slub.c | 2 +-
> >> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index f7940048138c..c6e694cb17b9 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -4193,7 +4193,7 @@ static inline int calculate_order(unsigned int size)
> >> /*
> >> * Doh this slab cannot be placed using slub_max_order.
> >> */
> >> - order = calc_slab_order(size, 1, MAX_ORDER, 1);
> >> + order = get_order(size);
> >
> >
> > This patchset is a nice cleanup, and all my previous tests looked fine.
> > However, one 'slub_min_order' setup that Christoph reminded us of [1]
> > doesn't work, as it no longer takes effect with this 1/4 patch.
>
> Hmm I see. Well, the trick should keep working if you pass both
> slub_min_order=9 slub_max_order=9? Maybe Christoph actually does that
> but didn't type it fully in the mail.
Yes, that's possible. And "slub_min_order=9" alone also works to make
every slab's order 9, because the current code's final fallback tries up
to MAX_ORDER inside calculate_order():

	order = calc_slab_order(size, 1, MAX_ORDER, 1);

though the dmesg looks strange (min order 9 vs. max order 3):

	SLUB: HWalign=64, Order=9-3, MinObjects=0, CPUs=16, Nodes=1
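
To illustrate, here is a minimal userspace model of the pre-patch
fallback (a sketch only, assuming the 6.5-era calc_slab_order() loop
shape and MAX_ORDER=10 on a 4K-page config; not the kernel code):

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1u << PAGE_SHIFT)
	#define MAX_ORDER	10
	#define max(a, b)	((a) > (b) ? (a) : (b))

	static unsigned int slub_min_order = 9;	/* boot: slub_min_order=9 */

	static unsigned int get_order(unsigned int size)
	{
		unsigned int order = 0;

		while ((PAGE_SIZE << order) < size)
			order++;
		return order;
	}

	/* Rough model: the search starts no lower than slub_min_order. */
	static unsigned int calc_slab_order(unsigned int size,
			unsigned int min_objects, unsigned int max_order,
			unsigned int fract_leftover)
	{
		unsigned int order;

		for (order = max(slub_min_order,
				 get_order(min_objects * size));
		     order <= max_order; order++) {
			unsigned int slab_size = PAGE_SIZE << order;

			if (slab_size % size <= slab_size / fract_leftover)
				break;
		}
		return order;
	}

	int main(void)
	{
		/* old fallback: honors slub_min_order, prints 9 */
		printf("old: %u\n", calc_slab_order(256, 1, MAX_ORDER, 1));
		/* new fallback: get_order(size) alone, prints 0 */
		printf("new: %u\n", get_order(256));
		return 0;
	}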
>
> > The root cause seems to be that, in the current kernel, 'slub_max_order'
> > is not adjusted according to 'slub_min_order', so there are cases where
> > 'slub_min_order' is bigger than the default 'slub_max_order' (3).
> >
> > And it could be fixed by the below patch
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 1c91f72c7239..dbe950783105 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -4702,6 +4702,9 @@ static int __init setup_slub_min_order(char *str)
> > {
> > get_option(&str, (int *)&slub_min_order);
> >
> > + if (slub_min_order > slub_max_order)
> > + slub_max_order = slub_min_order;
> > +
> > return 1;
> > }
>
> Sounds like a good idea. Would also do the analogous thing in
> setup_slub_max_order.
Yes.
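An untested sketch of what the pair of clamps might look like (the
exact form of the final patch may differ; the existing MAX_ORDER
clamping in setup_slub_max_order() is elided here):

	static int __init setup_slub_min_order(char *str)
	{
		get_option(&str, (int *)&slub_min_order);

		/* a larger min must drag the max up with it */
		if (slub_min_order > slub_max_order)
			slub_max_order = slub_min_order;

		return 1;
	}

	static int __init setup_slub_max_order(char *str)
	{
		get_option(&str, (int *)&slub_max_order);
		/* existing clamp to MAX_ORDER stays here */

		/* a smaller max must drag the min down with it */
		if (slub_max_order < slub_min_order)
			slub_min_order = slub_max_order;

		return 1;
	}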
> > Though the formal fix may also need to cover cases like this kind of
> > crazy setting: "slub_min_order=6 slub_max_order=5"
>
> Doing both should cover even this, and AFAICS, given how param processing
> works, the last one passed would "win", so it would set min=max=5 in that
> case. That's probably the most sane way we can handle such scenarios.
Agree. The latter setting should take priority, and my test code handles
it the same way.
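With both clamps in place, "slub_min_order=6 slub_max_order=5" would
resolve like this (a hand trace, assuming left-to-right processing of
the boot parameters):

	slub_min_order=6:  min = 6; 6 > max (3), so max = 6
	slub_max_order=5:  max = 5; 5 < min (6), so min = 5
	result:            min = max = 5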
> Want to send a full patch, or should I finalize it? I would put it as a
> new 1/5 before the rest. Thanks!
I can try to make a patch with more detail in the commit log, and resend.
Thanks for the review!
Thanks,
Feng
>
> > [1]. https://lore.kernel.org/lkml/21a0ba8b-bf05-0799-7c78-2a35f8c8d52a@os.amperecomputing.com/
> >
> > Thanks,
> > Feng
> >
> >> if (order <= MAX_ORDER)
> >> return order;
> >> return -ENOSYS;
> >> --
> >> 2.42.0
> >>
> >>
>