Message-ID:
<PH0PR11MB5192481170379F35BF292E82EC3D2@PH0PR11MB5192.namprd11.prod.outlook.com>
Date: Wed, 3 Apr 2024 00:10:42 +0000
From: "Song, Xiongwei" <Xiongwei.Song@...driver.com>
To: Vlastimil Babka <vbabka@...e.cz>,
"rientjes@...gle.com"
<rientjes@...gle.com>,
"cl@...ux.com" <cl@...ux.com>,
"penberg@...nel.org"
<penberg@...nel.org>,
"iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"roman.gushchin@...ux.dev" <roman.gushchin@...ux.dev>,
"42.hyeyoo@...il.com"
<42.hyeyoo@...il.com>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>,
"chengming.zhou@...ux.dev"
<chengming.zhou@...ux.dev>
Subject: RE: [PATCH 1/4] mm/slub: remove the check of
!kmem_cache_has_cpu_partial()
>
> On 3/31/24 4:19 AM, xiongwei.song@...driver.com wrote:
> > From: Xiongwei Song <xiongwei.song@...driver.com>
> >
> > The check of !kmem_cache_has_cpu_partial(s) with
> > CONFIG_SLUB_CPU_PARTIAL enabled here is always false. We already know
> > the result from the earlier kmem_cache_debug() call, so the check can
> > be removed.
>
> Could we be more explicit? We have already checked kmem_cache_debug()
> earlier, and if it was true, then we either continued or broke from the
> loop, so we can't reach this code in that case and don't need to check
> kmem_cache_debug() again as part of kmem_cache_has_cpu_partial().
Ok, looks better. Will update.
Thanks,
Xiongwei
>
> > Signed-off-by: Xiongwei Song <xiongwei.song@...driver.com>
> > ---
> > mm/slub.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 1bb2a93cf7b6..059922044a4f 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2610,8 +2610,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
> > partial_slabs++;
> > }
> > #ifdef CONFIG_SLUB_CPU_PARTIAL
> > - if (!kmem_cache_has_cpu_partial(s)
> > - || partial_slabs > s->cpu_partial_slabs / 2)
> > + if (partial_slabs > s->cpu_partial_slabs / 2)
> > break;
> > #else
> > break;