Message-ID: <CALvZod6usxk99KFhQVXGxBadsYpUyQ3QuwfSDa_sbqSLjBEgnA@mail.gmail.com>
Date: Thu, 29 Jul 2021 07:03:17 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Kefeng Wang <wangkefeng.wang@...wei.com>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>, Roman Gushchin <guro@...com>,
Wang Hai <wanghai38@...wei.com>,
Muchun Song <songmuchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free
On Wed, Jul 28, 2021 at 11:52 PM Kefeng Wang <wangkefeng.wang@...wei.com> wrote:
>
>
> On 2021/7/28 23:53, Shakeel Butt wrote:
> > SLUB uses the page allocator for higher-order allocations and updates
> > the unreclaimable slab stat for such allocations. At the moment, the
> > bulk free path in SLUB does not share code with the normal free path
> > for these allocations and misses the stat update. So, fix the stat
> > update through common code. The user-visible impact of the bug is
> > potentially inconsistent unreclaimable slab stats as reported through
> > meminfo and vmstat.
> >
> > Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> > Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> > ---
> > mm/slub.c | 22 ++++++++++++----------
> > 1 file changed, 12 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 6dad2b6fda6f..03770291aa6b 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3238,6 +3238,16 @@ struct detached_freelist {
> > struct kmem_cache *s;
> > };
> >
> > +static inline void free_nonslab_page(struct page *page)
> > +{
> > + unsigned int order = compound_order(page);
> > +
> > + VM_BUG_ON_PAGE(!PageCompound(page), page);
>
> Could we add a WARN_ON here? Otherwise we get nothing when
> CONFIG_DEBUG_VM is disabled.
I don't have a strong opinion on this. Please send a patch with
reasoning if you want WARN_ON_ONCE here.