Message-ID: <YwMv1A1rVNZQuuOo@dhcp22.suse.cz>
Date: Mon, 22 Aug 2022 09:27:16 +0200
From: Michal Hocko <mhocko@...e.com>
To: Liu Shixin <liushixin2@...wei.com>
Cc: Aaron Lu <aaron.lu@...el.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
huang ying <huang.ying.caritas@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Huang Ying <ying.huang@...el.com>
Subject: Re: [PATCH RFC] mm, proc: add PcpFree to meminfo
On Fri 19-08-22 17:53:27, Liu Shixin wrote:
>
>
> On 2022/8/19 15:40, Aaron Lu wrote:
> > On Tue, Aug 16, 2022 at 05:24:07PM +0800, Kefeng Wang wrote:
> >> On 2022/8/16 16:48, huang ying wrote:
> >>> On Tue, Aug 16, 2022 at 4:38 PM Kefeng Wang <wangkefeng.wang@...wei.com> wrote:
> >>>> From: Liu Shixin <liushixin2@...wei.com>
> >>>>
> >>>> The pages on the pcplist could be used, but they are not counted into free
> >>>> or available memory, and the pcp free count is only shown by show_mem().
> >>>> Since commit d8a759b57035 ("mm, page_alloc: double zone's batchsize"),
> >>>> there has been a significant decrease in the reported free memory. With a
> >>>> large number of CPUs and nodes, the number of pages on the percpu lists
> >>>> can be very large, so it is better to let the user know the pcp count.
> >>> Can you show some data?
> >> 80M+ with 72cpus/2node
I would expect that such a system would have quite some memory as well, and
80MB wouldn't really be noticeable. What is that amount as a percentage of
the total memory?
> > 80M+ for a 2 node system doesn't sound like a significant number.
> >
> >>> Another choice is to count PCP free pages in MemFree. Is that OK for
> >>> your use case too?
> >> Yes, the user will make policy according to MemFree. We think counting PCP
> >> free pages in MemFree is better, but we don't know whether it is the right
> >> way.
> >>
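Just to make sure we are all talking about the same thing, folding those
pages into MemFree would presumably mean something along the lines of the
below - an untested sketch reusing the same per-cpu walk that
show_free_areas() already does (the helper name nr_free_pcp_pages() is
made up for illustration):

	/* untested sketch: sum of pages currently sitting on pcplists */
	static unsigned long nr_free_pcp_pages(void)
	{
		unsigned long sum = 0;
		struct zone *zone;
		int cpu;

		for_each_populated_zone(zone)
			for_each_online_cpu(cpu)
				sum += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;

		return sum;
	}

with si_meminfo() then adding that on top of
global_zone_page_state(NR_FREE_PAGES). Whether that is the right thing to
do is a separate question, because those pages are only directly usable by
the owning CPU until the lists are drained.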
> > Is there a real problem where the user makes a sub-optimal policy due to
> > the unaccounted 80M+ of free memory?
> I need to explain that 80M+ is the increase after patch d8a759b57035. Actually,
> in my test the pcplist holds about 114M after system startup, and in high-load
> scenarios the pcplist memory can reach 300M+.
> Downstream users have noticed the memory change after the kernel update, which
> has an actual impact on them. That's why I sent this patch to ask whether this
> part of memory should be counted.
It would be really great to be more explicit about this. If this is really
noticeable at runtime, then we might need to consider improved tuning or a
way to manually configure the pcp batch sizes. Reporting the amount on its
own is unlikely to help without being able to do anything about it.
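As a side note, for the %tage I asked about above, it should be enough to
sum the per-cpu "count:" fields from /proc/zoneinfo and compare that with
MemTotal. Something like this quick and dirty userspace hack (a rough
sketch, assuming a uniform page size; the file name is made up):

	/* pcpfree.c: rough estimate of memory currently sitting on pcplists */
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		FILE *f = fopen("/proc/zoneinfo", "r");
		char line[256];
		unsigned long long pages = 0, val;

		if (!f) {
			perror("/proc/zoneinfo");
			return 1;
		}

		/* AFAIK only the per-cpu pageset entries use a "count:" field */
		while (fgets(line, sizeof(line), f)) {
			char *p = strstr(line, "count:");
			if (p && sscanf(p, "count: %llu", &val) == 1)
				pages += val;
		}
		fclose(f);

		printf("pcp free: %llu kB\n",
		       pages * (unsigned long long)(sysconf(_SC_PAGESIZE) / 1024));
		return 0;
	}

Watching that number under your real workload should tell how big the
effect is relative to the overall memory size.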
--
Michal Hocko
SUSE Labs