Message-ID: <6b2977fc-1e4a-f3d4-db24-7c4699e0773f@huawei.com>
Date: Tue, 23 Aug 2022 20:46:43 +0800
From: Liu Shixin <liushixin2@...wei.com>
To: Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>
CC: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
huang ying <huang.ying.caritas@...il.com>,
Aaron Lu <aaron.lu@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>,
Kemi Wang <kemi.wang@...el.com>,
"Kefeng Wang" <wangkefeng.wang@...wei.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH -next v2] mm, proc: collect percpu free pages into the
free pages
On 2022/8/23 15:50, Michal Hocko wrote:
> On Mon 22-08-22 14:12:07, Andrew Morton wrote:
>> On Mon, 22 Aug 2022 11:33:54 +0800 Liu Shixin <liushixin2@...wei.com> wrote:
>>
>>> Pages on the pcplists can be allocated, but they are not counted as
>>> free or available memory; for now the pcp count is only shown by
>>> show_mem(). Since commit d8a759b57035 ("mm, page_alloc: double zone's
>>> batchsize"), there has been a significant decrease in the reported
>>> free memory. With a large number of CPUs and zones, the number of
>>> pages on the percpu lists can be very large, so it is better to let
>>> the user know the pcp count.
>>>
>>> On a machine with 3 zones and 72 CPUs, the pcp lists could theoretically
>>> hold at most 162MB (3*72*768KB) before commit d8a759b57035; after that
>>> commit they can hold 324MB. In practice, 114MB has been observed in the
>>> idle state after system startup (an increase of 80MB).
>>>
>> Seems reasonable.
> I asked about this in the previous incarnation of the patch but haven't
> really received an answer [1]. Is this a _real_ problem? The absolute
> amount of memory may sound like a lot, but is it really noticeable
> relative to the overall memory on those systems?
This may not be obvious when memory is sufficient. However, our products
monitor memory in order to plan its use, and this change has triggered
warnings. We also considered using /proc/zoneinfo to calculate the total
number of pages on the pcplists, but we think it is more appropriate to
add that total to the free and available counts; after all, these pages
are also free.
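
As a rough sanity check of the changelog numbers (my own sketch, not part
of the patch; it assumes the pcp high watermark was about 6 * batch, i.e.
the ~768KB per list from the changelog before the commit and double that
after):

	#include <stdio.h>

	int main(void)
	{
		const unsigned long page_kb = 4;		/* 4KB pages */
		const unsigned long zones = 3, cpus = 72;
		const unsigned long high_before = 6 * 32;	/* ~768KB per pcp list */
		const unsigned long high_after  = 6 * 64;	/* ~1536KB per pcp list */

		/* worst case: every pcp list filled to its high watermark */
		printf("before: %lu MB\n", zones * cpus * high_before * page_kb / 1024);
		printf("after:  %lu MB\n", zones * cpus * high_after * page_kb / 1024);
		return 0;
	}

which prints 162 MB and 324 MB respectively.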
> Also the patch is accounting these pcp caches as free memory but that
> can be misleading as this memory is not readily available for use in
> general. E.g. MemAvailable is documented as:
> 	An estimate of how much memory is available for starting new
> 	applications, without swapping.
> but pcp caches are drained only after direct reclaim fails which can
> imply a lot of reclaim and runtime disruption.
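For reference, the drain you mention happens only once direct reclaim has
already failed, in __alloc_pages_direct_reclaim() (paraphrased from
mm/page_alloc.c, details elided):

	retry:
		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
		if (!page && !drained) {
			/* pcp lists are flushed only on this failure path */
			drain_all_pages(NULL);
			drained = true;
			goto retry;
		}

so counting these pages as immediately "available" does overstate how
cheaply they can be reclaimed.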
Maybe it makes more sense to add it only to MemFree? Or to handle it like
page cache?
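For the latter, a minimal sketch (illustrative only, not part of the
patch) would mirror the discount that si_mem_available() already applies
to reclaimable page cache; wmark_low below stands for the summed low
watermarks that function computes:

	static unsigned long available_pcp_pages(unsigned long wmark_low)
	{
		unsigned long pcp = nr_free_pcplist_pages();

		/* discount min(pcp/2, wmark_low), as done for page cache */
		return pcp - min(pcp / 2, wmark_low);
	}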
>
> [1] http://lkml.kernel.org/r/YwMv1A1rVNZQuuOo@dhcp22.suse.cz
>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 033f1e26d15b..f89928d3ad4e 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -5853,6 +5853,26 @@ static unsigned long nr_free_zone_pages(int offset)
>>>  	return sum;
>>>  }
>>>
>>> +static unsigned long nr_free_zone_pcplist_pages(struct zone *zone)
>>> +{
>>> +	unsigned long sum = 0;
>>> +	int cpu;
>>> +
>>> +	for_each_online_cpu(cpu)
>>> +		sum += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
>>> +	return sum;
>>> +}
>>> +
>>> +static unsigned long nr_free_pcplist_pages(void)
>>> +{
>>> +	unsigned long sum = 0;
>>> +	struct zone *zone;
>>> +
>>> +	for_each_zone(zone)
>>> +		sum += nr_free_zone_pcplist_pages(zone);
>>> +	return sum;
>>> +}
>> Prevention of races against zone/node hotplug?
> Memory hotplug doesn't remove nodes nor its zones.
>
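
(For completeness: the hunk quoted above only adds the counting helpers.
Per the changelog, the intent is to fold that count into the free and
available numbers, roughly along these lines; this is a sketch of the
idea, not the actual remainder of the patch:

	available = global_zone_page_state(NR_FREE_PAGES)
		    + nr_free_pcplist_pages()	/* new */
		    - totalreserve_pages;

inside si_mem_available(), with the existing page cache and slab
adjustments unchanged.)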