Message-ID: <1eabe244-4568-e1e1-7f9e-235175cc8c1d@sangfor.com.cn>
Date: Wed, 26 May 2021 08:43:28 +0800
From: Ding Hui <dinghui@...gfor.com.cn>
To: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
Cc: "david@...hat.com" <david@...hat.com>,
"osalvador@...e.de" <osalvador@...e.de>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v2] mm/page_alloc: fix counting of free pages after take
off from buddy
On 2021/5/25 16:32, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Sat, May 08, 2021 at 11:55:33AM +0800, Ding Hui wrote:
>> Recently we found there is a lot of MemFree left in /proc/meminfo after
>> doing a lot of page soft offlining.
>>
>> I think this is incorrect, since NR_FREE_PAGES should not include HWPoison pages.
>> For free pages that are offlined, after a successful call to take_page_off_buddy()
>> the page no longer belongs to the buddy allocator and will never be used again,
>> but we missed updating NR_FREE_PAGES in this situation.
>>
>> Update the counter the same way rmqueue() does.
>>
>> Signed-off-by: Ding Hui <dinghui@...gfor.com.cn>
>> ---
>> V2:
>> use __mod_zone_freepage_state instead of __mod_zone_page_state
>>
>> mm/page_alloc.c | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index cfc72873961d..e124a615303b 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -8947,6 +8947,7 @@ bool take_page_off_buddy(struct page *page)
>> del_page_from_free_list(page_head, zone, page_order);
>> break_down_buddy_pages(zone, page_head, page, 0,
>> page_order, migratetype);
>> + __mod_zone_freepage_state(zone, -1, migratetype);
>
> The page offline code (see set_migratetype_isolate()) seems to handle the
> NR_FREE_PAGES counter in its own way, so I think it's more correct to
> call __mod_zone_freepage_state() only when is_migrate_isolate(migratetype)
> is false.
>
> Otherwise, the patch looks good to me.
>
Thanks for the reply and the suggestion, I'll send a v3 patch later.
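
If I understand the suggestion correctly, the change on top of v2 would be
roughly the following (just a sketch to confirm my understanding, not the
actual v3 patch):

 		del_page_from_free_list(page_head, zone, page_order);
 		break_down_buddy_pages(zone, page_head, page, 0,
 					page_order, migratetype);
-		__mod_zone_freepage_state(zone, -1, migratetype);
+		/* Isolated pageblocks account their free pages separately
+		 * (see set_migratetype_isolate()), so skip them here.
+		 */
+		if (!is_migrate_isolate(migratetype))
+			__mod_zone_freepage_state(zone, -1, migratetype);
 		ret = true;
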
>> ret = true;
>> break;
>> }
>> --
>> 2.17.1
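
As background for the v2 note above (using __mod_zone_freepage_state() instead
of __mod_zone_page_state()): if I read the helper correctly, it is roughly a
thin wrapper like

	static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
						     int migratetype)
	{
		/* Adjust the zone free page counter ... */
		__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
		/* ... and keep the CMA free counter in sync for CMA pageblocks. */
		if (is_migrate_cma(migratetype))
			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
	}

so it also keeps NR_FREE_CMA_PAGES consistent for CMA pageblocks, which a bare
__mod_zone_page_state(zone, NR_FREE_PAGES, ...) call would not.
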
--
Thanks,
- Ding Hui