Message-ID: <c3c26c7a-748c-6090-67f4-3014bedea2e6@linux.alibaba.com>
Date: Mon, 13 May 2019 14:09:59 -0700
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: ying.huang@...el.com, hannes@...xchg.org,
mgorman@...hsingularity.net, kirill.shutemov@...ux.intel.com,
hughd@...gle.com, shakeelb@...gle.com,
william.kucharski@...cle.com, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH] mm: vmscan: correct nr_reclaimed for THP
On 5/13/19 1:09 AM, Michal Hocko wrote:
> On Sat 11-05-19 00:23:40, Yang Shi wrote:
>> Since commit bd4c82c22c36 ("mm, THP, swap: delay splitting THP after
>> swapped out"), THP can be swapped out as a whole. But nr_reclaimed
>> still gets incremented by just one even though a whole THP (512 pages)
>> gets swapped out.
>>
>> This doesn't make much sense for memory reclaim. For example, direct
>> reclaim may need to reclaim just SWAP_CLUSTER_MAX pages, and reclaiming
>> one THP could fulfill that. But, if nr_reclaimed is not increased
>> correctly, direct reclaim may waste time reclaiming more pages, up to
>> SWAP_CLUSTER_MAX * 512 pages in the worst case.
> You are technically right here. This has been a known issue for a while.
> I am wondering whether somebody actually noticed some misbehavior.
> Swapping out is a rare event and you have to have a considerable number
> of THPs to notice.
The commit bd4c82c22c36 ("mm, THP, swap: delay splitting THP after
swapped out") was added in 4.14, so it might not be widely used yet. And
we need swap + THP to trigger it; furthermore, the misbehavior may
accumulate over time. I don't expect any obvious misbehavior from a
single shot.
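To make the worst case from the changelog concrete, here is a toy
userspace model (not kernel code; it just assumes the usual x86_64
values of SWAP_CLUSTER_MAX = 32 and 512 base pages per 2MB THP):

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32   /* direct reclaim target, in base pages */
#define HPAGE_PMD_NR     512  /* base pages per 2MB THP */

int main(void)
{
	unsigned long nr_reclaimed = 0;   /* what the buggy counter sees */
	unsigned long actually_freed = 0; /* base pages really freed */

	/* Each iteration swaps out one whole THP. */
	while (nr_reclaimed < SWAP_CLUSTER_MAX) {
		nr_reclaimed++;                  /* buggy: THP counted as 1 page */
		actually_freed += HPAGE_PMD_NR;  /* reality: 512 pages freed */
	}

	/* Prints: wanted 32 pages, freed 16384 */
	printf("wanted %d pages, freed %lu\n",
	       SWAP_CLUSTER_MAX, actually_freed);
	return 0;
}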
>
>> This change may result in more reclaimed pages than scanned pages as
>> shown by /proc/vmstat, since scanning one head page reclaims 512 base
>> pages.
> This is quite nasty and confusing. I am worried that having those two
> out of sync begs for subtle issues. Can we account a THP as scanning
> 512 base pages as well?
Actually, such a mismatch has existed for a while. isolate_lru_pages()
returns nr_taken, which accounts a THP as 512 base pages, but nr_scanned
just shows 1. nr_taken is not shown via /proc/vmstat, but it is shown
via a tracepoint.
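Roughly, the pattern is like the standalone model below (it borrows the
names nr_taken, nr_scanned and hpage_nr_pages from mm/vmscan.c, but
everything else, including struct page, is a stand-in, not the actual
kernel loop):

#include <stdio.h>
#include <stdbool.h>

#define HPAGE_PMD_NR 512

struct page { bool is_thp_head; };  /* stand-in, not the kernel struct */

static unsigned long hpage_nr_pages(const struct page *page)
{
	return page->is_thp_head ? HPAGE_PMD_NR : 1;
}

int main(void)
{
	/* One THP head plus two base pages on a fake LRU. */
	struct page lru[] = { { true }, { false }, { false } };
	unsigned long nr_taken = 0, nr_scanned = 0;
	unsigned int i;

	for (i = 0; i < sizeof(lru) / sizeof(lru[0]); i++) {
		nr_taken += hpage_nr_pages(&lru[i]); /* THP-aware */
		nr_scanned++;                        /* head-page granularity */
	}

	/* Prints: taken=514 scanned=3 */
	printf("taken=%lu scanned=%lu\n", nr_taken, nr_scanned);
	return 0;
}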
I think we can just account 512 base pages in nr_scanned for
isolate_lru_pages() to make the counters sane, since PGSCAN_KSWAPD/DIRECT
just use it.
And sc->nr_scanned should be accounted as 512 base pages too; otherwise
we may have nr_scanned < nr_to_reclaim all the time, resulting in a
false negative for the priority raise and other breakage (e.g. wrong
vmpressure).
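As a rough illustration (a toy ratio in the spirit of vmpressure, not
the kernel's actual formula), mixed granularity can make pressure read
as zero:

#include <stdio.h>

/* Toy pressure metric: how much of what we scanned we failed to
 * reclaim, in percent. Not the kernel's exact vmpressure formula. */
static unsigned long pressure(unsigned long scanned, unsigned long reclaimed)
{
	if (reclaimed >= scanned)
		return 0;	/* reclaimed everything we scanned */
	return 100 - 100 * reclaimed / scanned;
}

int main(void)
{
	/* 100 base pages scanned but not reclaimed, plus one THP that
	 * got swapped out. Consistent accounting (THP = 512 scanned /
	 * 512 reclaimed) still shows some pressure. */
	printf("consistent: %lu%%\n", pressure(612, 512)); /* 17% */

	/* Mixed accounting (THP = 1 scanned / 512 reclaimed): reclaimed
	 * exceeds scanned, so the metric reads zero pressure no matter
	 * how hard reclaim struggled. */
	printf("mixed: %lu%%\n", pressure(101, 512));      /* 0% */

	return 0;
}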
>
>> Cc: "Huang, Ying" <ying.huang@...el.com>
>> Cc: Johannes Weiner <hannes@...xchg.org>
>> Cc: Michal Hocko <mhocko@...e.com>
>> Cc: Mel Gorman <mgorman@...e.de>
>> Cc: "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
>> Cc: Hugh Dickins <hughd@...gle.com>
>> Reviewed-by: Shakeel Butt <shakeelb@...gle.com>
>> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
>> ---
>> v2: Added Shakeel's Reviewed-by
>> Use hpage_nr_pages instead of compound_order per Huang Ying and William Kucharski
>>
>> mm/vmscan.c | 6 +++++-
>> 1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index fd9de50..4226d6b 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1446,7 +1446,11 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>>
>> unlock_page(page);
>> free_it:
>> - nr_reclaimed++;
>> + /*
>> + * THP may get swapped out as a whole, so we need to
>> + * account all of its base pages.
>> + */
>> + nr_reclaimed += hpage_nr_pages(page);
>>
>> /*
>> * Is there need to periodically free_page_list? It would
>> --
>> 1.8.3.1
>>