Message-ID: <022f2f7c-fc03-182a-1f8f-4b77c0731d4f@huawei.com>
Date: Mon, 12 Jul 2021 15:11:55 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Yu Zhao <yuzhao@...gle.com>
CC: <akpm@...ux-foundation.org>, <hannes@...xchg.org>,
<vbabka@...e.cz>, <mhocko@...e.com>, <axboe@...nel.dk>,
<iamjoonsoo.kim@....com>, <alexs@...nel.org>, <apopple@...dia.com>,
<willy@...radead.org>, <minchan@...nel.org>, <david@...hat.com>,
<shli@...com>, <hillf.zj@...baba-inc.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/5] mm/vmscan: put the redirtied MADV_FREE pages back to
anonymous LRU list
On 2021/7/11 7:22, Yu Zhao wrote:
> On Sat, Jul 10, 2021 at 4:03 AM Miaohe Lin <linmiaohe@...wei.com> wrote:
>>
>> If the MADV_FREE pages are redirtied before they could be reclaimed, put
>> the pages back to anonymous LRU list by setting SwapBacked flag and the
>> pages will be reclaimed in normal swapout way. Otherwise MADV_FREE pages
>> won't be reclaimed as expected.
>>
>> Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
>
> This is not a bug -- the dirty check isn't needed but it was copied
> from __remove_mapping().
Yes, this is not a bug and it is harmless. When we reach here, the page cannot be
dirty: PageDirty is handled above, and there is no way to redirty the page again
because all pagetable references are gone and the page is not in the swap cache.
>
> The page has only one reference left, which is from the isolation.
> After the caller puts the page back on lru and drops the reference,
> the page will be freed anyway. It doesn't matter which lru it goes.
But it still looks buggy from a code-reading point of view, as the branch does not
perform the expected operation. Should I drop the Fixes tag and send a v2?
Many thanks for your reply!
>
>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
>> ---
>> mm/vmscan.c | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index a7602f71ec04..6483fe0e2065 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1628,6 +1628,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>> if (!page_ref_freeze(page, 1))
>> goto keep_locked;
>> if (PageDirty(page)) {
>> + SetPageSwapBacked(page);
>> page_ref_unfreeze(page, 1);
>> goto keep_locked;
>> }
>> --
>> 2.23.0
>>
>>
> .
>