Message-ID: <4b63e949-16bc-f239-89ec-93898cb4d772@linux.alibaba.com>
Date: Wed, 8 Apr 2020 11:42:55 -0700
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: akpm@...ux-foundation.org, Andrea Arcangeli <aarcange@...hat.com>,
Zi Yan <ziy@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCHv2 4/8] khugepaged: Drain LRU add pagevec after swapin
On 4/8/20 6:05 AM, Kirill A. Shutemov wrote:
> On Mon, Apr 06, 2020 at 11:29:11AM -0700, Yang Shi wrote:
>>
>> On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
>>> __collapse_huge_page_isolate() may fail due to an extra pin in the LRU add
>>> pagevec. It's pretty common for the swapin case: we swap in pages just to
>>> fail due to the extra pin.
>>>
>>> Drain the LRU add pagevec on successful swapin.
>>>
>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>>> ---
>>> mm/khugepaged.c | 5 +++++
>>> 1 file changed, 5 insertions(+)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index fdc10ffde1ca..57ff287caf6b 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -940,6 +940,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>>> }
>>> vmf.pte--;
>>> pte_unmap(vmf.pte);
>>> +
>>> + /* Drain LRU add pagevec to remove extra pin on the swapped in pages */
>>> + if (swapped_in)
>>> + lru_add_drain();
>> There is already lru_add_drain() called in swap readahead path, please see
>> swap_vma_readahead() and swap_cluster_readahead().
> But not for synchronous case. See SWP_SYNCHRONOUS_IO branch in
> do_swap_page().
Aha, yes. I missed the synchronous case.
>
> Maybe we should drain it in swap_readpage() or in do_swap_page() after
> swap_readpage()? I donno.
It may be better to keep it as is. Draining the LRU cache for every page
in the do_swap_page() path for the synchronous case does not sound very
productive. Doing it in khugepaged seems acceptable. For the
non-synchronous case we just drain the LRU cache one more time, but it
may already be empty, so the drain should take very little time since
there is nothing to do.
>