Message-ID: <YLYef3i2OGseGbsS@casper.infradead.org>
Date: Tue, 1 Jun 2021 12:48:15 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Huang Ying <ying.huang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Xu <peterx@...hat.com>, Hugh Dickins <hughd@...gle.com>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...riel.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Michal Hocko <mhocko@...nel.org>,
Dave Hansen <dave.hansen@...el.com>,
Tim Chen <tim.c.chen@...el.com>
Subject: Re: [PATCH] mm: free idle swap cache page after COW
On Tue, Jun 01, 2021 at 01:31:43PM +0800, Huang Ying wrote:
> With commit 09854ba94c6a ("mm: do_wp_page() simplification"), after
> COW, the idle swap cache page (neither the page nor the corresponding
> swap entry is mapped by any process) will be left in the LRU list,
> even if it is on the active list or at the head of the inactive list.
> So the page reclaimer may incur quite some overhead to reclaim these
> actually unused pages.
>
> To help page reclaim, in this patch, we try to free the idle swap
> cache page after COW. To avoid introducing much overhead to the hot
> COW code path,
>
> a) there's almost zero overhead for the non-swap case, because we
> check PageSwapCache() first.
>
> b) the page lock is acquired via trylock only.
>
> To test the patch, we used the pmbench memory accessing benchmark
> with a working set larger than the available memory on a 2-socket
> Intel server with an NVMe SSD as the swap device. Test results show
> that the pmbench score increases by up to 23.8%, with decreased swap
> cache size and swapin throughput.
So 2 percentage points better than my original idea? Sweet.
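For reference, free_swap_cache(), previously a static helper in
mm/swap_state.c which this patch exposes, is roughly the following
(a sketch from memory, not the exact tree); the PageSwapCache() check
and the trylock are what keep points a) and b) above cheap:

	void free_swap_cache(struct page *page)
	{
		/* non-swap pages bail out on the first, cheap check */
		if (PageSwapCache(page) && !page_mapped(page) &&
		    trylock_page(page)) {
			/* trylock only; never block in the COW path */
			try_to_free_swap(page);
			unlock_page(page);
		}
	}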
> diff --git a/mm/memory.c b/mm/memory.c
> index 2b7ffcbca175..d44425820240 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3104,6 +3104,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> munlock_vma_page(old_page);
> unlock_page(old_page);
> }
> + if (page_copied)
> + free_swap_cache(old_page);
> put_page(old_page);
> }
> return page_copied ? VM_FAULT_WRITE : 0;
Why not ...
	if (page_copied)
		free_page_and_swap_cache(old_page);
	else
		put_page(old_page);
then you don't need to expose free_swap_cache(). Or does the test for
huge_zero_page mess this up?
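For comparison, free_page_and_swap_cache() in mm/swap_state.c is, as
far as I remember, essentially the helper above plus the final
put_page(), with the huge zero page special-cased:

	void free_page_and_swap_cache(struct page *page)
	{
		free_swap_cache(page);
		if (!is_huge_zero_page(page))
			put_page(page);
	}

i.e. if old_page could ever be the huge zero page here, the suggested
form would skip the put_page() that wp_page_copy() currently does,
which is what the question above is about.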