Message-ID: <YKahMXCwDRlBksAU@cmpxchg.org>
Date: Thu, 20 May 2021 13:49:37 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Xu <peterx@...hat.com>, Hugh Dickins <hughd@...gle.com>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...riel.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Michal Hocko <mhocko@...nel.org>,
Dave Hansen <dave.hansen@...el.com>,
Tim Chen <tim.c.chen@...el.com>
Subject: Re: [PATCH] mm: move idle swap cache pages to the tail of LRU after
COW

On Thu, May 20, 2021 at 09:59:15AM +0800, Huang, Ying wrote:
> Johannes Weiner <hannes@...xchg.org> writes:
>
> > On Thu, May 20, 2021 at 09:22:45AM +0800, Huang, Ying wrote:
> >> Johannes Weiner <hannes@...xchg.org> writes:
> >>
> >> > On Wed, May 19, 2021 at 09:33:13AM +0800, Huang Ying wrote:
> >> >> diff --git a/mm/memory.c b/mm/memory.c
> >> >> index b83f734c4e1d..2b6847f4c03e 100644
> >> >> --- a/mm/memory.c
> >> >> +++ b/mm/memory.c
> >> >> @@ -3012,6 +3012,11 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> >> >> munlock_vma_page(old_page);
> >> >> unlock_page(old_page);
> >> >> }
> >> >> + if (page_copied && PageSwapCache(old_page) &&
> >> >> + !page_mapped(old_page) && trylock_page(old_page)) {
> >> >> + try_to_free_idle_swapcache(old_page);
> >> >> + unlock_page(old_page);
> >> >
> >> > If there are no more swap or pte references, can we just attempt to
> >> > free the page right away, like we do during regular unmap?
> >> >
> >> > if (page_copied)
> >> > free_swap_cache(old_page);
> >> > put_page(old_page);
> >>
> >> A previous version of the patch does roughly this.
> >>
> >> https://lore.kernel.org/lkml/20210113024241.179113-1-ying.huang@intel.com/
> >>
> >> But Linus had concerns about the overhead introduced in the hot COW path.
> >
> > Sorry, I had missed that thread.
> >
> > It sounds like there were the same concerns about the LRU shuffling
> > overhead in the COW path. Now we have numbers for that, but not the
> > free_swap_cache version. Would you be able to run the numbers for that
> > as well? It would be interesting to see how much the additional code
> > complexity buys us.
>
> The numbers for which workload? The workload that is used to evaluate
> this patch?

Yeah, the pmbench one from the changelog.
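
For context, here is a minimal sketch of the two variants under
discussion, at the end of wp_page_copy() in mm/memory.c where the last
local reference to old_page is dropped. try_to_free_idle_swapcache() is
the helper this patch proposes and is not in mainline;
free_swap_cache() is static to mm/swap_state.c at this point, so the
simpler variant assumes it is made visible to mm/memory.c:

	/* Sketch only, not the applied patch. */

	/* Variant A (this patch): if the old page is an otherwise
	 * unreferenced swap cache page, hand it to the proposed
	 * helper; per the patch subject, it moves the idle page to
	 * the tail of the LRU so reclaim can free it cheaply.
	 */
	if (page_copied && PageSwapCache(old_page) &&
	    !page_mapped(old_page) && trylock_page(old_page)) {
		try_to_free_idle_swapcache(old_page);
		unlock_page(old_page);
	}
	put_page(old_page);

	/* Variant B (suggested above): drop the swap cache entry
	 * immediately, mirroring what the regular unmap path does
	 * via free_page_and_swap_cache().
	 */
	if (page_copied)
		free_swap_cache(old_page);	/* assumes it is non-static */
	put_page(old_page);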