Message-ID: <20190809183424.GA22347@cmpxchg.org>
Date: Fri, 9 Aug 2019 14:34:24 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: Minchan Kim <minchan@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Miguel de Dios <migueldedios@...gle.com>,
Wei Wang <wvw@...gle.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Nicholas Piggin <npiggin@...il.com>
Subject: Re: [RFC PATCH] mm: drop mark_page_access from the unmap path
On Fri, Aug 09, 2019 at 02:43:24PM +0200, Michal Hocko wrote:
> On Tue 06-08-19 19:55:09, Minchan Kim wrote:
> > On Wed, Jul 31, 2019 at 09:21:01AM +0200, Michal Hocko wrote:
> > > On Wed 31-07-19 14:44:47, Minchan Kim wrote:
> [...]
> > > > As Nick mentioned in the description, without mark_page_accessed in
> > > > zapping part, repeated mmap + touch + munmap never activated the page
> > > > while several read(2) calls easily promote it.
> > >
> > > And is this really a problem? If we refault the same page then the
> > > refaults detection should catch it no? In other words is the above still
> > > a problem these days?
> >
> > I admit we have not been fair to them because read(2) syscall pages are
> > easily promoted regardless of zap timing unlike mmap-based pages.
> >
> > However, if we remove the mark_page_accessed call in zap_pte_range, it
> > would make them even more unfair: read(2)-accessed pages are easily
> > promoted while mmap-based pages have to go through a refault to be
> > promoted.
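[ Side note for anyone skimming the thread: the asymmetry described here
comes from the two-step aging in mark_page_accessed() - the first
reference only sets PG_referenced, the second one moves the page to the
active list. read(2) runs through it on every call, while a mapped page
only reaches it from the pte_young() check during zap. Roughly,
simplified from memory and not the exact kernel code:

	/* mm/swap.c, heavily simplified: activate on the second reference */
	void mark_page_accessed(struct page *page)
	{
		if (!PageActive(page) && PageReferenced(page)) {
			if (PageLRU(page))
				activate_page(page);	/* inactive -> active list */
			ClearPageReferenced(page);
		} else if (!PageReferenced(page)) {
			SetPageReferenced(page);	/* first touch only sets the flag */
		}
	}

	/* mm/memory.c, zap_pte_range(): the call the patch below removes */
	if (pte_young(ptent) && likely(!(vma->vm_flags & VM_SEQ_READ)))
		mark_page_accessed(page);
]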
>
> I have a really hard time following why special handling in the unmap
> path makes the overall state more reasonable.
>
> Anyway, let me throw the patch for further discussion. Nick, Mel,
> Johannes what do you think?
>
> From 3821c2e66347a2141358cabdc6224d9990276fec Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@...e.com>
> Date: Fri, 9 Aug 2019 14:29:59 +0200
> Subject: [PATCH] mm: drop mark_page_access from the unmap path
>
> Minchan has noticed that mark_page_accessed can take quite some time
> during unmap:
> : I had time to benchmark it by adding some trace_printk hooks between
> : pte_offset_map_lock and pte_unmap_unlock in zap_pte_range. The testing
> : device is a 2018 premium mobile device.
> :
> : I can get a 2ms delay rather easily when releasing 2M (ie, 512 pages)
> : when the task runs on a little core, even without any IPI or LRU lock
> : contention. It's already too heavy.
> :
> : If I remove activate_page, 35-40% of the zap_pte_range overhead is gone,
> : so most of the overhead (about 0.7ms) comes from activate_page via
> : mark_page_accessed. Thus, if there is LRU contention, that 0.7ms could
> : accumulate up to several ms.
>
> bf3f3bc5e734 ("mm: don't mark_page_accessed in fault path") replaced
> SetPageReferenced with mark_page_accessed, arguing that the former is not
> sufficient when mark_page_accessed is removed from the fault path
> because it doesn't promote the page to the active list. It is true that a
> page that is mapped by a single process might not get promoted even when
> referenced if reclaim checks it after the unmap, but does that matter
> that much? Can we consider the page hot if there are no other
> users? Moreover, we have had workingset detection in place since then,
> so a subsequent refault would activate the page if it really was a hot one.
I do think the pages can be very hot. Think of short-lived executables
and their libraries. Like shell commands. When they run a few times or
periodically, they should be promoted to the active list and not have
to compete with streaming IO on the inactive list - the PG_referenced
flag doesn't really help them there, see page_check_references().
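[ For reference, a trimmed sketch of the relevant logic, from memory
rather than the exact code: once munmap has torn down the last pte,
referenced_ptes is zero and the PG_referenced bit alone can at most get
the page treated as clean-reclaimable, never activated.

	referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
					  &vm_flags);
	referenced_page = TestClearPageReferenced(page);

	if (referenced_ptes) {
		/* still mapped: keep or activate (VM_EXEC, second ref, ...) */
	}

	/* no mapped ptes left: PG_referenced only downgrades the reclaim */
	if (referenced_page && !PageSwapBacked(page))
		return PAGEREF_RECLAIM_CLEAN;
	return PAGEREF_RECLAIM;
]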
Maybe the refaults will be fine - but latency expectations around
mapped page cache are certainly a lot higher than around unmapped cache.
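[ The refault-time activation relied on above is, roughly and from
memory, the distance check in workingset_refault(): the page only gets
PG_active back if its refault distance fits within the active file
list - i.e. after it has already been reclaimed and re-read, which is
exactly the latency cost in question.

	/* mm/workingset.c, workingset_refault(), heavily trimmed sketch */
	refault_distance = (refault - eviction) & EVICTION_MASK;
	if (refault_distance > active_file)
		goto out;		/* not part of the workingset */
	SetPageActive(page);		/* hot again, but only after a
					   full reclaim + refault */
]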
So I'm a bit reluctant about this patch. If Minchan can be happy with
the lock batching, I'd prefer that.
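[ The lock batching referred to here would look roughly like the
following - purely illustrative, not Minchan's actual patch: drop the
pte lock and reschedule every N entries in zap_pte_range(), so the
mark_page_accessed()/activate_page() work can't stretch a single lock
hold into the millisecond range.

	/* zap_pte_range() inner loop - illustrative sketch only */
	if (++batch_count >= SWAP_CLUSTER_MAX) {
		batch_count = 0;
		if (need_resched() || spin_needbreak(ptl)) {
			force_break = true;
			break;	/* unlock, cond_resched(), then continue */
		}
	}
]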