Message-ID: <20200219150215.GU3466@techsingularity.net>
Date: Wed, 19 Feb 2020 15:02:15 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: kvm@...r.kernel.org, david@...hat.com, mst@...hat.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, yang.zhang.wz@...il.com,
pagupta@...hat.com, konrad.wilk@...cle.com, nitesh@...hat.com,
riel@...riel.com, willy@...radead.org, lcapitulino@...hat.com,
dave.hansen@...el.com, wei.w.wang@...el.com, aarcange@...hat.com,
pbonzini@...hat.com, dan.j.williams@...el.com, mhocko@...nel.org,
alexander.h.duyck@...ux.intel.com, vbabka@...e.cz,
osalvador@...e.de
Subject: Re: [PATCH v17 8/9] mm/page_reporting: Add budget limit on how many
pages can be reported per pass
On Tue, Feb 11, 2020 at 02:47:19PM -0800, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
>
> In order to keep ourselves from reporting pages that are just going to be
> reused again in the case of heavy churn, we can put a limit on how many
> total pages we will process per pass. Doing this will allow the worker
> thread to go idle much more quickly, so that we avoid competing with
> other threads that might be allocating or freeing pages.
>
> The logic added here will limit the worker thread to no more than one
> sixteenth of the total free pages in a given area per list. Once that
> limit is reached it will update the state so that at the end of the pass
> we will reschedule the worker to try again in 2 seconds, when the memory
> churn has hopefully settled down.
>
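[ To make the budget logic above concrete, here is a small stand-alone C
  model of a budget-limited reporting pass. The names (report_pass,
  PAGES_PER_REPORT, REPORTING_DELAY_S) are purely illustrative and are not
  the identifiers used in the patch itself; this is a sketch of the idea,
  not the kernel implementation. ]

#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_REPORT   32   /* illustrative batch size */
#define REPORTING_DELAY_S  2    /* retry delay once the budget is spent */

/* Model of one reporting pass over a single free list. */
static bool report_pass(unsigned long area_free_pages,
                        unsigned long list_pages,
                        unsigned long *resched_delay)
{
        /* Budget: no more than 1/16th of the free pages in the area. */
        unsigned long budget = area_free_pages / 16;
        unsigned long reported = 0;

        while (reported < list_pages) {
                unsigned long batch = list_pages - reported;

                if (batch > PAGES_PER_REPORT)
                        batch = PAGES_PER_REPORT;

                if (budget < batch) {
                        /*
                         * Budget exhausted: stop and ask the caller to
                         * reschedule the worker once churn has settled.
                         */
                        *resched_delay = REPORTING_DELAY_S;
                        return false;
                }

                budget -= batch;
                reported += batch;
                /* ... hand `batch` pages to the hypervisor here ... */
        }

        return true;    /* list fully processed within budget */
}

int main(void)
{
        unsigned long delay = 0;

        if (!report_pass(1UL << 18, 1UL << 16, &delay))
                printf("budget spent, retry in %lus\n", delay);
        else
                printf("list fully reported\n");

        return 0;
}

[ Compiled stand-alone, the model stops once a sixteenth of the free pages
  have been handed over and requests a 2 second retry, which is the
  behaviour the paragraph above describes. ]
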
> Again this optimization doesn't show much of a benefit in the standard
> case as the memory churn is minimal. However, with page allocator
> shuffling enabled the gain is quite noticeable. Below are the results
> with a THP enabled version of the will-it-scale page_fault1 test showing
> the improvement in iterations for 16 processes or threads.
>
> Without:
> tasks  processes   processes_idle  threads     threads_idle
> 16     8283274.75  0.17            5594261.00  38.15
>
> With:
> tasks  processes   processes_idle  threads     threads_idle
> 16     8767010.50  0.21            5791312.75  36.98
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Seems fair. The test case you used would have been pounding on the zone
lock at fairly high frequency, so it represents a worst-case scenario,
but not necessarily an unrealistic one.
Acked-by: Mel Gorman <mgorman@...hsingularity.net>
--
Mel Gorman
SUSE Labs