Message-ID: <20200219200527.GF11847@dhcp22.suse.cz>
Date: Wed, 19 Feb 2020 21:05:27 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Sultan Alsawaf <sultan@...neltoast.com>
Cc: Dave Hansen <dave.hansen@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH] mm: Stop kswapd early when nothing's waiting for it to
free pages
[Oops, for some reason I missed Dave's response earlier]
On Wed 19-02-20 11:40:06, Sultan Alsawaf wrote:
> On Wed, Feb 19, 2020 at 11:13:21AM -0800, Dave Hansen wrote:
> > On 2/19/20 10:25 AM, Sultan Alsawaf wrote:
> > > Keeping kswapd running when all the failed allocations that invoked it
> > > are satisfied incurs a high overhead due to unnecessary page eviction
> > > and writeback, as well as spurious VM pressure events to various
> > > registered shrinkers. When kswapd doesn't need to work to make an
> > > allocation succeed anymore, stop it prematurely to save resources.
> >
> > But kswapd isn't just there to provide memory to waiters. It also
> > serves to get free memory back up to the high watermark. This seems
> > like it might result in more frequent allocation stalls and kswapd
> > wakeups, which consume extra resources.
Agreed, as expressed in my other reply.
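For concreteness, my reading of the changelog is an early-stop check
along these lines (purely a hypothetical sketch of the idea, not the
actual patch code; the counter and helper names are made up):

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical: how many allocators are currently waiting on kswapd. */
static atomic_int kswapd_waiters;

static void allocator_enters_slowpath(void)
{
	atomic_fetch_add(&kswapd_waiters, 1);	/* before waking kswapd */
}

static void allocator_satisfied(void)
{
	atomic_fetch_sub(&kswapd_waiters, 1);	/* allocation succeeded */
}

static bool kswapd_should_stop_early(void)
{
	/* nobody is waiting on us anymore, so stop reclaiming early */
	return atomic_load(&kswapd_waiters) == 0;
}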
> > I guess I'd wonder what positive effects you have observed as a result
> > of this patch and whether you've gone looking for any negative effects.
>
> This patch essentially stops kswapd from going overboard when a failed
> allocation fires up kswapd. Otherwise, when memory pressure is really high,
> kswapd just chomps through CPU time freeing pages nonstop when it isn't needed.
Could you be more specific, please? kswapd should stop as soon as the
high watermark is reached. If that is not the case then there is a bug
which should be fixed.
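For reference, that stop condition looks roughly like this (a
simplified sketch loosely modeled on pgdat_balanced() and
zone_watermark_ok() in mm/vmscan.c; names and fields are abbreviated
for illustration, not the literal kernel code):

/*
 * kswapd keeps reclaiming until at least one eligible zone has free
 * pages above its high watermark, then goes back to sleep.
 */
#include <stdbool.h>

struct zone_sketch {
	bool populated;
	unsigned long free_pages;	/* ~NR_FREE_PAGES */
	unsigned long high_wmark;	/* ~high_wmark_pages(zone) */
};

static bool node_balanced(const struct zone_sketch *zones, int nr_zones)
{
	for (int i = 0; i < nr_zones; i++) {
		if (!zones[i].populated)
			continue;
		/* stand-in for zone_watermark_ok_safe(..., high_wmark, ...) */
		if (zones[i].free_pages >= zones[i].high_wmark)
			return true;	/* balanced, kswapd can sleep */
	}
	return false;	/* keep reclaiming */
}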
Sure, it is quite possible that kswapd stays busy for an extended
amount of time if the memory pressure is continuous.
> On a constrained system I tested (mem=2G), this patch had the positive effect of
> improving overall responsiveness at high memory pressure.
Again, do you have more details about the workload and what the cause
of the responsiveness issues was? I would expect quite the opposite,
because it is usually direct reclaim that is the source of stalls
visible from userspace. Or is this about a single-CPU situation where
kswapd saturates the single CPU and all other tasks are just not
getting enough CPU cycles?
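To make the distinction concrete, the allocator slow path has roughly
this shape (a heavily simplified sketch of __alloc_pages_slowpath()
in mm/page_alloc.c; the helper names below are made up):

typedef unsigned int gfp_t;
struct page;

/* Hypothetical stand-ins for the real kernel internals. */
void wake_kswapd(unsigned int order);	/* ~wakeup_kswapd() */
struct page *try_freelists(gfp_t gfp, unsigned int order); /* ~get_page_from_freelist() */
struct page *direct_reclaim_and_retry(gfp_t gfp, unsigned int order);

struct page *alloc_pages_slowpath(gfp_t gfp, unsigned int order)
{
	struct page *page;

	/* Kick background reclaim: asynchronous, the caller does not stall. */
	wake_kswapd(order);

	page = try_freelists(gfp, order);
	if (page)
		return page;	/* kswapd kept us above the watermarks */

	/*
	 * Direct reclaim: the allocating task frees pages itself and
	 * blocks while doing so - this is the userspace-visible stall.
	 */
	return direct_reclaim_and_retry(gfp, order);
}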
> On systems with more memory I tested (>=4G), kswapd becomes more expensive to
> run at its higher scan depths, so stopping kswapd prematurely when there aren't
> any memory allocations waiting for it prevents it from reaching the *really*
> expensive scan depths and burning through even more resources.
>
> Combine a large amount of memory with a slow CPU and the current problematic
> behavior of kswapd at high memory pressure shows. My personal test scenario for
> this was an arm64 CPU with a variable amount of memory (up to 4G RAM + 2G swap).
But still, somebody has to put the system into a balanced state, so
who is going to do all the work?
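As an aside on the "scan depths" mentioned above: they correspond to
the reclaim priority in mm/vmscan.c, where each unsuccessful pass
lowers the priority and the per-LRU scan target grows exponentially.
Roughly (illustrative sketch, made-up helper name):

#define DEF_PRIORITY	12	/* matches the kernel's starting priority */

static unsigned long scan_target(unsigned long lru_pages, int priority)
{
	/* priority 12 -> lru_pages/4096; priority 0 -> the whole list */
	return lru_pages >> priority;
}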
--
Michal Hocko
SUSE Labs