Message-ID: <20160203161910.GA10440@cmpxchg.org>
Date: Wed, 3 Feb 2016 11:19:10 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Martijn Coenen <maco@...gle.com>
Cc: linux-mm@...ck.org, Anton Vorontsov <anton@...msg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Vladimir Davydov <vdavydov@...tuozzo.com>,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH] mm: vmpressure: make vmpressure_window a tunable.
On Wed, Feb 03, 2016 at 11:06:20AM +0100, Martijn Coenen wrote:
> The window size used for calculating vm pressure
> events was previously fixed at 512 pages. The
> window size has a big impact on the rate of notifications
> sent off to userspace, in particular when using the
> "low" level. On machines with a lot of memory, the
> current value may be excessive.
>
> On the other hand, making the window size depend on
> machine size does not allow userspace to change the
> notification rate based on the current state of the
> system. For example, when a lot of memory is still
> available, userspace may want to increase the window
> since it's not interested in receiving notifications
> for every 2MB scanned.
>
> This patch makes vmpressure_window a sysctl tunable.

If the machine is just cleaning up use-once cache, frequent events
make no sense. And if the machine is struggling, the notifications
had better arrive in time.

That's hardly a tunable. It's a factor that needs constant dynamic
adjustment depending on VM state - the same state this mechanism is
supposed to report. If we can't get this right, how will userspace?

A better approach here would be to 1) find a minimum window size that
makes us confident there are no false positives - this is likely to
be based on machine size, maybe the low watermark? - and 2) rate-limit
reporting of the lower levels, so you're not flooded with ALLGOOD!
events:

VMPRESSURE_CRITICAL: report every vmpressure_win
VMPRESSURE_MEDIUM: report every vmpressure_win*2
VMPRESSURE_LOW: report every vmpressure_win*4
Pick your favorite scaling factor here.
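
Roughly, something like the below - illustrative only, not actual
mm/vmpressure.c code; window_for_level() and the exact factors are
made up to show the shape of the idea:

	/*
	 * Sketch: report lower pressure levels less often by scaling
	 * the base window per level. Names mirror the existing
	 * vmpressure levels; the helper and factors are hypothetical.
	 */
	enum vmpressure_levels {
		VMPRESSURE_LOW = 0,
		VMPRESSURE_MEDIUM,
		VMPRESSURE_CRITICAL,
		VMPRESSURE_NUM_LEVELS,
	};

	/* Base window, e.g. derived from machine size / low watermark. */
	static unsigned long vmpressure_win = 512;

	/* Scan this many pages before reporting the given level again. */
	static unsigned long window_for_level(enum vmpressure_levels level)
	{
		switch (level) {
		case VMPRESSURE_CRITICAL:
			return vmpressure_win;		/* report promptly */
		case VMPRESSURE_MEDIUM:
			return vmpressure_win * 2;
		case VMPRESSURE_LOW:
		default:
			return vmpressure_win * 4;	/* throttle ALLGOOD noise */
		}
	}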