Message-ID: <20160627165723.GW21652@esperanza>
Date: Mon, 27 Jun 2016 19:57:23 +0300
From: Vladimir Davydov <vdavydov@...tuozzo.com>
To: Chen Feng <puck.chen@...ilicon.com>
CC: <akpm@...ux-foundation.org>, <hannes@...xchg.org>,
<mhocko@...e.com>, <vbabka@...e.cz>, <mgorman@...hsingularity.net>,
<riel@...hat.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <labbott@...hat.com>,
<suzhuangluan@...ilicon.com>, <oliver.fu@...ilicon.com>,
<puck.chen@...mail.com>, <dan.zhao@...ilicon.com>,
<saberlily.xia@...ilicon.com>, <xuyiping@...ilicon.com>
Subject: Re: [PATCH] mm, vmscan: set shrinker to the left page count
On Mon, Jun 27, 2016 at 07:02:15PM +0800, Chen Feng wrote:
> On my platform, a lot of memory can be cached in the
> ion page pool. When memory is shrunk, the nr_to_scan
> passed to ion is always too little:
>
> to_scan: 395 ion_pool_cached: 27305
That's OK. We want to shrink slabs gradually, not all at once.
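For reference, the scan target is derived roughly like this (simplified
from do_shrink_slab() in mm/vmscan.c around this kernel version; the
exact arithmetic may differ between releases):

	freeable = shrinker->count_objects(shrinker, shrinkctl);

	/* Pick up work deferred from earlier calls. */
	nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);

	total_scan = nr;
	delta = (4 * nr_scanned) / shrinker->seeks;	/* scale by scan pressure */
	delta *= freeable;
	do_div(delta, nr_eligible + 1);			/* fraction of LRU scanned */
	total_scan += delta;

So to_scan grows with both the pool size and the pressure currently
being applied to user memory; a small to_scan just means page reclaim
itself is not scanning aggressively yet.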
>
> Currently, the shrinker's nr_deferred is set to total_scan.
> But that is not what is really left in the shrinker.
And it shouldn't be. The idea behind nr_deferred is the following. A
shrinker may return SHRINK_STOP if the current allocation context
doesn't allow it to reclaim its objects (e.g. reclaiming inodes under
GFP_NOFS is deadlock prone). In that case we can't call the shrinker
right now, but if we simply forgot about the batch we were supposed to
reclaim at the current iteration, we could wind up with so many of
these objects that they start to exert unfairly high pressure on user
memory. So we add the amount that we wanted to scan but couldn't to
nr_deferred, so that we can catch up when we get to shrink_slab() with
a proper context.
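In (simplified, illustrative) code, the contract looks roughly like
this; the real do_shrink_slab() also handles batching limits, per-node
counters and overflow, which are omitted here:

	while (total_scan >= batch_size) {
		unsigned long ret;

		shrinkctl->nr_to_scan = batch_size;
		ret = shrinker->scan_objects(shrinker, shrinkctl);
		if (ret == SHRINK_STOP)
			break;		/* wrong context, e.g. GFP_NOFS */
		freed += ret;
		total_scan -= batch_size;
	}

	/*
	 * Whatever we wanted to scan but couldn't is remembered in
	 * nr_deferred, so a later call with a proper context catches up.
	 */
	if (total_scan > 0)
		new_nr = atomic_long_add_return(total_scan,
						&shrinker->nr_deferred[nid]);

Note that nr_deferred accumulates the unscanned part of the *scan
target*, not of the cache: setting it to freeable - freed, as the patch
does, would defer the entire remaining cache and defeat the gradual
shrinking described above.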
> Change it to
> freeable - freed.
>
> Signed-off-by: Chen Feng <puck.chen@...ilicon.com>
> ---
> mm/vmscan.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c4a2f45..1ce3fc4 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -357,8 +357,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> * manner that handles concurrent updates. If we exhausted the
> * scan, there is no need to do an update.
> */
> - if (total_scan > 0)
> - new_nr = atomic_long_add_return(total_scan,
> + if (freeable - freed > 0)
> + new_nr = atomic_long_add_return(freeable - freed,
> &shrinker->nr_deferred[nid]);
> else
> new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);