Message-Id: <20151124150227.78c9e39b789f593c5216471e@linux-foundation.org>
Date: Tue, 24 Nov 2015 15:02:27 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Vladimir Davydov <vdavydov@...tuozzo.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>,
Dave Chinner <david@...morbit.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] vmscan: fix slab vs lru balance
On Tue, 24 Nov 2015 15:47:21 +0300 Vladimir Davydov <vdavydov@...tuozzo.com> wrote:
> The comment above shrink_slab states that the fraction of kmem objects
> scanned by it equals the fraction of lru pages scanned by shrink_zone,
> divided by shrinker->seeks.
>
> shrinker->seeks is supposed to be equal to the number of disk seeks
> required to recreate an object. It is usually set to DEFAULT_SEEKS (2),
> which is quite logical, because most kmem objects (e.g. dentry or inode)
> require random IO to reread (seek to read and seek back).
>
> Given that, one would expect dcache to be scanned half as intensively as
> page cache, which sounds sane, as dentries are generally more costly to
> recreate.
>
> However, the formula for distributing memory pressure between slab and
> lru actually looks as follows (see do_shrink_slab):
>
>                                   lru_scanned
>   objs_to_scan = objs_total * --------------- * 4 / shrinker->seeks
>                                lru_reclaimable
>
> That is, dcache, as well as most other slab caches, is scanned twice as
> aggressively as page cache.
>
> Fix this by dropping '4' from the equation above.
>
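(For illustration only, a standalone sketch of the scaling described above; the
helper objs_to_scan() and the sample numbers are made up, only the variable
names and DEFAULT_SEEKS come from the quoted text, and this is not the actual
mm/vmscan.c code:)

#include <stdio.h>

#define DEFAULT_SEEKS 2

/* Sketch of the quoted formula, with and without the extra factor of 4. */
static unsigned long long objs_to_scan(unsigned long long objs_total,
				       unsigned long long lru_scanned,
				       unsigned long long lru_reclaimable,
				       unsigned int seeks, int with_factor_4)
{
	unsigned long long delta;

	delta = (with_factor_4 ? 4 : 1) * (lru_scanned / seeks);
	delta *= objs_total;
	return delta / (lru_reclaimable + 1);
}

int main(void)
{
	/* Made-up load: 1000 of 10000 reclaimable lru pages scanned,
	 * 10000 slab objects, DEFAULT_SEEKS. */
	printf("current:  %llu\n",
	       objs_to_scan(10000, 1000, 10000, DEFAULT_SEEKS, 1));
	printf("proposed: %llu\n",
	       objs_to_scan(10000, 1000, 10000, DEFAULT_SEEKS, 0));
	return 0;
}

With those made-up numbers 10% of the lru is scanned, the current formula scans
roughly 20% of the slab objects, and the proposed one roughly 5%, which is the
"half as intensively" behaviour one would expect for DEFAULT_SEEKS.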
oh geeze. Who wrote that crap?
commit c3f4656118a78c1c294e0b4d338ac946265a822b
Author: Andrew Morton <akpm@...l.org>
Date: Mon Dec 29 23:48:44 2003 -0800
[PATCH] shrink_slab accounts for seeks incorrectly
wli points out that shrink_slab inverts the sense of shrinker->seeks: those
caches which require more seeks to reestablish an object are shrunk harder.
That's wrong - they should be shrunk less.
So fix that up, by scaling the result so that the patch is actually a no-op
at this time, because all caches use DEFAULT_SEEKS (2).
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b859482..f2da3c9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -154,7 +154,7 @@ static int shrink_slab(long scanned, unsigned int gfp_mask)
 	list_for_each_entry(shrinker, &shrinker_list, list) {
 		unsigned long long delta;
 
-		delta = scanned * shrinker->seeks;
+		delta = 4 * (scanned / shrinker->seeks);
 		delta *= (*shrinker->shrinker)(0, gfp_mask);
 		do_div(delta, pages + 1);
 		shrinker->nr += delta;
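(For what it's worth, plugging DEFAULT_SEEKS into the two versions shows why
that 2003 change was a no-op at the time; simple arithmetic, not part of the
original changelog:)

   old:  delta = scanned * seeks         = scanned * 2
   new:  delta = 4 * (scanned / seeks)   = scanned * 2

So with seeks == 2 the two are identical, while a cache with a larger 'seeks'
is now shrunk less rather than harder - at the cost of the stray factor of 4
that Vladimir is removing.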
What a pathetic changelog.
The current code may be good, it may be bad, but I'm reluctant to
change it without a solid demonstration that the result is overall
superior.