Message-ID: <20130918105631.GS32145@phenom.ffwll.local>
Date: Wed, 18 Sep 2013 12:56:31 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Knut Petersen <Knut_Petersen@...nline.de>
Cc: Daniel Vetter <daniel.vetter@...ll.ch>,
Linux MM <linux-mm@...ck.org>, Rik van Riel <riel@...hat.com>,
Intel Graphics Development <intel-gfx@...ts.freedesktop.org>,
Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>,
DRI Development <dri-devel@...ts.freedesktop.org>,
Michal Hocko <mhocko@...e.cz>, Mel Gorman <mgorman@...e.de>,
Glauber Costa <glommer@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [Intel-gfx] [PATCH] [RFC] mm/shrinker: Add a shrinker flag to
always shrink a bit
On Wed, Sep 18, 2013 at 12:38:23PM +0200, Knut Petersen wrote:
> On 18.09.2013 11:10, Daniel Vetter wrote:
>
> Just now I prepared a patch changing the same function in vmscan.c.
> >Also, this needs to be rebased to the new shrinker api in 3.12, I
> >simply haven't rolled my trees forward yet.
>
> Well, you should. Since commit 81e49f shrinker->count_objects might
> return SHRINK_STOP, causing shrink_slab_node() to complain loudly and often:
>
> [ 1908.234595] shrink_slab: i915_gem_inactive_scan+0x0/0x9c negative objects to delete nr=-xxxxxxxxx
>
> The kernel emitted a few thousand log lines like the one quoted above during the
> last few days on my system.
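>
> A minimal user-space sketch of the arithmetic (deliberately simplified,
> not the kernel code itself): SHRINK_STOP is ~0UL, so once the value
> returned by ->count_objects lands in the signed scan-count bookkeeping
> of shrink_slab_node(), it reads as a negative number and the
> "negative objects to delete" check fires:
>
> 	#include <stdio.h>
>
> 	#define SHRINK_STOP (~0UL)	/* as in include/linux/shrinker.h */
>
> 	int main(void)
> 	{
> 		/* pretend ->count_objects() returned SHRINK_STOP */
> 		unsigned long count = SHRINK_STOP;
>
> 		/* shrink_slab_node() keeps its scan count in a signed long */
> 		long total_scan = (long)count;
>
> 		if (total_scan < 0)
> 			printf("shrink_slab: negative objects to delete nr=%ld\n",
> 			       total_scan);
> 		return 0;
> 	}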
>
> >diff --git a/mm/vmscan.c b/mm/vmscan.c
> >index 2cff0d4..d81f6e0 100644
> >--- a/mm/vmscan.c
> >+++ b/mm/vmscan.c
> >@@ -254,6 +254,10 @@ unsigned long shrink_slab(struct shrink_control *shrink,
> > total_scan = max_pass;
> > }
> >+ /* Always try to shrink a bit to make forward progress. */
> >+ if (shrinker->evicts_to_page_lru)
> >+ total_scan = max_t(long, total_scan, batch_size);
> >+
> At that place the error message is already emitted.
> > /*
> > * We need to avoid excessive windup on filesystem shrinkers
> > * due to large numbers of GFP_NOFS allocations causing the
>
> Have a look at the attached patch. It fixes my problem with the erroneous/misleading
> error messages, and I think it's right to just bail out early if SHRINK_STOP is found.
>
> Do you agree ?
Looking at the patch which introduced these error messages for you: it
changed the ->count_objects return value from 0 to SHRINK_STOP. Your
patch below, by treating 0 and SHRINK_STOP equally, simply reverts that
functional change.
I don't think that's the intention behind SHRINK_STOP. But if it's the
right thing to do we'd better revert the offending commit directly. And
since I lack clue here I think that's a call for the core mm guys to make.
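
My (possibly wrong) reading of the intended contract, sketched as a
dummy shrinker against the 3.12 API (the dummy_* helpers are made up
purely for illustration): ->count_objects only ever reports a count,
0 when nothing is freeable, while ->scan_objects is the callback that
may return SHRINK_STOP when it cannot make progress right now, e.g.
because it would have to take a lock it must not take.

	#include <linux/types.h>
	#include <linux/shrinker.h>

	/* placeholder helpers, assumed to exist only for this sketch */
	unsigned long dummy_nr_freeable(void);
	bool dummy_trylock(void);
	void dummy_unlock(void);
	unsigned long dummy_free_some(unsigned long nr);

	static unsigned long dummy_count(struct shrinker *s,
					 struct shrink_control *sc)
	{
		/* just report how much could be freed, 0 if nothing */
		return dummy_nr_freeable();
	}

	static unsigned long dummy_scan(struct shrinker *s,
					struct shrink_control *sc)
	{
		unsigned long freed;

		/* no forward progress possible right now: stop scanning */
		if (!dummy_trylock())
			return SHRINK_STOP;

		freed = dummy_free_some(sc->nr_to_scan);
		dummy_unlock();
		return freed;
	}

	static struct shrinker dummy_shrinker = {
		.count_objects	= dummy_count,
		.scan_objects	= dummy_scan,
		.seeks		= DEFAULT_SEEKS,
	};

If that reading is correct the i915 ->count_objects callback shouldn't
be returning SHRINK_STOP in the first place, but as said that's for the
mm folks to confirm.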
-Daniel
>
> cu,
> Knut
>
> From 75ae570ce7b0bb6b40c76beb18fc075e9af3127a Mon Sep 17 00:00:00 2001
> From: Knut Petersen <Knut_Petersen@...nline.de>
> Date: Wed, 18 Sep 2013 12:06:33 +0200
> Subject: [PATCH] mm: respect SHRINK_STOP in shrink_slab_node()
>
> Since commit 81e49f811404f428a9d9a63295a0c267e802fa12
> i915_gem_inactive_count() might return SHRINK_STOP.
>
> Unfortunately SHRINK_STOP is not handled properly in
> shrink_slab_node(), causing a system log cluttered with
> kernel error messages complaining about "negative objects
> to delete".
>
> I think the proper way of handling SHRINK_STOP is obvious:
> we should obey ;-)
>
> Signed-off-by: Knut Petersen <Knut_Petersen@...nline.de>
> ---
> mm/vmscan.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 8ed1b77..b1e6f0d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -244,6 +244,8 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
> max_pass = shrinker->count_objects(shrinker, shrinkctl);
> if (max_pass == 0)
> return 0;
> + if (max_pass == SHRINK_STOP)
> + return 0;
>
> /*
> * copy the current shrinker scan count into a local variable
> --
> 1.8.1.4
>
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch