Message-ID: <20150715235944.GA3970@swordfish>
Date: Thu, 16 Jul 2015 08:59:44 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [PATCH 3/3] zsmalloc: do not take class lock in
zs_pages_to_compact()
Hi,
On (07/16/15 08:38), Minchan Kim wrote:
> > > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > > index b10a228..824c182 100644
> > > --- a/mm/zsmalloc.c
> > > +++ b/mm/zsmalloc.c
> > > @@ -1811,9 +1811,7 @@ unsigned long zs_pages_to_compact(struct zs_pool *pool)
> > > if (class->index != i)
> > > continue;
> > >
> > > - spin_lock(&class->lock);
> > > pages_to_free += zs_can_compact(class);
> > > - spin_unlock(&class->lock);
> > > }
> > >
> > > return pages_to_free;
> >
> > This patch still makes sense. Agree?
>
> There is already a race window between shrink_count and shrink_slab, so
> it would be okay to return a stale stat once the lock is removed, as
> long as the difference is not huge.
>
> Besides, we don't obey the shrinker's nr_to_scan in zs_shrinker_scan
> now, so such accuracy would be pointless anyway.
Yeah, the automatic shrinker may run concurrently with a user-triggered
one, so it may be hard (time consuming) to release the exact number of
pages that we returned from _count(). We could look at `sc->nr_to_reclaim'
to avoid releasing more pages than the shrinker asked us to release, but
I'd probably prefer to keep the existing behaviour when we are called by
the shrinker.
OK, will resend later today.
-ss