Message-ID: <20160115032712.GC1993@swordfish>
Date: Fri, 15 Jan 2016 12:27:12 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Junil Lee <junil0814.lee@....com>,
Andrew Morton <akpm@...ux-foundation.org>, ngupta@...are.org,
sergey.senozhatsky.work@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] zsmalloc: fix migrate_zspage-zs_free race condition
Cc Andrew,
On (01/15/16 11:35), Minchan Kim wrote:
[..]
> > Signed-off-by: Junil Lee <junil0814.lee@....com>
> > ---
> > mm/zsmalloc.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index e7414ce..bb459ef 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1635,6 +1635,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> >  		free_obj = obj_malloc(d_page, class, handle);
> >  		zs_object_copy(free_obj, used_obj, class);
> >  		index++;
> > +		free_obj |= BIT(HANDLE_PIN_BIT);
> >  		record_obj(handle, free_obj);
>
> I think record_obj() should store free_obj to *handle with the least bit
> masked off. IOW, how about this?
>
> record_obj(handle, obj)
> {
> 	*(unsigned long *)handle = obj & ~(1UL << HANDLE_PIN_BIT);
> }
[just a wild idea]
or zs_free() can take spin_lock(&class->lock) earlier; it cannot free the
object until the class is locked anyway, and migration happens with the
class locked. extending the class->lock scope in zs_free() thus should
not affect performance. so it'll be either zs_free() touching the object
or the migration, not both.
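
just to illustrate the idea -- a rough sketch only, not a tested patch:
helper names are the ones from mm/zsmalloc.c, and the ZS_EMPTY/stats/
free_handle() handling of the real zs_free() is dropped. the point is
only the re-ordering: take class->lock before pinning and re-reading the
handle, so the free path is fully serialized with compaction.

void zs_free(struct zs_pool *pool, unsigned long handle)
{
	struct page *first_page, *f_page;
	unsigned long obj, f_objidx;
	unsigned int class_idx;
	struct size_class *class;
	enum fullness_group fullness;

	if (unlikely(!handle))
		return;

	/*
	 * the class of a handle never changes, so it can be looked up
	 * from the (possibly stale) object location before class->lock
	 * is taken.
	 */
	obj = handle_to_obj(handle);
	obj_to_location(obj, &f_page, &f_objidx);
	first_page = get_first_page(f_page);
	get_zspage_mapping(first_page, &class_idx, &fullness);
	class = pool->size_class[class_idx];

	/* serialize against migrate_zspage(), which runs under class->lock */
	spin_lock(&class->lock);
	pin_tag(handle);

	/* re-read: migration may have re-pointed the handle meanwhile */
	obj = handle_to_obj(handle);
	obj_to_location(obj, &f_page, &f_objidx);
	first_page = get_first_page(f_page);

	obj_free(pool, class, obj);
	fix_fullness_group(class, first_page);
	/* ZS_EMPTY handling, stats and free_handle() omitted here */
	spin_unlock(&class->lock);
	unpin_tag(handle);
}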
-ss