Message-ID: <20160115033015.GD1993@swordfish>
Date: Fri, 15 Jan 2016 12:30:15 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Minchan Kim <minchan@...nel.org>,
Junil Lee <junil0814.lee@....com>,
Andrew Morton <akpm@...ux-foundation.org>, ngupta@...are.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [PATCH] zsmalloc: fix migrate_zspage-zs_free race condition
On (01/15/16 12:27), Sergey Senozhatsky wrote:
> > > @@ -1635,6 +1635,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> > > free_obj = obj_malloc(d_page, class, handle);
> > > zs_object_copy(free_obj, used_obj, class);
> > > index++;
> > > + free_obj |= BIT(HANDLE_PIN_BIT);
> > > record_obj(handle, free_obj);
> >
> > I think record_obj() should store free_obj to *handle with the least
> > bit masked off. IOW, how about this?
> >
> > record_obj(handle, obj)
> > {
> > 	*(unsigned long *)handle = obj & ~(1UL << HANDLE_PIN_BIT);
> > }
>
> [just a wild idea]
>
> or zs_free() can take spin_lock(&class->lock) earlier, it cannot free the
> object until the class is locked anyway, and migration is happening with

                                  UNlocked

> the locked class. extending class->lock scope in zs_free() thus should
> not affect the performance. so it'll be either zs_free() touching the
> object or the migration, not both.
-ss