Message-ID: <20150202024550.GE6402@blaptop>
Date: Mon, 2 Feb 2015 11:45:50 +0900
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>, Nitin Gupta <ngupta@...are.org>,
Jerome Marchand <jmarchan@...hat.com>,
Ganesh Mahendran <opensource.ganesh@...il.com>
Subject: Re: [PATCH v1 2/2] zram: remove init_lock in zram_make_request
On Mon, Feb 02, 2015 at 10:59:40AM +0900, Sergey Senozhatsky wrote:
> On (02/02/15 10:43), Minchan Kim wrote:
> > > static inline int init_done(struct zram *zram)
> > > {
> > > - return zram->meta != NULL;
> > > + return atomic_read(&zram->refcount);
> >
> > As I said in my previous mail, it could cause a livelock, so I want to use
> > disksize here to prevent further I/O handling.
>
> just as I said in my previous email -- is this livelock really possible?
> we need to umount the device to continue with reset, and umount will kill
> I/Os out of our way.
>
> the other reset caller is __exit zram_exit(), but once again, I don't
> expect that function to be executed on a mounted device while the module
> is in use.
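
For reference, a minimal sketch of the disksize-based check being discussed
(assuming the existing zram->disksize field; an illustration of the idea,
not the actual patch):

	static inline int init_done(struct zram *zram)
	{
		/*
		 * Check disksize rather than refcount: once reset clears
		 * disksize, new I/O sees the device as uninitialized and
		 * bails out, so it cannot keep the refcount raised forever.
		 */
		return zram->disksize != 0;
	}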
>
>
> > > +static inline void zram_put(struct zram *zram)
> > > +{
> > > + if (atomic_dec_and_test(&zram->refcount))
> > > + complete(&zram->io_done);
> > > +}
> >
> > Although I suggested this completion, it might be rather overkill (please
> > understand, it was midnight work. :))
> > Instead, we could use just atomic_dec() here and
> > wait_event(event, atomic_read(&zram->refcount) == 0) in reset.
> >
>
> yes, I think it can do the trick.
Hey, it's not a trick. It suits our goal well. A completion
was too much, I think.
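
For reference, a minimal sketch of the wait_event() variant (assuming a
wait_queue_head_t io_wait is added to struct zram; note the put side still
needs a wake_up(), since a bare atomic_dec() would risk a missed wakeup):

	static inline void zram_put(struct zram *zram)
	{
		/* the last dropper wakes up the waiting reset path */
		if (atomic_dec_and_test(&zram->refcount))
			wake_up(&zram->io_wait);
	}

	/* in the reset path, after init_done() fences off new I/O: */
	wait_event(zram->io_wait, atomic_read(&zram->refcount) == 0);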
>
> -ss
--
Kind regards,
Minchan Kim