Message-Id: <20140113155527.5731d24ca86f01bfd5ec716f@linux-foundation.org>
Date: Mon, 13 Jan 2014 15:55:27 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Minchan Kim <minchan@...nel.org>
Cc: linux-kernel@...r.kernel.org, Nitin Gupta <ngupta@...are.org>,
Jerome Marchand <jmarchan@...hat.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
stable <stable@...r.kernel.org>
Subject: Re: [PATCH 1/7] zram: fix race between reset and flushing pending
work
On Mon, 13 Jan 2014 20:18:56 +0900 Minchan Kim <minchan@...nel.org> wrote:
> Dan and Sergey reported a race between reset and the flushing of
> pending work: reset can oops by freeing zram->meta while
> zram_slot_free() may still access zram->meta if a new request
> arrives during the race window.
>
> This patch moves the flush to after init_lock is taken, which
> blocks new requests and so closes the race.
>
> ..
>
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -553,14 +553,14 @@ static void zram_reset_device(struct zram *zram, bool reset_capacity)
> size_t index;
> struct zram_meta *meta;
>
> - flush_work(&zram->free_work);
> -
> down_write(&zram->init_lock);
> if (!zram->init_done) {
> up_write(&zram->init_lock);
> return;
> }
>
> + flush_work(&zram->free_work);
> +
> meta = zram->meta;
> zram->init_done = 0;
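To make the race concrete, here is a minimal userspace analog (a
sketch only: the names handle_request() and flush_pending_work() are
hypothetical, and pthreads stand in for the kernel workqueue and
flush_work()). It shows the point of the patch: the flush must happen
after taking the lock that blocks new submissions, otherwise a request
racing with reset can queue work that later dereferences freed meta.

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct meta {
	int *table;
};

static pthread_rwlock_t init_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;
static struct meta *meta;
static bool work_pending;

/* Deferred "slot free" work: it touches meta, so it must never run
 * after reset has freed meta. */
static void slot_free_work(void)
{
	pthread_mutex_lock(&work_lock);
	if (work_pending && meta)
		meta->table[0] = 0;
	work_pending = false;
	pthread_mutex_unlock(&work_lock);
}

/* I/O path: holds init_lock for read, so it is excluded while reset
 * holds init_lock for write. */
static void handle_request(void)
{
	pthread_rwlock_rdlock(&init_lock);
	pthread_mutex_lock(&work_lock);
	work_pending = true;		/* "schedule" deferred work */
	pthread_mutex_unlock(&work_lock);
	pthread_rwlock_unlock(&init_lock);
}

/* Analog of flush_work(): run any pending work synchronously. */
static void flush_pending_work(void)
{
	slot_free_work();
}

static void reset_device(void)
{
	pthread_rwlock_wrlock(&init_lock);	/* blocks new requests */
	/* Flushing *before* taking init_lock (the old order) leaves a
	 * window where handle_request() queues work that then runs
	 * against freed meta.  Flushing here is safe: nothing can
	 * requeue work until init_lock is released. */
	flush_pending_work();
	if (meta) {
		free(meta->table);
		free(meta);
		meta = NULL;
	}
	pthread_rwlock_unlock(&init_lock);
}

int main(void)
{
	meta = calloc(1, sizeof(*meta));
	meta->table = calloc(16, sizeof(int));
	handle_request();
	reset_device();
	return 0;
}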
This makes zram.lock nest inside zram.init_lock, which afaict is new
behaviour.
That all seems OK and logical - has it been well tested with lockdep?
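For reference, the nesting arises because the work handler itself
takes zram->lock; a sketch from memory of this era's zram_drv.c, not a
verbatim quote:

/* The free_work handler takes zram->lock (inner): */
static void zram_slot_free(struct work_struct *work)
{
	struct zram *zram = container_of(work, struct zram, free_work);

	down_write(&zram->lock);
	/* ... drain the pending slot-free list ... */
	up_write(&zram->lock);
}

/* With the patch, it is flushed under init_lock (outer): */
static void zram_reset_device(struct zram *zram, bool reset_capacity)
{
	down_write(&zram->init_lock);
	/*
	 * flush_work() waits for zram_slot_free() to finish, so
	 * lockdep records zram->lock as taken inside zram->init_lock,
	 * the new ordering noted above.
	 */
	flush_work(&zram->free_work);
	/* ... free zram->meta, etc. ... */
	up_write(&zram->init_lock);
}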