Message-ID: <20140227213252.GA18830@quack.suse.cz>
Date: Thu, 27 Feb 2014 22:32:52 +0100
From: Jan Kara <jack@...e.cz>
To: Tejun Heo <tj@...nel.org>
Cc: Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>, stable@...r.kernel.org
Subject: Re: [PATCH 2/2] bdi: Avoid oops on device removal
On Thu 27-02-14 15:07:48, Tejun Heo wrote:
> Hello,
>
> On Tue, Feb 25, 2014 at 11:29:14PM +0100, Jan Kara wrote:
> > +static void bdi_wakeup_thread(struct backing_dev_info *bdi)
> > +{
> > + spin_lock_bh(&bdi->wb_lock);
> > + if (test_bit(BDI_registered, &bdi->state))
> > + mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
> > + spin_unlock_bh(&bdi->wb_lock);
> > +}
>
> I wonder whether this can be smarter without requiring wb_lock each
> time, but this probably is the simplest for -stable backports.
We could be clever and check whether the work is already queued for
execution, bailing out without taking wb_lock if it is (that would also
save us some unnecessary juggling in try_to_grab_pending() for the case
where the work is already queued). But I'm not sure how to cleanly
implement this...
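For illustration, a rough sketch of that idea (assuming delayed_work_pending()
as the optimistic, lockless check; this is not the patch under review, and the
check only tells us the work is queued at all, not that it will run
immediately, which is part of why a clean implementation is not obvious):

/* Sketch only: bail out early when the work already appears queued. */
static void bdi_wakeup_thread(struct backing_dev_info *bdi)
{
	/*
	 * Lockless, optimistic check. A pending dwork may still be
	 * armed with a non-zero delay, so this can miss the case where
	 * we actually want to pull it forward to run right away.
	 */
	if (delayed_work_pending(&bdi->wb.dwork))
		return;

	spin_lock_bh(&bdi->wb_lock);
	if (test_bit(BDI_registered, &bdi->state))
		mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
	spin_unlock_bh(&bdi->wb_lock);
}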
> > static void bdi_queue_work(struct backing_dev_info *bdi,
> > struct wb_writeback_work *work)
> > {
> > trace_writeback_queue(bdi, work);
> >
> > spin_lock_bh(&bdi->wb_lock);
> > + if (!test_bit(BDI_registered, &bdi->state)) {
> > + if (work->done)
> > + complete(work->done);
> > + goto out_unlock;
> > + }
> > list_add_tail(&work->list, &bdi->work_list);
> > - spin_unlock_bh(&bdi->wb_lock);
> > -
> > mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
> > +out_unlock:
> > + spin_unlock_bh(&bdi->wb_lock);
> > }
> >
> > +
> > +
>
> Why three blank lines?
A mistake. Will fix.
> Other than that,
>
> Reviewed-by: Tejun Heo <tj@...nel.org>
Thanks!
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR