Message-ID: <20111026194857.GF355@redhat.com>
Date: Wed, 26 Oct 2011 15:48:57 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: axboe@...nel.dk, ctalbott@...gle.com, rni@...gle.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/13] block, cfq: unlink cfq_io_context's immediately
On Tue, Oct 25, 2011 at 06:48:36PM -0700, Tejun Heo wrote:
[..]
> +/*
> + * Slow path for ioc release in put_io_context(). Performs double-lock
> + * dancing to unlink all cic's and then frees ioc.
> + */
> +static void ioc_release_fn(struct work_struct *work)
> {
> - if (!hlist_empty(&ioc->cic_list)) {
> - struct cfq_io_context *cic;
> + struct io_context *ioc = container_of(work, struct io_context,
> + release_work);
> + struct request_queue *last_q = NULL;
> +
> + spin_lock_irq(&ioc->lock);
> +
> + while (!hlist_empty(&ioc->cic_list)) {
> + struct cfq_io_context *cic = hlist_entry(ioc->cic_list.first,
> + struct cfq_io_context,
> + cic_list);
> + if (cic->q != last_q) {
> + struct request_queue *this_q = cic->q;
> +
> + /*
> + * Need to switch to @this_q. Once we release
> + * @ioc->lock, it can go away along with @cic.
> + * Hold on to it.
> + */
> + __blk_get_queue(this_q);
> +
> + /*
> + * blk_put_queue() might sleep thanks to kobject
> + * idiocy. Always release both locks, put and
> + * restart.
> + */
> + if (last_q) {
> + spin_unlock(last_q->queue_lock);
> + spin_unlock_irq(&ioc->lock);
> + blk_put_queue(last_q);
> + } else {
> + spin_unlock_irq(&ioc->lock);
> + }
> +
> + last_q = this_q;
> + spin_lock_irq(this_q->queue_lock);
> + spin_lock(&ioc->lock);
> + continue;
> + }
> + ioc_release_depth_inc(cic->q);
> + cic->exit(cic);
> + cic->release(cic);
> + ioc_release_depth_dec(cic->q);
cic->release(cic) can free the cic, right? If so, aren't we accessing cic
after it has been freed, in the ioc_release_depth_dec(cic->q) call that
follows?
Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/