Message-ID: <20110412164417.GA13890@infradead.org>
Date: Tue, 12 Apr 2011 12:44:17 -0400
From: "hch@...radead.org" <hch@...radead.org>
To: Jens Axboe <jaxboe@...ionio.com>
Cc: Dave Chinner <david@...morbit.com>,
"hch@...radead.org" <hch@...radead.org>, NeilBrown <neilb@...e.de>,
Mike Snitzer <snitzer@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dm-devel@...hat.com" <dm-devel@...hat.com>,
"linux-raid@...r.kernel.org" <linux-raid@...r.kernel.org>
Subject: Re: [PATCH 05/10] block: remove per-queue plugging
On Tue, Apr 12, 2011 at 02:58:46PM +0200, Jens Axboe wrote:
> Supposedly it's faster to do it inline rather than punt the dispatch.
> But that may actually not be true if you have multiple plugs going (and
> thus multiple contenders for the queue lock on dispatch). So let's play
> it safe and punt to kblockd; we can always revisit this later.
Note that this can be optimized further by adding a new helper that just
queues up work on kblockd without taking the queue lock, e.g.:
/* Run the queue from kblockd context: this only tests a queue flag and
 * schedules the queue's delayed work, so no queue_lock is required. */
void blk_run_queue_async(struct request_queue *q)
{
	if (likely(!blk_queue_stopped(q)))
		queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
}
And replacing all

	__blk_run_queue(q, true);

callers with that, at which point they won't need the queue lock any
more.
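
For illustration, a typical call-site conversion might look like the
sketch below; the spin_lock_irq() context shown is an assumption for the
example, not taken from the patch:

	/* Before: the dispatch is already punted to kblockd, but the
	 * caller still has to hold the queue lock around the call.
	 * (Assumed locking context, for illustration only.) */
	spin_lock_irq(q->queue_lock);
	__blk_run_queue(q, true);
	spin_unlock_irq(q->queue_lock);

	/* After: same kblockd punt, but no queue lock taken at all. */
	blk_run_queue_async(q);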