Message-ID: <1306136629.4876.13.camel@marge.simson.net>
Date: Mon, 23 May 2011 09:43:49 +0200
From: Mike Galbraith <efault@....de>
To: Con Kolivas <kernel@...ivas.org>
Cc: linux-kernel@...r.kernel.org, axboe@...nel.dk, mingo@...e.hu,
peterz@...radead.org
Subject: Re: question about blk_schedule_flush_plug
On Mon, 2011-05-23 at 17:05 +1000, Con Kolivas wrote:
> I was looking at the scheduler changes going into 2.6.39 again and wondered
> about the use of blk_schedule_flush_plug smack in the middle of schedule().
>
> It looks like this:
>         if (blk_needs_flush_plug(prev)) {
>                 raw_spin_unlock(&rq->lock);
>                 blk_schedule_flush_plug(prev);
>                 raw_spin_lock(&rq->lock);
>         }
>
> Now call me suspicious, but to my eyes it looks really dubious unlocking the
> runqueue like that right in the heart of schedule().
>
> Comments?
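For reference, blk_schedule_flush_plug() itself is only a check-and-flush of
the outgoing task's plugged block requests. Rough sketch of the idea (the
flush helper's name is assumed here, this is not the exact 2.6.39 source):

        /*
         * Sketch only: if the task about to sleep still has requests
         * queued on its per-task plug, push them to the driver now so
         * I/O it may end up waiting on isn't stranded behind its own
         * plug while it sleeps.
         */
        static inline void blk_schedule_flush_plug(struct task_struct *tsk)
        {
                struct blk_plug *plug = tsk->plug;

                if (plug)
                        flush_plug_list(plug);  /* assumed helper name */
        }
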
Releasing/retaking rq->lock is nothing new:
static void idle_balance(int this_cpu, struct rq *this_rq)
{
        ...
        /*
         * Drop the rq->lock, but keep IRQ/preempt disabled.
         */
        raw_spin_unlock(&this_rq->lock);
See also need_resched and the double_lock_balance() instances.
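
The shape is the same everywhere: drop rq->lock with IRQs and preemption
still disabled, do the work that must not run under the runqueue lock, then
retake it and treat anything read from the rq before the unlock as possibly
stale. Sketch of the pattern (placeholder helper, not literal sched.c code):

        raw_spin_unlock(&rq->lock);
        /* IRQs/preempt stay disabled, so we stay on this CPU */
        do_unlocked_work(prev);         /* e.g. flush plugged I/O, pull tasks */
        raw_spin_lock(&rq->lock);
        /* rq state may have changed while the lock was dropped */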
-Mike