Message-Id: <20060821154841.e6ea500a.akpm@osdl.org>
Date: Mon, 21 Aug 2006 15:48:41 -0700
From: Andrew Morton <akpm@...l.org>
To: Oleg Nesterov <oleg@...sign.ru>
Cc: Jens Axboe <axboe@...e.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sys_ioprio_set: don't disable irqs
On Mon, 21 Aug 2006 00:50:34 +0400
Oleg Nesterov <oleg@...sign.ru> wrote:
> Question: why do we need to disable irqs in exit_io_context() ?
IIRC it was to prevent IRQ-context code from getting hold of
current->io_context and then playing around with it while it's being
freed.

In practice, a preempt_disable() there would probably suffice (i.e. if
this CPU is running an ISR, it won't be running exit_io_context as
well).  But local_irq_disable() is clearer, albeit more expensive.
> Why do we need ->alloc_lock to clear io_context->task ?
To prevent races against elv_unregister(), I guess.
> In other words, could you explain why the patch below is not correct.
>
> Thanks,
>
> Oleg.
>
> --- 2.6.18-rc4/block/ll_rw_blk.c~6_exit 2006-08-20 19:30:10.000000000 +0400
> +++ 2.6.18-rc4/block/ll_rw_blk.c 2006-08-20 22:34:46.000000000 +0400
> @@ -3580,25 +3580,22 @@ EXPORT_SYMBOL(put_io_context);
> /* Called by the exitting task */
> void exit_io_context(void)
> {
> - unsigned long flags;
> struct io_context *ioc;
> struct cfq_io_context *cic;
>
> - local_irq_save(flags);
> task_lock(current);
> ioc = current->io_context;
> current->io_context = NULL;
> - ioc->task = NULL;
> task_unlock(current);
> - local_irq_restore(flags);
>
> + ioc->task = NULL;
> if (ioc->aic && ioc->aic->exit)
> ioc->aic->exit(ioc->aic);
> if (ioc->cic_root.rb_node != NULL) {
> cic = rb_entry(rb_first(&ioc->cic_root), struct cfq_io_context, rb_node);
> cic->exit(ioc);
> }
> -
> +
> put_io_context(ioc);
> }
>