Message-ID: <20130109155241.GE21265@redhat.com>
Date: Wed, 9 Jan 2013 10:52:41 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: "Jun'ichi Nomura" <j-nomura@...jp.nec.com>
Cc: Jens Axboe <axboe@...nel.dk>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
device-mapper development <dm-devel@...hat.com>,
Tejun Heo <tj@...nel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Alasdair G Kergon <agk@...hat.com>
Subject: Re: [PATCH repost] blkcg: fix "scheduling while atomic" in
blk_queue_bypass_start
On Tue, Jan 08, 2013 at 04:31:30PM +0900, Jun'ichi Nomura wrote:
> With 749fefe677 in v3.7 ("block: lift the initial queue bypass mode
> on blk_register_queue() instead of blk_init_allocated_queue()"),
> the following warning appears when multipath is used with CONFIG_PREEMPT=y.
>
> This patch moves blk_queue_bypass_start() before radix_tree_preload()
> to avoid the sleeping call while preemption is disabled.
Ok, radix_tree_preload() disables preemption, and blk_queue_bypass_start()
calls synchronize_rcu(), which in turn leads to schedule(), hence the
warning.
We also call __blkg_lookup_create() with preemption disabled, and that
can allocate a blkg. But the allocation currently uses GFP_ATOMIC, so it
does not sleep or schedule there. So it should be fine.
So the fix looks good to me.
Acked-by: Vivek Goyal <vgoyal@...hat.com>
Vivek
>
> BUG: scheduling while atomic: multipath/2460/0x00000002
> 1 lock held by multipath/2460:
> #0: (&md->type_lock){......}, at: [<ffffffffa019fb05>] dm_lock_md_type+0x17/0x19 [dm_mod]
> Modules linked in: ...
> Pid: 2460, comm: multipath Tainted: G W 3.7.0-rc2 #1
> Call Trace:
> [<ffffffff810723ae>] __schedule_bug+0x6a/0x78
> [<ffffffff81428ba2>] __schedule+0xb4/0x5e0
> [<ffffffff814291e6>] schedule+0x64/0x66
> [<ffffffff8142773a>] schedule_timeout+0x39/0xf8
> [<ffffffff8108ad5f>] ? put_lock_stats+0xe/0x29
> [<ffffffff8108ae30>] ? lock_release_holdtime+0xb6/0xbb
> [<ffffffff814289e3>] wait_for_common+0x9d/0xee
> [<ffffffff8107526c>] ? try_to_wake_up+0x206/0x206
> [<ffffffff810c0eb8>] ? kfree_call_rcu+0x1c/0x1c
> [<ffffffff81428aec>] wait_for_completion+0x1d/0x1f
> [<ffffffff810611f9>] wait_rcu_gp+0x5d/0x7a
> [<ffffffff81061216>] ? wait_rcu_gp+0x7a/0x7a
> [<ffffffff8106fb18>] ? complete+0x21/0x53
> [<ffffffff810c0556>] synchronize_rcu+0x1e/0x20
> [<ffffffff811dd903>] blk_queue_bypass_start+0x5d/0x62
> [<ffffffff811ee109>] blkcg_activate_policy+0x73/0x270
> [<ffffffff81130521>] ? kmem_cache_alloc_node_trace+0xc7/0x108
> [<ffffffff811f04b3>] cfq_init_queue+0x80/0x28e
> [<ffffffffa01a1600>] ? dm_blk_ioctl+0xa7/0xa7 [dm_mod]
> [<ffffffff811d8c41>] elevator_init+0xe1/0x115
> [<ffffffff811e229f>] ? blk_queue_make_request+0x54/0x59
> [<ffffffff811dd743>] blk_init_allocated_queue+0x8c/0x9e
> [<ffffffffa019ffcd>] dm_setup_md_queue+0x36/0xaa [dm_mod]
> [<ffffffffa01a60e6>] table_load+0x1bd/0x2c8 [dm_mod]
> [<ffffffffa01a7026>] ctl_ioctl+0x1d6/0x236 [dm_mod]
> [<ffffffffa01a5f29>] ? table_clear+0xaa/0xaa [dm_mod]
> [<ffffffffa01a7099>] dm_ctl_ioctl+0x13/0x17 [dm_mod]
> [<ffffffff811479fc>] do_vfs_ioctl+0x3fb/0x441
> [<ffffffff811b643c>] ? file_has_perm+0x8a/0x99
> [<ffffffff81147aa0>] sys_ioctl+0x5e/0x82
> [<ffffffff812010be>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> [<ffffffff814310d9>] system_call_fastpath+0x16/0x1b
>
> Signed-off-by: Jun'ichi Nomura <j-nomura@...jp.nec.com>
> Acked-by: Vivek Goyal <vgoyal@...hat.com>
> Cc: Tejun Heo <tj@...nel.org>
> Cc: Jens Axboe <axboe@...nel.dk>
> Cc: Alasdair G Kergon <agk@...hat.com>
> ---
> block/blk-cgroup.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index b8858fb..53628e4 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -790,10 +790,10 @@ int blkcg_activate_policy(struct request_queue *q,
> if (!blkg)
> return -ENOMEM;
>
> - preloaded = !radix_tree_preload(GFP_KERNEL);
> -
> blk_queue_bypass_start(q);
>
> + preloaded = !radix_tree_preload(GFP_KERNEL);
> +
> /* make sure the root blkg exists and count the existing blkgs */
> spin_lock_irq(q->queue_lock);
>