Message-ID: <YXdZxzAPejKyE8Oi@T590>
Date: Tue, 26 Oct 2021 09:28:39 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Dmitry Osipenko <digetx@...il.com>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>,
Linux Next Mailing List <linux-next@...r.kernel.org>,
Ulf Hansson <ulf.hansson@...aro.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Jens Axboe <axboe@...nel.dk>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mmc <linux-mmc@...r.kernel.org>,
linux-block <linux-block@...r.kernel.org>
Subject: Re: linux-next: Tree for Oct 25
On Tue, Oct 26, 2021 at 01:11:07AM +0300, Dmitry Osipenko wrote:
> Hello,
>
> Recent -next has this new warning splat coming from MMC, please take a look.
>
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 525 at kernel/sched/core.c:9477 __might_sleep+0x65/0x68
> do not call blocking ops when !TASK_RUNNING; state=2 set at [<4316eb02>] prepare_to_wait+0x2e/0xb8
> Modules linked in:
> CPU: 0 PID: 525 Comm: Xorg Tainted: G W 5.15.0-rc6-next-20211025-00226-g89ccd6948ec3 #5
> Hardware name: NVIDIA Tegra SoC (Flattened Device Tree)
> (unwind_backtrace) from [<c01089f9>] (show_stack+0x11/0x14)
> (show_stack) from [<c0afacb9>] (dump_stack_lvl+0x2b/0x34)
> (dump_stack_lvl) from [<c011f689>] (__warn+0xa1/0xe8)
> (__warn) from [<c0af6729>] (warn_slowpath_fmt+0x65/0x7c)
> (warn_slowpath_fmt) from [<c01421b9>] (__might_sleep+0x65/0x68)
> (__might_sleep) from [<c07eb377>] (mmc_blk_rw_wait+0x2f/0x118)
> (mmc_blk_rw_wait) from [<c07eba11>] (mmc_blk_mq_issue_rq+0x219/0x71c)
> (mmc_blk_mq_issue_rq) from [<c07ec199>] (mmc_mq_queue_rq+0xf9/0x200)
> (mmc_mq_queue_rq) from [<c04ad247>] (__blk_mq_try_issue_directly+0xcb/0x100)
> (__blk_mq_try_issue_directly) from [<c04adb89>] (blk_mq_request_issue_directly+0x2d/0x48)
> (blk_mq_request_issue_directly) from [<c04adcf3>] (blk_mq_flush_plug_list+0x14f/0x1f4)
> (blk_mq_flush_plug_list) from [<c04a5313>] (blk_flush_plug+0x83/0xb8)
> (blk_flush_plug) from [<c0b013cb>] (io_schedule+0x2b/0x3c)
> (io_schedule) from [<c0b01a17>] (bit_wait_io+0xf/0x48)
The trace shows blk_flush_plug() being invoked from io_schedule(), i.e. after prepare_to_wait() has already set the task state to TASK_UNINTERRUPTIBLE, and mmc's ->queue_rq() path (mmc_blk_rw_wait()) may sleep, which is exactly what __might_sleep() warns about. The following patch should fix the issue by avoiding direct issue when the plug list is flushed from the schedule path:
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a71aeed7b987..bee9cb2a44cb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2223,7 +2223,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 		return;
 	plug->rq_count = 0;
 
-	if (!plug->multiple_queues && !plug->has_elevator) {
+	if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
 		blk_mq_plug_issue_direct(plug, from_schedule);
 		if (rq_list_empty(plug->mq_list))
 			return;
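For context, the pattern the splat complains about looks roughly like this (a simplified sketch based on the backtrace above, not the actual caller's code):

```c
/*
 * Sketch of the flagged pattern: a potentially blocking operation runs
 * after the task state has already been set for sleeping.
 */
prepare_to_wait(&wq, &wait, TASK_UNINTERRUPTIBLE);	/* state = 2 */
io_schedule();		/* -> blk_flush_plug() -> blk_mq_plug_issue_direct()
			 * -> mmc_blk_rw_wait(), which may sleep, so
			 * __might_sleep() warns: task is not TASK_RUNNING */
finish_wait(&wq, &wait);
```

With !from_schedule added to the condition, blk_mq_flush_plug_list() no longer issues requests directly in this context, so no blocking ->queue_rq() runs while the task is in a sleeping state.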
--
Ming