Message-ID: <20190530111519.GC22536@redhat.com>
Date: Thu, 30 May 2019 13:15:20 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Qian Cai <cai@....pw>
Cc: axboe@...nel.dk, akpm@...ux-foundation.org, hch@....de,
peterz@...radead.org, gkohli@...eaurora.org, mingo@...hat.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] block: fix a crash in do_task_dead()
On 05/29, Qian Cai wrote:
>
> Commit 0619317ff8ba ("block: add polled wakeup task helper")
> replaced wake_up_process() with blk_wake_io_task() in
> end_swap_bio_read(), which triggers a crash when running heavy swapping
> workloads.
>
> [T114538] kernel BUG at kernel/sched/core.c:3462!
> [T114538] Process oom01 (pid: 114538, stack limit = 0x000000004f40e0c1)
> [T114538] Call trace:
> [T114538] do_task_dead+0xf0/0xf8
> [T114538] do_exit+0xd5c/0x10fc
> [T114538] do_group_exit+0xf4/0x110
> [T114538] get_signal+0x280/0xdd8
> [T114538] do_notify_resume+0x720/0x968
> [T114538] work_pending+0x8/0x10
>
> This is because shortly after set_special_state(TASK_DEAD),
> end_swap_bio_read() is called from an interrupt handler and revives the
> task state to TASK_RUNNING, which causes __schedule() to return and trip
> the BUG() later.
>
> [ C206] Call trace:
> [ C206] dump_backtrace+0x0/0x268
> [ C206] show_stack+0x20/0x2c
> [ C206] dump_stack+0xb4/0x108
> [ C206] blk_wake_io_task+0x7c/0x80
> [ C206] end_swap_bio_read+0x22c/0x31c
> [ C206] bio_endio+0x3d8/0x414
> [ C206] dec_pending+0x280/0x378 [dm_mod]
> [ C206] clone_endio+0x128/0x2ac [dm_mod]
> [ C206] bio_endio+0x3d8/0x414
> [ C206] blk_update_request+0x3ac/0x924
> [ C206] scsi_end_request+0x54/0x350
> [ C206] scsi_io_completion+0xf0/0x6f4
> [ C206] scsi_finish_command+0x214/0x228
> [ C206] scsi_softirq_done+0x170/0x1a4
> [ C206] blk_done_softirq+0x100/0x194
> [ C206] __do_softirq+0x350/0x790
> [ C206] irq_exit+0x200/0x26c
> [ C206] handle_IPI+0x2e8/0x514
> [ C206] gic_handle_irq+0x224/0x228
> [ C206] el1_irq+0xb8/0x140
> [ C206] _raw_spin_unlock_irqrestore+0x3c/0x74
> [ C206] do_task_dead+0x88/0xf8
> [ C206] do_exit+0xd5c/0x10fc
> [ C206] do_group_exit+0xf4/0x110
> [ C206] get_signal+0x280/0xdd8
> [ C206] do_notify_resume+0x720/0x968
> [ C206] work_pending+0x8/0x10
>
> Before the offending commit, wake_up_process() would prevent this from
> happening by taking the pi_lock and bailing out immediately if TASK_DEAD
> is set.
>
> if (!(p->state & TASK_NORMAL))
> goto out;
I don't understand this code at all, but I am just curious: can we do
something like the incomplete patch below?
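
If I understand the report correctly, the problem is the waiter == current
shortcut in blk_wake_io_task(). IIRC the helper looks roughly like this
(include/linux/blkdev.h), so unlike try_to_wake_up() it never takes pi_lock
or checks p->state before rewriting it:

	static inline void blk_wake_io_task(struct task_struct *waiter)
	{
		/*
		 * If the waiter is the task doing the polling, no real
		 * wakeup is needed; just mark it runnable. Otherwise fall
		 * back to a normal wakeup.
		 */
		if (waiter == current)
			__set_current_state(TASK_RUNNING);
		else
			wake_up_process(waiter);
	}

So if the completion irq hits after the exiting task has already done
set_special_state(TASK_DEAD), the waiter == current branch silently turns
TASK_DEAD back into TASK_RUNNING, and __schedule() in do_task_dead() returns
into BUG().
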
Oleg.
--- x/mm/page_io.c
+++ x/mm/page_io.c
@@ -140,8 +140,10 @@ int swap_readpage(struct page *page, bool synchronous)
 	unlock_page(page);
 	WRITE_ONCE(bio->bi_private, NULL);
 	bio_put(bio);
-	blk_wake_io_task(waiter);
-	put_task_struct(waiter);
+	if (waiter) {
+		blk_wake_io_task(waiter);
+		put_task_struct(waiter);
+	}
 }
 
 int generic_swapfile_activate(struct swap_info_struct *sis,
@@ -398,11 +400,12 @@ int swap_readpage(struct page *page, bool synchronous)
 	 * Keep this task valid during swap readpage because the oom killer may
 	 * attempt to access it in the page fault retry time check.
 	 */
-	get_task_struct(current);
-	bio->bi_private = current;
 	bio_set_op_attrs(bio, REQ_OP_READ, 0);
-	if (synchronous)
+	if (synchronous) {
 		bio->bi_opf |= REQ_HIPRI;
+		get_task_struct(current);
+		bio->bi_private = current;
+	}
 	count_vm_event(PSWPIN);
 	bio_get(bio);
 	qc = submit_bio(bio);
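
(The idea, in case it is not clear: as far as I can see, bio->bi_private is
only consumed by the polling loop in swap_readpage() when the bio was
submitted with REQ_HIPRI, roughly

	while (synchronous) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (!READ_ONCE(bio->bi_private))
			break;

		if (!blk_poll(disk->queue, qc, true))
			io_schedule();
	}
	__set_current_state(TASK_RUNNING);
	bio_put(bio);

so the async completion path should not need the task reference at all, and
end_swap_bio_read() can simply skip blk_wake_io_task()/put_task_struct()
when waiter is NULL. Completely untested.)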