Message-ID: <20241211082534.2211057-2-senozhatsky@chromium.org>
Date: Wed, 11 Dec 2024 17:25:31 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Minchan Kim <minchan@...nel.org>,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <senozhatsky@...omium.org>,
stable@...r.kernel.org
Subject: [PATCH 2/2] zram: cond_resched() in writeback loop
zram writeback is a costly operation because every target slot
(unless ZRAM_HUGE) is decompressed before it gets written to the
backing device. The writeback to the backing device uses
submit_bio_wait(), which may look like a rescheduling point.
However, if the backing device has the BD_HAS_SUBMIT_BIO bit set,
__submit_bio() calls disk->fops->submit_bio(bio) directly on the
backing device, so by the time submit_bio_wait() calls
blk_wait_io() the I/O is already done. On such systems we
effectively end up in the following loop:
	for_each (target slot) {
		decompress(slot)
		__submit_bio()
			disk->fops->submit_bio(bio)
	}
which on PREEMPT_NONE systems triggers watchdogs, since there are
no explicit rescheduling points. Add cond_resched() to the zram
writeback loop.
Fixes: a939888ec38b ("zram: support idle/huge page writeback")
Cc: stable@...r.kernel.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
---
drivers/block/zram/zram_drv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0a924fae02a4..f5fa3db6b4f8 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -884,6 +884,8 @@ static ssize_t writeback_store(struct device *dev,
 next:
 		zram_slot_unlock(zram, index);
 		release_pp_slot(zram, pps);
+
+		cond_resched();
 	}
 
 	if (blk_idx)
--
2.47.1.613.gc27f4b7a9f-goog