Message-Id: <1497897167-14556-19-git-send-email-w@1wt.eu>
Date: Mon, 19 Jun 2017 20:28:37 +0200
From: Willy Tarreau <w@....eu>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
linux@...ck-us.net
Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Shaohua Li <shli@...nel.org>, Neil Brown <neilb@...e.com>,
Shaohua Li <shli@...com>, Willy Tarreau <w@....eu>
Subject: [PATCH 3.10 018/268] md/raid5: limit request size according to implementation limits
From: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
commit e8d7c33232e5fdfa761c3416539bc5b4acd12db5 upstream.
The current implementation employs a 16-bit counter of active stripes in
the lower bits of bio->bi_phys_segments. If a request is big enough to
overflow this counter, the bio will be completed and freed too early.
Fortunately this does not happen in the default configuration, because
several other limits prevent it: stripe_cache_size * nr_disks
effectively limits the count of active stripes, and the small
max_sectors_kb of the lower disks prevents it during normal read/write
operations.
The overflow easily happens on discard if it is enabled by the module
parameter "devices_handle_discard_safely" and stripe_cache_size is set
big enough.
This patch limits the request size to 256Mb - 8Kb to prevent overflows.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc: Shaohua Li <shli@...nel.org>
Cc: Neil Brown <neilb@...e.com>
Signed-off-by: Shaohua Li <shli@...com>
Signed-off-by: Willy Tarreau <w@....eu>
---
drivers/md/raid5.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 9ee3c46..8f5c890 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5616,6 +5616,15 @@ static int run(struct mddev *mddev)
stripe = (stripe | (stripe-1)) + 1;
mddev->queue->limits.discard_alignment = stripe;
mddev->queue->limits.discard_granularity = stripe;
+
+ /*
+ * We use 16-bit counter of active stripes in bi_phys_segments
+ * (minus one for over-loaded initialization)
+ */
+ blk_queue_max_hw_sectors(mddev->queue, 0xfffe * STRIPE_SECTORS);
+ blk_queue_max_discard_sectors(mddev->queue,
+ 0xfffe * STRIPE_SECTORS);
+
/*
* unaligned part of discard request will be ignored, so can't
* guarantee discard_zeroes_data
--
2.8.0.rc2.1.gbe9624a