Message-Id: <20180319180756.821554753@linuxfoundation.org>
Date: Mon, 19 Mar 2018 19:06:41 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Dan Williams <dan.j.williams@...el.com>,
NeilBrown <neilb@...e.com>, Shaohua Li <shli@...com>,
Sasha Levin <alexander.levin@...rosoft.com>
Subject: [PATCH 4.9 136/241] md/raid6: Fix anomaly when recovering a single device in RAID6.
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: NeilBrown <neilb@...e.com>
[ Upstream commit 7471fb77ce4dc4cb81291189947fcdf621a97987 ]
When recovering a single missing/failed device in a RAID6,
those stripes where the Q block is on the missing device are
handled a bit differently. In these cases it is easy to
check that the P block is correct, so we do. This results
in the P block being destroyed. Consequently the P block needs
to be read a second time in order to compute Q. This causes
lots of seeks and hurts performance.
It shouldn't be necessary to re-read P as it can be computed
from the DATA. But we only compute blocks on missing
devices, since c337869d9501 ("md: do not compute parity
unless it is on a failed drive").
So relax the change made in that commit to allow computing
of the P block in a RAID6 when it is the only missing
block.
This makes RAID6 recovery run much faster as the disk just
"before" the recovering device is no longer seeking
back-and-forth.
Reported-and-tested-by: Brad Campbell <lists2009@...rfbargle.com>
Reviewed-by: Dan Williams <dan.j.williams@...el.com>
Signed-off-by: NeilBrown <neilb@...e.com>
Signed-off-by: Shaohua Li <shli@...com>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/md/raid5.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3391,9 +3391,20 @@ static int fetch_block(struct stripe_hea
BUG_ON(test_bit(R5_Wantcompute, &dev->flags));
BUG_ON(test_bit(R5_Wantread, &dev->flags));
BUG_ON(sh->batch_head);
+
+ /*
+ * In the raid6 case if the only non-uptodate disk is P
+ * then we already trusted P to compute the other failed
+ * drives. It is safe to compute rather than re-read P.
+ * In other cases we only compute blocks from failed
+ * devices, otherwise check/repair might fail to detect
+ * a real inconsistency.
+ */
+
if ((s->uptodate == disks - 1) &&
+ ((sh->qd_idx >= 0 && sh->pd_idx == disk_idx) ||
(s->failed && (disk_idx == s->failed_num[0] ||
- disk_idx == s->failed_num[1]))) {
+ disk_idx == s->failed_num[1])))) {
/* have disk failed, and we're requested to fetch it;
* do compute it
*/
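
For readers skimming the hunk, here is a minimal stand-alone C sketch of the
relaxed condition. The helper should_compute_block() and its parameters are
invented for illustration and are not part of raid5.c; it only models the
if-statement shown above.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the decision in fetch_block(): may the block on
 * disk_idx be computed from the other up-to-date blocks instead of
 * being read from disk?  qd_idx >= 0 means the array is RAID6.
 */
static bool should_compute_block(int uptodate, int disks, int disk_idx,
				 int pd_idx, int qd_idx,
				 int nfailed, const int failed_num[2])
{
	if (uptodate != disks - 1)
		return false;	/* need all other blocks in memory */

	/* RAID6: P can be recomputed from the data blocks even when
	 * the P device itself has not failed. */
	if (qd_idx >= 0 && disk_idx == pd_idx)
		return true;

	/* Otherwise only compute blocks that live on failed devices,
	 * so check/repair can still detect real inconsistencies. */
	return nfailed > 0 &&
	       (disk_idx == failed_num[0] || disk_idx == failed_num[1]);
}

int main(void)
{
	int failed[2] = { 3, -1 };	/* one device being recovered */

	/*
	 * 6-device RAID6, P on disk 4, Q on disk 5; everything except P
	 * is up to date.  With the patch the answer is 1 (compute P);
	 * previously it was 0, forcing a re-read of P from disk.
	 */
	printf("%d\n", should_compute_block(5, 6, 4, 4, 5, 1, failed));
	return 0;
}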