Message-ID: <D9FFE20C522965449E182ACE73889AEB1A664D26@IRSMSX105.ger.corp.intel.com>
Date:	Wed, 16 Oct 2013 07:43:16 +0000
From:	"Dorau, Lukasz" <lukasz.dorau@...el.com>
To:	NeilBrown <neilb@...e.de>
CC:	"linux-raid@...r.kernel.org" <linux-raid@...r.kernel.org>,
	"Baldysiak, Pawel" <pawel.baldysiak@...el.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] md: Fix skipping recovery for read-only arrays.

On Wednesday, October 16, 2013 5:50 AM NeilBrown <neilb@...e.de> wrote:
> On Mon, 07 Oct 2013 16:25:51 +0200 Lukasz Dorau <lukasz.dorau@...el.com>
> wrote:
> 
> > Since:
> >         commit 7ceb17e87bde79d285a8b988cfed9eaeebe60b86
> >         md: Allow devices to be re-added to a read-only array.
> >
> > spares are activated on a read-only array. For the raid1 and raid10
> > personalities, this causes not-in-sync devices to be marked in-sync
> > without checking whether recovery has finished.
> >
> > If a read-only array is degraded and one of its devices is not in-sync
> > (because the array has been only partially recovered), recovery will be skipped.
> >
> > This patch adds a check that recovery has finished before marking
> > a device in-sync for the raid1 and raid10 personalities. For the raid5
> > personality such a condition is already present (at raid5.c:6029).
> >
> > The bug was introduced in 3.10 and causes data corruption.
> >
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Pawel Baldysiak <pawel.baldysiak@...el.com>
> > Signed-off-by: Lukasz Dorau <lukasz.dorau@...el.com>
> > ---
> >  drivers/md/raid1.c  |    1 +
> >  drivers/md/raid10.c |    1 +
> >  2 files changed, 2 insertions(+)
> >
> > diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> > index d60412c..aacf6bf 100644
> > --- a/drivers/md/raid1.c
> > +++ b/drivers/md/raid1.c
> > @@ -1479,6 +1479,7 @@ static int raid1_spare_active(struct mddev *mddev)
> >  			}
> >  		}
> >  		if (rdev
> > +		    && rdev->recovery_offset == MaxSector
> >  		    && !test_bit(Faulty, &rdev->flags)
> >  		    && !test_and_set_bit(In_sync, &rdev->flags)) {
> >  			count++;
> > diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> > index df7b0a0..73dc8a3 100644
> > --- a/drivers/md/raid10.c
> > +++ b/drivers/md/raid10.c
> > @@ -1782,6 +1782,7 @@ static int raid10_spare_active(struct mddev *mddev)
> >  			}
> >  			sysfs_notify_dirent_safe(tmp->replacement->sysfs_state);
> >  		} else if (tmp->rdev
> > +			   && tmp->rdev->recovery_offset == MaxSector
> >  			   && !test_bit(Faulty, &tmp->rdev->flags)
> >  			   && !test_and_set_bit(In_sync, &tmp->rdev->flags)) {
> >  			count++;
> 
> Applied - thanks.
> 
> I'll forward it to Linus and -stable shortly.
> 
> NeilBrown

Thanks!

Lukasz
