Message-Id: <1171041349.4099.1173808365@webmail.messagingengine.com>
Date:	Fri, 09 Feb 2007 09:15:49 -0800
From:	"Kai" <epimetreus@...tmail.fm>
To:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Re: Bio device too big | kernel BUG at mm/filemap.c:537!


On Thu, 8 Feb 2007 09:08:58 +1100, "Neil Brown" <neilb@...e.de> said:
> On Wednesday February 7, epimetreus@...tmail.fm wrote:
> > 
> > On Wed, 7 Feb 2007 10:26:56 +1100, "Neil Brown" <neilb@...e.de> said:
> > > On Tuesday February 6, neilb@...e.de wrote:
> > > > 
> > > > This patch should fix the worst of the offences, but I'd like to
> > > > experiment and think a bit more before I submit it to stable.
> > > > And probably test it too - as yet I have only compile and brain
> > > > tested.
> > > 
> > > Ok, I've experimented and tested and now I know what was causing the
> > > double-unlock.
> > > 
> > > The following patch is suitable for 2.6.20.1 and mainline.  There is
> > > room for a bit more improvement, but only for performance, not
> > > correctness.  I'll look into that later.
> > > 
> > > Thanks,
> > > NeilBrown
> > 
> > I figure I should test this on my hardware, but the RAID array
> > resynced itself when I rebooted back into an earlier kernel version,
> > which makes me suspect this bug introduced some corruption into the
> > array when it occurred.  I'd like some pointers on how I can test it
> > out without compromising my data.
> 
> This bug should not introduce any data corruption.
> It causes some read requests to get a failure from the device, which
> will cause raid5 to remove the device from the array (though the data
> will still be intact).
> On restart, a resync will put everything back as it was.
> 
> It is quite possible (this happened to me in my testing) for several
> devices to get these errors and for several or even all of those
> devices to be marked failed.  However, even in this case the data is
> still intact and "mdadm --assemble --force ..." will put everything
> back together.
> 
> So there should be no risk of data corruption.
> 
> NeilBrown
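
Good to know.  If several members do end up failed here, I take it the
force-assemble you mention would look roughly like this (the array and
device names below are just examples for my setup):

    # stop the partially-failed array first
    mdadm --stop /dev/md0
    # --force tells mdadm to accept members that were kicked out as
    # failed, so the array can be reassembled from its original devices
    mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
    # then watch the resync progress
    cat /proc/mdstat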

I've been running it for the last few hours with no error output yet; I
even did some fairly heavy I/O to try to provoke it, but it hasn't
budged. I'm ready to say it works for me.
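
If anyone wants to throw a similar load at it, something like the
following keeps the array busy with large sequential writes and reads
(the mount path is only an example):

    # write a few GB to the raid5 filesystem, then flush it out
    dd if=/dev/zero of=/mnt/raid/bigfile bs=1M count=4096
    sync
    # read it back with direct I/O so the reads actually hit the disks
    dd if=/mnt/raid/bigfile of=/dev/null bs=1M iflag=direct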

-Kai