Message-ID: <18043.13037.40956.366334@notabene.brown>
Date:	Fri, 22 Jun 2007 12:24:45 +1000
From:	Neil Brown <neilb@...e.de>
To:	Bill Davidsen <davidsen@....com>
Cc:	david@...g.hm, linux-kernel@...r.kernel.org,
	linux-raid@...r.kernel.org
Subject: Re: limits on raid

On Thursday June 21, davidsen@....com wrote:
> I didn't get a comment on my suggestion for a quick and dirty fix for 
> --assume-clean issues...
> 
> Bill Davidsen wrote:
> > How about a simple solution which would get an array online and still 
> > be safe? All it would take is a flag which forced reconstruct writes 
> > for RAID-5. You could set it with an option, or automatically if 
> > someone puts --assume-clean with --create, and leave it in the 
> > superblock until the first "repair" runs to completion. And for 
> > repair you could make some assumptions about bad parity being caused 
> > not by errors but just by unwritten stripes.
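
To make the hazard concrete, here is a toy user-space model (not md
code; the names, values and 4-drive layout are invented for
illustration) of why read-modify-write cannot trust parity on an array
created with --assume-clean, while a reconstruct write is always safe:

/* Toy model: parity updates on one stripe of a 4-drive RAID-5.
 * Not md code -- everything here is invented for illustration. */
#include <stdio.h>
#include <stdint.h>

#define NDATA 3	/* data blocks per stripe (4 drives, 1 parity) */

static uint8_t xor_all(const uint8_t d[NDATA])
{
	uint8_t p = 0;
	for (int i = 0; i < NDATA; i++)
		p ^= d[i];
	return p;
}

int main(void)
{
	uint8_t data[NDATA] = { 0xAA, 0xBB, 0xCC };
	uint8_t parity = 0x00;	/* stale: never initialised (--assume-clean) */
	uint8_t newval = 0x11;	/* about to overwrite data[1] */

	/* Read-modify-write trusts the stored parity, so the staleness
	 * is carried forward into the "updated" parity: */
	uint8_t rmw = parity ^ data[1] ^ newval;

	/* Reconstruct-write recomputes parity from all the data blocks,
	 * so the result is correct no matter what was stored: */
	data[1] = newval;
	uint8_t rcw = xor_all(data);

	printf("rmw parity %#x, rcw parity %#x\n",
	       (unsigned)rmw, (unsigned)rcw);	/* prints 0xaa vs 0x77 */
	return 0;
}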

It is certainly possible, and probably not a lot of effort.  I'm not
really excited about it though.

So if someone were to submit a patch that did the right stuff, I would
probably accept it, but I am unlikely to do it myself.


> >
> > Thought 2: I think the unwritten bit is easier than you think: you 
> > only need it on parity blocks for RAID-5, not on data blocks. When a 
> > write is done, if the bit is set, do a reconstruct, write the parity 
> > block, and clear the bit. Keeping a bit per data block is madness, and 
> > appears to be unnecessary as well.
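
The check-and-clear logic described above would look roughly like this
(a sketch under invented names -- md keeps no such bitmap, and a real
one would also have to be persisted and grown with the array):

/* Sketch of a per-parity-block "unwritten" bitmap. Hypothetical:
 * md has no such structure; this only models the write-path logic. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NSTRIPES 1024	/* toy array: one parity block per stripe */

static uint8_t unwritten[NSTRIPES / 8];	/* one bit per parity block */

static bool test_and_clear_unwritten(unsigned int stripe)
{
	uint8_t mask = 1u << (stripe % 8);
	bool was_set = unwritten[stripe / 8] & mask;

	unwritten[stripe / 8] &= ~mask;	/* parity is valid after this write */
	return was_set;
}

static void stripe_write(unsigned int stripe)
{
	if (test_and_clear_unwritten(stripe))
		printf("stripe %u: parity untrusted -> reconstruct-write\n", stripe);
	else
		printf("stripe %u: parity valid -> read-modify-write\n", stripe);
}

int main(void)
{
	memset(unwritten, 0xff, sizeof(unwritten));	/* --assume-clean */
	stripe_write(7);	/* reconstructs parity, clears the bit */
	stripe_write(7);	/* can now trust the stored parity */
	return 0;
}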

Where do you propose storing those bits?  And how many would you cache
in memory?  And what performance hit would you suffer for accessing
them?  And would it be worth it?
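
For scale, a back-of-the-envelope estimate (the geometry here is
assumed, not taken from the thread): a 4-drive RAID-5 built from 1 TiB
members with 64 KiB chunks holds 3 TiB of data in 192 KiB stripes,
i.e. 3 TiB / 192 KiB = 16,777,216 stripes, and the same number of
parity blocks. One bit each is only a 2 MiB bitmap, easily cached in
full; the real cost is that those bits must also live on disk and be
cleared in step with every parity write, which is exactly the
performance hit being asked about.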

NeilBrown