Message-ID: <18041.59928.812167.453118@notabene.brown>
Date: Thu, 21 Jun 2007 13:01:44 +1000
From: Neil Brown <neilb@...e.de>
To: David Greaves <david@...eaves.com>
Cc: Wakko Warner <wakko@...mx.eu.org>, david@...g.hm,
linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org
Subject: Re: limits on raid
On Saturday June 16, david@...eaves.com wrote:
> Neil Brown wrote:
> > On Friday June 15, wakko@...mx.eu.org wrote:
> >
> >> As I understand the way
> >> raid works, when you write a block to the array, it will have to read all
> >> the other blocks in the stripe and recalculate the parity and write it out.
> >
> > Your understanding is incomplete.
>
> Does this help?
> [for future reference so you can paste a url and save the typing for code :) ]
>
> http://linux-raid.osdl.org/index.php/Initial_Array_Creation
>
> David
>
>
>
> Initial Creation
>
> When mdadm asks the kernel to create a raid array the most noticeable activity
> is what's called the "initial resync".
>
> The kernel takes one disk (or two for raid6) and marks it as 'spare'; it then
> creates the array in degraded mode. It then marks the spare disks as
> 'rebuilding', reads from the 'good' disks, calculates what should be on any
> spare disk (the parity), and writes it. Once all this is done the array is
> clean and all disks are active.
This isn't quite right.
Firstly, it is mdadm which decides to make one drive a 'spare' for
raid5, not the kernel.
Secondly, this only applies to raid5, not to raid6, raid1, or raid10.
For raid6, the initial resync (just like the resync after an unclean
shutdown) reads all the data blocks, and writes all the P and Q
blocks.
raid5 can do that, but it is faster to read all but one disk, and
write to that one disk.
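
To make that concrete, here is a toy user-space sketch of that raid5
"recovery style" initial sync.  It is nothing like the actual md code;
the member count and chunk size are invented for the example.  For each
stripe you read the N-1 good members, XOR them together, and write the
result to the one rebuilding member, which leaves the stripe
parity-consistent no matter what was on the disks to begin with:

/* Toy sketch only - not the md driver.  Assumes a 4-member raid5
 * (3 data + 1 parity) and an 8-byte "chunk" purely for illustration. */
#include <stdio.h>
#include <string.h>

#define NDISKS 4
#define CHUNK  8

/* XOR the N-1 "good" members of one stripe into the block that will
 * be written to the rebuilding member. */
static void rebuild_block(unsigned char good[NDISKS - 1][CHUNK],
                          unsigned char out[CHUNK])
{
	int d, i;

	memset(out, 0, CHUNK);
	for (d = 0; d < NDISKS - 1; d++)
		for (i = 0; i < CHUNK; i++)
			out[i] ^= good[d][i];
}

int main(void)
{
	/* Whatever happened to be on the three "good" members. */
	unsigned char good[NDISKS - 1][CHUNK] = {
		"datadat", "moredat", "junkjnk"
	};
	unsigned char missing[CHUNK];
	unsigned char check[CHUNK];
	int d, i;

	/* N-1 reads, 1 write per stripe. */
	rebuild_block(good, missing);

	/* Check: XOR of all members of a consistent raid5 stripe is zero. */
	memset(check, 0, CHUNK);
	for (d = 0; d < NDISKS - 1; d++)
		for (i = 0; i < CHUNK; i++)
			check[i] ^= good[d][i];
	for (i = 0; i < CHUNK; i++)
		check[i] ^= missing[i];

	for (i = 0; i < CHUNK; i++)
		if (check[i] != 0)
			return 1;
	printf("stripe is parity-consistent\n");
	return 0;
}

For raid6 the same pass also has to produce Q, which is not a plain XOR
(it is a Reed-Solomon style syndrome), so the sketch above only covers
the P calculation.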
NeilBrown