Message-ID: <Pine.LNX.4.64.0708122016100.22470@asgard.lang.hm>
Date: Sun, 12 Aug 2007 20:21:17 -0700 (PDT)
From: david@...g.hm
To: Paul Clements <paul.clements@...eleye.com>
cc: Jan Engelhardt <jengelh@...putergmbh.de>,
Al Boldi <a1426z@...ab.com>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, netdev@...r.kernel.org,
linux-raid@...r.kernel.org
Subject: Re: [RFD] Layering: Use-Case Composers (was: DRBD - what is it,
anyways? [compare with e.g. NBD + MD raid])
per the message below, MD (or DM) would need to be modified to work
reasonably well with one of the disk components being on the far side of an
unreliable link (like a network link)

are the MD/DM maintainers interested in extending their code in this
direction? or would they prefer to keep it simpler and continue to assume
that the raid components are connected over a highly reliable connection?

if they are interested in adding (and maintaining) this functionality, then
there is a real possibility that NBD+MD/DM could eliminate the need for
DRBD. however, if they are not interested in adding all the code needed to
deal with the network-related issues, then the argument that DRBD should not
be merged because you can do the same thing with MD/DM + NBD is invalid and
can be dropped/ignored
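
(for concreteness, the kind of NBD+MD stack being discussed would be
something like the following untested sketch; host and device names here
are made up:

    # on the backup box: export a spare partition over the network
    nbd-server 2000 /dev/sdb1

    # on the primary: attach the export and mirror the local disk onto it,
    # with a write-intent bitmap so a dropped link needs only a partial resync
    nbd-client backup-host 2000 /dev/nbd0
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --bitmap=internal /dev/sda1 /dev/nbd0
)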
David Lang
On Sun, 12 Aug 2007, Paul Clements wrote:
> Iustin Pop wrote:
>> On Sun, Aug 12, 2007 at 07:03:44PM +0200, Jan Engelhardt wrote:
>> > On Aug 12 2007 09:39, david@...g.hm wrote:
>> > > now, I am not an expert on either option, but there are a couple of
>> > > things that I would question about the NBD+MD option
>> > >
>> > > 1. when the remote machine is down, how does MD deal with it for
>> > > reads and writes?
>> > I suppose it kicks the drive and you'd have to re-add it by hand,
>> > unless that's done by a cronjob.
>
> Yes, and with a bitmap configured on the raid1, you just resync the blocks
> that have been written while the connection was down.
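
(for example, once the link is back you would re-add the nbd half roughly
like this, assuming the array was created with an internal bitmap; untested
over nbd on my end:

    # the bitmap limits the resync to blocks dirtied while the link was down
    mdadm /dev/md0 --re-add /dev/nbd0
)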
>
>
>> From my tests, since NBD doesn't have a timeout option, MD hangs in the
>> write to that mirror indefinitely, somewhat like when dealing with a
>> broken IDE driver/chipset/disk.
>
> Well, if people would like to see a timeout option, I actually coded up a
> patch a couple of years ago to do just that, but I never got it into
> mainline because you can do almost as well with a user-level check (I
> basically ping the nbd connection periodically, and if it fails, I kill -9
> the nbd-client).
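
(a user-level watchdog along those lines could be as simple as the sketch
below; the host name is made up and the exact script is untested:

    #!/bin/sh
    # if the nbd server stops answering pings, kill the client so MD gets
    # an I/O error and kicks the mirror instead of hanging forever
    while sleep 10; do
            if ! ping -c 1 -w 5 backup-host >/dev/null 2>&1; then
                    kill -9 $(pidof nbd-client)
            fi
    done
)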
>
>
>> > > 2. MD over local drives will alternate reads between mirrors (or so
>> > > I've been told); doing so over the network is wrong.
>> > Certainly. In that case you set "write_mostly" (or even write_only, I'm
>> > not sure of its name) on the raid component that is nbd.
>> >
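
(marking the nbd half write-mostly when adding it should keep normal reads
on the local disk; I haven't tried this over nbd, but the flag itself is
standard mdadm:

    mdadm /dev/md0 --add --write-mostly /dev/nbd0
)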
>> > > 3. when writing, will MD wait for the network I/O to get the data
>> > > saved on the backup before returning from the syscall? or can it
>> > > sync the data out lazily?
>> > Can't answer this one - ask Neil :)
>>
>> MD has the write-mostly/write-behind options, which help in this case,
>> but only up to a point.
>
> You can configure write_behind (aka asynchronous writes) to buffer as much
> data as you have RAM to hold. At a certain point, presumably, you'd want to
> just break the mirror and take the hit of doing a resync once your network
> leg falls too far behind.
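
(putting the pieces together, a create line with asynchronous writes to the
nbd leg would look something like this; the write-behind count is an
arbitrary example value, and write-behind requires both the bitmap and the
write-mostly flag:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --bitmap=internal --write-behind=4096 \
          /dev/sda1 --write-mostly /dev/nbd0
)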
>
> --
> Paul
>