Message-ID: <alpine.DEB.1.10.0905030753380.15782@asgard>
Date:	Sun, 3 May 2009 07:56:59 -0700 (PDT)
From:	david@...g.hm
To:	James Bottomley <James.Bottomley@...senPartnership.com>
cc:	Willy Tarreau <w@....eu>,
	Bart Van Assche <bart.vanassche@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Philipp Reisner <philipp.reisner@...bit.com>,
	linux-kernel@...r.kernel.org, Jens Axboe <jens.axboe@...cle.com>,
	Greg KH <gregkh@...e.de>, Neil Brown <neilb@...e.de>,
	Sam Ravnborg <sam@...nborg.org>, Dave Jones <davej@...hat.com>,
	Nikanth Karthikesan <knikanth@...e.de>,
	Lars Marowsky-Bree <lmb@...e.de>,
	Kyle Moffett <kyle@...fetthome.net>,
	Lars Ellenberg <lars.ellenberg@...bit.com>
Subject: Re: [PATCH 00/16] DRBD: a block device for HA clusters

On Sun, 3 May 2009, James Bottomley wrote:

> Subject: Re: [PATCH 00/16] DRBD: a block device for HA clusters
> 
> On Sun, 2009-05-03 at 07:36 -0700, david@...g.hm wrote:
>> On Sun, 3 May 2009, James Bottomley wrote:
>>
>>> Subject: Re: [PATCH 00/16] DRBD: a block device for HA clusters
>>>
>>> On Sat, 2009-05-02 at 22:40 -0700, david@...g.hm wrote:
>>>> On Sun, 3 May 2009, Willy Tarreau wrote:
>>>>
>>>>> On Sat, May 02, 2009 at 09:33:35AM +0200, Bart Van Assche wrote:
>>>>>> On Fri, May 1, 2009 at 10:59 AM, Andrew Morton
>>>>>> <akpm@...ux-foundation.org> wrote:
>>>>>>> On Thu, 30 Apr 2009 13:26:36 +0200 Philipp Reisner <philipp.reisner@...bit.com> wrote:
>>>>>>>
>>>>>>>> This is a repost of DRBD
>>>>>>>
>>>>>>> Is it being used anywhere for anything?  If so, where and what?
>>>>>>
>>>>>> One popular application is to run iSCSI and HA software on top of DRBD
>>>>>> in order to build a highly available iSCSI storage target.
>>>>>
>>>>> Confirmed, I have several customers who're doing exactly that.
>>>>
>>>> I will also say that there are a lot of us out here who would have a use
>>>> for DRBD in our HA setups, but have held off implementing it specifically
>>>> because it's not yet in the upstream kernel.
>>>
>>> Actually, that's not a particularly strong reason because we already
>>> have an in-kernel replicator that has much of the functionality of drbd
>>> that you could use.  The main reason for wanting drbd in kernel is that
>>> it has a *current* user base.
>>>
>>> Both the in kernel md/nbd and drbd do sync and async replication with
>>> primary side bitmaps.  The main differences are:
>>>
>>>      * md/nbd can do 1 to N replication,
>>>      * drbd can do active/active replication (useful for cluster
>>>        filesystems)
>>>      * The chunk size of the md/nbd is tunable
>>>      * With the updated nbd-tools, current md/nbd can do point in time
>>>        rollback on transaction logged secondaries (a BCS requirement)
>>>      * drbd manages the mirror state explicitly, md/nbd needs a user
>>>        space helper
>>>
>>> And probably a few others I forget.
>>
>> one very big one:
>>
>> DRBD has better support for dealing with split-brain situations and
>> recovering from them.
>
> I don't really think so.  The decision about which (or if a) node should
> be killed lies with the HA harness outside of the province of the
> replication.
>
> One could argue that the symmetric active mode of drbd allows both nodes
> to continue rather than having the harness make a kill decision about
> one.  However, if they both alter the same data, you get an
> irreconcilable data corruption fault which, one can argue, is directly
> counter to HA principles and so allowing drbd continuation is arguably
> the wrong thing to do.

But the issue is that at the time the failure is taking place, neither
side _knows_ that the other side is running. In fact, they both think that
the other side is dead.

With DRBD, when the two sides start talking again they will discover that
their data has diverged and complain, loudly, to the sysadmin that they
need help.

With md/nbd you have the situation where both sides will try to resync to
the other side as soon as the packets can get through. This can end up
corrupting both sides if it isn't caught fast enough.

David Lang