Date:	Wed, 10 Jun 2015 11:23:25 -0500
From:	Goldwyn Rodrigues <rgoldwyn@...e.com>
To:	David Teigland <teigland@...hat.com>
CC:	linux-kernel@...r.kernel.org, NeilBrown <neilb@...e.de>
Subject: Re: clustered MD

To start with, the goal of (basic) MD RAID1 is to keep the two mirrored
devices consistent _all_ of the time. In case of a device failure, it
should degrade the array, marking the failed device so that it can be
(hot-)removed and replaced. Now, take the same concepts to multiple
nodes using the same MD RAID1 device...
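
To make that concrete, here is a minimal userspace C sketch of those two
rules (an illustration only, not the kernel's md code; all names are
hypothetical): every write goes to both mirrors, and an I/O error on one
mirror marks that device Faulty and degrades the array.

#include <stdbool.h>
#include <stdio.h>

#define NDEVS 2

struct mirror {
    bool faulty;       /* kicked out of the array */
    bool inject_error; /* test hook: fail the next write */
};

struct raid1 {
    struct mirror dev[NDEVS];
    bool degraded;
};

/* Pretend block-device write; a real implementation submits a bio. */
static bool dev_write(struct mirror *m)
{
    if (m->inject_error)
        return false; /* simulated I/O error (e.g. link break) */
    return true;
}

static void raid1_write(struct raid1 *r)
{
    for (int i = 0; i < NDEVS; i++) {
        if (r->dev[i].faulty)
            continue; /* already removed from service */
        if (!dev_write(&r->dev[i])) {
            r->dev[i].faulty = true; /* mark Faulty... */
            r->degraded = true;      /* ...and degrade the array */
            fprintf(stderr, "dev %d failed, array degraded\n", i);
        }
    }
}

int main(void)
{
    struct raid1 r = {0};
    raid1_write(&r);              /* healthy: hits both mirrors */
    r.dev[1].inject_error = true; /* break one device */
    raid1_write(&r);              /* succeeds on dev 0, degrades */
    printf("degraded = %d\n", r.degraded);
    return 0;
}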

On 06/10/2015 10:48 AM, David Teigland wrote:
> On Wed, Jun 10, 2015 at 10:27:27AM -0500, Goldwyn Rodrigues wrote:
>> I thought I answered that:
>> To use a software RAID1 across multiple nodes of a cluster. Let me
>> explain in more words..
>>
>> In a cluster, multiple nodes share storage, such as a SAN. The
>> shared device becomes a single point of failure.
>
> OK, shared storage, that's an important starting point that was never
> clear.
>
>> If the
>> device loses power, you will lose everything. A proposed solution is
>> to use software RAID: say, take two SAN switches with different
>> devices and create a RAID1 across them. So if you lose power on one
>> switch, or one of the devices fails, the other is still available.
>> Once you get the other switch/device back up, it would resync the
>> devices.
>
> OK, MD RAID1 on shared disks.
>
>>> , and exactly
>>> what breaks when you use raid1 in that way?  Once we've established the
>>> technical problem, then I can fairly evaluate your solution for it.
>>
>> Data consistency breaks. If node 1 is writing to the RAID1 device,
>> you have to make sure the data on the two RAID devices stays
>> consistent. With software RAID, this is done with bitmaps. The
>> DLM is used to maintain data consistency.
>
> What's different about disks being on SAN that breaks data consistency vs
> disks being locally attached?  Where did the dlm come into the picture?

There are multiple nodes using the same shared device. Different nodes 
would be writing their own data to the shared device, possibly through a 
shared filesystem such as ocfs2 on top of it. Each node maintains a 
bitmap to co-ordinate syncs between the two devices of the RAID. Since 
there are two devices, writes to the two devices can finish at different 
times and must be co-ordinated.
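
As a rough sketch of how such a bitmap works (loosely modeled on md's
write-intent bitmap idea, not the actual on-disk format; the I/O helpers
are hypothetical stand-ins): a bit covering the chunk is made durable
before either mirror is written, and cleared only once both copies
match, so a crash leaves exactly the possibly-divergent chunks marked.

#include <stdint.h>
#include <stdio.h>

#define CHUNK_SHIFT 3               /* 8 sectors per bitmap chunk */
#define NCHUNKS 64

static uint8_t bitmap[NCHUNKS / 8]; /* this node's write-intent bitmap */

static void bit_set(unsigned c)   { bitmap[c >> 3] |= 1u << (c & 7); }
static void bit_clear(unsigned c) { bitmap[c >> 3] &= ~(1u << (c & 7)); }
static int  bit_test(unsigned c)  { return bitmap[c >> 3] >> (c & 7) & 1; }

/* Hypothetical stand-ins for the real block I/O. */
static void write_mirror(int dev, unsigned sector) { (void)dev; (void)sector; }
static void flush_bitmap(void) { /* would make the bitmap durable on disk */ }

static void mirrored_write(unsigned sector)
{
    unsigned chunk = sector >> CHUNK_SHIFT;

    bit_set(chunk);
    flush_bitmap();           /* dirty bit durable before any data write */
    write_mirror(0, sector);
    write_mirror(1, sector);  /* the two writes may finish at different times */
    bit_clear(chunk);         /* both copies match again: chunk is clean */
    flush_bitmap();
}

int main(void)
{
    mirrored_write(42);
    printf("chunk %u dirty after both writes: %d\n",
           42u >> CHUNK_SHIFT, bit_test(42u >> CHUNK_SHIFT));
    return 0;
}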

>
>> Device failure can be partial. Say, only node 1 sees that one of the
>> device has failed (link break).  You need to "tell" other nodes not
>> to use the device and that the array is degraded.
>
> Why?

Data consistency. A node which continues to "see" the device as working 
(when it has failed on another node) will read stale data, because that 
device is no longer receiving the other node's writes.
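
Schematically, it looks like the sketch below (a hypothetical message
layer; real md-cluster carries such events over the DLM, which this does
not attempt): the node that sees the error tells every node to stop
using the device before any of them reads from it again.

#include <stdbool.h>
#include <stdio.h>

#define NNODES 3
#define NDEVS  2

/* Each node's local view of which mirrors are usable. */
static bool faulty[NNODES][NDEVS];

/* Stand-in for a cluster broadcast of a device-failure event. */
static void broadcast_device_failure(int dev)
{
    for (int n = 0; n < NNODES; n++)
        faulty[n][dev] = true; /* every node degrades its view */
}

/* A reader must avoid any device reported failed, otherwise it could
 * return blocks that stopped receiving another node's writes. */
static int pick_read_dev(int node, int preferred)
{
    return faulty[node][preferred] ? !preferred : preferred;
}

int main(void)
{
    broadcast_device_failure(1);  /* node 0 lost its link to dev 1 */
    printf("node 2 now reads from dev %d\n", pick_read_dev(2, 1));
    return 0;
}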

>
>> In case of node failure, the blocks of the failed nodes must be
>> synced before the cluster can continue operation.
>
> What do cluster/node failures have to do with syncing mirror copies?
>

Data consistency. Different nodes will be writing to different blocks. 
So, if a node fails, you need to make sure that whatever the failed node 
had not yet synced between the two devices is completed by the node 
performing recovery. You need to provide a consistent view to all nodes.
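
A sketch of that recovery step, under the assumptions above that each
node keeps its own write-intent bitmap (all names hypothetical): the
recovering node replays the dead node's bitmap and copies every dirty
chunk from one mirror to the other so both copies agree again.

#include <stdint.h>
#include <stdio.h>

#define NCHUNKS 64

/* Hypothetical stand-in for copying one chunk between the mirrors. */
static void copy_chunk(int from_dev, int to_dev, unsigned chunk)
{
    printf("resync chunk %u: dev%d -> dev%d\n", chunk, from_dev, to_dev);
}

/* Replay the dead node's write-intent bitmap: every chunk it left
 * dirty may exist on only one mirror, so pick one copy as the
 * authority and overwrite the other. */
static void recover_failed_node(const uint8_t *dead_bitmap)
{
    for (unsigned c = 0; c < NCHUNKS; c++)
        if (dead_bitmap[c >> 3] >> (c & 7) & 1)
            copy_chunk(0, 1, c);
}

int main(void)
{
    uint8_t dead[NCHUNKS / 8] = {0};
    dead[0] = 0x05; /* chunks 0 and 2 were in flight when the node died */
    recover_failed_node(dead);
    return 0;
}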

>> Does that explain the situation?
>
> No.  I don't see what clusters have to do with MD RAID1 devices; they seem
> like completely orthogonal concepts.

If you need an analogy: cLVM, but with less overhead ;)

Also, may I point you to linux/Documentation/md-cluster.txt?

HTH,

-- 
Goldwyn
