Message-ID: <20100425191643.GP4011@vanheusden.com>
Date:	Sun, 25 Apr 2010 21:16:43 +0200
From:	Folkert van Heusden <folkert@...heusden.com>
To:	linux-kernel@...r.kernel.org
Subject: disabling cache to be able to create a
	http://en.wikipedia.org/wiki/Peer_to_Peer_Remote_Copy solution

Hi,

I'm trying to achieve the following:

   storage a                   storage b
      |   |                     | |
      |   +--+                  | |
      | +----|------------------+ |    <- fibre
      | |    |                    |
      | |    +------------------+ |
      | |                       | |
    linux a                    linux b
      |                           |
      +-------------+  +----------+    <- iscsi
                    |  |
                   vmware

A trained eye will recognize this as a 'peer to peer remote
copy'-implementation.

So linux a (or b) receives storage requests from vmware and stores them
mirrored on storage a and b: both have an md1 on their paths to storage
a and b.

Now let's say vmware talks active/passive to those linux boxes. At some
moment the left path is selected and vmware sends a block to linux a to
store on disk. Linux a stores this block on both storage systems and
_caches this block in memory_.
Then at some point, vmware switches to the path via system b and stores
some data at that same block.
Then vmware switches back to path a and reads this block again from
disk. Now here's where the problem comes up: linux a thinks it still has
this block in the memory cache and serves it from there, while in fact
the block was already changed on the storage while the path went through
system b!
So to be able to construct this mechanism, I somehow need to be able to
disable the read cache of linux a and b. I found that you can clear the
cache by entering "echo 3 > /proc/sys/vm/drop_caches", but this can't be
invoked automatically by the vmware system when it switches paths.
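[Illustration only, not part of the original mail: a less drastic alternative to the global drop_caches knob is to ask the kernel to evict cached pages for just the affected file or device, via posix_fadvise(POSIX_FADV_DONTNEED). A minimal sketch, assuming a Linux host and Python 3.3+; the path used below is hypothetical.]

```python
import os

def drop_read_cache(path):
    """Hint the kernel to evict cached pages for one file or device,
    so the next read goes to the backing store instead of possibly
    stale page cache. A per-file alternative to the global
    'echo 3 > /proc/sys/vm/drop_caches'."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Range (0, 0) means the whole file; POSIX_FADV_DONTNEED asks
        # the kernel to drop cached pages in that range. This is a
        # hint, not a guarantee, and only affects clean pages.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)

if __name__ == "__main__":
    demo = "/tmp/pprc_demo"          # hypothetical path for the demo
    with open(demo, "w") as f:
        f.write("block data")
    drop_read_cache(demo)
    print("cache eviction hint sent for", demo)
```

[This still has to be triggered around a path switch, so it doesn't remove the coordination problem; opening the md device with O_DIRECT would bypass the page cache entirely, at a performance cost.]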

Anyone got a suggestion?


Folkert van Heusden

-- 
----------------------------------------------------------------------
Phone: +31-6-41278122, PGP-key: 1F28D8AE, www.vanheusden.com
--
