Date:	Thu, 05 Jun 2008 12:36:20 -0700
From:	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
To:	"Ross S. W. Walker" <rwalker@...allion.com>
Cc:	Jerome Martin <tramjoe.merin@...il.com>,
	"Linux-iSCSI.org Target Dev" 
	<linux-iscsi-target-dev@...glegroups.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Christoph Hellwig <hch@....de>,
	"H. Peter Anvin" <hpa@...or.com>, Andrew Morton <akpm@...l.org>
Subject: RE: Vpools and v3.0-UPSTREAM LIO

On Thu, 2008-06-05 at 14:28 -0400, Ross S. W. Walker wrote:
> Jerome Martin wrote:
> 
> > On Thu, Jun 5, 2008 at 5:39 PM, Ross S. W. Walker 
> > <rwalker@...allion.com> wrote:
> > 
> > 
> > 	How about something like "ipool"?
> > 	
> > 	Maybe if I had a better picture of what it is you're trying to
> > 	achieve, I could throw some better suggestions out too?
> > 	
> > 
> > 
> > Well, basically the idea is to have a cluster of machines
> > driven by heartbeat v2 (pacemaker), implementing "storage" and
> > "vhost" roles for arbitrary nodes. The storage part is based
> > on lvm + drbd + lio-target + open/iSCSI + ext3 (ocfs2
> > planned for v2). The vhost part is based on linux-vservers,
> > with plans to add VMware Server in v2 and then maybe
> > Xen/OpenVZ depending on contribs. The term "pool" relates to
> > the fact that the vservers are "pooled" by storage chunks,
> > and the "v" stands for vservers. The effect is "Vserver POOLS".
> 
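Just to make that layering concrete, here is a rough hypothetical sketch
(Python, illustration only, not code from the project): one possible reading
of the stack is that the "storage" role stacks lvm -> drbd -> lio-target on
the exporting node and the "vhost" role stacks open-iscsi -> ext3 -> vserver
on the consuming node, with each role brought up bottom-to-top and torn down
top-to-bottom, which is essentially the ordering heartbeat/pacemaker would
enforce.

# Hypothetical illustration only -- the layer names and ordering are one
# possible reading of the stack described above, not the project's code.
STORAGE_ROLE = ["lvm", "drbd", "lio-target"]        # exporting node
VHOST_ROLE   = ["open-iscsi", "ext3", "vserver"]    # consuming node

def start_role(layers, run):
    for layer in layers:             # bring up bottom-to-top
        run("start " + layer)

def stop_role(layers, run):
    for layer in reversed(layers):   # tear down top-to-bottom
        run("stop " + layer)

if __name__ == "__main__":
    start_role(STORAGE_ROLE, print)
    stop_role(STORAGE_ROLE, print)
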
> How about a "stateless" storage pool where each node knows
> nothing about any other node in the storage pool?
> 
> You have central servers that then organize each node in
> the pool by available storage, performance of that storage,
> etc.

Having central servers (and having to worry about the redundancy of said
servers) would increase the complexity a great deal, I would think.

>  The storage is then mapped out based on these
> characteristics; redundancy and striping are handled in
> the mapping, also based on performance characteristics,
> and served up to the servers that need it.
> 
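For illustration, a rough hypothetical sketch (Python, made-up node names,
not from any existing code) of the kind of placement decision such central
servers would be making, with striping and redundancy handled in the mapping:

# Hypothetical sketch: pick nodes for a striped, replicated volume based on
# each node's free capacity and a rough performance score.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_gb: int        # available storage
    perf_score: float   # e.g. measured throughput/IOPS, however it is scored

def map_volume(nodes, size_gb, stripes=2, replicas=2):
    """Return one list of node names per stripe; each stripe is replicated
    on `replicas` distinct nodes.  Raises if the pool cannot satisfy it."""
    per_stripe = size_gb // stripes
    # Prefer fast nodes that still have enough free space.
    candidates = sorted(
        (n for n in nodes if n.free_gb >= per_stripe),
        key=lambda n: n.perf_score, reverse=True)
    if len(candidates) < stripes * replicas:
        raise RuntimeError("not enough nodes for the requested layout")
    layout = []
    it = iter(candidates)
    for _ in range(stripes):
        group = [next(it) for _ in range(replicas)]
        for n in group:
            n.free_gb -= per_stripe   # account for the allocation
        layout.append([n.name for n in group])
    return layout

if __name__ == "__main__":
    pool = [Node("node-a", 500, 90.0), Node("node-b", 800, 70.0),
            Node("node-c", 300, 95.0), Node("node-d", 600, 60.0)]
    print(map_volume(pool, size_gb=200, stripes=2, replicas=2))
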

Hmmm, so what are the advantages over what we have working so far..?

> > Another property of that cluster scheme is that it is
> > designed to work with N+M redundancy (N active, M spares per
> > resource). That could be used for devising a name too.
> > 
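As a toy model of that N+M scheme (hypothetical Python, illustration only):
each resource runs on N active nodes with M dedicated spares, and a failed
active node is replaced by promoting one of the spares:

# Hypothetical illustration of N+M redundancy, not code from the project.
class Resource:
    def __init__(self, name, active, spares):
        self.name = name
        self.active = list(active)   # N nodes currently serving the resource
        self.spares = list(spares)   # M standby nodes

    def node_failed(self, node):
        if node in self.active:
            self.active.remove(node)
            if not self.spares:
                raise RuntimeError(f"{self.name}: no spare left for {node}")
            takeover = self.spares.pop(0)   # promote a spare
            self.active.append(takeover)
            return takeover
        if node in self.spares:
            self.spares.remove(node)        # losing a spare just shrinks M
        return None

r = Resource("vpool0", active=["nodeA", "nodeB"], spares=["nodeC"])  # N=2, M=1
print(r.node_failed("nodeA"))   # -> nodeC takes over
print(r.active, r.spares)       # -> ['nodeB', 'nodeC'] []
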
> > I'm currently trying to find some acronym based on the
> > software stack used:
> > - DRBD
> > - iSCSI (LIO-target, no other target being able to fulfill
> > the project requirements, and either open/iSCSI or core iscsi)
> > - virtual servers (to stay implementation-agnostic)
> > - linux-ha / pacemaker (the actual heartbeat part is
> > non-essential and will be replaced by openAIS in v2, I think)
> > 
> > So maybe funny combinations like "DIVHA,
> > software that sings" or "PAID, free software", or anything
> > along those lines, would work :-) To be honest, I liked
> > "vpools", and I don't really have a good idea yet.
> 
> How about DIVA?
> 
> This product can be added onto existing iSCSI
> infrastructures and provides integrated high-level
> storage management services that would otherwise
> require separate products.
> 

Hmm, not bad.  I usually have to go for a walk or two to come up with
good project names.. :-)

--nab

> -Ross
> 
