Message-ID: <29686.1316526117@turing-police.cc.vt.edu>
Date:	Tue, 20 Sep 2011 09:41:57 -0400
From:	Valdis.Kletnieks@...edu
To:	Evgeniy Polyakov <zbr@...emap.net>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: POHMELFS is back

On Tue, 20 Sep 2011 09:58:12 +0400, Evgeniy Polyakov said:
> On Mon, Sep 19, 2011 at 02:10:51PM -0400, Valdis.Kletnieks@...edu (Valdis.Kletnieks@...edu) wrote:
> > On Mon, 19 Sep 2011 10:13:02 +0400, Evgeniy Polyakov said:
> > > more than 4 Gb/s of bandwidth from each datacenter,
> > Also not at all impressive per-node if we're talking an average of 50 nodes per
> > data center.  I'm currently waiting for some 10GigE to be provisioned
> > because we're targeting close to a giga*byte*/sec per server.
>
> If you get 10 times more bandwidth, you will not be able to saturate it
> with 10 times fewer servers.

The point is that the solutions we're looking at are able to drive enough I/O
*per server* that we need to look at 10GigE and InfiniBand connections. Your
numbers currently indicate about 5T of disk and 75 megabits of throughput per
node, while current solutions are doing about 100T and pushing a full 10GigE per
node.  So you have a *lot* of per-server scaling work to do still...
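
For what it's worth, here's the back-of-envelope arithmetic behind that
per-node figure, as a quick Python sketch (the 4 Gb/s aggregate and the ~50
nodes per data center are just the numbers quoted earlier in this thread,
nothing more precise):

# Rough per-node throughput estimate from the figures quoted above.
# Inputs are the numbers from this thread, not measurements of anything.
DATACENTER_BANDWIDTH_GBPS = 4      # aggregate bandwidth per data center
NODES_PER_DATACENTER = 50          # average node count per data center

per_node_mbps = DATACENTER_BANDWIDTH_GBPS * 1000 / NODES_PER_DATACENTER
print(f"per-node throughput: ~{per_node_mbps:.0f} Mbit/s")      # ~80 Mbit/s

TEN_GIGE_MBPS = 10_000             # a node that can fill a 10GigE link
print(f"gap vs. saturated 10GigE: ~{TEN_GIGE_MBPS / per_node_mbps:.0f}x")  # ~125x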

>                                              Scaling to hundreds of server nodes is a
> good result, since we evenly balance all IO between nodes and no single
> server is disk or network bound.

You missed the point. Scaling to hundreds of server nodes is a nice
*theoretical* result, but one that's not going to get a lot of traction out in
the real world, where the *per server* scaling matters too.  Which is my boss
more likely to be willing to spend money on - a solution that has 50 servers
per data center to deliver 4 Gb/sec per data center, or one that is delivering
that much *per server*? Remember - servers cost money, rack space costs money,
hardware maintenance contracts cost money, electricity and cooling cost money,
network connectivity costs money, sysadmin time to manage each node costs money
- so in the real world, the solution that uses the smallest number of servers
to get to the target aggregate is probably going to win.

Looked at differently - if I'm currently targeting multiple gigabytes/sec throughput
to a petabyte of disk from a half-dozen servers, how big and fast a disk farm
could I build if I had 50 servers in the room, or 200 across datacenters?
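
To put rough numbers on that question, a sketch in the same spirit (the
half-dozen servers, one petabyte, and multi-gigabytes/sec figures are the
targets I mentioned above; assuming roughly linear scaling, which is
optimistic but fine for a back-of-envelope comparison):

# Hypothetical linear-scaling estimate; the current-target numbers are the
# rough figures above, not measurements, and 3 GB/s is an assumed value
# for "multiple gigabytes/sec".
CURRENT_SERVERS = 6                # "a half-dozen servers"
CURRENT_DISK_PB = 1                # "a petabyte of disk"
CURRENT_THROUGHPUT_GBPS = 3        # assumed

for servers in (50, 200):
    scale = servers / CURRENT_SERVERS
    print(f"{servers} servers: ~{CURRENT_DISK_PB * scale:.0f} PB of disk, "
          f"~{CURRENT_THROUGHPUT_GBPS * scale:.0f} GB/s aggregate")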

