Message-ID: <20080514193234.GA10165@2ka.mipt.ru>
Date: Wed, 14 May 2008 23:32:35 +0400
From: Evgeniy Polyakov <johnpol@....mipt.ru>
To: Jeff Garzik <jeff@...zik.org>
Cc: Jamie Lokier <jamie@...reable.org>, Sage Weil <sage@...dream.net>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: POHMELFS high performance network filesystem. Transactions, failover, performance.
On Wed, May 14, 2008 at 03:08:09PM -0400, Jeff Garzik (jeff@...zik.org) wrote:
> Evgeniy Polyakov wrote:
> >That can be the case if client connects to some gate server, which in
> >turn broadcasts data further, that is how I plan to implement things at
> >first.
>
> That means you are less optimal than the direct-to-storage-server path
> in NFSv4.1, then........
No, the server the client connects to is the server which stores the
data. In addition, it will also store the data in some other places
according to a distribution algorithm (like weaver codes, RAID,
mirroring, whatever).
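To illustrate (a hypothetical userspace sketch, not actual POHMELFS
code; all names are made up), the simplest mirror-style placement
could look like this:

/*
 * Hypothetical sketch, not POHMELFS code: pick mirror locations for
 * an object by hashing its id over the server table.
 */
#include <stdio.h>

#define NR_SERVERS	5
#define NR_REPLICAS	3

static unsigned int obj_hash(unsigned long id)
{
	return id * 2654435761u;	/* Knuth's multiplicative hash */
}

static void pick_replicas(unsigned long id, int *out)
{
	unsigned int first = obj_hash(id) % NR_SERVERS;
	int i;

	/* Simple mirror placement: the primary plus the next servers. */
	for (i = 0; i < NR_REPLICAS; i++)
		out[i] = (first + i) % NR_SERVERS;
}

int main(void)
{
	int replicas[NR_REPLICAS], i;

	pick_replicas(42, replicas);
	for (i = 0; i < NR_REPLICAS; i++)
		printf("replica %d -> server %d\n", i, replicas[i]);
	return 0;
}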
> <waves red flag in front of the bull>
>
> If access controls permit, the ideal would be for the client to avoid an
> intermediary when storing data. The client only _needs_ a consensus
> reply that their transaction was committed. They don't necessarily need
> an intermediary to do the boring data transfer work.
Sure, the fewer machines we have between the client and the storage,
the faster and more robust we are.
Either the client has to write the data to all servers itself, or it
has to write it to one server and wait until that server broadcasts it
further (to a quorum or to any number of machines it wants). Having a
pure client decide which servers it has to put its data on is a bit
wrong (to say the least), since it then has to join not only the data
network but also the control one: to check whether servers are alive
or not, to avoid races when a server is recovering, and so on...
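Just to make the difference concrete, here is a hypothetical
userspace simulation of the two write paths (again not POHMELFS code,
all names invented): in the first the client must track server
liveness itself, in the second the primary server acknowledges the
transaction once a quorum has committed.

/*
 * Hypothetical sketch, not POHMELFS code: the two write paths
 * discussed above, simulated in memory.
 */
#include <stdio.h>
#include <string.h>

#define NR_SERVERS	5
#define QUORUM		((NR_SERVERS / 2) + 1)

struct server {
	int  id;
	int  alive;
	char data[64];
};

/*
 * Path 1: the client writes to every server itself; it has to know
 * which servers are alive, i.e. it must join the control network too.
 */
static int client_writes_to_all(struct server *srv, int n, const char *buf)
{
	int i, acks = 0;

	for (i = 0; i < n; i++) {
		if (!srv[i].alive)
			continue;	/* client must detect this itself */
		strncpy(srv[i].data, buf, sizeof(srv[i].data) - 1);
		acks++;
	}
	return acks == n ? 0 : -1;
}

/*
 * Path 2: the client writes to one server; that server broadcasts
 * further and acknowledges once a quorum has committed.
 */
static int server_broadcasts(struct server *primary, struct server *srv,
			     int n, const char *buf)
{
	int i, acks = 0;

	strncpy(primary->data, buf, sizeof(primary->data) - 1);
	for (i = 0; i < n; i++) {
		if (!srv[i].alive || srv[i].id == primary->id)
			continue;
		strncpy(srv[i].data, buf, sizeof(srv[i].data) - 1);
		acks++;
	}
	/* +1 for the primary's own copy. */
	return acks + 1 >= QUORUM ? 0 : -1;
}

int main(void)
{
	struct server srv[NR_SERVERS];
	int i;

	for (i = 0; i < NR_SERVERS; i++) {
		srv[i].id = i;
		srv[i].alive = (i != 3);	/* one server is down */
		srv[i].data[0] = '\0';
	}

	printf("client-to-all: %s\n",
	       client_writes_to_all(srv, NR_SERVERS, "tx1") ? "failed" : "ok");
	printf("via primary:   %s\n",
	       server_broadcasts(&srv[0], srv, NR_SERVERS, "tx1") ? "failed" : "ok");
	return 0;
}

With one server down, the client-to-all path fails unless the client
itself notices the dead server, while the quorum path still succeeds.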
--
Evgeniy Polyakov