Message-ID: <20080514213251.GB23758@shareable.org>
Date: Wed, 14 May 2008 22:32:51 +0100
From: Jamie Lokier <jamie@...reable.org>
To: Evgeniy Polyakov <johnpol@....mipt.ru>
Cc: Sage Weil <sage@...dream.net>, Jeff Garzik <jeff@...zik.org>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: POHMELFS high performance network filesystem. Transactions, failover, performance.

Evgeniy Polyakov wrote:
> If I understood that correctly, the client has to connect to all servers
> and send the data there, so that things are committed after a single
> reply. That is definitely not an option when there are lots of servers.
>
> That can also be the case if the client connects to some gate server,
> which in turn broadcasts the data further; that is how I plan to
> implement things at first.

Look up BitTorrent, and bandwidth diffusion generally. Also look up
multicast trees.

Sometimes it's faster for a client to send to many servers directly;
sometimes it's faster to send to fewer and have the data relayed by
intermediaries - because every packet takes time to transmit, and
network topologies aren't always homogeneous or symmetric.

There is no single answer that is optimal for all networks.
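
To make the trade-off concrete, here is a toy model (a sketch only; the
uniform one-copy-per-time-unit assumption is mine, nothing measured on
POHMELFS). If the client uploads every copy itself, its uplink serialises
N transmissions; if each server that already has the data relays it on -
a binomial diffusion tree, the way BitTorrent swarms spread pieces - you
only need about log2(N) rounds:

/* Toy comparison, uniform-cost assumption: every node (client included)
 * can transmit exactly one full copy of the data per time unit.
 */
#include <stdio.h>

/* (a) Flat fanout: the client's uplink serialises all N copies itself. */
static unsigned int flat_fanout_time(unsigned int nservers)
{
	return nservers;
}

/*
 * (b) Relay tree: in each time unit every node that already holds the
 * data forwards one more copy, so the number of holders doubles
 * (1 -> 2 -> 4 -> ...).  Count doublings until all servers plus the
 * client hold a copy.
 */
static unsigned int relay_tree_time(unsigned int nservers)
{
	unsigned int holders = 1, t = 0;

	while (holders < nservers + 1) {
		holders *= 2;
		t++;
	}
	return t;
}

int main(void)
{
	unsigned int n;

	for (n = 2; n <= 1024; n *= 4)
		printf("N=%4u servers: flat=%4u units, relay tree=%2u units\n",
		       n, flat_fanout_time(n), relay_tree_time(n));
	return 0;
}

Of course, if the client's uplink is much faster than the servers'
links, or if each relay hop adds latency, the flat fanout can still win
for small N.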

> Another approach, which also seems interesting, is leader election (per
> client), so that the leader would broadcast all the data.

Leader election is part of standard Paxos too :-)

If you have a single data forwarder elected per client, then a client
that generates a lot of traffic concentrates it all on one network link
and one CPU. Sometimes it's better to elect several leaders per client
and hash requests onto them. You diffuse CPU load and traffic, but
reduce the opportunities to aggregate transactions into fewer messages.
It's an interesting problem, again probably with different optimal
answers for different networks.
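
A minimal sketch of that several-leaders-per-client idea (the names and
the leader list below are hypothetical, none of it is POHMELFS code):
hash something stable per object, so every request touching the same
object keeps going to the same leader and can still be merged into one
transaction there, while different objects spread across leaders:

#include <stdint.h>
#include <stdio.h>

struct leader {
	const char	*addr;		/* where this elected leader listens */
};

/* Hypothetical set of leaders elected for this client. */
static struct leader leaders[] = {
	{ "10.0.0.1" }, { "10.0.0.2" }, { "10.0.0.3" }, { "10.0.0.4" },
};
#define NR_LEADERS	(sizeof(leaders) / sizeof(leaders[0]))

/* splitmix64 finaliser: mixes the bits so consecutive object ids don't
 * all land on the same leader. */
static uint64_t hash64(uint64_t x)
{
	x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
	x ^= x >> 27; x *= 0x94d049bb133111ebULL;
	x ^= x >> 31;
	return x;
}

/* Same object id -> same leader; different ids spread across leaders. */
static struct leader *pick_leader(uint64_t object_id)
{
	return &leaders[hash64(object_id) % NR_LEADERS];
}

int main(void)
{
	uint64_t id;

	for (id = 1; id <= 8; id++)
		printf("object %llu -> leader %s\n",
		       (unsigned long long)id, pick_leader(id)->addr);
	return 0;
}

The aggregation cost shows up immediately: two writes to different
objects can no longer share one message, because they may hash to
different leaders.
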
-- Jamie