Message-ID: <20080514135652.GB23131@2ka.mipt.ru>
Date: Wed, 14 May 2008 17:56:52 +0400
From: Evgeniy Polyakov <johnpol@....mipt.ru>
To: Sage Weil <sage@...dream.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: POHMELFS high performance network filesystem. Transactions, failover, performance.
On Wed, May 14, 2008 at 06:41:53AM -0700, Sage Weil (sage@...dream.net) wrote:
> Yes. Only a pagevec at a time, though... apparently 14 is a small enough
> number not to bite too many people in practice?
Well, POHMELFS can use up to 90 out of 512 or 1024 on x86, but that
just moves the problem a bit closer.
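
Roughly, a sketch of how a bigger batch could be gathered with the
2.6 page cache API (wb_gather_dirty() and WB_BATCH_NR are illustrative
names, not real POHMELFS code; find_get_pages_tag() is the helper that
pagevec_lookup_tag() wraps at its fixed size of 14):

/*
 * Illustrative sketch, not the real POHMELFS code: grab up to 90
 * dirty pages per pass instead of the PAGEVEC_SIZE (14) that
 * pagevec_lookup_tag() is limited to, using the same underlying
 * 2.6 page cache helper.
 */
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/fs.h>

#define WB_BATCH_NR	90	/* hypothetical batch size */

static unsigned wb_gather_dirty(struct address_space *mapping,
				pgoff_t *index, struct page **pages)
{
	/*
	 * find_get_pages_tag() takes a reference on each page it
	 * returns and advances *index past the last one found.
	 */
	return find_get_pages_tag(mapping, index, PAGECACHE_TAG_DIRTY,
				  WB_BATCH_NR, pages);
}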
IMHO the problem may in fact be that the copy is a more significant
overhead than the per-page socket lock plus direct DMA (I believe most
GigE and faster links, and of course RDMA, have scatter-gather and RX
checksumming). It has to be tested, so I will change the POHMELFS
writeback path to test things. If there is no performance degradation
(and I believe there will not be, though likely no improvement either,
since the tests were always network bound), I will use that approach.
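
For illustration, a minimal sketch of the zero-copy side
(wb_send_batch() is a made-up name, not the real POHMELFS transmit
path), pushing each page through kernel_sendpage() so the per-page
cost is the socket lock rather than a data copy:

/*
 * Illustrative sketch, not the real POHMELFS transmit path: push
 * each page of a batch through ->sendpage via kernel_sendpage(),
 * so a NIC with scatter-gather and checksum offload can DMA the
 * data straight out of the page cache, no copy into skbs.
 * Partial-send and blocking handling is omitted for brevity.
 */
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/mm.h>

static int wb_send_batch(struct socket *sock, struct page **pages,
			 unsigned int nr, size_t last_len)
{
	unsigned int i;
	int sent;

	for (i = 0; i < nr; i++) {
		size_t len = (i == nr - 1) ? last_len : PAGE_SIZE;

		/*
		 * MSG_MORE batches the pages into fewer segments;
		 * the socket lock is still taken once per page,
		 * which is the cost weighed against the copy.
		 */
		sent = kernel_sendpage(sock, pages[i], 0, len,
				       i == nr - 1 ? 0 : MSG_MORE);
		if (sent < 0)
			return sent;
	}
	return 0;
}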
--
Evgeniy Polyakov