Message-ID: <752a8c91b7a418fa52cb8a8f28cb30155a574904.camel@redhat.com>
Date: Tue, 16 Feb 2021 06:01:16 -0500
From: Jeff Layton <jlayton@...hat.com>
To: Steve French <smfrench@...il.com>
Cc: David Howells <dhowells@...hat.com>,
Trond Myklebust <trondmy@...merspace.com>,
Anna Schumaker <anna.schumaker@...app.com>,
Steve French <sfrench@...ba.org>,
Dominique Martinet <asmadeus@...ewreck.org>,
CIFS <linux-cifs@...r.kernel.org>, ceph-devel@...r.kernel.org,
Matthew Wilcox <willy@...radead.org>, linux-cachefs@...hat.com,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-mm <linux-mm@...ck.org>, linux-afs@...ts.infradead.org,
v9fs-developer@...ts.sourceforge.net,
Christoph Hellwig <hch@....de>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-nfs <linux-nfs@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
David Wysochanski <dwysocha@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 00/33] Network fs helper library & fscache kiocb API [ver #3]

On Mon, 2021-02-15 at 18:40 -0600, Steve French wrote:
> Jeff,
> What are the performance differences you are seeing (positive or
> negative) with ceph and netfs, especially with simple examples like
> file copy or grep of large files?
>
> It could be good if netfs simplifies the problem experienced by
> network filesystems on Linux with readahead on large sequential reads
> - where we don't get as much parallelism due to only having one
> readahead request at a time (thus in many cases there is 'dead time'
> on either the network or the file server while waiting for the next
> readpages request to be issued). This can be a significant
> performance problem for the current readpages when network latency is
> long (or e.g. when network encryption is enabled and hardware offload
> is not available, so encrypting the packets is time consuming on the
> server or client).
>
> Do you see netfs much faster than the current readpages for ceph?
>
> Have you been able to get much benefit from throttling readahead with
> ceph from the current netfs approach for clamping i/o?
>
I haven't seen big performance differences at all with this set. It's
pretty much a wash, and it doesn't seem to change how the I/Os are
ultimately driven on the wire. For instance, the clamp_length op
basically just mirrors what ceph does today -- it ensures that the
length of the I/O can't go past the end of the current object.
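
To make that concrete (a rough userspace sketch, not the actual ceph or
netfs code from the series -- the struct and function names below are
made up for illustration), clamping just means trimming a subrequest so
it never crosses the next object boundary:

/*
 * Illustrative only: model of clamping a read subrequest to the end of
 * the RADOS object it starts in.  "object_size" stands in for the ceph
 * file layout's object size.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct subreq {
	uint64_t start;		/* file offset of the subrequest */
	uint64_t len;		/* requested length in bytes */
};

static bool clamp_to_object(struct subreq *sr, uint64_t object_size)
{
	/* First object boundary strictly after sr->start. */
	uint64_t boundary = (sr->start / object_size + 1) * object_size;

	if (sr->start + sr->len > boundary)
		sr->len = boundary - sr->start;

	return sr->len > 0;
}

int main(void)
{
	/* 4 MiB objects; a 4 MiB read starting 3 MiB into the file. */
	struct subreq sr = { .start = 3 * 1024 * 1024, .len = 4 * 1024 * 1024 };

	clamp_to_object(&sr, 4 * 1024 * 1024);
	printf("start=%llu len=%llu\n",	/* len ends up clamped to 1 MiB */
	       (unsigned long long)sr.start, (unsigned long long)sr.len);
	return 0;
}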

The main benefit is that we get a large swath of readpage, readpages
and write_begin code out of ceph altogether. All of the netfses need to
gather and vet pages for I/O, and most of that work has nothing to do
with the filesystem itself. By offloading it into the netfs lib, it's
taken care of for us and we don't need to bother with doing it
ourselves.
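
As a loose userspace model of that split (illustrative only -- none of
these names are the real netfs API), the library side owns walking and
chunking the buffer while the filesystem only plugs in the routine that
actually moves bytes over the wire:

#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Filesystem-supplied transport hook: read 'len' bytes at 'off' into 'buf'. */
typedef ssize_t (*issue_read_fn)(void *buf, size_t len, size_t off);

/* "Library" side: chunking, bounds handling, stop on short/failed reads. */
static ssize_t helper_read(void *buf, size_t len, size_t off, size_t chunk,
			   issue_read_fn issue)
{
	size_t done = 0;

	while (done < len) {
		size_t n = len - done < chunk ? len - done : chunk;
		ssize_t got = issue((char *)buf + done, n, off + done);

		if (got <= 0)
			break;
		done += (size_t)got;
	}
	return (ssize_t)done;
}

/* "Filesystem" side: only the transport; here it just fabricates data. */
static ssize_t fake_wire_read(void *buf, size_t len, size_t off)
{
	(void)off;
	memset(buf, 'x', len);
	return (ssize_t)len;
}

int main(void)
{
	char buf[64];
	ssize_t n = helper_read(buf, sizeof(buf), 0, 16, fake_wire_read);

	printf("read %zd bytes\n", n);
	return 0;
}
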
--
Jeff Layton <jlayton@...hat.com>