Message-ID: <CAFkjPT=nxuG5rSuJ1seFV9eWvWNkyzw2f45yWqyEQV3+M91MPg@mail.gmail.com>
Date: Sun, 5 Feb 2023 10:37:53 -0600
From: Eric Van Hensbergen <ericvh@...il.com>
To: Christian Schoenebeck <linux_oss@...debyte.com>
Cc: v9fs-developer@...ts.sourceforge.net, asmadeus@...ewreck.org,
rminnich@...il.com, lucho@...kov.net,
Eric Van Hensbergen <ericvh@...nel.org>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v3 00/11] Performance fixes for 9p filesystem
On Thu, Feb 2, 2023 at 5:27 AM Christian Schoenebeck
<linux_oss@...debyte.com> wrote:
>
> Looks like this needs more work.
>
> I have only had a glimpse at your patches so far, but I made some tests by
> doing compilations on the guest on top of a 9p root fs [1], msize=500k.
> Under that scenario:
>
> * loose: this is surprisingly the only mode where I can see some
> performance increase; over "loose" on master it compiled ~5% faster, but I
> also got some misbehaviours on the guest.
>
I was so focused on the bugs that I forgot to respond to the
performance concerns -- just to be clear, readahead and writeback
aren't meant to be more performant than loose, they are meant to have
stronger guarantees of consistency with the server file system. Loose
is inclusive of readahead and writeback; it keeps the caches around
longer and does some dir caching as well, so it's always going to win,
but it does so at the risk of being more inconsistent with the server
file system and should only be used when the guest/client has
exclusive access or the filesystem itself is read-only. I have a
design for a "tight" cache, which will also not be as performant as
loose but will add consistent dir caching on top of readahead and
writeback. Once we've properly vetted it, it should likely become the
default cache option, and any fscache support should be built on top
of it. I was also thinking of augmenting "tight" and "loose" with a
"temporal" cache that works more like NFS and bounds consistency to a
particular time quantum. Loose was always a bit of a "hack" for some
particular use cases and has always been a bit problematic in my mind.
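
For concreteness, the two existing baselines I keep comparing against
are just the cache= mount options; roughly something like the sketch
below (the share name, mount point and msize are placeholders, and the
spelling of the new mode names may still change as the series settles):

  # guest side: same virtio share mounted with each baseline
  mount -t 9p -o trans=virtio,version=9p2000.L,msize=512000,cache=none  hostshare /mnt/9p
  mount -t 9p -o trans=virtio,version=9p2000.L,msize=512000,cache=loose hostshare /mnt/9p

cache=none pushes every read/write to the server, while cache=loose
keeps page and dentry caches around without revalidation, which is why
it only makes sense with exclusive access or a read-only export.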
So, to make sure we are on the same page: were your performance
uplifts/penalties measured versus cache=none or versus the legacy
cache=loose? The 10x perf improvement in the patch series was in
streaming reads over cache=none. I'll add the cache=loose datapoints
to my performance notebook (on github) as points of reference for the
future, but I'd always expect cache=loose to be the upper bound
(although I have seen some things in the code to do with directory
reads/etc. that could be improved there and should benefit from some
of the changes I have planned once I get to the dir caching).
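
To be explicit about what I mean by streaming reads: the comparison
was simple large sequential reads through the mount, along the lines
of the sketch below (file name and sizes are placeholders, not the
exact runs from my notebook):

  # sequential read of a large file through the 9p mount
  dd if=/mnt/9p/bigfile of=/dev/null bs=1M count=4096
  # or the equivalent fio job
  fio --name=seqread --rw=read --bs=1M --size=4g --directory=/mnt/9p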
-eric