Message-ID: <ZCHU6k56nF5849xj@bombadil.infradead.org>
Date: Mon, 27 Mar 2023 10:39:54 -0700
From: Luis Chamberlain <mcgrof@...nel.org>
To: Eric Van Hensbergen <ericvh@...il.com>
Cc: Josef Bacik <josef@...icpanda.com>,
Jeff Layton <jlayton@...nel.org>, lucho@...kov.net,
asmadeus@...ewreck.org, v9fs-developer@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, Amir Goldstein <amir73il@...il.com>,
Pankaj Raghav <p.raghav@...sung.com>
Subject: Re: 9p caching with cache=loose and cache=fscache
On Mon, Mar 27, 2023 at 08:05:21AM -0500, Eric Van Hensbergen wrote:
> Sorry, took a bit to unstack from day job, but while going through the
> patch queue I remembered I still had some questions to answer here.
>
> On Fri, Mar 17, 2023 at 12:01 PM Luis Chamberlain <mcgrof@...nel.org> wrote:
> >
> > On Sun, Mar 12, 2023 at 01:22:34PM -0500, Eric Van Hensbergen wrote:
> > > I was looking at kdevops the other day - cool stuff. Was trying to
> > > figure out how we could do v9fs CI with it.
> >
> > Happy to help any questions you may have about it!
> >
> > > Both cache=loose and cache=fscache currently don't validate via host.
> >
> > What does this mean exactly?
> >
>
> That's a good question - I guess the answer is "by design" they don't
> do anything special to check that the cache is up to date with the
> host.
From a user perspective none of this is clear from any of the
documentation I read.
> That being said, there are several places in the code where the
> cache will be invalidated
<-- elaboration of current invalidation schemes when cache is enabled -->
>
> > Right now a host with debian 6.0.0-6-amd64 certainly does not seem to push
> > out changes to 9p clients on VMs but Josef informs me that with 6.2-rc8
> > he did see the changes propagate.
>
> I did tighten up some of the invalidation in the last round of
> patches, however these are likely more on the overly conservative side
> versus doing the right thing -- however, it's really not at the point
> where you can rely on it. If consistency is something you care about,
> I'd suggest cache=none until you can get cache=readahead.
Indeed, that fixes things and makes them work *exactly* as one would
expect. The simple case of editing a file and immediately comparing its
sha1sum on the host and the guest yields the exact same result.
In comparison, using the default cache mode of loose, things are not what
you'd expect. The cache is completely useless: even if you do a full
kernel compilation on the host, the guest will not see any of the updates
at all.
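For reference, the kind of check I mean here is just the obvious one,
roughly the following (the paths are only an example):
  host$  echo test >> ~/linux/README; sha1sum ~/linux/README
  guest$ sha1sum /mnt/linux/README
With cache=none the two sums match immediately; with cache=loose the
guest can keep reporting the stale sum indefinitely.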
> > Do none of the existing 9p cache modes support open-to-close policies
> > at all?
> >
>
> not specifically open-to-close; loose supports file and dir caching
> but without consistency. It might be tempting to try cache=mmap to see
> if it gets you closer, but my frame of reference is more the current
> patches versus the old code, so I'm not sure it would buy you anything.
OK thanks, it seems we should probably help test 9p with fstests and get
a baseline, and then check how many tests actually exercise the cache
modes. If none exist, we should add some to test each possible
invalidation strategy.
We can certainly use kdevops to help test this, but I suspect that since
it relies on a client / host model it may make sense to share some of the
bringup automation code with what the NFS folks are doing with kdevops.
> > Right now the cache mode used is cache=loose as that's the default,
> > what do you recommend for a typical kernel development environment?
>
> As I said, if you are interactively changing things I think you'd want
> to go for cache=none for now (as painful as it is).
The documentation is not so clear about what that pain actually is in
terms of performance.
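A rough way to put a number on it is to time a metadata-heavy workload
over the same tree under each cache mode, something like this (mount
point and workload are just an example, numbers will vary with the setup):
  guest$ time find /mnt/linux -name '*.c' > /dev/null  # mounted with cache=none
  guest$ time find /mnt/linux -name '*.c' > /dev/null  # remounted with cache=loose
Since cache=none sends essentially every operation back to the server,
the difference can be large on a tree the size of the kernel.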
> I have fixed what
> I hope to be my last bug with the new patch series so it should be
> going into linux-next today.
Nice, thanks. Since kdevops relies on a host kernel though, and we strive
for stability there, I personally like to recommend distro kernels, which
means they're a few kernel releases out of date. Debian testing is on 6.1
as of today, for example.
So I'll go with the following on kdevops for now; it seems to fix all the
woes I had. Once a cache mode with proper testing proves to pass most
tests in fstests we can switch kdevops over to it.
diff --git a/playbooks/roles/bootlinux/tasks/main.yml b/playbooks/roles/bootlinux/tasks/main.yml
index 297a42794a09..477cacdee887 100644
--- a/playbooks/roles/bootlinux/tasks/main.yml
+++ b/playbooks/roles/bootlinux/tasks/main.yml
@@ -111,7 +111,7 @@
name: "{{ target_linux_dir_path }}"
src: "{{ bootlinux_9p_mount_tag }}"
fstype: "9p"
- opts: "ro,trans=virtio,version=9p2000.L,posixacl,cache=loose"
+ opts: "ro,trans=virtio,version=9p2000.L,posixacl,cache=none"
state: "mounted"
tags: [ 'data_partition' ]
when:
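For anyone mounting by hand rather than through the playbook, the
equivalent invocation would be roughly the following (the mount tag
"kdevops" and the target directory are just examples, use whatever
mount_tag your virtio-9p device exposes):
  guest$ mount -t 9p -o ro,trans=virtio,version=9p2000.L,posixacl,cache=none kdevops /mnt/linux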
BTW the qemu wiki seems to suggest cache=loose, and that's why I used it
as the default on kdevops. What about the following, to avoid folks
running into similar issues? I can go and update the wiki too.
diff --git a/Documentation/filesystems/9p.rst b/Documentation/filesystems/9p.rst
index 0e800b8f73cc..476944912312 100644
--- a/Documentation/filesystems/9p.rst
+++ b/Documentation/filesystems/9p.rst
@@ -56,6 +56,12 @@ seen by reading /sys/bus/virtio/drivers/9pnet_virtio/virtio<n>/mount_tag files.
Options
=======
+Note that none of the currently supported cache modes validate cached data
+against the host; they don't do anything special to check that the cache is
+up to date with the host. If you really need synchronization, use the default
+cache mode "none", which ensures data is kept up to date synchronously right
+away, at the cost of some performance overhead.
+
============= ===============================================================
trans=name select an alternative transport. Valid options are
currently: