Message-ID: <CAFkjPT=gJBELBg1gCjYFrZKVp5fy1vmidByOstB8tfqcuCUvLA@mail.gmail.com>
Date:   Fri, 31 Mar 2023 11:47:12 -0500
From:   Eric Van Hensbergen <ericvh@...il.com>
To:     Jeff Layton <jlayton@...nel.org>
Cc:     Christian Schoenebeck <linux_oss@...debyte.com>,
        Luis Chamberlain <mcgrof@...nel.org>,
        Dominique Martinet <asmadeus@...ewreck.org>,
        Josef Bacik <josef@...icpanda.com>, lucho@...kov.net,
        v9fs-developer@...ts.sourceforge.net, linux-kernel@...r.kernel.org,
        Amir Goldstein <amir73il@...il.com>,
        Pankaj Raghav <p.raghav@...sung.com>, v9fs@...ts.linux.dev
Subject: Re: 9p caching with cache=loose and cache=fscache

I like the sliding window concept - I wasn't aware NFS was doing that,
I'll have a look as part of my rework.
The unmount/mount should indeed flush any cache (unless using
fscache), so that might be a good workaround if it can be automated in
the workflow.
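A minimal sketch of such an automated step, assuming the share uses the
mount tag "hostshare", is mounted at /tmp/shared, and holds the kernel
build tree under linux/ (all of these names are placeholders):

    #!/bin/sh
    # Remount the 9p share so the guest starts from an empty cache,
    # then install the freshly built kernel from the now-coherent view.
    set -e
    umount /tmp/shared
    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/shared
    cd /tmp/shared/linux
    make modules_install && make install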

          -eric

On Wed, Mar 29, 2023 at 6:32 AM Jeff Layton <jlayton@...nel.org> wrote:
>
> On Wed, 2023-03-29 at 13:19 +0200, Christian Schoenebeck wrote:
> > On Wednesday, March 29, 2023 12:08:26 AM CEST Dominique Martinet wrote:
> > > Luis Chamberlain wrote on Tue, Mar 28, 2023 at 10:41:02AM -0700:
> > > > >   "To speedup things you can also consider to use e.g. cache=loose instead.
> > > >
> > > > My experience is that cache=loose is totally useless.
> > >
> > > If the fs you mount isn't accessed by the host while the VM is up, and
> > > isn't shared with another guest (e.g. "exclusive share"), you'll get
> > > what you expect.
> > >
> > > I have no idea what people use qemu's virtfs for, but this is apparently
> > > common enough that it has been recommended since 2011[1] without anyone
> > > complaining until now?
> > >
> > > [1] https://wiki.qemu.org/index.php?title=Documentation/9psetup&diff=2178&oldid=2177
> > >
> > > (Now I'm not arguing it should be recommended; my stance as a 9p
> > > maintainer is that the default should be used unless you know what
> > > you're doing, so the new code should just remove the 'cache=none'
> > > altogether as that's the default.
> > > With the new cache models Eric is preparing, we'll get a new safe
> > > default that will likely be better than cache=none; there is no reason
> > > to explicitly recommend the historic safe model, as the default has
> > > always been on the safe side and we have no plan of changing that.)
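For illustration, a rough sketch of the two guest-side mounts being
contrasted here (the mount tag "hostshare" and mount point /tmp/shared
are placeholders):

    # Default mount: no cache= option given, which today means cache=none.
    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/shared

    # cache=loose: only reasonable when the share is effectively exclusive to
    # this guest, i.e. not modified by the host or other guests while mounted.
    mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose hostshare /tmp/shared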
> >
> > It's not that I receive a lot of feedback about what people use 9p for, and I
> > haven't been QEMU's 9p maintainer for long, but so far contributors have cared more about
> > performance and other issues than propagating changes host -> guest without
> > reboot/remount/drop_caches. At least they did not care enough to work on
> > patches.
> >
> > Personally I also use cache=loose and only need to push changes host->guest
> > once in a while.
> >
> > > > >    That will deploy a filesystem cache on guest side and reduces the amount of
> > > > >    9p requests to hosts. As a consequence however guest might not see file
> > > > >    changes performed on host side *at* *all*
> > > >
> > > > I think that makes it pretty useless; aren't most setups on the guest read-only?
> > > >
> > > > It is not about "may not see", it just won't. For example I modified the
> > > > Makefile and compiled a full kernel, and even with that series of
> > > > changes, the guest *minutes later* never saw any updates.
> > >
> > > read-only on the guest has nothing to do with it, nor does time.
> > >
> > > If the directory is never accessed on the guest before the kernel has
> > > been built, you'll be able to make install on the guest -- once, even if
> > > the build was done after the VM booted and fs mounted.
> > >
> > > After it's been read once, it'll stay in cache until memory pressure (or
> > > an admin action like umount/mount or sysctl vm.drop_caches=3) clears it.
> > >
> > > I believe that's why it appeared to work until you noticed the issue and
> > > had to change the mount option -- I'd expect in most cases you'll run
> > > make install once and reboot/kexec into the new kernel.
> > >
> > > It's not safe for your use case and cache=none definitely sounds better
> > > to me, but people should use the defaults or make their own informed decision.
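As a rough sketch of the admin actions mentioned above, run on the guest
as root (the mount point is a placeholder):

    # Write back dirty pages first, then drop clean page cache plus
    # dentries/inodes so the next access to the shared tree goes back to
    # the 9p server (an alternative to a full umount/mount cycle):
    sync
    sysctl vm.drop_caches=3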
> >
> > It appears to me that read-only is not the average use case for 9p,
> > at least from the command lines I received. It is often used in combination
> > with overlayfs though.
> >
> > I think the reason why cache=loose was recommended as the default option on the
> > QEMU wiki page ages ago was 9p's really poor performance at that
> > point. I would personally not go so far as to discourage people from using
> > cache=loose in general, as long as they are informed about the consequences.
> > You still get a great deal of performance boost, the rest is for each
> > individual to decide.
> >
> > Considering that Eric already has patches for revalidating the cache in the
> > works, I think the changes I made on the other QEMU wiki page are appropriate,
> > including the word "might" as it's soon only a matter of kernel version.
> >
> > > > >   In the above example the folder /home/guest/9p_setup/ shared of the
> > > > >   host is shared with the folder /tmp/shared on the guest. We use no
> > > > >   cache because current caching mechanisms need more work and the
> > > > >   results are not what you would expect."
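For reference, a rough sketch of the corresponding host-side export (the
mount tag "host0" and the security model are assumptions, not part of the
quoted wiki text):

    # QEMU command-line fragment exporting /home/guest/9p_setup over
    # virtio-9p; the guest then mounts the tag "host0" at /tmp/shared.
    -virtfs local,path=/home/guest/9p_setup,mount_tag=host0,security_model=mapped-xattr,id=host0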
> > > >
> > > > I got a wiki account now and I was the one who had clarified this.
> > >
> > > Thanks for helping make this clearer.
> >
> > Yep, and thanks for making a wiki account and improving the content there
> > directly. Always appreciated!
> >
>
> Catching up on this thread.
>
> Getting cache coherency right on a network filesystem is quite
> difficult. It's always a balance between correctness and performance.
>
> Some protocols (e.g. CIFS and Ceph) take a very heavy-handed approach to
> try to ensure that the caches are always coherent. Basically, these clients
> are only allowed to cache when the server grants permission for it. That
> can have a negative effect on performance, of course.
>
> NFS as a protocol is more "loose", but we've generally beaten its cache
> coherency mechanisms into shape over the years, so you don't see these
> sorts of problems there as much. FWIW, NFS uses a sliding time window to
> revalidate the cache, such that it'll revalidate frequently when an
> inode is changing frequently, but less so when it's more stable.
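For comparison, that window is tunable on NFS mounts through the attribute
cache timeouts; a sketch using the usual defaults from nfs(5), with
server:/export and /mnt as placeholders:

    # The revalidation interval starts at acregmin/acdirmin and is stretched
    # toward acregmax/acdirmax while the inode appears stable, dropping back
    # down once the client notices it changing.
    mount -t nfs -o acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 server:/export /mnt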
>
> 9P I haven't worked with as much, but it sounds like it doesn't try to
> keep caches coherent (at least not with cache=loose).
>
> Probably the simplest solution here is to unmount/mount before
> you have the clients call "make modules_install && make install". That
> should ensure that the client doesn't have any stale data in the cache
> when the time comes to do the reads. A full reboot shouldn't be
> required.
>
> --
> Jeff Layton <jlayton@...nel.org>
