Message-ID: <5898218.pUKYPoVZaQ@silver>
Date:   Sun, 02 Apr 2023 16:07:51 +0200
From:   Christian Schoenebeck <linux_oss@...debyte.com>
To:     Eric Van Hensbergen <ericvh@...il.com>
Cc:     v9fs-developer@...ts.sourceforge.net,
        Eric Van Hensbergen <ericvh@...nel.org>,
        asmadeus@...ewreck.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, Jeff Layton <jlayton@...nel.org>
Subject: Re: [PATCH] fs/9p: Add new options to Documentation

+CC: Jeff for experience on this caching issue with NFS ...

On Tuesday, March 28, 2023 5:51:51 PM CEST Eric Van Hensbergen wrote:
> As I work through the documentation rework and to some extent the
> testing matrix -- I am reconsidering some choices and wanted to open
> up the discussion here.
> 
> TLDR; I'm thinking of reworking the cache options before the merge
> window to keep things simple while setting up for some of the future
> options.

Yeah, revising the 9p cache options definitely makes sense!

So what is the plan on this now? I saw you sent a new patch with the "old"
options today, so is this one here deferred?

> While we have a bunch of new options, in practice I expect users to
> probably consolidate around three models: no caching, tight caches,
> and expiring caches with fscache being an orthogonal add-on to the
> last two.

Actually, as of today I don't know why somebody would want to use fscache
instead of loose. Does it actually make sense to keep fscache, and if yes, why?
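
For comparison, with today's (legacy) options the two are requested like this
(illustrative only; "testshare" is just a placeholder virtio mount tag, and the
fscache variant additionally needs CONFIG_9P_FSCACHE plus a cachefiles backend
(cachefilesd) running on the client):

  # loose: page cache only, nothing survives a reboot
  mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose testshare /mnt

  # fscache: same loose consistency, but backed by a persistent local FS-Cache
  mount -t 9p -o trans=virtio,version=9p2000.L,cache=fscache testshare /mnt

As far as I can see, the persistent backend is the only practical difference,
hence my question.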

> The ultimate goal is to simplify the options based on expected use models:
> 
> - cache=[ none, file, all ] (none is currently default)

dir?

> - write_policy = [ *writethrough, writeback ] (writethrough would be default)
> - cache_validate = [ never, *open, x (seconds) ]  (cache_validate
> would default to open)

Jeff came up with the point that NFS uses a sliding time window. So the
question is, would it make sense to add an x-seconds option that might be
dropped soon anyway?
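
Just so we are looking at the same thing, I assume an expiring cache would then
be requested roughly like this (option names taken from your proposal above, so
nothing of this exists yet; "testshare" is again just a placeholder mount tag):

  mount -t 9p -o trans=virtio,cache=all,write_policy=writeback,cache_validate=5 testshare /mnt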

> - fscache
> 
> So, mapping of existing (deprecated) legacy modes:
> - none (obvious) write_policy=writethrough
> - *readahead -> cache=file cache_validate=open write_policy=writethrough
> - mmap -> cache=file cache_validate=open write_policy=writeback

Mmm, why is that "file"? To me "file" sounds like any access to files is
cached, whereas cache=mmap just uses the cache if mmap() was called, not for
any other file access.

> - loose -> cache=all cache_validate=never write_policy=writeback
> - fscache -> cache=all cache_validate=never write_policy=writeback &
> fscache enabled
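
Spelled out for an existing fstab entry, the loose mapping above would then read
(again with the proposed, not yet existing, option names):

  # today
  testshare /mnt 9p trans=virtio,version=9p2000.L,cache=loose 0 0
  # proposed equivalent
  testshare /mnt 9p trans=virtio,version=9p2000.L,cache=all,cache_validate=never,write_policy=writeback 0 0
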
> 
> Some things I'm less certain of: cache_validation is probably an
> imperfect term as is using 'open' as one of the options, in this case
> I'm envisioning 'open' to mean open-to-close coherency for file
> caching (cache is only validated on open) and validation on lookup for
> dir-cache coherency (using qid.version). Specifying a number here
> expires existing caches and requires validation after a certain number
> of seconds (is that the right granularity)?

Personally I would then really call it open-to-close or opentoclose and waste
some more characters in favour of clarity.

> So, I think this is more clear from a documentation standpoint, but
> unfortunately I haven't reduced the test matrix much - in fact I've
> probably made it worse. I expect the common cases to basically be:
> - cache=none
> - new default? (cache=all, write_policy=writeback, cache_validate=open)
> - fscache w/(cache=all, write_policy=writeback, cache_validate=5)
> 
> Which would give us 3 configurations to test against versus 25
> (assuming testing for one time value for cache_validate=x). It is important
> to remember that these are just cache mode tests; the other mount
> options act as multipliers.
> 
> Thoughts?  Alternatives?
> 
>         -eric
> 
> On Mon, Mar 27, 2023 at 10:38 AM Christian Schoenebeck
> <linux_oss@...debyte.com> wrote:
> >
> > On Monday, March 27, 2023 5:05:52 AM CEST Eric Van Hensbergen wrote:
> > > Need to update the documentation for new mount flags
> > > and cache modes.
> > >
> > > Signed-off-by: Eric Van Hensbergen <ericvh@...nel.org>
> > > ---
> > >  Documentation/filesystems/9p.rst | 29 ++++++++++++++++-------------
> > >  1 file changed, 16 insertions(+), 13 deletions(-)
> > >
> > > diff --git a/Documentation/filesystems/9p.rst b/Documentation/filesystems/9p.rst
> > > index 0e800b8f73cc..6d257854a02a 100644
> > > --- a/Documentation/filesystems/9p.rst
> > > +++ b/Documentation/filesystems/9p.rst
> > > @@ -78,19 +78,18 @@ Options
> > >               offering several exported file systems.
> > >
> > >    cache=mode specifies a caching policy.  By default, no caches are used.
> > > -
> > > -                        none
> > > -                             default no cache policy, metadata and data
> > > -                                alike are synchronous.
> > > -                     loose
> > > -                             no attempts are made at consistency,
> > > -                                intended for exclusive, read-only mounts
> > > -                        fscache
> > > -                             use FS-Cache for a persistent, read-only
> > > -                             cache backend.
> > > -                        mmap
> > > -                             minimal cache that is only used for read-write
> > > -                                mmap.  Northing else is cached, like cache=none
> > > +             Modes are progressive and inclusive.  For example, specifying fscache
> > > +             will use loose caches, writeback, and readahead.  Due to their
> > > +             inclusive nature, only one cache mode can be specified per mount.
> >
> > I would highly recommend rather specifying below, for each option, "this option
> > implies writeback, readahead ..." etc., as otherwise it is not obvious which
> > option implies exactly what. It is worth those extra few lines IMO to
> > avoid confusion.
> >
> > > +
> > > +                     =========       =============================================
> > > +                     none            no cache of file or metadata
> > > +                     readahead       readahead caching of files
> > > +                     writeback       delayed writeback of files
> > > +                     mmap            support mmap operations read/write with cache
> > > +                     loose           meta-data and file cache with no coherency
> > > +                     fscache         use FS-Cache for a persistent cache backend
> > > +                     =========       =============================================
> > >
> > >    debug=n    specifies debug level.  The debug level is a bitmask.
> > >
> > > @@ -137,6 +136,10 @@ Options
> > >               This can be used to share devices/named pipes/sockets between
> > >               hosts.  This functionality will be expanded in later versions.
> > >
> > > +  directio   bypass page cache on all read/write operations
> > > +
> > > +  ignoreqv   ignore qid.version==0 as a marker to ignore cache
> > > +
> > >    noxattr    do not offer xattr functions on this mount.
> > >
> > >    access     there are four access modes.
> > >
> >
> >
> >
> >
> 



