Message-ID: <20171215152631.GD12428@fieldses.org>
Date:   Fri, 15 Dec 2017 10:26:31 -0500
From:   "J. Bruce Fields" <bfields@...ldses.org>
To:     Jeff Layton <jlayton@...nel.org>
Cc:     Dave Chinner <david@...morbit.com>, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, hch@....de, neilb@...e.de,
        amir73il@...il.com, jack@...e.de, viro@...iv.linux.org.uk
Subject: Re: [PATCH 00/19] fs: rework and optimize i_version handling in
 filesystems

On Fri, Dec 15, 2017 at 10:15:29AM -0500, Jeff Layton wrote:
> On Thu, 2017-12-14 at 10:14 -0500, J. Bruce Fields wrote:
> > On Thu, Dec 14, 2017 at 09:14:47AM -0500, Jeff Layton wrote:
> > > There is some clear performance impact when you are running frequent
> > > queries of the i_version.
> > > 
> > > My gut feeling is that you could probably make the new code perform
> > > worse than the old if you were to _really_ hammer the inode with queries
> > > for the i_version (probably from many threads in parallel) while doing a
> > > lot of small writes to it.
> > > 
> > > That'd be a pretty unusual workload though.
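
For concreteness, a workload like the one Jeff describes could be
approximated from userspace with something along the following lines.
This is only a rough, hypothetical sketch: it assumes the target file
lives on an NFS mount with attribute caching disabled (e.g. mounted
-o noac), so that every stat() becomes a GETATTR that has to read the
change attribute on the server. Since the point of the rework is to skip
the i_version bump when nobody has queried it since the last one,
constant queries mixed with small writes is exactly the case where the
optimization can't help.

#define _XOPEN_SOURCE 700       /* for pwrite() */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define NQUERIERS 8             /* threads hammering the change attribute */
#define NLOOPS    100000

static const char *path;

/* each querier does back-to-back stat() calls on the file */
static void *query_loop(void *arg)
{
        struct stat st;

        (void)arg;
        for (int i = 0; i < NLOOPS; i++)
                stat(path, &st);
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t tid[NQUERIERS];
        char buf[16] = "x";
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file on an NFS mount>\n", argv[0]);
                return 1;
        }
        path = argv[1];
        fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        for (int i = 0; i < NQUERIERS; i++)
                pthread_create(&tid[i], NULL, query_loop, NULL);

        /* lots of small writes while the queries are in flight */
        for (int i = 0; i < NLOOPS; i++)
                pwrite(fd, buf, sizeof(buf), 0);

        for (int i = 0; i < NQUERIERS; i++)
                pthread_join(tid[i], NULL);
        close(fd);
        return 0;
}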
> > 
> > It may be pretty common for NFS itself: if I'm understanding the client
> > code right (mainly nfs4_write_need_cache_consistency()), our client will
> > request the change attribute in every WRITE that isn't a pNFS write, an
> > O_DIRECT write, or associated with a delegation.
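
Purely as an illustrative sketch of that decision -- not the actual
nfs4_write_need_cache_consistency(), and the struct and field names
below are invented -- it boils down to something like:

#include <stdbool.h>

/* hypothetical, simplified stand-in for the per-WRITE state the client has */
struct nfs_write_ctx {
        bool is_pnfs;           /* write is going to a pNFS data server */
        bool is_odirect;        /* application is doing O_DIRECT I/O */
        bool has_delegation;    /* client holds a delegation for the inode */
};

/*
 * Ask the server to return the change attribute with the WRITE reply
 * unless one of the exceptions above applies.
 */
static bool write_needs_change_attr(const struct nfs_write_ctx *ctx)
{
        return !ctx->is_pnfs && !ctx->is_odirect && !ctx->has_delegation;
}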
> > 
> > The goal of this series isn't to improve NFS performance, it's to save
> > non-NFS users from paying a performance penalty for something that NFS
> > requires for correctness.  Probably this series doesn't make much
> > difference in the NFS write case, and that's fine.  Still, might be
> > worth confirming that a workload with lots of small NFS writes is mostly
> > unaffected.
> 
> Just for yuks, I ran such a test this morning. I used the same fio
> jobfile, but changed it to have:
> 
>     direct=1
> 
> ...to eliminate client-side caching effects:
> 
> old:
>    WRITE: bw=1146KiB/s (1174kB/s), 143KiB/s-143KiB/s (147kB/s-147kB/s), io=672MiB (705MB), run=600075-600435msec
> 
> patched:
>    WRITE: bw=1253KiB/s (1283kB/s), 156KiB/s-157KiB/s (160kB/s-161kB/s), io=735MiB (770MB), run=600089-600414msec
> 
> So still seems to be a bit faster -- maybe because we're using an
> atomic64_t instead of a spinlock now? Probably I should profile that at
> some point...
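
A rough userspace analogue of that spinlock-vs-atomic difference -- not
the kernel's actual i_version helpers, and the struct below is invented
-- looks something like this. The point is that the new-style bump is a
single lock-free read-modify-write, so an updater never takes a lock
round trip just to advance the counter:

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

struct fake_inode {
        pthread_spinlock_t lock;        /* stand-in for inode->i_lock */
        uint64_t           version;     /* old style: protected by the lock */
        _Atomic uint64_t   version_a;   /* new style: lock-free counter */
};

/* old scheme: every bump takes and drops the spinlock */
static void bump_locked(struct fake_inode *ino)
{
        pthread_spin_lock(&ino->lock);
        ino->version++;
        pthread_spin_unlock(&ino->lock);
}

/* new scheme: a single atomic read-modify-write, no lock round trip */
static void bump_atomic(struct fake_inode *ino)
{
        atomic_fetch_add_explicit(&ino->version_a, 1, memory_order_relaxed);
}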

That would be interesting!

But it looks like this and all your results are about what we expect,
and all the evidence so far is that the series is doing what we need it
to.

--b.
