Date:   Thu, 5 May 2022 08:54:01 -0400
From:   Vivek Goyal <vgoyal@...hat.com>
To:     Dharmendra Hans <dharamhans87@...il.com>
Cc:     Miklos Szeredi <miklos@...redi.hu>, linux-fsdevel@...r.kernel.org,
        fuse-devel <fuse-devel@...ts.sourceforge.net>,
        linux-kernel@...r.kernel.org, Bernd Schubert <bschubert@....com>
Subject: Re: [PATCH v4 0/3] FUSE: Implement atomic lookup + open/create

On Thu, May 05, 2022 at 11:42:51AM +0530, Dharmendra Hans wrote:
> On Thu, May 5, 2022 at 12:48 AM Vivek Goyal <vgoyal@...hat.com> wrote:
> >
> > On Mon, May 02, 2022 at 03:55:18PM +0530, Dharmendra Singh wrote:
> > > In FUSE, as of now, uncached lookups are expensive over the wire.
> > > E.g additional latencies and stressing (meta data) servers from
> > > thousands of clients. These lookup calls possibly can be avoided
> > > in some cases. Incoming three patches address this issue.
> >
> > BTW, these patches are designed to improve performance by cutting down
> > on number of fuse commands sent. Are there any performance numbers
> > which demonstrate what kind of improvement you are seeing.
> >
> > Say, If I do kernel build, is the performance improvement observable?
> 
> Here are the numbers I took last time. These were taken on tmpfs to
> actually see the effect of the reduced calls. On local file systems
> the effect might be less visible. But we have observed that on systems
> where thousands of clients hammer the metadata servers, it helps a
> lot. (We have not taken numbers for that setup yet, as it requires
> changing a lot of our client code; we plan to do that later.)
> 
> Note that the new version of these patches should not change the
> performance numbers: we have only refactored the code, and the
> functionality has remained the same since then.
> 
> Here is the link to the performance numbers:
> https://lore.kernel.org/linux-fsdevel/20220322121212.5087-1-dharamhans87@gmail.com/
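
[For context, the reduction under discussion can be sketched with a toy
model: for an uncached create, the client today sends a LOOKUP (which
comes back negative) followed by a CREATE, while the atomic path sends a
single combined request; likewise LOOKUP + OPEN collapses to one OPEN.
The request names below mirror FUSE opcodes, but the counting model
itself is an illustration of the idea, not kernel code:]

```python
# Toy model of per-file FUSE request counts for uncached creates/opens.
# Illustrates the patch series' idea (fewer round trips to the server);
# it is not the actual protocol implementation.

def requests_create(atomic: bool) -> list[str]:
    """Requests sent when creating a file with no cached dentry."""
    if atomic:
        # Atomic path: one combined request does lookup + create + open.
        return ["FUSE_CREATE"]
    # Current path: a lookup (returning a negative entry) precedes create.
    return ["FUSE_LOOKUP", "FUSE_CREATE"]

def requests_open(atomic: bool) -> list[str]:
    """Requests sent when opening an existing, uncached file."""
    if atomic:
        # Atomic path: lookup folded into the open request.
        return ["FUSE_OPEN"]
    return ["FUSE_LOOKUP", "FUSE_OPEN"]

if __name__ == "__main__":
    for label, fn in (("create", requests_create), ("open", requests_open)):
        before, after = fn(False), fn(True)
        print(f"{label}: {len(before)} -> {len(after)} requests per file")
```

With thousands of clients doing uncached creates/opens against shared
metadata servers, halving the per-file request count is where the win
would come from; pure data-path reads should be unaffected.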

There is a lot going on in that table. I am trying to understand it.

- Why care about No-Flush? That's independent of these changes,
  right? I am assuming it means that upon file close we do not send
  a flush to the fuse server. Not sure how bringing No-Flush into the
  mix is helpful here.

- What is "Patched Libfuse"? I am assuming that these are changes
  needed in libfuse to support atomic create + atomic open. Similarly
  assuming "Patched FuseK" means patched kernel with your changes.

  If this is correct, I would probably only be interested in 
  looking at "Patched Libfuse + Patched FuseK" numbers to figure out
  what's the effect of your changes w.r.t vanilla kernel + libfuse.
  Am I understanding it right?

- I am wondering why we measure "Sequential" and "Random" patterns.
  These optimizations are primarily for file creation + file opening,
  and the I/O pattern should not matter.

- Also wondering why the Read/s performance improves. I assume that once
  a file has been opened, your optimizations get out of the way
  (no create, no open) and we are just going through the data path of
  reading file data, with no lookups happening. If that's the case, why
  do the Read/s numbers show an improvement?

- Why do we measure "Patched Libfuse" on its own? It shows a performance
  regression of 4-5% in table 0B, Sequential workload. That sounds bad:
  without any optimization kicking in, it has a performance cost.

Thanks
Vivek
