Message-ID: <20200608165040.GI3127@techsingularity.net>
Date:   Mon, 8 Jun 2020 17:50:40 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Jan Kara <jack@...e.cz>
Cc:     Amir Goldstein <amir73il@...il.com>, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fsnotify: Rearrange fast path to minimise overhead when
 there is no watcher

On Mon, Jun 08, 2020 at 05:19:43PM +0200, Jan Kara wrote:
> > This is showing that the latencies are improved by roughly 2-9%. The
> > variability is not shown but some of these results are within the noise
> > as this workload heavily overloads the machine. That said, the system CPU
> > usage is reduced by quite a bit so it makes sense to avoid the overhead
> > even if it is a bit tricky to detect at times. A perf profile of just 1
> > group of tasks showed that 5.14% of samples taken were in either fsnotify()
> > or fsnotify_parent(). With the patch, 2.8% of samples were in fsnotify,
> > mostly function entry and the initial check for watchers.  The check for
> > watchers is complicated enough that inlining it may be controversial.
> > 
> > Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
> 
> Thanks for the patch! I have to say I'm surprised this small reordering
> helps so much. For a pipe inode we will bail on:
> 
>        if (!to_tell->i_fsnotify_marks && !sb->s_fsnotify_marks &&
>            (!mnt || !mnt->mnt_fsnotify_marks))
>                return 0;
> 
> So what we save with the reordering is the sb->s_fsnotify_mask and
> mnt->mnt_fsnotify_mask fetch, but those should be on the same cachelines as
> sb->s_fsnotify_marks and mnt->mnt_fsnotify_marks, respectively.

It is likely that the contribution of that change is marginal relative
to the fsnotify_parent() call. I'll know by tomorrow morning at the latest.
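
For reference, a minimal sketch of the kind of reordering being discussed,
illustrative only and not the actual patch; the field names follow the
snippet quoted above. The point is to test the mark list pointers and bail
out before fetching the mask words or paying for the parent-notification
call:

        /* sketch only, not the patch: take the cheapest possible exit first */
        if (!to_tell->i_fsnotify_marks && !sb->s_fsnotify_marks &&
            (!mnt || !mnt->mnt_fsnotify_marks))
                return 0;

        /*
         * Only a watched inode/sb/mount gets this far and pays for the
         * mask checks and the fsnotify_parent() call.
         */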

> We also
> save a function call to fsnotify_parent() but I would think that is very
> cheap (compared to the whole write path) as well.
> 

To be fair, it is cheap, but with this particular workload we call
vfs_write() a *lot* and the path is not that long, so it builds up to 5%
of samples overall. Given that these were anonymous pipes, it surprised
me to see fsnotify at all, which is why I took a closer look.
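
As a rough illustration (not the benchmark actually used, which ran groups
of tasks and heavily overloaded the machine), even a trivial single-process
loop over an anonymous pipe is enough to make vfs_write() and the fsnotify
entry points hot:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fds[2];
        char buf[64] = { 0 };
        long i;

        /* anonymous pipe: no filesystem path, yet writes still enter fsnotify */
        if (pipe(fds) < 0) {
                perror("pipe");
                return 1;
        }

        /*
         * Hammer vfs_write()/vfs_read() through the pipe. Each write() goes
         * through the fsnotify hooks for the pipe inode even when nothing is
         * watching, which is the overhead the patch makes cheaper to skip.
         */
        for (i = 0; i < 10000000; i++) {
                if (write(fds[1], buf, sizeof(buf)) < 0) {
                        perror("write");
                        return 1;
                }
                if (read(fds[0], buf, sizeof(buf)) < 0) {
                        perror("read");
                        return 1;
                }
        }
        return 0;
}

Profiling something like that with perf record -g should show the fsnotify
entry in the write path even with no watcher present.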

> The patch is simple enough so I have no problem merging it, but I'm just
> surprised by the results... Hmm, maybe structure randomization is used
> in the builds, so e.g. sb->s_fsnotify_mask and sb->s_fsnotify_marks
> don't end up in the same cacheline? But I don't think we enable that in
> SUSE builds?
> 

Correct, GCC_PLUGIN_RANDSTRUCT was not set.

-- 
Mel Gorman
SUSE Labs
