Date: Thu, 23 May 2024 10:21:39 +0800
From: Yafang Shao <laoar.shao@...il.com>
To: Oliver Sang <oliver.sang@...el.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>, brauner@...nel.org, jack@...e.cz, 
	linux-fsdevel@...r.kernel.org, longman@...hat.com, viro@...iv.linux.org.uk, 
	walters@...bum.org, wangkai86@...wei.com, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>, ying.huang@...el.com, 
	feng.tang@...el.com, fengwei.yin@...el.com, philip.li@...el.com, 
	yujie.liu@...el.com
Subject: Re: [PATCH] vfs: Delete the associated dentry when deleting a file

On Wed, May 22, 2024 at 4:51 PM Oliver Sang <oliver.sang@...el.com> wrote:
>
>
> hi, Linus, hi, Yafang Shao,
>
>
> On Wed, May 15, 2024 at 09:05:24AM -0700, Linus Torvalds wrote:
> > Oliver,
> >  is there any chance you could run this through the test robot
> > performance suite? The original full patch at
> >
> >     https://lore.kernel.org/all/20240515091727.22034-1-laoar.shao@gmail.com/
> >
> > and it would be interesting if the test robot could see if the patch
> > makes any difference on any other loads?
> >
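[ For context: the patch being benchmarked is a small change to d_delete()
  in fs/dcache.c, so that unlinking a file unhashes its dentry instead of
  leaving it in the dcache as a negative dentry. A paraphrased sketch of the
  idea -- based on the patch description above, not the literal diff:

	void d_delete(struct dentry *dentry)
	{
		struct inode *inode = dentry->d_inode;

		spin_lock(&inode->i_lock);
		spin_lock(&dentry->d_lock);
		if (dentry->d_lockref.count == 1) {
			/*
			 * Change under test: unhash here as well, so the
			 * deleted file's dentry is freed on the final dput()
			 * rather than lingering as a hashed negative dentry.
			 */
			__d_drop(dentry);
			dentry->d_flags &= ~DCACHE_CANT_MOUNT;
			dentry_unlink_inode(dentry); /* drops both locks */
		} else {
			__d_drop(dentry);
			spin_unlock(&dentry->d_lock);
			spin_unlock(&inode->i_lock);
		}
	}
]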
>
> we just reported a stress-ng performance improvement from this patch [1]

Awesome!

>
> the test robot applied this patch on top of
>   3c999d1ae3 ("Merge tag 'wq-for-6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq")
>
> filesystems are not our team's major domain, so we only made a limited review
> of the results and decided to send out the report FYI.
>
> at the first stage, we decided to check the below categories of tests as a priority:
>
> stress-ng filesystem
> filebench mailserver
> reaim fileserver
>
> we also picked sysbench-fileio and blogbench into the coverage.
>
> here is a summary.
>
> for stress-ng, besides [1] which was already reported, we got the data below
> comparing this patch to 3c999d1ae3.
>
> either there is no significant performance change, or the change is smaller
> than the noise, which makes the test robot's bisect fail, so this information
> is just FYI. if you have any doubts about any of the subtests, let us know
> and we can check further.
>
> (we also included some net test results)
>
>       12.87 ±  6%      -0.6%      12.79        stress-ng.xattr.ops_per_sec
>        6721 ±  5%      +7.5%       7224 ± 27%  stress-ng.rawdev.ops_per_sec
>        9002 ±  7%      -8.7%       8217        stress-ng.dirmany.ops_per_sec
>     8594743 ±  4%      -3.0%    8337417        stress-ng.rawsock.ops_per_sec
>        2056 ±  3%      +2.9%       2116        stress-ng.dirdeep.ops_per_sec
>        4307 ± 21%      -6.9%       4009        stress-ng.dir.ops_per_sec
>      137946 ± 18%      +5.8%     145942        stress-ng.fiemap.ops_per_sec
>    22413006 ±  2%      +2.5%   22982512 ±  2%  stress-ng.sockdiag.ops_per_sec
>      286714 ±  2%      -3.8%     275876 ±  5%  stress-ng.udp-flood.ops_per_sec
>       82904 ± 46%     -31.6%      56716        stress-ng.sctp.ops_per_sec
>     9853408            -0.3%    9826387        stress-ng.ping-sock.ops_per_sec
>       84667 ± 12%     -26.7%      62050 ± 17%  stress-ng.dccp.ops_per_sec
>       61750 ± 25%     -24.2%      46821 ± 38%  stress-ng.open.ops_per_sec
>      583443 ±  3%      -3.4%     563822        stress-ng.file-ioctl.ops_per_sec
>       11919 ± 28%     -34.3%       7833        stress-ng.dentry.ops_per_sec
>       18.59 ± 12%     -23.9%      14.15 ± 27%  stress-ng.swap.ops_per_sec
>      246.37 ±  2%     +15.9%     285.58 ± 12%  stress-ng.aiol.ops_per_sec
>        7.45            -4.8%       7.10 ±  7%  stress-ng.fallocate.ops_per_sec
>      207.97 ±  7%      +5.2%     218.70        stress-ng.copy-file.ops_per_sec
>       69.87 ±  7%      +5.8%      73.93 ±  5%  stress-ng.fpunch.ops_per_sec
>        0.25 ± 21%     +24.0%       0.31        stress-ng.inode-flags.ops_per_sec
>      849.35 ±  6%      +1.4%     861.51        stress-ng.mknod.ops_per_sec
>      926144 ±  4%      -5.2%     877558        stress-ng.lease.ops_per_sec
>       82924            -2.1%      81220        stress-ng.fcntl.ops_per_sec
>        6.19 ±124%     -50.7%       3.05        stress-ng.chattr.ops_per_sec
>      676.90 ±  4%      -1.9%     663.94 ±  5%  stress-ng.iomix.ops_per_sec
>        0.93 ±  6%      +5.6%       0.98 ±  7%  stress-ng.symlink.ops_per_sec
>     1703608            -3.8%    1639057 ±  3%  stress-ng.eventfd.ops_per_sec
>     1735861            -0.6%    1726072        stress-ng.sockpair.ops_per_sec
>       85440            -2.0%      83705        stress-ng.rawudp.ops_per_sec
>        6198            +0.6%       6236        stress-ng.sockabuse.ops_per_sec
>       39226            +0.0%      39234        stress-ng.sock.ops_per_sec
>        1358            +0.3%       1363        stress-ng.tun.ops_per_sec
>     9794021            -1.7%    9623340        stress-ng.icmp-flood.ops_per_sec
>     1324728            +0.3%    1328244        stress-ng.epoll.ops_per_sec
>      146150            -2.0%     143231        stress-ng.rawpkt.ops_per_sec
>     6381112            -0.4%    6352696        stress-ng.udp.ops_per_sec
>     1234258            +0.2%    1236738        stress-ng.sockfd.ops_per_sec
>       23954            -0.1%      23932        stress-ng.sockmany.ops_per_sec
>      257030            -0.1%     256860        stress-ng.netdev.ops_per_sec
>     6337097            +0.1%    6341130        stress-ng.flock.ops_per_sec
>      173212            -0.3%     172728        stress-ng.rename.ops_per_sec
>      199.69            +0.6%     200.82        stress-ng.sync-file.ops_per_sec
>      606.57            +0.8%     611.53        stress-ng.chown.ops_per_sec
>      183549            -0.9%     181975        stress-ng.handle.ops_per_sec
>        1299            +0.0%       1299        stress-ng.hdd.ops_per_sec
>    98371066            +0.2%   98571113        stress-ng.lockofd.ops_per_sec
>       25.49            -4.3%      24.39        stress-ng.ioprio.ops_per_sec
>    96745191            -1.5%   95333632        stress-ng.locka.ops_per_sec
>      582.35            +0.1%     582.86        stress-ng.chmod.ops_per_sec
>     2075897            -2.2%    2029552        stress-ng.getdent.ops_per_sec
>       60.47            -1.9%      59.34        stress-ng.metamix.ops_per_sec
>       14161            -0.3%      14123        stress-ng.io.ops_per_sec
>       23.98            -1.5%      23.61        stress-ng.link.ops_per_sec
>       27514            +0.0%      27528        stress-ng.filename.ops_per_sec
>       44955            +1.6%      45678        stress-ng.dnotify.ops_per_sec
>      160.94            +0.4%     161.51        stress-ng.inotify.ops_per_sec
>     2452224            +4.0%    2549607        stress-ng.lockf.ops_per_sec
>        6761            +0.3%       6779        stress-ng.fsize.ops_per_sec
>      775083            -1.5%     763487        stress-ng.fanotify.ops_per_sec
>      309124            -4.2%     296285        stress-ng.utime.ops_per_sec
>       25567            -0.1%      25530        stress-ng.dup.ops_per_sec
>        1858            +0.9%       1876        stress-ng.procfs.ops_per_sec
>      105804            -3.9%     101658        stress-ng.access.ops_per_sec
>        1.04            -1.9%       1.02        stress-ng.chdir.ops_per_sec
>       82753            -0.3%      82480        stress-ng.fstat.ops_per_sec
>      681128            +3.7%     706375        stress-ng.acl.ops_per_sec
>       11892            -0.1%      11875        stress-ng.bind-mount.ops_per_sec
>
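[ Illustration: the dcache-sensitive stressors above (dentry, open, the dir*
  family) all approximate create/unlink churn of the kind sketched below.
  Under the old behavior each unlink of a file with a single user left a
  hashed negative dentry behind; with the patch the dentry is dropped
  immediately. Illustrative only -- not what stress-ng literally runs:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char name[64];

		/* churn a small working set of names: create, then unlink */
		for (int i = 0; i < 1000000; i++) {
			snprintf(name, sizeof(name), "scratch-%d", i % 1024);
			int fd = open(name, O_CREAT | O_WRONLY, 0600);
			if (fd < 0)
				return 1;
			close(fd);
			unlink(name); /* this is where d_delete() runs */
		}
		return 0;
	}
]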
>
> for filebench, similar results, but the data is less unstable than for
> stress-ng, which means that for most of them we regard them as not being
> impacted by this patch.
>
> for reaim/sysbench-fileio/blogbench, the data are quite stable, and we didn't
> notice any significant performance changes. we even doubt whether they
> exercise the code path changed by this patch.
>
> so for these, we don't list full results here.
>
> BTW, besides filesystem tests, this patch has also been piped into other
> performance test categories such as net, scheduler, mm and others, and since
> it also goes into our so-called hourly kernels, it can be run through the
> full set of other performance test suites the test robot supports. so in the
> following 2-3 weeks it's still possible for us to report further results,
> including regressions.
>

That's great. Many thanks for your help.

-- 
Regards
Yafang
