Message-ID: <4eae44395ad321d05f47571b58fe3fe2413b6b36.camel@themaw.net>
Date:   Thu, 13 May 2021 22:10:46 +0800
From:   Ian Kent <raven@...maw.net>
To:     Fox Chen <foxhlchen@...il.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:     Tejun Heo <tj@...nel.org>, Al Viro <viro@...iv.linux.org.uk>,
        Eric Sandeen <sandeen@...deen.net>,
        Brice Goglin <brice.goglin@...il.com>,
        Rick Lindsley <ricklind@...ux.vnet.ibm.com>,
        David Howells <dhowells@...hat.com>,
        Miklos Szeredi <miklos@...redi.hu>,
        Marcelo Tosatti <mtosatti@...hat.com>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 0/5] kernfs: proposed locking and concurrency
 improvement

On Wed, 2021-05-12 at 16:54 +0800, Fox Chen wrote:
> On Wed, May 12, 2021 at 4:47 PM Fox Chen <foxhlchen@...il.com> wrote:
> > 
> > Hi,
> > 
> > I ran it on my benchmark (
> > https://github.com/foxhlchen/sysfs_benchmark).
> > 
> > machine: aws c5 (Intel Xeon with 96 logical cores)
> > kernel: v5.12
> > benchmark: create 96 threads, bind each to its own core, then run
> > open+read+close on a sysfs file simultaneously 1000 times.
> > result:
> > Without the patchset, an open+read+close operation takes 550-570 us;
> > perf shows significant time (>40%) spent in mutex_lock.
> > After applying it, the operation takes 410-440 us and perf shows
> > only ~4% of the time in mutex_lock.
> > 
> > It's weird, I don't see a huge performance boost compared to v2,
> > even
> 
> I meant I don't see a huge performance boost here, and it's way worse
> than v2.
> IIRC, for v2 the fastest run took only 40 us.
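
For reference, here is a rough sketch of the kind of benchmark loop
described in the quoted message: 96 threads, each pinned to its own
logical core, each doing open+read+close on a sysfs file 1000 times.
It is an illustration only, assuming POSIX threads and a hypothetical
sysfs path; the actual harness is in the linked sysfs_benchmark
repository and may differ in detail.

/* A minimal sketch of the described workload.  Build: gcc -O2 -pthread */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

#define NTHREADS   96
#define ITERATIONS 1000
/* Hypothetical attribute; any small sysfs file shows the same pattern. */
#define SYSFS_PATH "/sys/devices/system/cpu/cpu0/topology/core_id"

static void *worker(void *arg)
{
	long cpu = (long)arg;
	cpu_set_t set;
	char buf[128];
	ssize_t n;
	int i, fd;

	/* Pin this thread to its own logical core. */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	/* The timed operation: open+read+close, repeated 1000 times. */
	for (i = 0; i < ITERATIONS; i++) {
		fd = open(SYSFS_PATH, O_RDONLY);
		if (fd < 0)
			continue;
		n = read(fd, buf, sizeof(buf));
		(void)n;	/* ignore read errors in the sketch */
		close(fd);
	}
	return NULL;
}

int main(void)
{
	pthread_t tids[NTHREADS];
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, worker, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}

With all 96 threads running, it is the concurrent open path that
contends on the kernfs locking the patchset aims to relieve.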

Thanks Fox,

I'll have a look at those reports, but this is puzzling.

Perhaps the added overhead of checking whether an update is
needed is costing more than expected, more than simply
taking the lock and being done with it. Then there's the
v2 series ... I'll see if I can dig out your reports on
those too.

> 
> 
> > though there is no mutex problem in the perf report.
> > I've put the console outputs and perf reports in the attachment for
> > your reference.

Yep, thanks.
Ian
