Message-ID: <CAC2o3DJdwr0aqT6LwhuRj8kyXt6NAPex2nG5ToadUTJ3Jqr_4w@mail.gmail.com>
Date: Wed, 12 May 2021 16:54:07 +0800
From: Fox Chen <foxhlchen@...il.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Ian Kent <raven@...maw.net>
Cc: Tejun Heo <tj@...nel.org>, Al Viro <viro@...iv.linux.org.uk>,
Eric Sandeen <sandeen@...deen.net>,
Brice Goglin <brice.goglin@...il.com>,
Rick Lindsley <ricklind@...ux.vnet.ibm.com>,
David Howells <dhowells@...hat.com>,
Miklos Szeredi <miklos@...redi.hu>,
Marcelo Tosatti <mtosatti@...hat.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 0/5] kernfs: proposed locking and concurrency improvement
On Wed, May 12, 2021 at 4:47 PM Fox Chen <foxhlchen@...il.com> wrote:
>
> Hi,
>
> I ran it on my benchmark (https://github.com/foxhlchen/sysfs_benchmark).
>
> machine: aws c5 (Intel Xeon with 96 logical cores)
> kernel: v5.12
> benchmark: create 96 threads, bind each to its own core, then have all of
> them run open+read+close on a sysfs file simultaneously, 1000 times each.
> result:
> Without the patchset, an open+read+close operation takes 550-570 us;
> perf shows significant time (>40%) spent in mutex_lock.
> After applying it, the operation takes 410-440 us, and perf
> shows only ~4% of time in mutex_lock.
>
> It's weird, I don't see a huge performance boost compared to v2, even
I meant I don't see a huge performance boost here, and it's way worse than v2.
IIRC, for v2 the fastest run took only 40us.
> though there is no mutex problem from the perf report.
> I've put console outputs and perf reports on the attachment for your reference.
>
>
> thanks,
> fox
fox