Message-ID: <915bca37dc73206b0a79f2fba4cac3255f8f6c0d.camel@kernel.org>
Date: Tue, 06 Aug 2024 12:17:12 -0400
From: Jeff Layton <jlayton@...nel.org>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: Alexander Viro <viro@...iv.linux.org.uk>,
	Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Josef Bacik <josef@...icpanda.com>, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] fs: try an opportunistic lookup for O_CREAT opens too

On Tue, 2024-08-06 at 17:25 +0200, Mateusz Guzik wrote:
> On Tue, Aug 6, 2024 at 4:32 PM Jeff Layton <jlayton@...nel.org>
> wrote:
> >
> > Today, when opening a file we'll typically do a fast lookup, but if
> > O_CREAT is set, the kernel always takes the exclusive inode lock. I
> > assume this was done with the expectation that O_CREAT means that we
> > always expect to do the create, but that's often not the case. Many
> > programs set O_CREAT even in scenarios where the file already exists.
> >
> > This patch rearranges the pathwalk-for-open code to also attempt a
> > fast_lookup in certain O_CREAT cases. If a positive dentry is found,
> > the inode_lock can be avoided altogether, and if auditing isn't
> > enabled, it can stay in rcuwalk mode for the last step_into.
> >
> > One notable exception that is hopefully temporary: if we're doing an
> > rcuwalk and auditing is enabled, skip the lookup_fast. Legitimizing
> > the dentry in that case is more expensive than taking the i_rwsem
> > for now.
> >
> > Signed-off-by: Jeff Layton <jlayton@...nel.org>
> > ---
> > Here's a revised patch that does a fast_lookup in the O_CREAT
> > codepath too. The main difference here is that if a positive dentry
> > is found and audit_dummy_context is true, then we keep the walk lazy
> > for the last component, which avoids having to take any locks on the
> > parent (just like with non-O_CREAT opens).
> >
> > The testcase below runs in about 18s on v6.10 (on an 80 CPU
> > machine). With this patch, it runs in about 1s:
> >
>
> I don't have an opinion on the patch.
>
> If your kernel does not use apparmor and the patch manages to dodge
> refing the parent, then indeed this should be fully deserialized just
> like non-creat opens.
>
Yep. Pity that auditing will slow things down, but them's the breaks...
> Instead of the hand-rolled benchmark may I interest you in using
> will-it-scale instead? Notably it reports the achieved rate once per
> second, so you can check if there is funky business going on between
> reruns, gives the cpu the time to kick off turbo boost if applicable
> etc.
>
> I would bench with that myself, but I temporarily don't have handy
> access to bigger hw. Even so, the below is completely optional and
> perhaps more of a suggestion for the future :)
>
> I hacked up the test case based on tests/open1.c.
>
> git clone https://github.com/antonblanchard/will-it-scale.git
>
> For example plop into tests/opencreate1.c && gmake &&
> ./opencreate1_processes -t 70:
>
> #include <stdlib.h>
> #include <unistd.h>
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <fcntl.h>
> #include <assert.h>
> #include <string.h>
>
> char *testcase_description = "Separate file open/close + O_CREAT";
>
> #define template "/tmp/willitscale.XXXXXX"
> static char (*tmpfiles)[sizeof(template)];
> static unsigned long local_nr_tasks;
>
> void testcase_prepare(unsigned long nr_tasks)
> {
> 	int i;
>
> 	tmpfiles = (char (*)[sizeof(template)])malloc(sizeof(template) * nr_tasks);
> 	assert(tmpfiles);
>
> 	for (i = 0; i < nr_tasks; i++) {
> 		strcpy(tmpfiles[i], template);
> 		char *tmpfile = tmpfiles[i];
> 		int fd = mkstemp(tmpfile);
>
> 		assert(fd >= 0);
> 		close(fd);
> 	}
>
> 	local_nr_tasks = nr_tasks;
> }
>
> void testcase(unsigned long long *iterations, unsigned long nr)
> {
> 	char *tmpfile = tmpfiles[nr];
>
> 	while (1) {
> 		int fd = open(tmpfile, O_RDWR | O_CREAT, 0600);
>
> 		assert(fd >= 0);
> 		close(fd);
>
> 		(*iterations)++;
> 	}
> }
>
> void testcase_cleanup(void)
> {
> 	int i;
>
> 	for (i = 0; i < local_nr_tasks; i++) {
> 		unlink(tmpfiles[i]);
> 	}
> 	free(tmpfiles);
> }
>
>
Good suggestion, and thanks for the testcase. With a v6.10 kernel, I'm
seeing numbers like this at -t 70:

min:4873 max:11510 total:418915
min:4884 max:10598 total:408848
min:3704 max:12371 total:467658
min:2842 max:11792 total:418239
min:2966 max:11511 total:414144
min:4756 max:11381 total:413137
min:4557 max:10789 total:404628
min:4780 max:11125 total:413349
min:4757 max:11156 total:405963

...with the patched kernel, things are significantly faster:

min:265865 max:508909 total:21464553
min:263252 max:500084 total:21242190
min:263989 max:504929 total:21396968
min:263343 max:505852 total:21346829
min:263023 max:507303 total:21410217
min:263420 max:506593 total:21426307
min:259556 max:494529 total:20927169
min:264451 max:508967 total:21433676
min:263486 max:509460 total:21399874
min:263906 max:507400 total:21393015

I can get some fancier plots if anyone is interested, but the benefit
of this patch seems pretty clear.
--
Jeff Layton <jlayton@...nel.org>