Message-ID: <20200706111111.GX25523@casper.infradead.org>
Date: Mon, 6 Jul 2020 12:11:11 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Jan Ziak <0xe2.0x9a.0x9b@...il.com>
Cc: Greg KH <gregkh@...uxfoundation.org>, linux-api@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-man@...r.kernel.org,
mtk.manpages@...il.com, shuah@...nel.org, viro@...iv.linux.org.uk
Subject: Re: [PATCH 0/3] readfile(2): a new syscall to make open/read/close
faster
On Mon, Jul 06, 2020 at 08:07:46AM +0200, Jan Ziak wrote:
> On Sun, Jul 5, 2020 at 1:58 PM Greg KH <gregkh@...uxfoundation.org> wrote:
> > It also is a measurable increase over reading just a single file.
> > Here's my really really fast AMD system doing just one call to readfile
> > vs. one call sequence to open/read/close:
> >
> > $ ./readfile_speed -l 1
> > Running readfile test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
> > Took 3410 ns
> > Running open/read/close test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
> > Took 3780 ns
> >
> > 370ns isn't all that much, yes, but it is 370ns that could have been
> > used for something else :)
>
> I am curious as to how you amortized or accounted for the fact that
> readfile() first needs to open the dirfd and then close it later.
>
> From a performance viewpoint, only code where readfile() is called
> multiple times from within a loop makes sense:
>
> dirfd = open();
> for(...) {
>         readfile(dirfd, ...);
> }
> close(dirfd);
dirfd can be AT_FDCWD, or, if the path is absolute, dirfd will be ignored,
so one does not have to open anything. Opening a dirfd would be an
optimisation if one wanted to read several files relating to the same
process:
char dir[50];
sprintf(dir, "/proc/%d", pid);
dirfd = open(dir, O_RDONLY | O_DIRECTORY);
readfile(dirfd, "maps", ...);
readfile(dirfd, "stack", ...);
readfile(dirfd, "comm", ...);
readfile(dirfd, "environ", ...);
close(dirfd);
but one would not have to do that.
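For a one-off read of a single file by absolute path, nothing needs to be
opened or closed around the call at all. A minimal sketch, assuming the
prototype proposed in this series, readfile(dirfd, pathname, buffer,
bufsize, flags), and assuming __NR_readfile comes from headers built with
these patches (there is no libc wrapper, so it goes through raw syscall()):

#include <fcntl.h>		/* AT_FDCWD */
#include <stdio.h>
#include <sys/syscall.h>	/* __NR_readfile, only with the patched headers */
#include <unistd.h>		/* syscall(), ssize_t */

/* Thin wrapper around the proposed syscall; argument order as in the
 * cover letter, which is an assumption here. */
static ssize_t sys_readfile(int dirfd, const char *pathname,
			    char *buffer, size_t bufsize, int flags)
{
	return syscall(__NR_readfile, dirfd, pathname, buffer, bufsize, flags);
}

int main(void)
{
	char buf[4096];
	ssize_t len;

	/* Absolute path, so dirfd is ignored; AT_FDCWD is just a placeholder. */
	len = sys_readfile(AT_FDCWD,
			   "/sys/devices/system/cpu/vulnerabilities/meltdown",
			   buf, sizeof(buf), 0);
	if (len < 0)
		return 1;
	printf("%.*s", (int)len, buf);
	return 0;
}

This only builds against headers that define __NR_readfile; on an unpatched
kernel the call would just fail with ENOSYS.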