Message-ID: <28de01eb.208.18b4f9a0051.Coremail.00107082@163.com>
Date: Sat, 21 Oct 2023 08:19:34 +0800 (CST)
From: "David Wang" <00107082@....com>
To: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PERFORMANCE] fs: sendfile suffers performance degradation when
 buffer size has a performance impact on the underlying IO
Hi,
I was trying to confirm the performance improvement from replacing read/write sequences with sendfile,
but I got quite a surprising result:
$ gcc -DUSE_SENDFILE cp.cpp
$ time ./a.out
real 0m56.121s
user 0m0.000s
sys 0m4.844s
$ gcc cp.cpp
$ time ./a.out
real 0m27.363s
user 0m0.014s
sys 0m4.443s
The result shows that, in my test scenario, the read/write sequence takes only half the time of sendfile.
My guess is that sendfile uses a default pipe with buffer size 1<<16 (16 pages), which is not tuned for the underlying IO;
hence a read/write sequence with buffer size 1<<17 is much faster than sendfile.
But the problem with sendfile is that there is no parameter to tune the buffer size from userspace... Any chance to fix this?
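By the way, a possible userspace workaround is to do the copy with splice(2) through an explicit pipe whose
capacity is raised with fcntl(F_SETPIPE_SZ) (available since Linux 2.6.35). This is only a minimal, untested
sketch; the helper name copy_splice and using the same 1<<17 chunk size are my own choices for illustration:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

// Copy fin to fout through a private pipe enlarged to pipe_size bytes.
static int copy_splice(int fin, int fout, size_t pipe_size) {
    int p[2];
    ssize_t n, m;
    if (pipe(p) != 0) { perror("pipe"); return -1; }
    // F_SETPIPE_SZ rounds the request up to a page multiple; if it
    // fails, the pipe silently keeps its default capacity (1<<16).
    fcntl(p[1], F_SETPIPE_SZ, (int)pipe_size);
    for (;;) {
        n = splice(fin, NULL, p[1], NULL, pipe_size, SPLICE_F_MOVE);
        if (n < 0) { perror("splice from file"); goto fail; }
        if (n == 0) break; // EOF
        while (n > 0) { // drain the pipe into the target file
            m = splice(p[0], NULL, fout, NULL, n, SPLICE_F_MOVE);
            if (m <= 0) { perror("splice to file"); goto fail; }
            n -= m;
        }
    }
    close(p[0]); close(p[1]);
    return 0;
fail:
    close(p[0]); close(p[1]);
    return -1;
}

In the test below, calling copy_splice(fin, fout, sizeof(buf)) in place of the sendfile() call would show
whether the pipe capacity is really the limiting factor.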
The test code is as follows:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sendfile.h>
#include <fcntl.h>

char buf[1<<17]; // much better than 1<<16

int main() {
    int i, fin, fout;
    ssize_t n, m;
    for (i = 0; i < 128; i++) {
        // dd if=/dev/urandom of=./bigfile bs=131072 count=256
        fin = open("./bigfile", O_RDONLY);
        fout = open("./target", O_WRONLY | O_CREAT | O_DSYNC, S_IWUSR);
#ifndef USE_SENDFILE
        while (1) {
            n = read(fin, buf, sizeof(buf));
            if (n < 0) {
                perror("fail to read");
                return 1;
            }
            if (n == 0) break;
            m = write(fout, buf, n);
            if (n != m) {
                printf("fail to write, expect %zd, actual %zd\n", n, m);
                perror(":");
                return 1;
            }
        }
#else
        off_t offset = 0;
        struct stat st;
        if (fstat(fin, &st) != 0) {
            perror("fail to fstat");
            return 1;
        }
        // sendfile may transfer fewer bytes than requested, so loop
        while (offset < st.st_size) {
            if (sendfile(fout, fin, &offset, st.st_size - offset) < 0) {
                perror("fail to sendfile");
                return 1;
            }
        }
#endif
        close(fin);
        close(fout);
    }
    return 0;
}
FYI
David