Message-ID: <1163784110.8170.8.camel@sale659.sandia.gov>
Date:	Fri, 17 Nov 2006 10:21:50 -0700
From:	"Jim Schutt" <jaschut@...dia.gov>
To:	"Jens Axboe" <jens.axboe@...cle.com>
cc:	linux-kernel@...r.kernel.org
Subject: Re: splice/vmsplice performance test results

On Thu, 2006-11-16 at 21:25 +0100, Jens Axboe wrote:
> On Thu, Nov 16 2006, Jim Schutt wrote:
> > Hi,
> > 

> > My test program can do one of the following:
> > 
> > send data:
> >  A) read() from file into buffer, write() buffer into socket
> >  B) mmap() section of file, write() that into socket, munmap()
> >  C) splice() from file to pipe, splice() from pipe to socket
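
[ For reference, a minimal sketch of the C) path: two splice() calls per
chunk through a pipe.  The helper name, 64 KiB chunk size, and flags here
are illustrative rather than the actual test program, and error handling
and short-transfer handling are reduced to bailing out. ]

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

/* Sketch only: file -> pipe -> socket, as in C) above. */
static int splice_file_to_sock(int fd_file, int fd_sock, off_t len)
{
    int pfd[2];
    off_t off = 0;

    if (pipe(pfd) < 0)
        return -1;

    while (len > 0) {
        /* move file pages into the pipe */
        ssize_t n = splice(fd_file, &off, pfd[1], NULL, CHUNK,
                           SPLICE_F_MOVE | SPLICE_F_MORE);
        if (n <= 0)
            break;
        /* push the same pages from the pipe to the socket */
        for (ssize_t left = n; left > 0; ) {
            ssize_t m = splice(pfd[0], NULL, fd_sock, NULL, left,
                               SPLICE_F_MOVE | SPLICE_F_MORE);
            if (m <= 0)
                goto out;
            left -= m;
        }
        len -= n;
    }
out:
    close(pfd[0]);
    close(pfd[1]);
    return len ? -1 : 0;
}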
> > 
> > receive data:
> >  1) read() from socket into buffer, write() buffer into file
> >  2) ftruncate() to extend file, mmap() new extent, read() 
> >       from socket into new extent, munmap()
> >  3) read() from socket into buffer, vmsplice() buffer to 
> >      pipe, splice() pipe to file (using the double-buffer trick)
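
[ And a minimal sketch of the 3) path, to show what the double-buffer
trick refers to: read() into one of two buffers, vmsplice() it into a
pipe, splice() the pipe to the file, then switch buffers so a buffer is
never overwritten while its pages may still be referenced by the pipe.
Again the names, chunk size, and error handling are illustrative, not
the actual test code. ]

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/uio.h>

#define CHUNK (64 * 1024)

/* Sketch only: socket -> buffer -> pipe -> file, as in 3) above. */
static int recv_vmsplice_to_file(int fd_sock, int fd_file, off_t len)
{
    int pfd[2], cur = 0;
    char *buf[2];

    if (pipe(pfd) < 0)
        return -1;
    buf[0] = malloc(CHUNK);
    buf[1] = malloc(CHUNK);
    if (!buf[0] || !buf[1])
        return -1;

    while (len > 0) {
        ssize_t n = read(fd_sock, buf[cur], CHUNK);
        if (n <= 0)
            break;

        struct iovec iov = { .iov_base = buf[cur], .iov_len = n };
        /* hand the user pages to the pipe without copying */
        if (vmsplice(pfd[1], &iov, 1, 0) != n)
            break;
        /* drain the pipe into the file */
        for (ssize_t left = n; left > 0; ) {
            ssize_t m = splice(pfd[0], NULL, fd_file, NULL, left,
                               SPLICE_F_MOVE);
            if (m <= 0)
                goto out;
            left -= m;
        }
        len -= n;
        cur ^= 1;   /* the double-buffer trick: alternate buffers */
    }
out:
    free(buf[0]);
    free(buf[1]);
    close(pfd[0]);
    close(pfd[1]);
    return len ? -1 : 0;
}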
> > 
> > Here are the results, using:
> >  - 64 KiB buffer, mmap extent, or splice
> >  - 1 MiB TCP window
> >  - 16 GiB data sent across network
> > 
> > A) from /dev/zero -> 1) to /dev/null : 857 MB/s (6.86 Gb/s)
> > 
> > A) from file      -> 1) to /dev/null : 472 MB/s (3.77 Gb/s)
> > B) from file      -> 1) to /dev/null : 366 MB/s (2.93 Gb/s)
> > C) from file      -> 1) to /dev/null : 854 MB/s (6.83 Gb/s)
> > 
> > A) from /dev/zero -> 1) to file      : 375 MB/s (3.00 Gb/s)
> > A) from /dev/zero -> 2) to file      : 150 MB/s (1.20 Gb/s)
> > A) from /dev/zero -> 3) to file      : 286 MB/s (2.29 Gb/s)
> > 
> > I had (naively) hoped the read/vmsplice/splice combination would 
> > run at the same speed at which I can write a file, i.e. at about
> > 450 MB/s on my setup.  Do any of my numbers seem bogus, such that
> > I should look harder at my test program?
> 
> Could be read-ahead playing in here; I'd have to take a closer look at
> the generated I/O patterns to say more about that.  Any chance you can
> capture iostat or blktrace info for such a run, to compare what goes to
> the disk?

I've attached a file with iostat and vmstat results for the case
where I read from a socket and write a file, vs. the case where I
read from a socket and use vmsplice/splice to write the file.
(Sorry it's not inline - my mailer locks up when I try to
include the file.)

Would you still like blktrace info for these two cases?

-- Jim


View attachment "iostat.txt" of type "text/plain" (19790 bytes)
