Date:	Tue, 16 Jun 2009 13:59:17 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Steve Rottinger <steve@...tek.com>
Cc:	Leon Woestenberg <leon.woestenberg@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: splice methods in character device driver

On Fri, Jun 12 2009, Steve Rottinger wrote:
> Hi Leon,
> 
> It does seem like a lot of code needs to be executed to move a small
> chunk of data.

It's really not; you should try benchmarking the function call overhead
:-).

> Although, I think that most of the overhead that I was experiencing
> came from the cumulative overhead of each splice system call. I
> increased my pipe size using Jens' pipe size patch, from 16 to 256
> pages, and this had a huge effect -- the speed of my transfers more
> than doubled. Pipe sizes larger than 256 pages cause my kernel to
> crash.

Yes, the per-call overhead of splice() is the expensive part for small
transfers. Increasing the pipe size definitely helps there, since each
call then moves more data and you need fewer calls for the same
throughput.
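
FWIW, with the pipe size patch applied, bumping the capacity is just an
fcntl on either end of the pipe. A minimal sketch, assuming the
F_SETPIPE_SZ fcntl from that patch series:

#define _GNU_SOURCE           /* for F_SETPIPE_SZ */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int pfd[2];

	if (pipe(pfd) < 0) {
		perror("pipe");
		return 1;
	}

	/*
	 * Ask for 256 pages (1MB with 4k pages). The kernel rounds the
	 * request up to a power-of-two number of pages, and unprivileged
	 * callers are capped by the pipe-max-size sysctl.
	 */
	int sz = fcntl(pfd[1], F_SETPIPE_SZ, 256 * 4096);
	if (sz < 0) {
		perror("F_SETPIPE_SZ");
		return 1;
	}
	printf("pipe capacity now %d bytes\n", sz);
	return 0;
}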

> I'm doing about 300MB/s to my hardware RAID, running two instances
> of my splice() copy application (one on each RAID channel). I would
> like to combine the two RAID channels using software RAID 0;
> however, splice, even from /dev/zero, runs horribly slowly to a
> software RAID device. I'd be curious to know if anyone else has
> tried this.

Did you trace it and find out why it was slow? It should not be. Moving
300MB/sec should not be making any machine sweat.
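
For reference, the basic shape of such a copy loop is two splice()
calls per chunk: source into the pipe, pipe out to the destination. A
stripped-down sketch (in_fd, out_fd and the pipe fds are placeholders
for your device, destination, and pipe):

#define _GNU_SOURCE           /* for splice() */
#include <fcntl.h>
#include <unistd.h>

/*
 * Move up to 'chunk' bytes from in_fd to out_fd through the pipe in
 * pfd[]. A bigger pipe means fewer trips through this loop for the
 * same amount of data, which is where the syscall overhead goes away.
 */
static ssize_t splice_copy(int in_fd, int out_fd, int pfd[2], size_t chunk)
{
	ssize_t moved = splice(in_fd, NULL, pfd[1], NULL, chunk,
			       SPLICE_F_MOVE | SPLICE_F_MORE);
	if (moved <= 0)
		return moved;

	/* Drain everything we just put into the pipe. */
	ssize_t left = moved;
	while (left > 0) {
		ssize_t out = splice(pfd[0], NULL, out_fd, NULL, left,
				     SPLICE_F_MOVE | SPLICE_F_MORE);
		if (out <= 0)
			return -1;
		left -= out;
	}
	return moved;
}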

-- 
Jens Axboe

