Message-ID: <4A37B4D8.5090404@pentek.com>
Date: Tue, 16 Jun 2009 11:06:00 -0400
From: Steve Rottinger <steve@...tek.com>
To: Jens Axboe <jens.axboe@...cle.com>
CC: Leon Woestenberg <leon.woestenberg@...il.com>,
linux-kernel@...r.kernel.org
Subject: Re: splice methods in character device driver
Hi Jens,
Jens Axboe wrote:
>
>> Although, I think that most of the overhead that I was experiencing
>> came from the cumulative overhead of each splice system call. I
>> increased my pipe size using Jens' pipe size patch, from 16 to 256
>> pages, and this had a huge effect -- the speed of my transfers more
>> than doubled. Pipe sizes larger than 256 pages cause my kernel to
>> crash.
>>
>
> Yes, the system call is more expensive. Increasing the pipe size can
> definitely help there.
>
>
I know that you have been asked this before, but is there any chance
that we can get the pipe size patch into the kernel mainline? It seems
essential to moving data fast over the splice interface.
>> I'm doing about 300MB/s to my hardware RAID, running two instances of
>> my splice() copy application (one on each RAID channel). I would like
>> to combine the two RAID channels using a software RAID 0; however,
>> splice, even from /dev/zero, runs horribly slow to a software RAID
>> device. I'd be curious to know if anyone else has tried this.
>>
>
> Did you trace it and find out why it was slow? It should not be. Moving
> 300MB/sec should not be making any machine sweat.
>
>
I haven't dug into this too deeply yet; however, I did discover
something interesting: splice runs much faster with the software RAID
if I transfer to a file on a mounted filesystem instead of to the raw
md block device.
-Steve
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/