Message-ID: <20090616182459.GC11363@kernel.dk>
Date: Tue, 16 Jun 2009 20:24:59 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Steve Rottinger <steve@...tek.com>
Cc: Leon Woestenberg <leon.woestenberg@...il.com>,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [RFC][PATCH] add support for shrinking/growing a pipe (Was "Re:
splice methods in character device driver")
On Tue, Jun 16 2009, Steve Rottinger wrote:
> >> Although, I think that most of the overhead that I was experiencing
> >> came from the cumulative overhead of each splice system call. I
> >> increased my pipe size using Jens' pipe size patch, from 16 to 256
> >> pages, and this had a huge effect -- the speed of my transfers more
> >> than doubled. Pipe sizes larger than 256 pages cause my kernel to
> >> crash.
> >>
> >
> > Yes, the system call is more expensive. Increasing the pipe size can
> > definitely help there.
> >
> >
> I know that you have been asked this before, but is there any chance
> that we can get the pipe size patch into the mainline kernel? It seems
> essential to moving data fast using the splice interface.
Sure, the only unresolved issue with it is what sort of interface to
export for changing the pipe size. I went with fcntl().
Linus, I think we discussed this years ago. The patch in question is
here:
http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=24547ac4d97bebb58caf9ce58bd507a95c812a3f
I'd like to get it in now; there have been several requests for this in
the past. But I didn't want to push it before this was resolved.
I don't know whether other operating systems allow this functionality,
and if they do what interface they use. I suspect that our need is
somewhat special, since we have splice.
--
Jens Axboe