[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <E1OFC1b-0000Yx-80@pomaz-ex.szeredi.hu>
Date: Thu, 20 May 2010 22:07:23 +0200
From: Miklos Szeredi <miklos@...redi.hu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: miklos@...redi.hu, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
jens.axboe@...cle.com, akpm@...ux-foundation.org
Subject: Re: [RFC PATCH] fuse: support splice() reading from fuse device
On Thu, 20 May 2010, Linus Torvalds wrote:
> But that's a damn big if. Does it ever trigger in practice? I doubt it. In
> practice, you'll have to fill the pages with something in the first place.
> In practice, the destination of the data is such that you'll often end up
> copying anyway - it won't be /dev/null.
>
> That's why I claim your benchmark is meaningless. It does NOT even say
> what you claim it says. It does not say 1% CPU on a 200MB/s transfer,
> exactly the same way my stupid pipe zero-copy didn't mean that people
> could magically get MB/s throughput with 1% CPU on pipes.
I'm talking about *overhead* not actual CPU usage. And I know that
caches tend to reduce the effect of multiple copies, but that depends
on a lot of things as well (size of request, delay between copies,
etc.) Generally I've seen pretty significant reductions in overhead
for eliminating each copy.
I'm not saying it will always be zero copy all the way, I'm saying
that less copies will tend to mean less overhead. And the same is
true for making requests larger.
> It says nothing at all, in short. You need to have a real source, and a
> real destination. Not some empty filesystem and /dev/null destination.
Sure, I will do that. It's just a lot harder to measure the effects
on hardware I have access to, where the CPU speed is just damn too
large compared to I/O speed.
Miklos
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists