Date:	Thu, 20 May 2010 10:49:46 -0700 (PDT)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Miklos Szeredi <miklos@...redi.hu>
cc:	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, jens.axboe@...cle.com,
	akpm@...ux-foundation.org
Subject: Re: [RFC PATCH] fuse: support splice() reading from fuse device



On Thu, 20 May 2010, Miklos Szeredi wrote:
> 
> With Jens' pipe growing patch and additional fuse patches it was
> possible to achieve a 20GBytes/s write throughput on my laptop in a
> "null" filesystem (no page cache, data goes to /dev/null).

Btw, I don't think that is a very interesting benchmark.
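
(For context, the userspace side of the path being benchmarked would look
roughly like the sketch below. This is a hedged illustration, not code from
the patch; the fuse fd setup, the 64k chunk size, and the /dev/null target
are assumptions.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];
    /* assumes a fuse mount has already been set up on this fd */
    int fuse_fd = open("/dev/fuse", O_RDWR);
    int null_fd = open("/dev/null", O_WRONLY);

    if (fuse_fd < 0 || null_fd < 0 || pipe(pipefd) < 0) {
        perror("setup");
        return 1;
    }

    for (;;) {
        /* move a request from the fuse device into the pipe
         * without bouncing it through a userspace buffer */
        ssize_t n = splice(fuse_fd, NULL, pipefd[1], NULL,
                           65536, SPLICE_F_MOVE);
        if (n <= 0)
            break;
        /* drain the pipe straight into /dev/null */
        if (splice(pipefd[0], NULL, null_fd, NULL,
                   (size_t)n, SPLICE_F_MOVE) < 0)
            break;
    }
    return 0;
}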

The reason I say that is that many years ago I played with doing 
zero-copy pipe read/write system calls (no splice, just automatic "follow 
the page tables, mark things read-only etc" things). It was considered 
sexy to do things like that during the mid-'90s - there were all the crazy 
ukernel people with Mach etc doing magic things with moving pages around.
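
(The closest modern user-visible cousin of that experiment is probably
vmsplice(2) with SPLICE_F_GIFT: hand whole pages to a pipe instead of
copying the data. A hedged sketch, purely illustrative:)

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];
    long pagesize = sysconf(_SC_PAGESIZE);
    void *buf;

    if (pipe(pipefd) < 0 || posix_memalign(&buf, pagesize, pagesize))
        return 1;

    struct iovec iov = { .iov_base = buf, .iov_len = pagesize };

    /* gift the page to the pipe; the kernel may steal it rather than
     * copy it, so the buffer must not be reused afterwards */
    if (vmsplice(pipefd[1], &iov, 1, SPLICE_F_GIFT) < 0) {
        perror("vmsplice");
        return 1;
    }
    return 0;
}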

That old hack got me a couple of gigabytes per second back then (when 
memcpy() speeds were in the tens of megabytes per second) on benchmarks 
like lmbench that just wrote the same buffer over and over again without 
ever touching the data.
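
(The pattern in question, for reference - a hedged sketch of the shape of
such a benchmark, not lmbench source; the buffer size and iteration count
are made up:)

#include <unistd.h>

#define BUFSZ 65536
#define ITERS 10000

int main(void)
{
    static char buf[BUFSZ];     /* the writer never touches the data */
    int fd[2];
    int i;

    if (pipe(fd) < 0)
        return 1;

    if (fork() == 0) {
        /* reader: drain the pipe and throw everything away */
        close(fd[1]);
        while (read(fd[0], buf, sizeof(buf)) > 0)
            ;
        _exit(0);
    }

    close(fd[0]);
    for (i = 0; i < ITERS; i++)
        if (write(fd[1], buf, sizeof(buf)) < 0)
            break;
    close(fd[1]);
    return 0;
}

A zero-copy path makes that loop look arbitrarily fast, because nothing 
ever reads or writes the bytes themselves.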

The zero-copy path was totally worthless on _any_ real load. In fact, it 
made things worse. I never found a single case where it helped.

So please don't ever benchmark things that don't make sense, and then use 
the numbers as any kind of reason to do anything. It's worse than 
worthless. It actually adds negative value to show "look ma, no hands" for 
things that nobody does. It makes people think it's a good idea, and 
optimizes the wrong thing entirely.

Are there actual real loads that get improved? I don't care if it means 
that the improvement goes from three orders of magnitude to just a couple 
of percent. The "couple of percent on actual loads" is a lot more 
important than "many orders of magnitude on a made-up benchmark".

		Linus
