Message-ID: <20080825134801.GN1408@mit.edu>
Date: Mon, 25 Aug 2008 09:48:01 -0400
From: Theodore Tso <tytso@....edu>
To: Peter Zijlstra <peterz@...radead.org>
Cc: edwin <edwintorok@...il.com>, Ingo Molnar <mingo@...e.hu>,
rml@...h9.net, Linux Kernel <linux-kernel@...r.kernel.org>,
"Thomas Gleixner mingo@...hat.com" <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: Quad core CPUs loaded at only 50% when running a CPU and mmap
intensive multi-threaded task

On Mon, Aug 25, 2008 at 01:41:17PM +0200, Peter Zijlstra wrote:
>
> I would certainly consider this for small (< 1M?) files. With mmap the
> faults and pte overhead aren't free either, and the extra memcpy from
> pread() isn't that much.
>
Even for very big files, if you're only doing a single sequential pass
over the file (for example when converting a Canon raw image file to
TIFF format --- I know because I was trying to optimize dcraw a while
back), you take the page fault for each 4k page, and so simply using
read/pread is faster. And that's in a single-threaded program. With a
multithreaded program, the locking issues come on top of that.
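
To make the tradeoff concrete, here is a minimal sketch (not code from
this thread; the 64k buffer size and the checksum loop are arbitrary
choices) of the two access patterns: pread() into one reused buffer
pays an extra copy but takes no per-page faults, while mmap() avoids
the copy but takes a minor fault on the first touch of every 4k page.

/* Sketch only: compare sequential pread() vs mmap() over one file. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static unsigned long sum_pread(int fd)
{
	static char buf[1 << 16];	/* one reused 64k buffer */
	unsigned long sum = 0;
	off_t off = 0;
	ssize_t n;

	while ((n = pread(fd, buf, sizeof(buf), off)) > 0) {
		for (ssize_t i = 0; i < n; i++)
			sum += (unsigned char)buf[i]; /* data already copied here */
		off += n;
	}
	return sum;
}

static unsigned long sum_mmap(int fd, off_t size)
{
	unsigned char *p = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
	unsigned long sum = 0;

	if (p == MAP_FAILED)
		return 0;
	for (off_t i = 0; i < size; i++) /* minor fault per new 4k page */
		sum += p[i];
	munmap(p, size);
	return sum;
}

int main(int argc, char **argv)
{
	struct stat st;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}
	printf("pread: %lu\n", sum_pread(fd));
	printf("mmap:  %lu\n", sum_mmap(fd, st.st_size));
	close(fd);
	return 0;
}

Timing each function (e.g. under /usr/bin/time or perf) shows the copy
cost against the fault cost for whatever hardware you run it on.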
Maybe if I had used hugepages it would have been a win, I suppose, but
I never tried the experiment. And this was several years ago, on much
older hardware, so the relative cost of doing the memory copy versus
taking the page fault may have changed since then --- but I wouldn't
be surprised if doing the mmap is even more expensive now, relatively
speaking.
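
The hugepage experiment might look like the sketch below (again just
an illustration, untested: MAP_HUGETLB postdates this mail --- at the
time the experiment would have gone through hugetlbfs --- and it only
works for anonymous or hugetlbfs-backed mappings, not regular files,
so the closest equivalent is reading into an anonymous 2M-page buffer,
turning one fault per 4k page into one per 2M page; it also assumes
huge pages have been reserved via /proc/sys/vm/nr_hugepages):

/* Sketch only: anonymous 2M-page buffer instead of 4k pages. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_SZ	(2UL << 20)	/* common x86-64 huge page size */

int main(void)
{
	size_t len = 64 * HUGE_SZ;	/* 128M: 64 faults instead of 32768 */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (buf == MAP_FAILED) {	/* likely: no huge pages reserved */
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	memset(buf, 0, len);		/* touching all 128M takes 64 faults */
	munmap(buf, len);
	return 0;
}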
- Ted