Message-ID: <55E7132E.104@plexistor.com>
Date: Wed, 02 Sep 2015 18:18:06 +0300
From: Boaz Harrosh <boaz@...xistor.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>,
Boaz Harrosh <boaz@...xistor.com>,
Dave Chinner <david@...morbit.com>,
Ross Zwisler <ross.zwisler@...ux.intel.com>,
Christoph Hellwig <hch@....de>, linux-kernel@...r.kernel.org,
Alexander Viro <viro@...iv.linux.org.uk>,
Andrew Morton <akpm@...l.org>,
"H. Peter Anvin" <hpa@...or.com>, Hugh Dickins <hughd@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-nvdimm@...ts.01.org, Matthew Wilcox <willy@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org
Subject: Re: [PATCH] dax, pmem: add support for msync
On 09/02/2015 05:23 PM, Dave Hansen wrote:
<>
> I'd be curious what the cost is in practice. Do you have any actual
> numbers of the cost of doing it this way?
>
> Even if the instruction is a "noop", I'd really expect the overhead to
> really add up for a tens-of-gigabytes mapping, no matter how much the
> CPU optimizes it.
What tens-of-gigabytes mapping? I have yet to encounter an application
that does that. Our tests show that usually the mmaps are small.
I can send you micro-benchmark results of mmap vs direct-IO random
writes. Our code will jump over holes in the file, BTW, but I'll also ask
for a run with fallocate so that all blocks are allocated.
Give me a few days to collect this.
I guess one optimization we should do is to jump over holes and zero-extents.
That would help the case of a very large, mostly sparse file.
Thanks
Boaz
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/