Message-ID: <55E71D00.4050103@plexistor.com>
Date:	Wed, 02 Sep 2015 19:00:00 +0300
From:	Boaz Harrosh <boaz@...xistor.com>
To:	Dave Hansen <dave.hansen@...ux.intel.com>,
	Dave Chinner <david@...morbit.com>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	Christoph Hellwig <hch@....de>, linux-kernel@...r.kernel.org,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Andrew Morton <akpm@...l.org>,
	"H. Peter Anvin" <hpa@...or.com>, Hugh Dickins <hughd@...gle.com>,
	Ingo Molnar <mingo@...hat.com>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
	linux-nvdimm@...ts.01.org, Matthew Wilcox <willy@...ux.intel.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org
Subject: Re: [PATCH] dax, pmem: add support for msync

On 09/02/2015 06:39 PM, Dave Hansen wrote:
> On 09/02/2015 08:18 AM, Boaz Harrosh wrote:
>> On 09/02/2015 05:23 PM, Dave Hansen wrote:
>>>> I'd be curious what the cost is in practice.  Do you have any actual
>>>> numbers of the cost of doing it this way?
>>>>
>>>> Even if the instruction is a "noop", I'd really expect the overhead to
>>>> really add up for a tens-of-gigabytes mapping, no matter how much the
>>>> CPU optimizes it.
>> What tens-of-gigabytes mapping? I have yet to encounter an application
>> that does that. Our tests show that usually the mmaps are small.
> 
> We are going to have 2-socket systems with 6TB of persistent memory in
> them.  I think it's important to design this mechanism so that it scales
> to memory sizes like that and supports large mmap()s.
> 
> I'm not sure the applications you've seen thus far are very
> representative of what we want to design for.
> 

We have a patch pending that introduces a new mmap flag,
MMAP_PMEM_AWARE, which pmem-aware applications can set to eliminate
any kind of flushing.

This is good for the likes of libnvdimm, which does one large mmap of
the entire 6T and does not want the clflush penalty on unmap.
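
Roughly, the usage we have in mind looks like the sketch below. This is
only a sketch: the MMAP_PMEM_AWARE value is a placeholder (the real one
is assigned by the pending patch), and the device path is made up.

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#ifndef MMAP_PMEM_AWARE
#define MMAP_PMEM_AWARE	0x80000	/* placeholder value, see pending patch */
#endif

int main(void)
{
	struct stat st;
	int fd = open("/mnt/pmem0/bigfile", O_RDWR);	/* made-up path */

	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	/*
	 * One large mapping of the whole file.  MMAP_PMEM_AWARE tells
	 * the kernel this application flushes its own CPU caches, so
	 * msync()/munmap() need not clflush the mapped range.
	 */
	void *addr = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MMAP_PMEM_AWARE, fd, 0);
	if (addr == MAP_FAILED)
		return 1;

	/* ... application does its own per-cache-line flushing ... */

	munmap(addr, st.st_size);
	close(fd);
	return 0;
}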

>> I can send you micro-benchmark results of an mmap vs direct-IO random
>> write. Our code will jump over holes in the file, BTW, but I'll ask to
>> also run it with fallocate, which will make all blocks allocated.
> 
> I'm really just more curious about actual clflush performance on large
> ranges.  I'm curious how good the CPU is at optimizing it.
> 

Again, our test does not do this, because it only flushes the written
extents of the file. The most we have in one machine is 64G of pmem, so
even with a very large mmap at most 64G of data can be dirtied, and
actually modifying 64G of data will be much slower than the added
clflush for each cache line.
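
To put a concrete shape on "the added clflush to each cache line": the
flush loop looks roughly like the sketch below, assuming 64-byte cache
lines and the x86 _mm_clflush() intrinsic (this is not the exact code
our test runs).

#include <stddef.h>
#include <stdint.h>
#include <immintrin.h>

#define CACHE_LINE	64

static void flush_range(const void *addr, size_t len)
{
	/* Align the start address down to a cache-line boundary. */
	uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
	uintptr_t end = (uintptr_t)addr + len;

	/* One clflush per cache line in the written extent. */
	for (; p < end; p += CACHE_LINE)
		_mm_clflush((const void *)p);

	/* Fence so the flushes complete before we move on. */
	_mm_mfence();
}

For 64G of dirty data that is about a billion clflushes (64G / 64
bytes), but as said above, writing the 64G in the first place still
dominates.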

Thanks
Boaz

