Message-ID: <94D0CD8314A33A4D9D801C0FE68B40295A92F4EF@G9W0745.americas.hpqcorp.net>
Date:	Fri, 29 May 2015 22:24:57 +0000
From:	"Elliott, Robert (Server Storage)" <Elliott@...com>
To:	Andy Lutomirski <luto@...capital.net>
CC:	Dan Williams <dan.j.williams@...el.com>,
	"Kani, Toshimitsu" <toshi.kani@...com>,
	Borislav Petkov <bp@...en8.de>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	"Thomas Gleixner" <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arnd Bergmann <arnd@...db.de>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	X86 ML <x86@...nel.org>,
	"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
	Juergen Gross <jgross@...e.com>,
	Stefan Bader <stefan.bader@...onical.com>,
	"Henrique de Moraes Holschuh" <hmh@....eng.br>,
	Yigal Korman <yigal@...xistor.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	Luis Rodriguez <mcgrof@...e.com>,
	Christoph Hellwig <hch@....de>,
	Matthew Wilcox <willy@...ux.intel.com>
Subject: RE: [PATCH v10 12/12] drivers/block/pmem: Map NVDIMM with
 ioremap_wt()



---
Robert Elliott, HP Server Storage

> -----Original Message-----
> From: Andy Lutomirski [mailto:luto@...capital.net]
> Sent: Friday, May 29, 2015 4:46 PM
> To: Elliott, Robert (Server Storage)
> Cc: Dan Williams; Kani, Toshimitsu; Borislav Petkov; Ross Zwisler;
> H. Peter Anvin; Thomas Gleixner; Ingo Molnar; Andrew Morton; Arnd
> Bergmann; linux-mm@...ck.org; linux-kernel@...r.kernel.org; X86 ML;
> linux-nvdimm@...ts.01.org; Juergen Gross; Stefan Bader; Henrique de
> Moraes Holschuh; Yigal Korman; Konrad Rzeszutek Wilk; Luis
> Rodriguez; Christoph Hellwig; Matthew Wilcox
> Subject: Re: [PATCH v10 12/12] drivers/block/pmem: Map NVDIMM with
> ioremap_wt()
> 
> On Fri, May 29, 2015 at 2:29 PM, Elliott, Robert (Server Storage)
> <Elliott@...com> wrote:
> >> -----Original Message-----
> >> From: Andy Lutomirski [mailto:luto@...capital.net]
> >> Sent: Friday, May 29, 2015 1:35 PM
> > ...
> >> Whoa, there!  Why would we use non-temporal stores to WB memory
> >> to access persistent memory?  I can see two reasons not to:
> >
> > Data written to a block storage device (here, the NVDIMM) is
> > unlikely to be read or written again any time soon.  It's not
> > like the code
> > and data that a program has in memory, where there might be a loop
> > accessing the location every CPU clock; it's storage I/O to
> > historically very slow (relative to the CPU clock speed) devices.
> > The source buffer for that data might be frequently accessed,
> > but not the NVDIMM storage itself.
> >
> > Non-temporal stores avoid wasting cache space on these "one-time"
> > accesses.  The same applies for reads and non-temporal loads.
> > Keep the CPU data cache lines free for the application.
> >
> > DAX and mmap() do change that; the application is now free to
> > store frequently accessed data structures directly in persistent
> > memory.  But, that's not available if btt is used, and
> > application loads and stores won't go through the memcpy()
> > calls inside pmem anyway.  The non-temporal instructions are
> > cache coherent, so data integrity won't get confused by them
> > if I/O going through pmem's block storage APIs happens
> > to overlap with the application's mmap() regions.
> >
> 
> You answered the wrong question. :)  I understand the point of the
> non-temporal stores -- I don't understand the point of using
> non-temporal stores to *WB memory*.  I think we should be okay with
> having the kernel mapping use WT instead.

The cache type that the application chooses for its mmap()
view has to be compatible with that already selected by the 
kernel, or we run into:

Intel SDM 11.12.4 Programming the PAT
...
"The PAT allows any memory type to be specified in the page tables,
and therefore it is possible to have a single physical page mapped
to two or more different linear addresses, each with different
memory types. Intel does not support this practice because it may
lead to undefined operations that can result in a system failure. 
In particular, a WC page must never be aliased to a cacheable page
because WC writes may not check the processor caches."

Right now, application memory is always WB, so WB is the
only safe choice from this perspective (the system must have
ADR (Asynchronous DRAM Refresh) for safety from other
perspectives).  That might not be the best choice for all
applications, though; some applications might not want the CPU
caching all the data they run through here and would prefer WC.
On a non-ADR system, WT might be the only safe choice.

Should there be a way for the application to specify a cache
type in its mmap() call? The type already selected by the
kernel driver could (carefully) be changed on the fly if 
it's different.
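
One hypothetical shape such an interface could take (purely
illustrative; MAP_CACHE_WC is an invented flag, not part of any
real mmap() API):

```
/* Hypothetical: let the application request a cache type for its
 * DAX mapping.  MAP_CACHE_WC does not exist today; the kernel would
 * have to reconcile it with the driver's existing mapping type. */
addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_SHARED | MAP_CACHE_WC, pmem_fd, 0);
```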

Non-temporal store performance is excellent under WB, WC, and WT;
if anything, I think WC edges ahead because it need not snoop
the cache. It's still poor under UC.


