Date:	Fri, 20 Mar 2015 16:31:36 -0400
From:	Matthew Wilcox <willy@...ux.intel.com>
To:	Rik van Riel <riel@...hat.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Dan Williams <dan.j.williams@...el.com>,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	axboe@...nel.dk, linux-nvdimm@...1.01.org,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	linux-raid@...r.kernel.org, mgorman@...e.de, hch@...radead.org,
	linux-fsdevel@...r.kernel.org,
	"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: [RFC PATCH 0/7] evacuate struct page from the block layer

On Fri, Mar 20, 2015 at 12:21:34PM -0400, Rik van Riel wrote:
> On 03/19/2015 09:43 AM, Matthew Wilcox wrote:
> 
> > 1. Construct struct pages for persistent memory
> > 1a. Permanently
> > 1b. While the pages are under I/O
> 
> Michael Tsirkin and I have been doing some thinking about what
> it would take to allocate struct pages per 2MB area permanently,
> and allocate additional struct pages for 4kB pages on demand,
> when a 2MB area is broken up into 4kB pages.

Ah!  I've looked at that a couple of times as well.  I asked our database
performance team what impact freeing up the memmap would have on their
performance.  They told me that doubling the amount of memory generally
resulted in approximately a 40% performance improvement.  So freeing up
1.5% additional memory would result in about 0.6% performance improvement,
which I thought was probably too small a return on investment to justify
turning memmap into a two-level data structure.
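
(The 1.5% is roughly what the memmap costs today: 64 bytes of struct
page per 4kB page.  Scaling the 40%-per-doubling figure linearly to a
1.5% increase is where the ~0.6% comes from.)

To make the two-level idea concrete, a rough sketch; purely
illustrative, none of these names exist in the kernel:

struct superpage_desc {
	unsigned long	flags;
	atomic_t	refcount;
	struct page	*split_pages;	/* NULL until the 2MB area is
					 * broken up; then points at 512
					 * struct pages allocated on demand */
};

/*
 * One permanently allocated descriptor per 2MB area, instead of 512
 * permanently allocated struct pages.
 */
static struct superpage_desc *super_map;

static struct page *pfn_to_small_page(unsigned long pfn)
{
	struct superpage_desc *sd = &super_map[pfn >> (PMD_SHIFT - PAGE_SHIFT)];

	if (!sd->split_pages)
		return NULL;	/* the 2MB area has not been split yet */
	return &sd->split_pages[pfn & ((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1)];
}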

Persistent memory might change that calculation somewhat ... but I'm
not convinced.  Certainly, if we already had the ability to allocate
'struct superpage', I wouldn't be pushing for page-less I/Os, I'd just
allocate these data structures for PM.  Even if they were 128 bytes in
size, that's only a 25MB overhead per 400GB NV-DIMM, which feels quite
reasonable to me.
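(For the record, that's 400GB / 2MB = 204,800 superpages, times 128
bytes each, which comes to 25MB.)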

> This should work for both DRAM and persistent memory.
> 
> I am still not convinced it is worthwhile to have struct pages
> for persistent memory, though I am willing to change my mind.

There's a lot of code out there that relies on a struct page describing
PAGE_SIZE bytes of memory.  I'm cool with replacing 'struct page' with
'struct superpage'
[1] in the biovec and auditing all of the code which touches it ... but
that's going to be a lot of code!  I'm not sure it's less code than
going directly to 'just do I/O on PFNs'.

[1] Please, somebody come up with a better name!
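
For comparison, "just do I/O on PFNs" roughly means carrying a page
frame number in the biovec instead of a struct page pointer.  Something
like the following, as a sketch only (the type and field names are made
up here, not taken from the RFC series):

struct pfn_vec {
	unsigned long	pv_pfn;		/* page frame number; no struct page */
	unsigned int	pv_len;		/* segment length in bytes */
	unsigned int	pv_offset;	/* offset within the 4kB frame */
};

static inline void *pfn_vec_vaddr(const struct pfn_vec *pv)
{
	/* only valid for memory covered by the kernel direct map */
	return __va(PFN_PHYS(pv->pv_pfn)) + pv->pv_offset;
}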
