Date:	Thu, 28 Aug 2014 11:45:27 -0400
From:	Matthew Wilcox <willy@...ux.intel.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Matthew Wilcox <matthew.r.wilcox@...el.com>,
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v10 00/21] Support ext4 on NV-DIMMs

On Wed, Aug 27, 2014 at 02:46:22PM -0700, Andrew Morton wrote:
> > > Sat down to read all this but I'm finding it rather unwieldy - it's
> > > just a great blob of code.  Is there some overall
> > > what-it-does-and-how-it-does-it roadmap?
> > 
> > The overall goal is to map persistent memory / NV-DIMMs directly to
> > userspace.  We have that functionality in the XIP code, but the way
> > it's structured is unsuitable for filesystems like ext4 & XFS, and
> > it has some pretty ugly races.
> 
> When thinking about looking at the patchset I wonder things like how
> does mmap work, in what situations does a page get COWed, how do we
> handle partial pages at EOF, etc.  I guess that's all part of the
> filemap_xip legacy, the details of which I've totally forgotten.

mmap works by installing a PTE that points to the storage.  This implies
that the NV-DIMM has to be the kind that always has everything mapped
(other types require commands to be sent to move a window around that
points into the storage ... DAX is not for those types of DIMMs).
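
In rough outline, the fault path does something like the following
(a much-simplified sketch, not the patch code itself;
lookup_pfn_for_offset() is a made-up placeholder for the real
get_block + direct-access lookup):

#include <linux/mm.h>

/* Placeholder: resolve a file offset to an NV-DIMM pfn via the
 * filesystem's get_block callback plus the block device's
 * direct-access method. */
static int lookup_pfn_for_offset(struct file *file, pgoff_t pgoff,
				 unsigned long *pfn);

static int dax_fault_sketch(struct vm_area_struct *vma,
			    struct vm_fault *vmf)
{
	unsigned long vaddr = (unsigned long)vmf->virtual_address;
	unsigned long pfn;

	if (lookup_pfn_for_offset(vma->vm_file, vmf->pgoff, &pfn))
		return VM_FAULT_SIGBUS;

	/* Install a PTE that points straight at the persistent memory;
	 * no struct page and no page cache copy are involved. */
	if (vm_insert_mixed(vma, vaddr, pfn))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}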

We use a VM_MIXEDMAP vma.  The PTEs pointing to PFNs will just get
copied across on fork.  Read-faults on holes are covered by a read-only
page cache page.  On a write to a hole, any page cache page covering it
will be unmapped and evicted from the page cache.  The mapping for the
faulting task will be replaced with a mapping to the newly established
block, but other mappings will take a fresh fault on their next reference.
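
Hooking that up at mmap time is little more than setting VM_MIXEDMAP
and the vm_ops (again only a sketch; dax_vm_ops_sketch and
dax_file_mmap_sketch are invented names):

#include <linux/fs.h>
#include <linux/mm.h>

static const struct vm_operations_struct dax_vm_ops_sketch = {
	.fault	= dax_fault_sketch,	/* the sketch above */
};

static int dax_file_mmap_sketch(struct file *file,
				struct vm_area_struct *vma)
{
	file_accessed(file);
	vma->vm_ops = &dax_vm_ops_sketch;
	/* VM_MIXEDMAP: this vma can hold both raw-PFN PTEs (pointing
	 * at the NV-DIMM) and normal struct-page PTEs (the read-only
	 * page cache pages that cover holes). */
	vma->vm_flags |= VM_MIXEDMAP;
	return 0;
}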

Partial pages are mmapable, just as they are with page-cache based
files.  You can even store beyond EOF, just as with page-cache files.
Those stores will, of course, end up on persistent media, but they
may well be zeroed if the file is later extended ... again, this is
no different from page-cache based files.
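
As a made-up example (the path and sizes are invented), a 100-byte
file still maps a whole page, and a store past EOF within that page
behaves as described above:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/pmem/small", O_RDWR | O_CREAT, 0644);
	if (fd < 0)
		return 1;
	ftruncate(fd, 100);	/* file ends partway through a page */

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	p[50] = 'a';	/* within EOF: ordinary persistent data */
	p[200] = 'b';	/* beyond EOF but within the page: allowed,
			 * but may be zeroed if the file is extended */

	munmap(p, 4096);
	close(fd);
	return 0;
}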

> > > Performance testing results?
> > 
> > I haven't been running any performance tests.  What sort of performance
> > tests would be interesting for you to see?
> 
> fs benchmarks?  `dd' would be a good start ;)
> 
> I assume (because I wasn't told!) that there are two objectives here:
> 
> 1) reduce memory consumption by not maintaining pagecache and
> 2) reduce CPU cost by avoiding the double-copies.
> 
> These things are pretty easily quantified.  And really they must be
> quantified as part of the developer testing, because if you find
> they've worsened then holy cow, what went wrong.

It's really a functionality argument; the users we anticipate for
NV-DIMMs want to map them directly into memory and do a lot of their
work through loads and stores, with the kernel not involved at all,
so we don't actually have any performance targets for things like
read/write.
That said, when running xfstests and comparing results between ext4
with and without DAX, I do see many of the tests completing quicker
with DAX than without (others "run for thirty seconds" so there's no
time difference between with/without).
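
To illustrate that usage model (the mount point and file name are
invented): once the file is mapped, the data path is just loads and
stores, with msync() as the durability point:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/pmem/data", O_RDWR);
	if (fd < 0)
		return 1;

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* No read()/write() system calls, no page-cache copy: the
	 * stores go straight to the NV-DIMM through the mapping. */
	strcpy(p, "hello, persistent memory");

	/* msync() remains the durability point for the mapped range. */
	msync(p, 4096, MS_SYNC);

	munmap(p, 4096);
	close(fd);
	return 0;
}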

> None of the patch titles identify the subsystem(s) which they're
> hitting.  eg, "Introduce IS_DAX(inode)" is an ext2 patch, but nobody
> would know that from browsing the titles.

I actually see that one as being a VFS patch ... ext2 changing is just
a side-effect.  I can re-split that patch if desired.
