Date:	Wed, 3 Feb 2016 18:54:11 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Jared Hulbert <jaredeh@...il.com>
Cc:	Dan Williams <dan.j.williams@...el.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	Jeff Layton <jlayton@...chiereds.net>,
	linux-nvdimm <linux-nvdimm@...1.01.org>,
	LKML <linux-kernel@...r.kernel.org>,
	XFS Developers <xfs@....sgi.com>,
	"J. Bruce Fields" <bfields@...ldses.org>, Jan Kara <jack@...e.com>,
	Linux FS Devel <linux-fsdevel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] dax: allow DAX to look up an inode's block device

On Tue, Feb 02, 2016 at 04:33:16PM -0800, Jared Hulbert wrote:
> On Tue, Feb 2, 2016 at 3:41 PM, Dan Williams <dan.j.williams@...el.com> wrote:
> > On Tue, Feb 2, 2016 at 3:36 PM, Jared Hulbert <jaredeh@...il.com> wrote:
> >> On Tue, Feb 2, 2016 at 3:19 PM, Al Viro <viro@...iv.linux.org.uk> wrote:
> >>>
> >>> On Tue, Feb 02, 2016 at 04:11:42PM -0700, Ross Zwisler wrote:
> >>>
> >>> > However, for raw block devices and for XFS with a real-time device, the
> >>> > value in inode->i_sb->s_bdev is not correct.  With the code as it is
> >>> > currently written, an fsync or msync to a DAX enabled raw block device will
> >>> > cause a NULL pointer dereference kernel BUG.  For this to work correctly we
> >>> > need to ask the block device or filesystem what struct block_device is
> >>> > appropriate for our inode.
> >>> >
> >>> > To that end, add a get_bdev(struct inode *) entry point to struct
> >>> > super_operations.  If this function pointer is non-NULL, this notifies DAX
> >>> > that it needs to use it to look up the correct block_device.  If
> >>> > i_sb->get_bdev() is NULL DAX will default to inode->i_sb->s_bdev.
> >>>
> >>> Umm...  It assumes that bdev will stay pinned for as long as inode is
> >>> referenced, presumably?  If so, that needs to be documented (and verified
> >>> for existing fs instances).  In principle, multi-disk fs might want to
> >>> support things like "silently move the inodes backed by that disk to other
> >>> ones"...
> >>
> >> Dan, this is exactly the kind of thing I'm talking about WRT the
> >> weirder device models and directly calling bdev_direct_access().
> >> Filesystems don't have the monogamous relationship with a device that
> >> is implicitly assumed in DAX; you have to ask the filesystem what the
> >> relationship is and where it is migrating to, and allow the filesystem
> >> to update DAX when the relationship changes.
> >
> > That's precisely what ->get_bdev() does.  When the answer from the
> > inode->i_sb->s_bdev lookup is invalid, use ->get_bdev().
> >
> >> As we start to see many
> >> DIMMs and 10s of TiB pmem systems this is going to be an even bigger
> >> deal as load balancing, wear leveling, and fault tolerance concerns
> >> are inevitably driven by the filesystem.
> >
> > No, there are no plans on the horizon for an fs to manage these media
> > specific concerns for persistent memory.
> 
> So the filesystem is now directly in charge of mapping user pages to
> physical memory.  The filesystem is effectively bypassing NUMA and
> zones and all that stuff that tries to balance memory bus and QPI
> traffic etc.

No, it isn't bypassing NUMA, zones, etc.

The pmem block device can linearise a typical NUMA layout quite
sanely.  i.e. if there is 10GB of pmem per node, the pmem device
would need to map that as:

	node	   block device offsets
	 0		 0..10GB
	 1		10..20GB
	 2		20..30GB
	 ....
	 n		10n..10(n+1)GB

i.e. present a *linear concatenation* of discrete nodes in a linear
address space.
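
A minimal sketch of that arithmetic, assuming a uniform 10GB of pmem
per node purely as a worked example (the helper below is made up, it
just restates the table above in code):

#include <stdint.h>
#include <stdio.h>

#define PER_NODE_PMEM	(10ULL << 30)	/* assumed: 10GB of pmem per node */

/* Node n's pmem lands at block device offsets [n*10GB, (n+1)*10GB). */
static uint64_t node_to_bdev_offset(unsigned int node)
{
	return (uint64_t)node * PER_NODE_PMEM;
}

int main(void)
{
	for (unsigned int node = 0; node < 4; node++)
		printf("node %u -> %lluGB..%lluGB\n", node,
		       (unsigned long long)(node_to_bdev_offset(node) >> 30),
		       (unsigned long long)(node_to_bdev_offset(node + 1) >> 30));
	return 0;
}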

Then, we can use the fact that XFS has a piecewise address space
architecture that can map linear chunks of the block device address
space to different logical domains. Each piece of an XFS filesystem
is an allocation group. Hence we tell mkfs.xfs to set the allocation
group size to 10GB, thereby mapping each individual allocation group
to a different physical node of pmem.  Suddenly all the filesystem
allocation control algorithms become physical device locality
control algorithms.

Then we simply map process locality control (cpusets or
memcgs or whatever is being used for that now) to the allocator -
instead of selecting AGs for allocation based on inode/parent inode
locality, we select AGs based on the allowed CPU/numa node mask of
the process that is running...
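
As a sketch of what that selection policy could look like (this assumes
mkfs.xfs was told to make agsize equal to the per-node pmem size so AG
number and node line up 1:1; the function below is hypothetical, not
actual XFS code):

#include <stdint.h>
#include <stdio.h>

/*
 * One AG per node by construction (agsize == per-node pmem size), so
 * picking an AG for an allocation reduces to picking a node the task
 * is allowed to use.
 */
static unsigned int pick_ag(uint64_t allowed_node_mask, unsigned int ag_count)
{
	for (unsigned int node = 0; node < ag_count; node++)
		if (allowed_node_mask & (1ULL << node))
			return node;		/* agno == node */
	return 0;				/* unconstrained: fall back to AG 0 */
}

int main(void)
{
	/* e.g. a task confined to nodes 2 and 3 by its cpuset */
	printf("allocate from AG %u\n", pick_ag(0xC, 4));
	return 0;
}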

An even better architecture would be to present a pmem device per
discrete node and then use DM to build the concat as required. Or
enable us to stripe across nodes for higher performance in large
concurrent applications, or configure RAID mirrors in physically
separate parts of the NUMA topology for redundancy (e.g. a water leak
that physically destroys a rack doesn't cause data loss because the
copies are in different racks, i.e. located in different failure
domains); then we can concat/stripe those mirrors together, etc.

IOWs, we've already got all the pieces in place that we need to
handle pmem in just about any way you can imagine in NUMA machines;
the filesystem is just one of the pieces.

This is just another example of how yet another new-fangled storage
technology maps precisely to a well-known, long-serving storage
architecture that we already have many, many experts out there who
know how to build reliable, performant storage from... :)

> You don't think the filesystem will therefore be in
> charge of memory bus hotspots?

Filesystems and DM are already in charge of avoiding hotspots on
disks, RAID arrays or in storage fabrics that can sustain tens of
GB/s throughput. This really is a solved problem - pmem on NUMA
systems is not very different to having tens of GB/s available on a
multi-pathed SAN.
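
Coming back to the ->get_bdev() hook at the top of the thread, the
lookup DAX would do is essentially the following. This is a sketch
only; the field name and signature are assumed from the description
quoted above, not taken from the actual patch:

#include <linux/fs.h>

/* Sketch, written against the proposed super_operations addition. */
static inline struct block_device *dax_inode_bdev(struct inode *inode)
{
	/*
	 * If the filesystem provides ->get_bdev(), ask it which device
	 * backs this inode (e.g. an XFS realtime device); otherwise
	 * fall back to the superblock's block device.
	 */
	if (inode->i_sb->s_op->get_bdev)
		return inode->i_sb->s_op->get_bdev(inode);
	return inode->i_sb->s_bdev;
}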

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
