Message-ID: <BY2PR21MB0036C7BBC487D7C101F36DCFCB700@BY2PR21MB0036.namprd21.prod.outlook.com>
Date: Sat, 21 Jan 2017 16:28:52 +0000
From: Matthew Wilcox <mawilcox@...rosoft.com>
To: Dan Williams <dan.j.williams@...el.com>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>
CC: Brian Boylston <brian.boylston@....com>,
Tony Luck <tony.luck@...el.com>, Jan Kara <jack@...e.cz>,
Toshi Kani <toshi.kani@....com>,
Mike Snitzer <snitzer@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, Jeff Moyer <jmoyer@...hat.com>,
Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...com>,
"dm-devel@...hat.com" <dm-devel@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Al Viro <viro@...iv.linux.org.uk>,
"H. Peter Anvin" <hpa@...or.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
Ross Zwisler <ross.zwisler@...ux.intel.com>
Subject: RE: [PATCH 00/13] dax, pmem: move cpu cache maintenance to libnvdimm
From: Dan Williams [mailto:dan.j.williams@...el.com]
> A couple weeks back, in the course of reviewing the memcpy_nocache()
> proposal from Brian, Linus subtly suggested that the pmem specific
> memcpy_to_pmem() routine be moved to be implemented at the driver
> level [1]:
Of course, there may not be a backing device either! That will depend on the filesystem.
I see two possible routes here:
1. Add a new address_space_operation:
const struct dax_operations *(*get_dax_ops)(struct address_space *);
2. Add two of the dax_operations to address_space_operations:
size_t (*copy_from_iter)(struct address_space *, void *, size_t, struct iov_iter *);
void (*flush)(struct address_space *, void *, size_t);
(we won't need ->direct_access as an address_space op because that'll be handled a different way in the brave new world that supports non-bdev-based filesystems)
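To make option 2 a bit more concrete, here's roughly how I picture it landing in address_space_operations and in the dax write path. This is purely a sketch: the member names, the helper names and the call site are my guesses, not anything that exists today.

	/* would need <linux/uio.h> for struct iov_iter */
	struct address_space_operations {
		/* ... existing members (writepage, readpages, ...) ... */

		/* copy data from the iterator straight onto the media */
		size_t (*copy_from_iter)(struct address_space *mapping,
					 void *addr, size_t bytes,
					 struct iov_iter *i);

		/* make a previously written range durable */
		void (*flush)(struct address_space *mapping,
			      void *addr, size_t bytes);
	};

and then dax_iomap_actor() (or whatever the write path looks like by then) would call:

	map_len = mapping->a_ops->copy_from_iter(mapping, kaddr, map_len, iter);

Option 1 is the same thing one level removed: ->get_dax_ops(mapping) hands back a const struct dax_operations * and the dax code dereferences that instead of a_ops directly.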
Obviously in either case we'd have generic bdev versions for ext4, xfs, and other block-based filesystems, but filesystems with a character device or a network protocol behind them would do whatever it is they need to do.
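For the bdev case the generic versions could be pretty thin. Something along these lines, where bdev_dax_copy_from_iter()/bdev_dax_flush() are made-up names and I'm assuming copy_from_iter_nocache() plus the existing pmem write-back helper are adequate (that's exactly the part Dan's series wants to push down to the driver, so treat this as illustrative only):

	/* would need <linux/uio.h> and <linux/pmem.h> */
	static size_t bdev_dax_copy_from_iter(struct address_space *mapping,
					      void *addr, size_t bytes,
					      struct iov_iter *i)
	{
		/* non-temporal copy so dirty data doesn't linger in the cache */
		return copy_from_iter_nocache(addr, bytes, i);
	}

	static void bdev_dax_flush(struct address_space *mapping,
				   void *addr, size_t bytes)
	{
		/* write back anything the copy above may have left cached */
		wb_cache_pmem(addr, bytes);
	}

ext4 and xfs would then just point ->copy_from_iter / ->flush at those, while the non-bdev filesystems plug in their own.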
I kind of prefer the second option, but does anyone else have a preference?