Date: Mon, 16 Nov 2015 12:34:55 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: Ross Zwisler <ross.zwisler@...ux.intel.com>,
	Dan Williams <dan.j.williams@...el.com>, Jan Kara <jack@...e.cz>,
	Andreas Dilger <adilger@...ger.ca>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"H. Peter Anvin" <hpa@...or.com>, "J. Bruce Fields" <bfields@...ldses.org>,
	"Theodore Ts'o" <tytso@....edu>, Alexander Viro <viro@...iv.linux.org.uk>,
	Dave Chinner <david@...morbit.com>, Ingo Molnar <mingo@...hat.com>,
	Jan Kara <jack@...e.com>, Jeff Layton <jlayton@...chiereds.net>,
	Matthew Wilcox <willy@...ux.intel.com>, Thomas Gleixner <tglx@...utronix.de>,
	linux-ext4 <linux-ext4@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>, Linux MM <linux-mm@...ck.org>,
	"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
	X86 ML <x86@...nel.org>, XFS Developers <xfs@....sgi.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Matthew Wilcox <matthew.r.wilcox@...el.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH v2 03/11] pmem: enable REQ_FUA/REQ_FLUSH handling

On Mon, Nov 16, 2015 at 11:48 AM, Ross Zwisler
<ross.zwisler@...ux.intel.com> wrote:
> On Mon, Nov 16, 2015 at 09:28:59AM -0800, Dan Williams wrote:
>> On Mon, Nov 16, 2015 at 6:05 AM, Jan Kara <jack@...e.cz> wrote:
>> > On Mon 16-11-15 14:37:14, Jan Kara wrote:
[..]
> Is there any reason why this wouldn't work or wouldn't be a good idea?

We don't have numbers to support the claim that pcommit is so expensive
as to need to be deferred, especially if the upper layers are already
taking the hit on doing the flushes.

REQ_FLUSH means "flush your volatile write cache". Currently, I/O
through the driver never hits a volatile cache, so there's no need to
tell the block layer that we have a volatile write cache, especially
when the core mm is taking responsibility for doing cache maintenance
for dax-mmap ranges.

We also don't have numbers on whether/when wbinvd is a more performant
solution.

tl;dr: Now that we have a baseline implementation, can we please use
data to make future arch decisions?
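For context, here is a minimal sketch of the driver-side mechanics the thread
is debating: how a block driver of the ~4.3/4.4 kernel era would tell the
block layer it has a volatile write cache (so upper layers start sending
REQ_FLUSH/REQ_FUA) and how it might honor those flags with a persistence
barrier. This is illustrative only and is not the patch under review; the
names example_pmem_dev, example_attach_queue, and example_handle_flush are
hypothetical, and using wmb_pmem() as the barrier is an assumption based on
the pmem helpers of that period.

	/* Illustrative sketch only, not the actual [PATCH v2 03/11]. */
	#include <linux/blkdev.h>
	#include <linux/bio.h>
	#include <linux/pmem.h>

	struct example_pmem_dev {
		struct request_queue *queue;
		/* ... backing persistent-memory mapping ... */
	};

	static void example_attach_queue(struct example_pmem_dev *pmem)
	{
		/*
		 * Advertising REQ_FLUSH/REQ_FUA support is the
		 * "tell the block layer we have a volatile write cache"
		 * step being questioned above; without it, the block
		 * layer treats completed writes as durable.
		 */
		blk_queue_flush(pmem->queue, REQ_FLUSH | REQ_FUA);
	}

	static void example_handle_flush(struct bio *bio)
	{
		/*
		 * If the driver claims a volatile write cache, a flush
		 * or FUA bio must be turned into a persistence barrier
		 * (sfence + pcommit on x86 of that era).
		 */
		if (bio->bi_rw & (REQ_FLUSH | REQ_FUA))
			wmb_pmem();
		bio_endio(bio);
	}

The counter-argument above is that pmem writes never land in a volatile
cache in the first place, so advertising one (and paying for a pcommit on
every flush) adds nothing beyond the flushes the upper layers already issue.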