Message-Id: <1446070176-14568-1-git-send-email-ross.zwisler@linux.intel.com>
Date: Wed, 28 Oct 2015 16:09:34 -0600
From: Ross Zwisler <ross.zwisler@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Dan Williams <dan.j.williams@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-nvdimm@...ts.01.org, x86@...nel.org,
Dave Chinner <david@...morbit.com>, Jan Kara <jack@...e.com>
Subject: [PATCH 0/2] "big hammer" for DAX msync/fsync correctness
This series implements the very slow but correct handling for
blkdev_issue_flush() with DAX mappings, as discussed here:
https://lkml.org/lkml/2015/10/26/116
I don't think that we can actually do the

	on_each_cpu(sync_cache, ...);

...where sync_cache is something like:

	cache_disable();
	wbinvd();
	pcommit();
	cache_enable();

solution as proposed by Dan because WBINVD + PCOMMIT doesn't guarantee
that your writes actually make it durably onto the DIMMs.  I believe
you really do need to loop through the cache lines, flush them with
CLWB, then fence and PCOMMIT.
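
Concretely, the loop I have in mind looks something like this (a sketch
only -- the helper name is made up, but clwb(), pcommit_sfence() and
boot_cpu_data.x86_clflush_size are the existing x86 primitives; patch 1
adds the real wb_cache_pmem()):

	/*
	 * Sketch: write back every cache line in [vaddr, vaddr + size),
	 * then fence and PCOMMIT so the data is durable on the DIMMs.
	 */
	static void sketch_wb_cache_pmem(void *vaddr, size_t size)
	{
		u16 clsize = boot_cpu_data.x86_clflush_size;
		unsigned long clmask = clsize - 1;
		void *vend = vaddr + size;
		void *p;

		for (p = (void *)((unsigned long)vaddr & ~clmask);
		     p < vend; p += clsize)
			clwb(p);

		wmb();			/* order the CLWBs */
		pcommit_sfence();	/* PCOMMIT, fenced */
	}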
I do worry that blindly flushing the entire PMEM namespace on each
fsync or msync will be prohibitively expensive, and that we'll be very
incentivized to move to the radix tree based dirty page tracking as
soon as possible. :)
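
On the driver side, the big hammer in patch 2 is essentially (again
just a sketch; pmem->virt_addr and pmem->size are the existing struct
pmem_device fields, and wb_cache_pmem()/wmb_pmem() come from the PMEM
API):

	/*
	 * Sketch: on a flush, write back the whole namespace and then
	 * fence + PCOMMIT, no matter how little of it was dirtied.
	 */
	static void pmem_flush_all(struct pmem_device *pmem)
	{
		wb_cache_pmem(pmem->virt_addr, pmem->size);
		wmb_pmem();
	}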
Ross Zwisler (2):
pmem: add wb_cache_pmem() to the PMEM API
pmem: Add simple and slow fsync/msync support
arch/x86/include/asm/pmem.h | 11 ++++++-----
drivers/nvdimm/pmem.c | 10 +++++++++-
include/linux/pmem.h | 22 +++++++++++++++++++++-
3 files changed, 36 insertions(+), 7 deletions(-)
--
2.1.0