Message-ID: <x49lhamljhe.fsf@segfault.boston.devel.redhat.com>
Date: Wed, 28 Oct 2015 18:24:29 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Ross Zwisler <ross.zwisler@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, linux-nvdimm@...1.01.org,
Dave Chinner <david@...morbit.com>, x86@...nel.org,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>, Jan Kara <jack@...e.com>
Subject: Re: [PATCH 0/2] "big hammer" for DAX msync/fsync correctness
Ross Zwisler <ross.zwisler@...ux.intel.com> writes:
> This series implements the very slow but correct handling for
> blkdev_issue_flush() with DAX mappings, as discussed here:
>
> https://lkml.org/lkml/2015/10/26/116
>
> I don't think that we can actually do the
>
> on_each_cpu(sync_cache, ...);
>
> ...where sync_cache is something like:
>
> cache_disable();
> wbinvd();
> pcommit();
> cache_enable();
>
> solution as proposed by Dan because WBINVD + PCOMMIT doesn't guarantee that
> your writes actually make it durably onto the DIMMs. I believe you really do
> need to loop through the cache lines, flush them with CLWB, then fence and
> PCOMMIT.
*blink*
*blink*
So much for not violating the principle of least surprise.  I suppose
you've asked the hardware folks, and they've sent you down this path?
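For what it's worth, I read that as something along these lines (just a
sketch: flush_pmem_range() is a name I made up, the 64-byte line size is
assumed rather than read from CPUID, and PCOMMIT is byte-encoded since
older assemblers don't know the mnemonic):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper, not an existing kernel interface. */
static void flush_pmem_range(void *addr, size_t len)
{
	const uintptr_t line = 64;	/* assumed cache line size */
	uintptr_t p = (uintptr_t)addr & ~(line - 1);
	uintptr_t end = (uintptr_t)addr + len;

	/*
	 * Write back every cache line covering the range (the clwb
	 * mnemonic needs a reasonably recent assembler).
	 */
	for (; p < end; p += line)
		asm volatile("clwb %0" : "+m" (*(volatile char *)p));

	/* Order the CLWBs ahead of PCOMMIT. */
	asm volatile("sfence" ::: "memory");

	/* PCOMMIT (66 0f ae f8): push accepted stores out to the DIMMs. */
	asm volatile(".byte 0x66, 0x0f, 0xae, 0xf8" ::: "memory");

	/* And fence again so the commit completes before we return. */
	asm volatile("sfence" ::: "memory");
}

Walking that loop over an entire namespace is, of course, exactly the
cost you're worried about below.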
> I do worry that the cost of blindly flushing the entire PMEM namespace on each
> fsync or msync will be prohibitively expensive, and that we'll be very
> incentivized to move to the radix tree based dirty page tracking as soon as
> possible. :)
Sure, but wbinvd would be quite costly as well. Either way I think a
better solution will be required in the near term.
Cheers,
Jeff