Message-ID: <20151103205131.GH19199@dastard>
Date:	Wed, 4 Nov 2015 07:51:31 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Dan Williams <dan.j.williams@...el.com>
Cc:	Jens Axboe <axboe@...com>, Jan Kara <jack@...e.cz>,
	"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v3 14/15] dax: dirty extent notification

On Mon, Nov 02, 2015 at 11:20:49PM -0800, Dan Williams wrote:
> On Mon, Nov 2, 2015 at 9:40 PM, Dave Chinner <david@...morbit.com> wrote:
> > On Mon, Nov 02, 2015 at 08:56:24PM -0800, Dan Williams wrote:
> >> No, we definitely can't do that.   I think your mental model of the
> >> cache flushing is similar to the disk model where a small buffer is
> >> flushed after a large streaming write.  Both Ross' patches and my
> >> approach suffer from the same horror that the cache flushing is O(N)
> >> currently, so we don't want to make it responsible for more data
> ranges than is strictly necessary.
> >
> > I didn't see anything that was O(N) in Ross's patches. What part of
> > the fsync algorithm that Ross proposed are you referring to here?
> 
> We have to issue clflush per touched virtual address rather than a
> constant number of physical ways, or a flush-all instruction.
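
To make that cost concrete, here's a rough, illustrative sketch of what
"clflush per touched virtual address" means - one flush per cacheline in
every dirty range, so the work scales with the amount of dirty data. It
is x86-specific, the helper name is made up, and it is not code from
either patch set:

#include <stddef.h>

#define CACHELINE_SIZE  64      /* assumed x86 cacheline size */

/* Flush every cacheline backing [vaddr, vaddr + len): O(len) clflushes. */
static void flush_virt_range(void *vaddr, size_t len)
{
        unsigned long addr = (unsigned long)vaddr & ~(CACHELINE_SIZE - 1UL);
        unsigned long end  = (unsigned long)vaddr + len;

        for (; addr < end; addr += CACHELINE_SIZE)
                asm volatile("clflush (%0)" :: "r" (addr) : "memory");

        /* make the flushed lines globally visible before returning */
        asm volatile("sfence" ::: "memory");
}
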
.....
> > So don't tell me that tracking dirty pages in the radix tree is too
> > slow for DAX and that DAX should not be used for POSIX IO based
> > applications - it should be as fast as buffered IO, if not faster,
> > and if it isn't then we've screwed up real bad. And right now, we're
> > screwing up real bad.
> 
> Again, it's not the dirty tracking in the radix tree I'm worried about;
> it's looping through all the virtual addresses within those pages.

So, let me summarise what I think you've just said. You are:

1. Fine with looping through the virtual addresses doing cache flushes
   synchronously when doing IO, despite the significant latency and
   performance cost that imposes.

2. Happy to hack a method into DAX that bypasses the filesystem by
   pushing information about regions needing cache flushes down to the
   block device, adding infrastructure to the block device to track
   those dirty regions, and then having it walk those addresses and
   issue cache flushes whenever the filesystem sends a REQ_FLUSH IO,
   regardless of whether the filesystem actually needs those cachelines
   flushed for that specific IO? (A rough sketch of this follows the
   list.)

3. Not happy to use the generic mm/vfs level infrastructure
   architected specifically to provide the exact asynchronous
   cache flushing/writeback semantics we require, because it will
   cause too many cache flushes, even though the number of cache
   flushes will be, at worst, the same as in 2).
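
As a toy model of what 2) amounts to (illustrative only: made-up names,
a fixed-size array standing in for real extent tracking, and it reuses
flush_virt_range() from the sketch above), every write pushes a dirty
extent down to the block device, and any REQ_FLUSH then flushes all of
them, no matter which file or filesystem asked for it:

/* Toy model of per-device dirty extent tracking. */
struct dirty_extent {
        void   *vaddr;
        size_t  len;
};

#define MAX_EXTENTS     1024

static struct dirty_extent bdev_dirty[MAX_EXTENTS];
static int bdev_nr_dirty;

/* Called from DAX on every write that dirties CPU cachelines. */
static void bdev_mark_dirty(void *vaddr, size_t len)
{
        if (bdev_nr_dirty < MAX_EXTENTS)
                bdev_dirty[bdev_nr_dirty++] =
                        (struct dirty_extent){ .vaddr = vaddr, .len = len };
}

/* Called when *any* REQ_FLUSH reaches the device: flush everything. */
static void bdev_req_flush(void)
{
        int i;

        for (i = 0; i < bdev_nr_dirty; i++)
                flush_virt_range(bdev_dirty[i].vaddr, bdev_dirty[i].len);
        bdev_nr_dirty = 0;
}

Note that the flush work here is global to the device, so a flush issued
on behalf of one small file pays for every other file's dirty data too.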


1) will work, but as we can see, it is *slow*. 3) is what Ross is
implementing - it's a tried and tested architecture that all mm/fs
developers understand, and his explanation of why it will work for
pmem is pretty solid and completely platform/hardware architecture
independent.
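
For contrast, a similarly rough model of 3) - again illustrative only,
not Ross's actual code; a simple per-inode array stands in for the
radix tree's dirty tags, and it reuses flush_virt_range() from the
sketch above. Dirty pages are tracked per inode, so an fsync walks and
flushes only the pages of the file being synced, then marks them clean:

#define TOY_PAGE_SIZE   4096
#define MAX_DIRTY_PAGES 1024

/* Toy per-inode dirty tracking, standing in for radix tree dirty tags. */
struct toy_inode {
        void          *dax_base;               /* DAX mapping of the file */
        unsigned long  dirty[MAX_DIRTY_PAGES]; /* dirty page indexes */
        int            nr_dirty;
};

/* Called when a page of this file is dirtied through a DAX mapping. */
static void inode_mark_page_dirty(struct toy_inode *inode, unsigned long index)
{
        if (inode->nr_dirty < MAX_DIRTY_PAGES)
                inode->dirty[inode->nr_dirty++] = index;
}

/* fsync(): flush only the pages this file dirtied, then mark them clean. */
static void toy_fsync(struct toy_inode *inode)
{
        int i;

        for (i = 0; i < inode->nr_dirty; i++)
                flush_virt_range((char *)inode->dax_base +
                                 inode->dirty[i] * TOY_PAGE_SIZE,
                                 TOY_PAGE_SIZE);
        inode->nr_dirty = 0;
}

The worst-case number of cacheline flushes is the same as in the model
of 2) above, but the work is scoped to the inode being synced and is
driven by the existing mm/vfs writeback machinery rather than by
REQ_FLUSH at the block device.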

Which leaves this question: how does 2) save us anything over 3) in
terms of avoiding iterating virtual addresses and issuing cache
flushes? And is that saving sufficient to justify hacking a bypass
into DAX and the additional driver-level complexity of adding dirty
region tracking, flushing and cleaning to REQ_FLUSH operations?

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
