Message-ID: <CAPcyv4jAAAtRc7GSOqDZixxpQfM4bzHtkwmrsjLJ0Bqba+0KRA@mail.gmail.com>
Date:	Tue, 5 Jan 2016 09:20:47 -0800
From:	Dan Williams <dan.j.williams@...el.com>
To:	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	Jan Kara <jack@...e.cz>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	"J. Bruce Fields" <bfields@...ldses.org>,
	"Theodore Ts'o" <tytso@....edu>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Andreas Dilger <adilger.kernel@...ger.ca>,
	Dave Chinner <david@...morbit.com>,
	Ingo Molnar <mingo@...hat.com>, Jan Kara <jack@...e.com>,
	Jeff Layton <jlayton@...chiereds.net>,
	Matthew Wilcox <willy@...ux.intel.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-ext4 <linux-ext4@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Linux MM <linux-mm@...ck.org>,
	"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
	X86 ML <x86@...nel.org>, XFS Developers <xfs@....sgi.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dan Williams <dan.j.williams@...el.com>,
	Matthew Wilcox <matthew.r.wilcox@...el.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH v6 4/7] dax: add support for fsync/msync

On Tue, Jan 5, 2016 at 9:12 AM, Ross Zwisler
<ross.zwisler@...ux.intel.com> wrote:
> On Tue, Jan 05, 2016 at 12:13:58PM +0100, Jan Kara wrote:
>> On Wed 23-12-15 12:39:17, Ross Zwisler wrote:
>> > To properly handle fsync/msync in an efficient way, DAX needs to track dirty
>> > pages so that it can flush them durably to media on demand.
>> >
>> > The tracking of dirty pages is done via the radix tree in struct
>> > address_space.  This radix tree is already used by the page writeback
>> > infrastructure for tracking dirty pages associated with an open file, and
>> > it already has support for exceptional (non struct page*) entries.  We
>> > build upon these features to add exceptional entries to the radix tree for
>> > DAX dirty PMD or PTE pages at fault time.
>> >
>> > Signed-off-by: Ross Zwisler <ross.zwisler@...ux.intel.com>
>> ...
>> > +static int dax_writeback_one(struct block_device *bdev,
>> > +           struct address_space *mapping, pgoff_t index, void *entry)
>> > +{
>> > +   struct radix_tree_root *page_tree = &mapping->page_tree;
>> > +   int type = RADIX_DAX_TYPE(entry);
>> > +   struct radix_tree_node *node;
>> > +   struct blk_dax_ctl dax;
>> > +   void **slot;
>> > +   int ret = 0;
>> > +
>> > +   spin_lock_irq(&mapping->tree_lock);
>> > +   /*
>> > +    * Regular page slots are stabilized by the page lock even
>> > +    * without the tree itself locked.  These unlocked entries
>> > +    * need verification under the tree lock.
>> > +    */
>> > +   if (!__radix_tree_lookup(page_tree, index, &node, &slot))
>> > +           goto unlock;
>> > +   if (*slot != entry)
>> > +           goto unlock;
>> > +
>> > +   /* another fsync thread may have already written back this entry */
>> > +   if (!radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE))
>> > +           goto unlock;
>> > +
>> > +   radix_tree_tag_clear(page_tree, index, PAGECACHE_TAG_TOWRITE);
>> > +
>> > +   if (WARN_ON_ONCE(type != RADIX_DAX_PTE && type != RADIX_DAX_PMD)) {
>> > +           ret = -EIO;
>> > +           goto unlock;
>> > +   }
>> > +
>> > +   dax.sector = RADIX_DAX_SECTOR(entry);
>> > +   dax.size = (type == RADIX_DAX_PMD ? PMD_SIZE : PAGE_SIZE);
>> > +   spin_unlock_irq(&mapping->tree_lock);
>> > +
>> > +   /*
>> > +    * We cannot hold tree_lock while calling dax_map_atomic() because it
>> > +    * eventually calls cond_resched().
>> > +    */
>> > +   ret = dax_map_atomic(bdev, &dax);
>> > +   if (ret < 0)
>> > +           return ret;
>> > +
>> > +   if (WARN_ON_ONCE(ret < dax.size)) {
>> > +           ret = -EIO;
>> > +           dax_unmap_atomic(bdev, &dax);
>> > +           return ret;
>> > +   }
>> > +
>> > +   spin_lock_irq(&mapping->tree_lock);
>> > +   /*
>> > +    * We need to revalidate our radix entry while holding tree_lock
>> > +    * before we do the writeback.
>> > +    */
>>
>> Do we really need to revalidate here? dax_map_atomic() makes sure the addr
>> & size are still part of the device. I guess you are concerned that, due to
>> a truncate or similar operation, those sectors may no longer belong to the
>> same file, but we don't really care about flushing sectors for someone else,
>> do we?
>>
>> Otherwise the patch looks good to me.
>
> Yep, the concern is that we could have somehow raced against a truncate
> operation while we weren't holding the tree_lock, and that now the address we
> are about to flush belongs to another file or is unallocated by the
> filesystem.
>
> I agree that this should be non-destructive - if you think the additional
> check and locking isn't worth the overhead, I'm happy to take it out.  I don't
> have a strong opinion either way.
>

My concern is whether flushing potentially invalid virtual addresses
is problematic on some architectures.  Maybe it's just FUD, but in my
opinion it's less work to simply revalidate the address than to audit
each arch for this concern.
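
For reference, here is a minimal sketch of the revalidate-then-flush
tail being discussed, assuming wb_cache_pmem() is the flush primitive;
the exact labels and error paths below are illustrative, not the
literal patch:

	spin_lock_irq(&mapping->tree_lock);
	/*
	 * Re-check that the slot still holds the same exceptional entry;
	 * a racing truncate may have removed or replaced it while the
	 * lock was dropped around dax_map_atomic().
	 */
	if (!__radix_tree_lookup(page_tree, index, &node, &slot) ||
	    *slot != entry)
		goto unmap;

	/*
	 * wb_cache_pmem() only issues cache-writeback instructions and
	 * does not sleep, so calling it with tree_lock held is fine.
	 */
	wb_cache_pmem(dax.addr, dax.size);
 unmap:
	dax_unmap_atomic(bdev, &dax);
	spin_unlock_irq(&mapping->tree_lock);
	return ret;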

At a minimum we can change the comment to not say "We need to" and
instead say "TODO: are all archs ok with flushing potentially invalid
addresses?"
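
That is, something along the lines of:

	/*
	 * TODO: are all archs ok with flushing potentially invalid
	 * addresses?
	 */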
