Message-ID: <20160208094430.GA9451@quack.suse.cz>
Date: Mon, 8 Feb 2016 10:44:30 +0100
From: Jan Kara <jack@...e.cz>
To: Dmitry Monakhov <dmonlist@...il.com>
Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>,
linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
"J. Bruce Fields" <bfields@...ldses.org>,
Theodore Ts'o <tytso@....edu>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andreas Dilger <adilger.kernel@...ger.ca>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>,
Dave Chinner <david@...morbit.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>, Jan Kara <jack@...e.com>,
Jeff Layton <jlayton@...chiereds.net>,
Matthew Wilcox <matthew.r.wilcox@...el.com>,
Matthew Wilcox <willy@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-nvdimm@...ts.01.org, x86@...nel.org,
xfs@....sgi.com
Subject: Re: [PATCH v8 6/9] dax: add support for fsync/msync
On Sat 06-02-16 17:33:07, Dmitry Monakhov wrote:
> > +int dax_writeback_mapping_range(struct address_space *mapping, loff_t start,
> > + loff_t end)
> > +{
> > + struct inode *inode = mapping->host;
> > + struct block_device *bdev = inode->i_sb->s_bdev;
> > + pgoff_t indices[PAGEVEC_SIZE];
> > + pgoff_t start_page, end_page;
> > + struct pagevec pvec;
> > + void *entry;
> > + int i, ret = 0;
> > +
> > + if (WARN_ON_ONCE(inode->i_blkbits != PAGE_SHIFT))
> > + return -EIO;
> > +
> > + rcu_read_lock();
> > + entry = radix_tree_lookup(&mapping->page_tree, start & PMD_MASK);
> > + rcu_read_unlock();
> > +
> > + /* see if the start of our range is covered by a PMD entry */
> > + if (entry && RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
> > + start &= PMD_MASK;
> > +
> > + start_page = start >> PAGE_CACHE_SHIFT;
> > + end_page = end >> PAGE_CACHE_SHIFT;
> > +
> > + tag_pages_for_writeback(mapping, start_page, end_page);
> > +
> > + pagevec_init(&pvec, 0);
> > + while (1) {
> > + pvec.nr = find_get_entries_tag(mapping, start_page,
> > + PAGECACHE_TAG_TOWRITE, PAGEVEC_SIZE,
> > + pvec.pages, indices);
> > +
> > + if (pvec.nr == 0)
> > + break;
> > +
> > + for (i = 0; i < pvec.nr; i++) {
> > + ret = dax_writeback_one(bdev, mapping, indices[i],
> > + pvec.pages[i]);
> > + if (ret < 0)
> > + return ret;
> > + }
> I think it would be more efficient to use batched locking, as follows:
> struct blk_dax_ctl dax[PAGEVEC_SIZE];
>
> spin_lock_irq(&mapping->tree_lock);
> for (i = 0; i < pvec.nr; i++) {
>         void *entry = pvec.pages[i];
>
>         radix_tree_tag_clear(&mapping->page_tree, indices[i],
>                              PAGECACHE_TAG_TOWRITE);
>         /* It would also be reasonable to merge adjacent dax
>          * regions into one */
>         dax[i].sector = RADIX_DAX_SECTOR(entry);
>         dax[i].size = (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD ?
>                        PMD_SIZE : PAGE_SIZE);
> }
> spin_unlock_irq(&mapping->tree_lock);
>
> if (blk_queue_enter(q, GFP_NOWAIT) != 0)
>         goto error;
> for (i = 0; i < pvec.nr; i++) {
>         rc = bdev_direct_access(bdev, &dax[i]);
>         wb_cache_pmem(dax[i].addr, dax[i].size);
> }
> blk_queue_exit(q);
We need to clear the radix tree tag only after flushing caches. In
principle I agree that some batching of the radix tree tag manipulation
should be doable. But frankly, so far we have issues with correctness, so
speed is not our main concern.
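To make the ordering concrete, here is a sketch of how the batched variant
above could be restructured so the tags are dropped only after the flush.
This is an illustration only; dax_flush_sector() is a made-up stand-in for
the dax_map_atomic() + wb_cache_pmem() sequence, not a real helper:

	struct blk_dax_ctl dax[PAGEVEC_SIZE];

	/* Pass 1: collect sectors and sizes under tree_lock, leave the
	 * tags untouched */
	spin_lock_irq(&mapping->tree_lock);
	for (i = 0; i < pvec.nr; i++) {
		void *entry = pvec.pages[i];

		dax[i].sector = RADIX_DAX_SECTOR(entry);
		dax[i].size = RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD ?
				PMD_SIZE : PAGE_SIZE;
	}
	spin_unlock_irq(&mapping->tree_lock);

	/* Pass 2: flush everything to media */
	for (i = 0; i < pvec.nr; i++) {
		ret = dax_flush_sector(bdev, &dax[i]);	/* hypothetical */
		if (ret < 0)
			return ret;
	}

	/* Pass 3: only now is it safe to drop the TOWRITE tags; if we
	 * crash before this point the tags stay set and the next fsync
	 * just re-flushes the same ranges, which is harmless */
	spin_lock_irq(&mapping->tree_lock);
	for (i = 0; i < pvec.nr; i++)
		radix_tree_tag_clear(&mapping->page_tree, indices[i],
				     PAGECACHE_TAG_TOWRITE);
	spin_unlock_irq(&mapping->tree_lock);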
> > + }
> > + wmb_pmem();
> > + return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(dax_writeback_mapping_range);
> > +
> > static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
> > struct vm_area_struct *vma, struct vm_fault *vmf)
> > {
> > @@ -363,6 +532,11 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
> > }
> > dax_unmap_atomic(bdev, &dax);
> >
> > + error = dax_radix_entry(mapping, vmf->pgoff, dax.sector, false,
> > + vmf->flags & FAULT_FLAG_WRITE);
> > + if (error)
> > + goto out;
> > +
> > error = vm_insert_mixed(vma, vaddr, dax.pfn);
> >
> > out:
> > @@ -487,6 +661,7 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
> > delete_from_page_cache(page);
> > unlock_page(page);
> > page_cache_release(page);
> > + page = NULL;
> > }
> I've realized that I do not understand why the dax_fault code works at all.
> During dax_fault we want to remove the page from the mapping and insert a
> dax entry. Basically the code looks as follows:
> 0 page = find_get_page()
> 1 lock_page(page)
> 2 delete_from_page_cache(page);
> 3 unlock_page(page);
> 4 dax_insert_mapping(inode, &bh, vma, vmf);
>
> BUT what on earth protects us from another process reinserting the page
> after step (2) but before step (4)?
Nothing, it's a bug and Ross / Matthew are working on fixing it...
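Spelled out, the window looks roughly like this (the second task is just
one possible way a page can reappear at that index):

	task A (__dax_fault)			task B
	--------------------			------
	page = find_get_page(mapping, index)
	lock_page(page)
	delete_from_page_cache(page)
	unlock_page(page)
						inserts a fresh page at
						the same index in
						mapping->page_tree
	dax_insert_mapping()
	  -> we are about to put a DAX entry
	     where a page now lives, and
	     nothing serializes the two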
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR