Message-ID: <20170616194540.GB20742@linux.intel.com>
Date: Fri, 16 Jun 2017 13:45:40 -0600
From: Ross Zwisler <ross.zwisler@...ux.intel.com>
To: Jan Kara <jack@...e.cz>
Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
"Darrick J. Wong" <darrick.wong@...cle.com>,
Theodore Ts'o <tytso@....edu>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andreas Dilger <adilger.kernel@...ger.ca>,
Christoph Hellwig <hch@....de>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Jonathan Corbet <corbet@....net>,
Matthew Wilcox <mawilcox@...rosoft.com>,
Steven Rostedt <rostedt@...dmis.org>,
linux-doc@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-nvdimm@...ts.01.org, linux-xfs@...r.kernel.org
Subject: Re: [PATCH v2 3/3] dax: use common 4k zero page for dax mmap reads
On Thu, Jun 15, 2017 at 04:58:56PM +0200, Jan Kara wrote:
> On Wed 14-06-17 11:22:11, Ross Zwisler wrote:
> > @@ -216,17 +217,6 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
> > dax_wake_mapping_entry_waiter(mapping, index, entry, false);
> > }
> >
> > -static void put_locked_mapping_entry(struct address_space *mapping,
> > - pgoff_t index, void *entry)
> > -{
> > - if (!radix_tree_exceptional_entry(entry)) {
> > - unlock_page(entry);
> > - put_page(entry);
> > - } else {
> > - dax_unlock_mapping_entry(mapping, index);
> > - }
> > -}
> > -
>
> The naming becomes asymmetric with this. So I'd prefer keeping
> put_locked_mapping_entry() as a trivial wrapper around
> dax_unlock_mapping_entry() unless we can craft more sensible naming / API
> for entry grabbing (and that would be a separate patch anyway).
Sure, that works for me. I'll fix for v3.
> > -static int dax_load_hole(struct address_space *mapping, void **entry,
> > +static int dax_load_hole(struct address_space *mapping, void *entry,
> > struct vm_fault *vmf)
> > {
> > struct inode *inode = mapping->host;
> > - struct page *page;
> > - int ret;
> > -
> > - /* Hole page already exists? Return it... */
> > - if (!radix_tree_exceptional_entry(*entry)) {
> > - page = *entry;
> > - goto finish_fault;
> > - }
> > + unsigned long vaddr = vmf->address;
> > + int ret = VM_FAULT_NOPAGE;
> > + struct page *zero_page;
> > + void *entry2;
> >
> > - /* This will replace locked radix tree entry with a hole page */
> > - page = find_or_create_page(mapping, vmf->pgoff,
> > - vmf->gfp_mask | __GFP_ZERO);
>
> With this gone, you can also remove the special DAX handling from
> mm/filemap.c: page_cache_tree_insert() and remove from dax.h
> dax_wake_mapping_entry_waiter(), dax_radix_locked_entry() and RADIX_DAX
> definitions. Yay! As a separate patch please.
Oh, yay! :) Sure, I'll have this patch for v3.