Message-ID: <20170719175112.GA24588@linux.intel.com>
Date: Wed, 19 Jul 2017 11:51:12 -0600
From: Ross Zwisler <ross.zwisler@...ux.intel.com>
To: Jan Kara <jack@...e.cz>
Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
"Darrick J. Wong" <darrick.wong@...cle.com>,
Theodore Ts'o <tytso@....edu>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andreas Dilger <adilger.kernel@...ger.ca>,
Christoph Hellwig <hch@....de>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Jonathan Corbet <corbet@....net>,
Matthew Wilcox <mawilcox@...rosoft.com>,
Steven Rostedt <rostedt@...dmis.org>,
linux-doc@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-nvdimm@...ts.01.org, linux-xfs@...r.kernel.org
Subject: Re: [PATCH v3 1/5] mm: add vm_insert_mixed_mkwrite()
On Wed, Jul 19, 2017 at 04:16:59PM +0200, Jan Kara wrote:
> On Wed 28-06-17 16:01:48, Ross Zwisler wrote:
> > To be able to use the common 4k zero page in DAX we need to have our PTE
> > fault path look more like our PMD fault path where a PTE entry can be
> > marked as dirty and writeable as it is first inserted, rather than waiting
> > for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.
> >
> > Right now we can rely on having a dax_pfn_mkwrite() call because we can
> > distinguish between these two cases in do_wp_page():
> >
> > case 1: 4k zero page => writable DAX storage
> > case 2: read-only DAX storage => writable DAX storage
> >
> > This distinction is made via vm_normal_page(). vm_normal_page() returns
> > NULL for the common 4k zero page, though, just as it does for DAX ptes.
> > Instead of special-casing the DAX + 4k zero page combination, we will
> > simplify our DAX PTE page fault sequence so that it matches our DAX PMD
> > sequence, and get rid of dax_pfn_mkwrite() completely.
> >
> > This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
> > and allow us to pass in a 'mkwrite' flag. If 'mkwrite' is set, insert_pfn()
> > will do the work that was previously done by wp_page_reuse() as part of the
> > dax_pfn_mkwrite() call path.
> >
> > Signed-off-by: Ross Zwisler <ross.zwisler@...ux.intel.com>
>
> Just one small comment below.
>
> > @@ -1658,14 +1658,26 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> > if (!pte)
> > goto out;
> > retval = -EBUSY;
> > - if (!pte_none(*pte))
> > - goto out_unlock;
> > + if (!pte_none(*pte)) {
> > + if (mkwrite) {
> > + entry = *pte;
> > + goto out_mkwrite;
>
> Can we maybe check here that (pte_pfn(*pte) == pfn_t_to_pfn(pfn)) and
> return -EBUSY otherwise? That way we are sure insert_pfn() isn't doing
> anything we don't expect.
Sure, that's fine. I'll add it as a WARN_ON_ONCE() so it's a very loud
failure. If the pfns don't match, I think we're insane (and we would have
been insane prior to this patch series as well) because we are getting a
page fault and somehow have a different PFN already mapped at that location.
> and if I understand the code right, we need to
> invalidate all zero page mappings at a given file offset (via
> unmap_mapping_range()) before mapping an allocated block there, and thus
> the case of filling the hole won't be affected by this?
Correct. Here's the call tree if we already have a zero page mapped and are
now faulting in an allocated block:
dax_iomap_pte_fault()
    dax_insert_mapping()
        dax_insert_mapping_entry()
            unmap_mapping_range() for our zero page
        vm_insert_mixed_mkwrite() installs the new PTE. We have pte_none(),
        so we skip the new mkwrite goto in insert_pfn().
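For completeness, here's roughly how the caller side ends up looking. Again
a sketch, paraphrasing this series rather than quoting it: the
__vm_insert_mixed() helper is how I'd expect the shared implementation to be
factored, with the two exported wrappers differing only in the flag they
pass down to insert_pfn():

int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
		pfn_t pfn)
{
	return __vm_insert_mixed(vma, addr, pfn, false);
}

int vm_insert_mixed_mkwrite(struct vm_area_struct *vma, unsigned long addr,
		pfn_t pfn)
{
	return __vm_insert_mixed(vma, addr, pfn, true);
}

and in dax_insert_mapping(), paraphrased:

	if (vmf->flags & FAULT_FLAG_WRITE)
		return vm_insert_mixed_mkwrite(vma, vaddr, pfn);
	else
		return vm_insert_mixed(vma, vaddr, pfn);

So read faults keep the current read-only insert behavior, and only write
faults take the new dirty+writable insert path.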