Message-ID: <20170522144457.GE25118@quack2.suse.cz>
Date: Mon, 22 May 2017 16:44:57 +0200
From: Jan Kara <jack@...e.cz>
To: Ross Zwisler <ross.zwisler@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
"Darrick J. Wong" <darrick.wong@...cle.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@....de>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...el.com>, Jan Kara <jack@...e.cz>,
Matthew Wilcox <mawilcox@...rosoft.com>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-nvdimm@...ts.01.org,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Pawel Lebioda <pawel.lebioda@...el.com>,
Dave Jiang <dave.jiang@...el.com>,
Xiong Zhou <xzhou@...hat.com>, Eryu Guan <eguan@...hat.com>,
stable@...r.kernel.org
Subject: Re: [PATCH 2/2] dax: Fix race between colliding PMD & PTE entries
On Wed 17-05-17 11:16:39, Ross Zwisler wrote:
> We currently have two related PMD vs PTE races in the DAX code. These can
> both be easily triggered by having two threads reading and writing
> simultaneously to the same private mapping, with the key being that private
> mapping reads can be handled with PMDs but private mapping writes are
> always handled with PTEs so that we can COW.
>
> Here is the first race:
>
> CPU 0                                   CPU 1
>
> (private mapping write)
> __handle_mm_fault()
>   create_huge_pmd() - FALLBACK
>   handle_pte_fault()
>     passes check for pmd_devmap()
>
>                                         (private mapping read)
>                                         __handle_mm_fault()
>                                           create_huge_pmd()
>                                             dax_iomap_pmd_fault() inserts PMD
>
>     dax_iomap_pte_fault() does a PTE fault, but we already have a DAX PMD
>     installed in our page tables at this spot.
>
> Here's the second race:
>
> CPU 0                                   CPU 1
>
> (private mapping write)
> __handle_mm_fault()
>   create_huge_pmd() - FALLBACK
>                                         (private mapping read)
>                                         __handle_mm_fault()
>                                           passes check for pmd_none()
>                                           create_huge_pmd()
>
>   handle_pte_fault()
>     dax_iomap_pte_fault() inserts PTE
>                                             dax_iomap_pmd_fault() inserts PMD,
>                                             but we already have a PTE at
>                                             this spot.
>
> The core of the issue is that while there is isolation between faults to
> the same range in the DAX fault handlers via our DAX entry locking, there
> is no isolation between faults in the code in mm/memory.c. This means for
> instance that this code in __handle_mm_fault() can run:
>
> 	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
> 		ret = create_huge_pmd(&vmf);
>
> But by the time we actually get to run the fault handler called by
> create_huge_pmd(), the PMD is no longer pmd_none() because a racing PTE
> fault has installed a normal PMD here as a parent. This is the cause of
> the 2nd race. The first race is similar - there is the following check in
> handle_pte_fault():
>
> 	} else {
> 		/* See comment in pte_alloc_one_map() */
> 		if (pmd_devmap(*vmf->pmd) || pmd_trans_unstable(vmf->pmd))
> 			return 0;
>
> So if a pmd_devmap() PMD (a DAX PMD) has been installed at vmf->pmd, we
> will bail and retry the fault. This is correct, but there is nothing
> preventing the PMD from being installed after this check but before we
> actually get to the DAX PTE fault handlers.
>
> In my testing these races result in the following types of errors:
>
> BUG: Bad rss-counter state mm:ffff8800a817d280 idx:1 val:1
> BUG: non-zero nr_ptes on freeing mm: 15
>
> Fix this issue by having the DAX fault handlers verify that it is safe to
> continue their fault after they have taken an entry lock to block other
> racing faults.
>
> Signed-off-by: Ross Zwisler <ross.zwisler@...ux.intel.com>
> Reported-by: Pawel Lebioda <pawel.lebioda@...el.com>
> Cc: stable@...r.kernel.org
>
> ---
>
> I've written a new xfstest for this race, which I will send in response to
> this patch series. This series has also survived an xfstest run without
> any new issues.
>
> ---
> fs/dax.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index c22eaf1..3cc02d1 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -1155,6 +1155,15 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
>  	}
>  
> +	/*
> +	 * It is possible, particularly with mixed reads & writes to private
> +	 * mappings, that we have raced with a PMD fault that overlaps with
> +	 * the PTE we need to set up. Now that we have a locked mapping entry
> +	 * we can safely unmap the huge PMD so that we can install our PTE in
> +	 * our page tables.
> +	 */
> +	split_huge_pmd(vmf->vma, vmf->pmd, vmf->address);
> +
Can we just check the PMD and, if it isn't as we want it, bail out and retry
the fault? IMHO it will be more obvious that way (and also more in line with
how these races are handled for classical THP). Otherwise the patch looks
good to me.
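
For what it's worth, the bail-and-retry variant would look roughly like
this (untested sketch only, in place of the split_huge_pmd() call above):

```c
	/*
	 * Untested sketch: with the mapping entry locked, recheck the
	 * PMD and retry the whole fault if a huge entry raced in,
	 * rather than splitting it.
	 */
	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
		put_locked_mapping_entry(mapping, vmf->pgoff, entry);
		return VM_FAULT_NOPAGE;
	}
```

Returning VM_FAULT_NOPAGE makes the fault restart from __handle_mm_fault(),
which will then see the PMD and take the right path.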
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR