Message-ID: <20190418220545.GF11645@redhat.com>
Date: Thu, 18 Apr 2019 18:05:45 -0400
From: Jerome Glisse <jglisse@...hat.com>
To: Laurent Dufour <ldufour@...ux.ibm.com>
Cc: akpm@...ux-foundation.org, mhocko@...nel.org, peterz@...radead.org,
kirill@...temov.name, ak@...ux.intel.com, dave@...olabs.net,
jack@...e.cz, Matthew Wilcox <willy@...radead.org>,
aneesh.kumar@...ux.ibm.com, benh@...nel.crashing.org,
mpe@...erman.id.au, paulus@...ba.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, hpa@...or.com,
Will Deacon <will.deacon@....com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
sergey.senozhatsky.work@...il.com,
Andrea Arcangeli <aarcange@...hat.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
kemi.wang@...el.com, Daniel Jordan <daniel.m.jordan@...cle.com>,
David Rientjes <rientjes@...gle.com>,
Ganesh Mahendran <opensource.ganesh@...il.com>,
Minchan Kim <minchan@...nel.org>,
Punit Agrawal <punitagrawal@...il.com>,
vinayak menon <vinayakm.list@...il.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
zhong jiang <zhongjiang@...wei.com>,
Haiyan Song <haiyanx.song@...el.com>,
Balbir Singh <bsingharora@...il.com>, sj38.park@...il.com,
Michel Lespinasse <walken@...gle.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
haren@...ux.vnet.ibm.com, npiggin@...il.com,
paulmck@...ux.vnet.ibm.com, Tim Chen <tim.c.chen@...ux.intel.com>,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org
Subject: Re: [PATCH v12 06/31] mm: introduce pte_spinlock for
FAULT_FLAG_SPECULATIVE
On Tue, Apr 16, 2019 at 03:44:57PM +0200, Laurent Dufour wrote:
> When handling a page fault without holding the mmap_sem, the fetch of the
> pte lock pointer and the locking will have to be done while ensuring
> that the VMA is not modified behind our back.
>
> So move the fetch and locking operations into a dedicated function.
>
> Signed-off-by: Laurent Dufour <ldufour@...ux.ibm.com>
Reviewed-by: Jérôme Glisse <jglisse@...hat.com>
> ---
> mm/memory.c | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index fc3698d13cb5..221ccdf34991 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2073,6 +2073,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(apply_to_page_range);
>
> +static inline bool pte_spinlock(struct vm_fault *vmf)
> +{
> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + spin_lock(vmf->ptl);
> + return true;
> +}
> +
> static inline bool pte_map_lock(struct vm_fault *vmf)
> {
> vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
> @@ -3656,8 +3663,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
> * validation through pte_unmap_same(). It's of NUMA type but
> * the pfn may be screwed if the read is non atomic.
> */
> - vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
> - spin_lock(vmf->ptl);
> + if (!pte_spinlock(vmf))
> + return VM_FAULT_RETRY;
> if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> goto out;
> @@ -3850,8 +3857,8 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
> if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
> return do_numa_page(vmf);
>
> - vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> - spin_lock(vmf->ptl);
> + if (!pte_spinlock(vmf))
> + return VM_FAULT_RETRY;
> entry = vmf->orig_pte;
> if (unlikely(!pte_same(*vmf->pte, entry)))
> goto unlock;
> --
> 2.21.0
>
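For context on why the helper is worth introducing now: the commit message notes that, once faults are handled without mmap_sem, fetching the pte lock pointer and taking the lock will also have to verify that the VMA has not changed underneath us. A rough sketch of what a speculative variant of pte_spinlock() could look like is below; vma_has_changed() is a hypothetical helper standing in for whatever VMA sequence-count check later patches in the series add, it is not defined in this patch.

static bool pte_spinlock(struct vm_fault *vmf)
{
	bool ret = false;

	/* Regular fault path: mmap_sem is held, just take the PTE lock. */
	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
		vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
		spin_lock(vmf->ptl);
		return true;
	}

	/*
	 * Speculative path: the VMA may be unmapped or modified at any
	 * time, so only trylock and bail out (caller returns
	 * VM_FAULT_RETRY) if the VMA changed before or after we got
	 * the lock.
	 */
	local_irq_disable();
	if (vma_has_changed(vmf))	/* hypothetical seqcount check */
		goto out;

	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
	if (unlikely(!spin_trylock(vmf->ptl)))
		goto out;

	if (vma_has_changed(vmf)) {
		spin_unlock(vmf->ptl);
		goto out;
	}

	ret = true;
out:
	local_irq_enable();
	return ret;
}

Either way, the callers converted in this patch already handle a false return by returning VM_FAULT_RETRY, so only the helper itself needs to grow later in the series.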