Message-ID: <20190418224857.GI11645@redhat.com>
Date: Thu, 18 Apr 2019 18:48:57 -0400
From: Jerome Glisse <jglisse@...hat.com>
To: Laurent Dufour <ldufour@...ux.ibm.com>
Cc: akpm@...ux-foundation.org, mhocko@...nel.org, peterz@...radead.org,
kirill@...temov.name, ak@...ux.intel.com, dave@...olabs.net,
jack@...e.cz, Matthew Wilcox <willy@...radead.org>,
aneesh.kumar@...ux.ibm.com, benh@...nel.crashing.org,
mpe@...erman.id.au, paulus@...ba.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, hpa@...or.com,
Will Deacon <will.deacon@....com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
sergey.senozhatsky.work@...il.com,
Andrea Arcangeli <aarcange@...hat.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
kemi.wang@...el.com, Daniel Jordan <daniel.m.jordan@...cle.com>,
David Rientjes <rientjes@...gle.com>,
Ganesh Mahendran <opensource.ganesh@...il.com>,
Minchan Kim <minchan@...nel.org>,
Punit Agrawal <punitagrawal@...il.com>,
vinayak menon <vinayakm.list@...il.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
zhong jiang <zhongjiang@...wei.com>,
Haiyan Song <haiyanx.song@...el.com>,
Balbir Singh <bsingharora@...il.com>, sj38.park@...il.com,
Michel Lespinasse <walken@...gle.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
haren@...ux.vnet.ibm.com, npiggin@...il.com,
paulmck@...ux.vnet.ibm.com, Tim Chen <tim.c.chen@...ux.intel.com>,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org
Subject: Re: [PATCH v12 09/31] mm: VMA sequence count
On Tue, Apr 16, 2019 at 03:45:00PM +0200, Laurent Dufour wrote:
> From: Peter Zijlstra <peterz@...radead.org>
>
> Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
> counts such that we can easily test if a VMA has changed.
>
> The calls to vm_write_begin/end() in unmap_page_range() are
> used to detect when a VMA is being unmapped and thus that a new page
> fault should not be satisfied for this VMA. If the seqcount hasn't
> changed when the page tables are locked, this means we are safe to
> satisfy the page fault.
>
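A sketch of that fault-side check, using the stock seqcount read API
(this is just to make the ordering concrete, not code from this patch):

	unsigned int seq = raw_read_seqcount(&vma->vm_sequence);

	if (seq & 1)
		return VM_FAULT_RETRY;	/* a write is in flight, never wait */
	/* ... speculatively prepare the fault, then take the pte lock ... */
	if (read_seqcount_retry(&vma->vm_sequence, seq))
		return VM_FAULT_RETRY;	/* the VMA changed under us */
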
> The flip side is that we cannot distinguish between a vma_adjust() and
> the unmap_page_range() -- where with the former we could have
> re-checked the vma bounds against the address.
>
> The VMA's sequence counter is also used to detect changes to various
> VMA fields used during the page fault handling, such as:
> - vm_start, vm_end
> - vm_pgoff
> - vm_flags, vm_page_prot
> - vm_policy
^ All of the above are changed under the mmap write lock?
> - anon_vma
^ This is either under the mmap write lock or under the page table lock
So my question is: do we need the complexity of seqcount_t for this?
It seems that using a regular int as a counter, and also relying on
vm_flags when the vma is unmapped, should do the trick.
void vma_delete(struct vm_area_struct *vma)
{
	...
	/*
	 * Make sure the vma is marked as invalid, ie. neither readable
	 * nor writable, so that a speculative fault backs off. A racing
	 * speculative fault will either see the flags as 0 or the new
	 * seqcount.
	 */
	vma->vm_flags = 0;
	smp_wmb();
	vma->seqcount++;
	...
}
Then:
void speculative_fault_begin(struct vm_area_struct *vma,
			     struct spec_vmf *spvmf)
{
	...
	spvmf->seqcount = vma->seqcount;
	smp_rmb();
	spvmf->vm_flags = vma->vm_flags;
	if (!spvmf->vm_flags) {
		/* Back off, the vma is dying ... */
		...
	}
}
bool speculative_fault_commit(struct vm_area_struct *vma,
			      struct spec_vmf *spvmf)
{
	...
	seqcount = vma->seqcount;
	smp_rmb();
	vm_flags = vma->vm_flags;
	if (spvmf->vm_flags != vm_flags || seqcount != spvmf->seqcount) {
		/* Something did change for the vma */
		return false;
	}
	return true;
}
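A hypothetical caller, to make the intended usage concrete (spec_vmf
and the entry point are names I made up for the sketch):

vm_fault_t hypothetical_speculative_fault(struct vm_area_struct *vma,
					  unsigned long addr)
{
	struct spec_vmf spvmf;

	speculative_fault_begin(vma, &spvmf);
	if (!spvmf.vm_flags)
		return VM_FAULT_RETRY;	/* the vma is dying, use slow path */

	/* ... speculatively handle the fault at addr ... */

	if (!speculative_fault_commit(vma, &spvmf))
		return VM_FAULT_RETRY;	/* vma changed, discard the work */
	return 0;
}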
This would also avoid the lockdep issue described below. But maybe what
I propose is stupid and I will see it after further reviewing things.
Cheers,
Jérôme
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
>
> [Port to 4.12 kernel]
> [Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
> [Introduce vm_write_* inline function depending on
> CONFIG_SPECULATIVE_PAGE_FAULT]
> [Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
> using vm_raw_write* functions]
> [Fix a lock dependency warning in mmap_region() when entering the error
> path]
> [Move sequence initialisation to INIT_VMA()]
> [Review the patch description about unmap_page_range()]
> Signed-off-by: Laurent Dufour <ldufour@...ux.ibm.com>
> ---
>  include/linux/mm.h       | 44 ++++++++++++++++++++++++++++++++++++++++
>  include/linux/mm_types.h |  3 +++
>  mm/memory.c              |  2 ++
>  mm/mmap.c                | 30 +++++++++++++++++++++++++++
>  4 files changed, 79 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2ceb1d2869a6..906b9e06f18e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1410,6 +1410,9 @@ struct zap_details {
>  static inline void INIT_VMA(struct vm_area_struct *vma)
>  {
>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	seqcount_init(&vma->vm_sequence);
> +#endif
>  }
>
> struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> @@ -1534,6 +1537,47 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
>  	unmap_mapping_range(mapping, holebegin, holelen, 0);
>  }
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +static inline void vm_write_begin(struct vm_area_struct *vma)
> +{
> +	write_seqcount_begin(&vma->vm_sequence);
> +}
> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
> +					 int subclass)
> +{
> +	write_seqcount_begin_nested(&vma->vm_sequence, subclass);
> +}
> +static inline void vm_write_end(struct vm_area_struct *vma)
> +{
> +	write_seqcount_end(&vma->vm_sequence);
> +}
> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
> +{
> +	raw_write_seqcount_begin(&vma->vm_sequence);
> +}
> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
> +{
> +	raw_write_seqcount_end(&vma->vm_sequence);
> +}
> +#else
> +static inline void vm_write_begin(struct vm_area_struct *vma)
> +{
> +}
> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
> +					 int subclass)
> +{
> +}
> +static inline void vm_write_end(struct vm_area_struct *vma)
> +{
> +}
> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
> +{
> +}
> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
> +{
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
>  extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
>  		void *buf, int len, unsigned int gup_flags);
>  extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
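Side note on the usage: with these helpers, every writer that touches
the fields the speculative fault path reads is expected to bracket the
update, along the lines of (a sketch, with vm_pgoff just as an example):

	vm_write_begin(vma);
	vma->vm_pgoff = new_pgoff;	/* any field the fault path reads */
	vm_write_end(vma);

write_seqcount_begin() makes the count odd for the duration, so a
concurrent speculative fault either sees an odd count and backs off, or
sees the count move and retries.
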
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index fd7d38ee2e33..e78f72eb2576 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -337,6 +337,9 @@ struct vm_area_struct {
>  	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
>  #endif
>  	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	seqcount_t vm_sequence;
> +#endif
>  } __randomize_layout;
>
> struct core_thread {
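For reference, seqcount_t is just an unsigned int, plus a lockdep map
on debug builds, so the new field costs little space:

	typedef struct seqcount {
		unsigned sequence;
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
		struct lockdep_map dep_map;
	#endif
	} seqcount_t;
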
> diff --git a/mm/memory.c b/mm/memory.c
> index d5bebca47d98..423fa8ea0569 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1256,6 +1256,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>  	unsigned long next;
> 
>  	BUG_ON(addr >= end);
> +	vm_write_begin(vma);
>  	tlb_start_vma(tlb, vma);
>  	pgd = pgd_offset(vma->vm_mm, addr);
>  	do {
> @@ -1265,6 +1266,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>  		next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
>  	} while (pgd++, addr = next, addr != end);
>  	tlb_end_vma(tlb, vma);
> +	vm_write_end(vma);
>  }
>
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 5ad3a3228d76..a4e4d52a5148 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -726,6 +726,30 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>  	long adjust_next = 0;
>  	int remove_next = 0;
> 
> +	/*
> +	 * Why use the vm_raw_write*() functions here to avoid lockdep's warning?
> +	 *
> +	 * Lockdep is complaining about a theoretical lock dependency, involving
> +	 * 3 locks:
> +	 *   mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim
> +	 *
> +	 * Here are the major paths leading to this dependency:
> +	 *  1. __vma_adjust()            mmap_sem -> vm_sequence -> i_mmap_rwsem
> +	 *  2. move_vmap()               mmap_sem -> vm_sequence -> fs_reclaim
> +	 *  3. __alloc_pages_nodemask()  fs_reclaim -> i_mmap_rwsem
> +	 *  4. unmap_mapping_range()     i_mmap_rwsem -> vm_sequence
> +	 *
> +	 * So there is no way to solve this easily, especially because in
> +	 * unmap_mapping_range() the i_mmap_rwsem is grabbed while the impacted
> +	 * VMAs are not yet known.
> +	 * However, the way the vm_seq is used is guaranteeing that we will
> +	 * never block on it since we just check for its value and never wait
> +	 * for it to move, see vma_has_changed() and handle_speculative_fault().
> +	 */
> +	vm_raw_write_begin(vma);
> +	if (next)
> +		vm_raw_write_begin(next);
> +
>  	if (next && !insert) {
>  		struct vm_area_struct *exporter = NULL, *importer = NULL;
> 
> @@ -950,6 +974,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
> * "vma->vm_next" gap must be updated.
> */
> next = vma->vm_next;
> + if (next)
> + vm_raw_write_begin(next);
> } else {
> /*
> * For the scope of the comment "next" and
> @@ -996,6 +1022,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>  	if (insert && file)
>  		uprobe_mmap(insert);
> 
> +	if (next && next != vma)
> +		vm_raw_write_end(next);
> +	vm_raw_write_end(vma);
> +
>  	validate_mm(mm);
> 
>  	return 0;
> --
> 2.21.0
>
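One more thought on the comment in __vma_adjust() above: it says the
vm_seq is only ever tested, never waited on (see vma_has_changed()).
Under that rule the check presumably boils down to something like this
sketch (not the actual helper from the series):

	/* sketch: true if a write is in flight or the count moved */
	static bool vma_has_changed_sketch(struct vm_area_struct *vma,
					   unsigned int seq)
	{
		return read_seqcount_retry(&vma->vm_sequence, seq);
	}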