Open Source and information security mailing list archives
Message-Id: <44849c10-bc67-b55e-5788-d3c6bb5e7ad1@linux.vnet.ibm.com>
Date: Wed, 13 Sep 2017 18:56:25 +0200
From: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: paulmck@...ux.vnet.ibm.com, peterz@...radead.org,
	akpm@...ux-foundation.org, kirill@...temov.name, ak@...ux.intel.com,
	mhocko@...nel.org, dave@...olabs.net, jack@...e.cz,
	Matthew Wilcox <willy@...radead.org>, benh@...nel.crashing.org,
	mpe@...erman.id.au, paulus@...ba.org,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	hpa@...or.com, Will Deacon <will.deacon@....com>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	haren@...ux.vnet.ibm.com, khandual@...ux.vnet.ibm.com,
	npiggin@...il.com, bsingharora@...il.com,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	linuxppc-dev@...ts.ozlabs.org, x86@...nel.org
Subject: Re: [PATCH v3 04/20] mm: VMA sequence count

Hi Sergey,

On 13/09/2017 13:53, Sergey Senozhatsky wrote:
> Hi,
> 
> On (09/08/17 20:06), Laurent Dufour wrote:
> [..]
>> @@ -903,6 +910,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  		mm->map_count--;
>>  		mpol_put(vma_policy(next));
>>  		kmem_cache_free(vm_area_cachep, next);
>> +		write_seqcount_end(&next->vm_sequence);
>>  		/*
>>  		 * In mprotect's case 6 (see comments on vma_merge),
>>  		 * we must remove another next too. It would clutter
>> @@ -932,11 +940,14 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  		if (remove_next == 2) {
>>  			remove_next = 1;
>>  			end = next->vm_end;
>> +			write_seqcount_end(&vma->vm_sequence);
>>  			goto again;
>> -		}
>> -		else if (next)
>> +		} else if (next) {
>> +			if (next != vma)
>> +				write_seqcount_begin_nested(&next->vm_sequence,
>> +							    SINGLE_DEPTH_NESTING);
>>  			vma_gap_update(next);
>> -		else {
>> +		} else {
>>  			/*
>>  			 * If remove_next == 2 we obviously can't
>>  			 * reach this path.
>> @@ -962,6 +973,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  	if (insert && file)
>>  		uprobe_mmap(insert);
>>  
>> +	if (next && next != vma)
>> +		write_seqcount_end(&next->vm_sequence);
>> +	write_seqcount_end(&vma->vm_sequence);
> 
> ok, so what I got on my box is:
> 
> vm_munmap()  -> down_write_killable(&mm->mmap_sem)
>  do_munmap()
>   __split_vma()
>    __vma_adjust()  -> write_seqcount_begin(&vma->vm_sequence)
>                    -> write_seqcount_begin_nested(&next->vm_sequence,
>                                                   SINGLE_DEPTH_NESTING)
> 
> so this gives 3 dependencies:  ->mmap_sem -> ->vm_seq
>                                ->vm_seq   -> ->vm_seq/1
>                                ->mmap_sem -> ->vm_seq/1
> 
> SyS_mremap() -> down_write_killable(&current->mm->mmap_sem)
>  move_vma()  -> write_seqcount_begin(&vma->vm_sequence)
>              -> write_seqcount_begin_nested(&new_vma->vm_sequence,
>                                             SINGLE_DEPTH_NESTING)
>   move_page_tables()
>    __pte_alloc()
>     pte_alloc_one()
>      __alloc_pages_nodemask()
>       fs_reclaim_acquire()
> 
> I think here we have a prepare_alloc_pages() call, that does
> 
>  -> fs_reclaim_acquire(gfp_mask)
>  -> fs_reclaim_release(gfp_mask)
> 
> so that adds one more dependency:  ->mmap_sem -> ->vm_seq   -> fs_reclaim
>                                    ->mmap_sem -> ->vm_seq/1 -> fs_reclaim
> 
> now, under memory pressure we hit the slow path and perform direct
> reclaim. direct reclaim is done under the fs_reclaim lock, so we end up
> with the following call chain:
> 
> __alloc_pages_nodemask()
>  __alloc_pages_slowpath()
>   __perform_reclaim()  -> fs_reclaim_acquire(gfp_mask)
>    try_to_free_pages()
>     shrink_node()
>      shrink_active_list()
>       rmap_walk_file()  -> i_mmap_lock_read(mapping)
> 
> and this breaks the existing dependency, since we now take the leaf lock
> (fs_reclaim) first and then the root lock (->mmap_sem).

Thanks for looking at this.

I'm sorry, I must have missed something.

My understanding is that there are 3 chains of locks:
 1. from __vma_adjust():           mmap_sem -> i_mmap_rwsem -> vm_seq
 2. from move_vma():               mmap_sem -> vm_seq -> fs_reclaim
 3. from __alloc_pages_nodemask(): fs_reclaim -> i_mmap_rwsem

So the solution would be to have in __vma_adjust():

	mmap_sem -> vm_seq -> i_mmap_rwsem

But this will raise the following dependency from unmap_mapping_range():

unmap_mapping_range()		-> i_mmap_rwsem
 unmap_mapping_range_tree()
  unmap_mapping_range_vma()
   zap_page_range_single()
    unmap_single_vma()
     unmap_page_range()		-> vm_seq

And there is no easy way to get rid of it, as in unmap_mapping_range()
no VMA has been identified yet.

That being said, I can't see any clear way to get the lock dependencies
cleaned up here.

Furthermore, it is not clear to me how a deadlock could happen, as
vm_seq is a sequence lock and there is no way to get blocked here.

Cheers,
Laurent.

> well, seems to be the case.
> 
> -ss