Date:   Wed, 11 Oct 2023 10:59:50 -0400
From:   "Liam R. Howlett" <Liam.Howlett@...cle.com>
To:     Peng Zhang <zhangpeng.00@...edance.com>
Cc:     corbet@....net, akpm@...ux-foundation.org, willy@...radead.org,
        brauner@...nel.org, surenb@...gle.com, michael.christie@...cle.com,
        mjguzik@...il.com, mathieu.desnoyers@...icios.com,
        npiggin@...il.com, peterz@...radead.org, oliver.sang@...el.com,
        mst@...hat.com, maple-tree@...ts.infradead.org, linux-mm@...ck.org,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v4 10/10] fork: Use __mt_dup() to duplicate maple tree in
 dup_mmap()

* Peng Zhang <zhangpeng.00@...edance.com> [231011 03:00]:
> 
> 
> > On 2023/10/11 09:28, Liam R. Howlett wrote:
...
> > 
> > > +	unmap_region(mm, &vmi.mas, vma, NULL, NULL, 0, tree_end, tree_end, true);
> > > +
> > 
> > I really don't like having to modify unmap_region() and free_pgtables()
> > for a rare error case.  Looking into the issue, you are correct in the
> > rounding that is happening in free_pgd_range() and this alignment to
> > avoid "unnecessary work" is causing us issues.  However, if we open code
> > it a lot like what exit_mmap() does, we can avoid changing these
> > functions:
> > 
> > +       lru_add_drain();
> > +       tlb_gather_mmu(&tlb, mm);
> > +       update_hiwater_rss(mm);
> > +       unmap_vmas(&tlb, &vmi.mas, vma, 0, tree_end, tree_end, true);
> > +       vma_iter_set(&vmi, vma->vm_end);
> > +       free_pgtables(&tlb, &vmi.mas, vma, FIRST_USER_ADDRESS, vma_end->vm_start,
> > +                     true);
> > +       free_pgd_range(&tlb, vma->vm_start, vma_end->vm_start,
> > +                      FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> I think both approaches are valid. If you feel that this method is better,
> I can make the necessary changes accordingly. However, take a look at the
> following code:
> 
> if (is_vm_hugetlb_page(vma)) {
> 	hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
> 		floor, next ? next->vm_start : ceiling);
> }
> 
> In free_pgtables(), there is also a possibility that
> hugetlb_free_pgd_range() is used to free the page tables. If we add an
> extra call to free_pgd_range() instead of hugetlb_free_pgd_range(),
> I'm not sure whether that could cause problems.

Okay.  It is safe for the general case, but I've no idea about powerpc
and other variants.  After looking at the ppc stuff, I don't think it's
safe (for our sanity) to proceed with my plan.

I think we go back to your v2 attempt at this and store XA_ZERO, then
modify unmap_vmas(), free_pgtables(), and the (already done in v2) exit
path loop.  Then we just let the normal failure path be taken in
exit_mmap().  Sorry for going back on this, but there's no tidy way to
proceed.


From your v2 [1]:
+			if (unlikely(mas_is_err(&vmi.mas))) {
+				retval = xa_err(vmi.mas.node);
+				mas_reset(&vmi.mas);
+				if (mas_find(&vmi.mas, ULONG_MAX))
+					mas_store(&vmi.mas, XA_ZERO_ENTRY);
+				goto loop_out;
+			}

You can do this instead:
+			if (unlikely(mas_is_err(&vmi.mas))) {
+				retval = xa_err(vmi.mas.node);
+				mas_set_range(&vmi.mas, mpnt->vm_start,
+					      mpnt->vm_end - 1);
+				mas_store(&vmi.mas, XA_ZERO_ENTRY);
+				goto loop_out;
+			}

We'll have to be careful that the first VMA isn't XA_ZERO in the two
functions as well, but I think that's better than having seven arguments
to free_pgtables(), with the last two being the same in all but one
case, and/or our own cleanup path for exit.  Even with a wrapping
function, this is too messy.

[1]. https://lore.kernel.org/lkml/20230830125654.21257-7-zhangpeng.00@bytedance.com/

Thanks,
Liam
