Date:   Mon, 25 Sep 2023 20:46:52 -0400
From:   Rik van Riel <riel@...riel.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     linux-kernel@...r.kernel.org, kernel-team@...a.com,
        linux-mm@...ck.org, akpm@...ux-foundation.org,
        muchun.song@...ux.dev, leit@...a.com, willy@...radead.org
Subject: Re: [PATCH 2/3] hugetlbfs: close race between MADV_DONTNEED and
 page fault

On Mon, 2023-09-25 at 15:25 -0700, Mike Kravetz wrote:
> On 09/25/23 16:28, riel@...riel.com wrote:
> > 
> > -void __unmap_hugepage_range_final(struct mmu_gather *tlb,
> > -                         struct vm_area_struct *vma, unsigned long start,
> > -                         unsigned long end, struct page *ref_page,
> > -                         zap_flags_t zap_flags)
> > +void __hugetlb_zap_begin(struct vm_area_struct *vma,
> > +                        unsigned long *start, unsigned long *end)
> >  {
> > +       adjust_range_if_pmd_sharing_possible(vma, start, end);
> >         hugetlb_vma_lock_write(vma);
> >         i_mmap_lock_write(vma->vm_file->f_mapping);
> > +}
> 
> __unmap_hugepage_range_final() was called from unmap_single_vma.
> unmap_single_vma has two callers, zap_page_range_single and
> unmap_vmas.
> 
> When the locking was moved into hugetlb_zap_begin, it was only added
> to the zap_page_range_single call path.  Calls from unmap_vmas are
> missing the locking.

Oh, that's a fun one.

I suppose taking the f_mapping lock, and calling
adjust_range_if_pmd_sharing_possible, still matters for the call
from unmap_vmas, while the call to hugetlb_vma_lock_write
really doesn't, since unmap_vmas is called with the mmap_sem
held for write, which already excludes page faults.
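
To spell that out against the hunk quoted above (this is just my
annotation of the v3 code, not a new version of the patch), here is
how I read the three operations in __hugetlb_zap_begin against the
two call paths:

void __hugetlb_zap_begin(struct vm_area_struct *vma,
			 unsigned long *start, unsigned long *end)
{
	/*
	 * Matters for the unmap_vmas path too: the range has to be
	 * widened to cover any area where PMD sharing is possible.
	 */
	adjust_range_if_pmd_sharing_possible(vma, start, end);

	/*
	 * Only really needed on the zap_page_range_single
	 * (MADV_DONTNEED) path; unmap_vmas runs with the mmap_sem
	 * held for write, which already keeps page faults out, so
	 * taking this there would be redundant but harmless.
	 */
	hugetlb_vma_lock_write(vma);

	/* Matters for the unmap_vmas path too. */
	i_mmap_lock_write(vma->vm_file->f_mapping);
}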

I'll add the call there for v4.
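
Concretely, I expect it to end up looking something like this in the
unmap_vmas() loop in mm/memory.c (untested sketch from memory, assuming
the hugetlb_zap_end() counterpart from this same series; the exact
unmap_single_vma() arguments and loop details may differ in v4):

	do {
		unsigned long start = start_addr;
		unsigned long end = end_addr;

		/* Same begin/end bracketing zap_page_range_single() got. */
		hugetlb_zap_begin(vma, &start, &end);
		unmap_single_vma(tlb, vma, start, end, &details);
		hugetlb_zap_end(vma, &details);
	} while ((vma = mas_find(mas, end_addr - 1)) != NULL);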

Good catch.

-- 
All Rights Reversed.
