Date:   Tue, 25 Oct 2022 16:37:51 -0400
From:   Rik van Riel <riel@...riel.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Chris Mason <clm@...a.com>, David Hildenbrand <david@...hat.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        kernel-team@...a.com, Andrew Morton <akpm@...ux-foundation.org>
Subject: [BUG] hugetlb_no_page vs MADV_DONTNEED race leading to SIGBUS

Hi Mike,

After initially getting promising results, we discovered that
there is yet another bug left in hugetlbfs MADV_DONTNEED.

This one involves a page fault on a hugetlbfs address, while
another thread in the same process is in the middle of MADV_DONTNEED
on that same memory address.

The code in __unmap_hugepage_range() will clear the page table
entry, and then at some point later the lazy TLB code will 
actually free the huge page back into the hugetlbfs free page
pool.

Meanwhile, hugetlb_no_page() will call alloc_huge_page(), and
that allocation will fail because the task that called
__unmap_hugepage_range() has not actually returned the page to
the free list yet; the page is still sitting in its mmu gather
batch.

The result is that the process gets killed with SIGBUS.
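
For illustration, here is a minimal reproducer sketch of the
racing pattern (hypothetical code, not the workload we hit this
with; it assumes 2MB huge pages, nr_hugepages set to 1 so the
pool has no slack, and a kernel that accepts MADV_DONTNEED on
hugetlb VMAs). Build with -pthread; the toucher thread eventually
dies with SIGBUS when it loses the race:

/*
 * Hypothetical reproducer sketch. One thread keeps faulting a
 * huge page in while another keeps zapping it with MADV_DONTNEED.
 * With no slack in the hugetlb pool, the fault path can lose the
 * race described above and the process gets SIGBUS.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL << 20)	/* assumes 2MB huge pages */

static char *map;

static void *toucher(void *arg)
{
	for (;;)
		*(volatile char *)map = 1;	/* fault the page in */
	return NULL;
}

static void *dropper(void *arg)
{
	for (;;)
		madvise(map, HPAGE_SIZE, MADV_DONTNEED);  /* zap it */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	map = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	pthread_create(&t1, NULL, toucher, NULL);
	pthread_create(&t2, NULL, dropper, NULL);
	pause();	/* run until the toucher gets SIGBUS */
	return 0;
}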

I have thought of a few different solutions to this problem, but
none of them look good:
- Make MADV_DONTNEED take a write lock on mmap_sem, to exclude
  page faults. This could make MADV_DONTNEED on VMAs with 4kB
  pages unacceptably slow.
- Some sort of atomic counter, kept by __unmap_hugepage_range(),
  indicating that huge pages may be sitting in the tlb gather,
  to be freed later by tlb_finish_mmu().  This would involve
  changes to the MMU gather code, outside of hugetlbfs.
- Some sort of generation counter that tracks tlb_gather_mmu
  cycles in progress, with the alloc_huge_page failure path
  waiting for all mmu gather operations that started before
  it to finish, before retrying the allocation (a rough sketch
  follows below). This also requires changes to the generic
  code, outside of hugetlbfs.
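
To make the third option more concrete, here is a rough sketch
(hypothetical names, simplified to a plain in-progress count
rather than a real generation counter):

static atomic_t mmu_gather_in_progress = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(mmu_gather_wq);

void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm)
{
	atomic_inc(&mmu_gather_in_progress);
	/* ... existing initialization ... */
}

void tlb_finish_mmu(struct mmu_gather *tlb)
{
	/* ... existing TLB flush and page freeing ... */
	if (atomic_dec_and_test(&mmu_gather_in_progress))
		wake_up_all(&mmu_gather_wq);
}

/* in the hugetlb_no_page() allocation failure path */
page = alloc_huge_page(vma, haddr, 0);
if (IS_ERR(page)) {
	/*
	 * Wait for concurrent unmaps to return their huge pages
	 * to the pool, then retry once.  A real implementation
	 * would wait only for gathers older than the failure, so
	 * a steady stream of new unmaps cannot stall the fault.
	 */
	wait_event(mmu_gather_wq,
		   !atomic_read(&mmu_gather_in_progress));
	page = alloc_huge_page(vma, haddr, 0);
}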

What are the reasonable alternatives here?

Should we see if anybody can come up with a simple solution
to the problem, or would it be better to just disable
MADV_DONTNEED on hugetlbfs for now?

-- 
All Rights Reversed.
