Message-ID: <787e7ac7-917c-71eb-8050-a01f6a96a4cc@redhat.com>
Date:   Sat, 19 Mar 2022 11:50:45 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Yang Shi <shy828301@...il.com>
Cc:     linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Hugh Dickins <hughd@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        David Rientjes <rientjes@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        John Hubbard <jhubbard@...dia.com>,
        Jason Gunthorpe <jgg@...dia.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Matthew Wilcox <willy@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>, Jann Horn <jannh@...gle.com>,
        Michal Hocko <mhocko@...nel.org>,
        Nadav Amit <namit@...are.com>, Rik van Riel <riel@...riel.com>,
        Roman Gushchin <guro@...com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Peter Xu <peterx@...hat.com>,
        Donald Dutile <ddutile@...hat.com>,
        Christoph Hellwig <hch@....de>,
        Oleg Nesterov <oleg@...hat.com>, Jan Kara <jack@...e.cz>,
        Liang Zhang <zhangliang5@...wei.com>,
        Pedro Gomes <pedrodemargomes@...il.com>,
        Oded Gabbay <oded.gabbay@...il.com>, linux-mm@...ck.org
Subject: Re: [PATCH v2 11/15] mm: remember exclusively mapped anonymous pages
 with PG_anon_exclusive

On 19.03.22 11:21, David Hildenbrand wrote:
> On 18.03.22 21:29, Yang Shi wrote:
>> On Thu, Mar 17, 2022 at 2:06 AM David Hildenbrand <david@...hat.com> wrote:
>>>
>>> On 16.03.22 22:23, Yang Shi wrote:
>>>> On Tue, Mar 15, 2022 at 3:52 AM David Hildenbrand <david@...hat.com> wrote:
>>>>>
>>>>> Let's mark exclusively mapped anonymous pages with PG_anon_exclusive as
>>>>> exclusive, and use that information to make GUP pins reliable and stay
>>>>> consistent with the page mapped into the page table even if the
>>>>> page table entry gets write-protected.
>>>>>
>>>>> With that information at hand, we can extend our COW logic to always
>>>>> reuse anonymous pages that are exclusive. For anonymous pages that
>>>>> might be shared, the existing logic applies.
>>>>>
>>>>> As already documented, PG_anon_exclusive is usually only expressive in
>>>>> combination with a page table entry. Especially PTE vs. PMD-mapped
>>>>> anonymous pages require more thought; some examples: due to mremap() we
>>>>> can easily have a single compound page PTE-mapped into multiple page tables
>>>>> exclusively in a single process -- multiple page table locks apply.
>>>>> Further, due to MADV_WIPEONFORK we might not necessarily write-protect
>>>>> all PTEs, and only some subpages might be pinned. Long story short: once
>>>>> PTE-mapped, we have to track information about exclusivity per sub-page,
>>>>> but until then, we can just track it for the compound page in the head
>>>>> page and not have to update a whole bunch of subpages all of the time
>>>>> for a simple PMD mapping of a THP.
>>>>>
>>>>> For simplicity, this commit mostly talks about "anonymous pages", while
>>>>> it's for THP actually "the part of an anonymous folio referenced via
>>>>> a page table entry".
>>>>>
>>>>> To not spill PG_anon_exclusive code all over the mm code-base, we let
>>>>> the anon rmap code handle all the PG_anon_exclusive logic it can easily
>>>>> handle.
>>>>>
>>>>> If a writable, present page table entry points at an anonymous (sub)page,
>>>>> that (sub)page must be PG_anon_exclusive. If GUP wants to take a reliable
>>>>> pin (FOLL_PIN) on an anonymous page referenced via a present
>>>>> page table entry, it must only pin if PG_anon_exclusive is set for the
>>>>> mapped (sub)page.
>>>>>
>>>>> This commit doesn't adjust GUP, so this is only implicitly handled for
>>>>> FOLL_WRITE; follow-up commits will teach GUP to also respect it for
>>>>> FOLL_PIN without FOLL_WRITE, to make all GUP pins of anonymous pages
>>>>> fully reliable.
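>>>>>
>>>>> Conceptually (just a sketch of where the follow-ups are heading, not
>>>>> code in this patch; the actual error handling differs), that GUP-side
>>>>> check looks like:
>>>>>
>>>>> 	/* Only FOLL_PIN an anon page if it is marked exclusive. */
>>>>> 	if ((flags & FOLL_PIN) && PageAnon(page) &&
>>>>> 	    !PageAnonExclusive(page))
>>>>> 		return -EBUSY;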
>>>>>
>>>>> Whenever an anonymous page is to be shared (fork(), KSM), or when
>>>>> temporarily unmapping an anonymous page (swap, migration), the relevant
>>>>> PG_anon_exclusive bit has to be cleared to mark the anonymous page
>>>>> possibly shared. Clearing will fail if there are GUP pins on the page:
>>>>> * For fork(), this means having to copy the page and not being able to
>>>>>   share it. fork() protects against concurrent GUP using the PT lock and
>>>>>   the src_mm->write_protect_seq.
>>>>> * For KSM, this means sharing will fail. For swap, this means unmapping
>>>>>   will fail. For migration, this means migration will fail early. All
>>>>>   three cases protect against concurrent GUP using the PT lock and a
>>>>>   proper clear/invalidate+flush of the relevant page table entry.
>>>>>
>>>>> This fixes memory corruptions reported for FOLL_PIN | FOLL_WRITE, when a
>>>>> pinned page gets mapped R/O and the subsequent write fault ends up
>>>>> replacing the page instead of reusing it. It improves the situation for
>>>>> O_DIRECT/vmsplice/... that still use FOLL_GET instead of FOLL_PIN,
>>>>> if fork() is *not* involved; however, swapout and fork() are still
>>>>> problematic. Properly using FOLL_PIN instead of FOLL_GET for these
>>>>> GUP users will fix the issue for them.
>>>>>
>>>>> I. Details about basic handling
>>>>>
>>>>> I.1. Fresh anonymous pages
>>>>>
>>>>> page_add_new_anon_rmap() and hugepage_add_new_anon_rmap() will mark the
>>>>> given page exclusive via __page_set_anon_rmap(exclusive=1). As that is
>>>>> the mechanism by which fresh anonymous pages come into existence (besides
>>>>> the migration code, where we copy page->mapping), all fresh anonymous
>>>>> pages will start out as exclusive.
>>>>>
>>>>> I.2. COW reuse handling of anonymous pages
>>>>>
>>>>> When a COW handler stumbles over a (sub)page that's marked exclusive, it
>>>>> simply reuses it. Otherwise, the handler tries harder under page lock to
>>>>> detect if the (sub)page is exclusive and can be reused. If exclusive,
>>>>> page_move_anon_rmap() will mark the given (sub)page exclusive.
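>>>>>
>>>>> In the write-fault path this is essentially (simplified sketch, slow
>>>>> path and error handling omitted):
>>>>>
>>>>> 	if (PageAnonExclusive(page))
>>>>> 		/* Certainly exclusive: reuse without further checks. */
>>>>> 		return wp_page_reuse(vmf);
>>>>> 	/*
>>>>> 	 * Maybe shared: recheck under the page lock; if it turns out to
>>>>> 	 * be exclusive after all, page_move_anon_rmap() marks it so.
>>>>> 	 */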
>>>>>
>>>>> Note that hugetlb code does not yet check for PageAnonExclusive(), as it
>>>>> still uses the old COW logic that is prone to the COW security issue
>>>>> because hugetlb code cannot really tolerate unnecessary/wrong COW as
>>>>> huge pages are a scarce resource.
>>>>>
>>>>> I.3. Migration handling
>>>>>
>>>>> try_to_migrate() has to try marking an exclusive anonymous page shared
>>>>> via page_try_share_anon_rmap(). If it fails because there are GUP pins
>>>>> on the page, unmap fails. migrate_vma_collect_pmd() and
>>>>> __split_huge_pmd_locked() are handled similarly.
>>>>>
>>>>> Writable migration entries implicitly point at shared anonymous pages.
>>>>> For readable migration entries that information is stored via a new
>>>>> "readable-exclusive" migration entry, specific to anonymous pages.
>>>>>
>>>>> When restoring a migration entry in remove_migration_pte(), information
>>>>> about exclusivity is detected via the migration entry type, and
>>>>> RMAP_EXCLUSIVE is set accordingly for
>>>>> page_add_anon_rmap()/hugepage_add_anon_rmap() to restore that
>>>>> information.
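>>>>>
>>>>> A rough sketch of that restore step (helper and flag names as used in
>>>>> this series, quoted from memory):
>>>>>
>>>>> 	rmap_t rmap_flags = RMAP_NONE;
>>>>>
>>>>> 	/* Writable and readable-exclusive entries imply exclusivity. */
>>>>> 	if (PageAnon(new) && !is_readable_migration_entry(entry))
>>>>> 		rmap_flags |= RMAP_EXCLUSIVE;
>>>>> 	page_add_anon_rmap(new, vma, pvmw.address, rmap_flags);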
>>>>>
>>>>> I.4. Swapout handling
>>>>>
>>>>> try_to_unmap() has to try marking the mapped page possibly shared via
>>>>> page_try_share_anon_rmap(). If it fails because there are GUP pins on the
>>>>> page, unmap fails. For now, information about exclusivity is lost. In the
>>>>> future, we might want to remember that information in the swap entry in
>>>>> some cases; however, it requires more thought, care, and a way to store
>>>>> that information in swap entries.
>>>>>
>>>>> I.5. Swapin handling
>>>>>
>>>>> do_swap_page() will never stumble over exclusive anonymous pages in the
>>>>> swap cache, as try_to_migrate() prohibits that. do_swap_page() always has
>>>>> to detect manually if an anonymous page is exclusive and has to set
>>>>> RMAP_EXCLUSIVE for page_add_anon_rmap() accordingly.
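>>>>>
>>>>> So the swapin side ends up doing something along these lines (sketch
>>>>> only; how "exclusive" gets determined is the interesting part and not
>>>>> shown here):
>>>>>
>>>>> 	rmap_t rmap_flags = RMAP_NONE;
>>>>>
>>>>> 	if (exclusive)
>>>>> 		rmap_flags |= RMAP_EXCLUSIVE;
>>>>> 	page_add_anon_rmap(page, vma, vmf->address, rmap_flags);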
>>>>>
>>>>> I.6. THP handling
>>>>>
>>>>> __split_huge_pmd_locked() has to move the information about exclusivity
>>>>> from the PMD to the PTEs.
>>>>>
>>>>> a) In case we have a readable-exclusive PMD migration entry, simply insert
>>>>> readable-exclusive PTE migration entries.
>>>>>
>>>>> b) In case we have a present PMD entry and we don't want to freeze
>>>>> ("convert to migration entries"), simply forward PG_anon_exclusive to
>>>>> all sub-pages, no need to temporarily clear the bit.
>>>>>
>>>>> c) In case we have a present PMD entry and want to freeze, handle it
>>>>> similar to try_to_migrate(): try marking the page shared first. In case
>>>>> we fail, we ignore the "freeze" instruction and simply split ordinarily.
>>>>> try_to_migrate() will properly fail because the THP is still mapped via
>>>>> PTEs.
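>>>>>
>>>>> For case c) this boils down to (sketch):
>>>>>
>>>>> 	if (freeze && anon_exclusive &&
>>>>> 	    page_try_share_anon_rmap(page))
>>>>> 		/* GUP pins: ignore the freeze request, split ordinarily. */
>>>>> 		freeze = false;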
>>>
>>> Hi,
>>>
>>> thanks for the review!
>>>
>>>>
>>>> How come try_to_migrate() will fail? The subsequent pvmw walk will find
>>>> those PTEs and then convert them to migration entries anyway, IIUC.
>>>>
>>>
>>> It will run into that code:
>>>
>>>>> @@ -1903,6 +1938,15 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
>>>>>                                 page_vma_mapped_walk_done(&pvmw);
>>>>>                                 break;
>>>>>                         }
>>>>> +                       VM_BUG_ON_PAGE(pte_write(pteval) && PageAnon(page) &&
>>>>> +                                      !anon_exclusive, page);
>>>>> +                       if (anon_exclusive &&
>>>>> +                           page_try_share_anon_rmap(subpage)) {
>>>>> +                               set_pte_at(mm, address, pvmw.pte, pteval);
>>>>> +                               ret = false;
>>>>> +                               page_vma_mapped_walk_done(&pvmw);
>>>>> +                               break;
>>>>> +                       }
>>>
>>> and similarly fail the page_try_share_anon_rmap(), at which point
>>> try_to_migrate() stops and the caller will still observe a
>>> "page_mapped() == true".
>>
>> Thanks, I missed that. Yes, the page will still be mapped. This should
>> trigger the VM_WARN_ON_ONCE in unmap_page(). If this change makes this
>> happen more often, we may consider removing that warning even though it
>> is "once", since seeing a mapped page may become a normal case (once DIO
>> is switched to FOLL_PIN, it may happen more often). Anyway, we don't
>> have to remove it right now.
> 
> Oh, very good catch! I wasn't able to trigger that warning in my testing
> so far. Interestingly, arch_unmap_one() could theoretically make this
> fail already and trigger the warning.
> 
> Apart from that warning, split_huge_page_to_list() should work as
> expected: freezing the refcount will fail if still mapped and we'll remap.
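> 
> Roughly (simplified, from memory), the relevant part of
> split_huge_page_to_list() is:
> 
> 	unmap_page(head);
> 	...
> 	if (page_ref_freeze(head, 1 + extra_pins)) {
> 		/* fully unmapped and not pinned: actually split */
> 	} else {
> 		/* still mapped or pinned: remap_page() and return -EBUSY */
> 	}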
> 
> I'll include a separate patch to just remove that VM_WARN_ON_ONCE -- thanks!
> 

From e6e983d841cd2aa2a9c8dc71779211881cf0d96f Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@...hat.com>
Date: Sat, 19 Mar 2022 11:49:39 +0100
Subject: [PATCH] mm/huge_memory: remove outdated VM_WARN_ON_ONCE_PAGE from
 unmap_page()

We can already theoretically fail to unmap (leaving page_mapped() == true) in
case arch_unmap_one() fails. Although this applies only to anonymous pages
for now, get rid of the VM_WARN_ON_ONCE_PAGE() completely: the caller --
split_huge_page_to_list() -- will fail to freeze the refcount and
remap the page via remap_page(). So the caller can handle unmap errors
just fine.

This is a preparation for making try_to_migrate() fail on anonymous pages
with GUP pins.

Reported-by: Yang Shi <shy828301@...il.com>
Signed-off-by: David Hildenbrand <david@...hat.com>
---
 mm/huge_memory.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0b6fb409b9e4..0fe0ab3ec3fc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2263,8 +2263,6 @@ static void unmap_page(struct page *page)
 		try_to_migrate(page, ttu_flags);
 	else
 		try_to_unmap(page, ttu_flags | TTU_IGNORE_MLOCK);
-
-	VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
 }
 
 static void remap_page(struct page *page, unsigned int nr)
-- 
2.35.1


-- 
Thanks,

David / dhildenb
