Message-Id: <20211115134951.85286-2-peterx@redhat.com>
Date:   Mon, 15 Nov 2021 21:49:50 +0800
From:   Peter Xu <peterx@...hat.com>
To:     linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc:     David Hildenbrand <david@...hat.com>, peterx@...hat.com,
        Andrea Arcangeli <aarcange@...hat.com>,
        Yang Shi <shy828301@...il.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Hugh Dickins <hughd@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Alistair Popple <apopple@...dia.com>,
        "Kirill A . Shutemov" <kirill@...temov.name>
Subject: [PATCH RFC v2 1/2] mm: Don't skip swap entry even if zap_details specified

This check has existed since the first git commit of the Linux repository, but
at that time there was no page migration yet, so I think it was okay.

With page migration enabled, it should logically be possible that we zap some
shmem pages during migration.  When that happens, IIUC the old code could get
the MM_SHMEMPAGES RSS counter accounted wrongly, because we would zap the ptes
without decreasing the counters for the migrating entries.  I have no unit test
to prove it, though, as I don't know an easy way to trigger this condition.
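
For reference, here is a condensed view of the pre-patch swap-entry path in
zap_pte_range(), simplified from the hunk below (not a verbatim copy); note
how any non-NULL zap_details bails out before the migration-entry accounting
is ever reached:

    entry = pte_to_swp_entry(ptent);
    /* ... device private/exclusive entries already handled above ... */

    /* Pre-patch: any non-NULL details skips the rest of the loop body */
    if (unlikely(details))
            continue;

    if (!non_swap_entry(entry))
            rss[MM_SWAPENTS]--;             /* real swap entry */
    else if (is_migration_entry(entry)) {
            struct page *page = pfn_swap_entry_to_page(entry);

            /* never reached when details != NULL */
            rss[mm_counter(page)]--;
    }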

Besides, the optimization itself is IMHO already confusing in a few ways:

  - The wording "skip swap entries" is misleading, because we do not skip
    all swap entries - device private/exclusive pages are handled before
    that point.

  - The skip behavior is enabled as long as a zap_details pointer is passed
    in.  That is very hard for a new zap caller to figure out, because it is
    unclear why swap entries should be skipped whenever zap_details is
    specified.

  - On modern systems, especially for performance-critical use cases, swap
    entries should be rare, so I doubt the usefulness of this optimization;
    it sits on a slow path anyway.

  - It is not aligned with what we do for huge pmd swap entries, where
    zap_huge_pmd() does the accounting unconditionally (see the sketch right
    after this list).
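
For comparison, the non-present (pmd migration entry) path in zap_huge_pmd()
does the accounting unconditionally.  Roughly, as a simplified sketch from
memory rather than a verbatim copy of the current code:

    swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
    struct page *page = pfn_swap_entry_to_page(entry);

    /* No zap_details-based skipping here: the counter is always adjusted */
    if (PageAnon(page))
            add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
    else
            add_mm_counter(tlb->mm, mm_counter_file(page), -HPAGE_PMD_NR);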

This patch drops that trick, so we handle swap ptes coherently.  Meanwhile,
the same mapping check is now applied to migration entries as well.
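
For context, the zap_skip_check_mapping() helper used in the hunk below
behaves roughly as sketched here (the exact field name may differ depending
on the base tree; treat this as an illustration, not the authoritative
definition): it only skips a page when the caller asked to zap one specific
mapping and the page belongs to a different one.

    static inline bool
    zap_skip_check_mapping(struct zap_details *details, struct page *page)
    {
            if (!details || !page)
                    return false;

            /* Skip iff a target mapping was given and this page isn't in it */
            return details->zap_mapping &&
                   (details->zap_mapping != page_rmapping(page));
    }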

Signed-off-by: Peter Xu <peterx@...hat.com>
---
 mm/memory.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8f1de811a1dc..e454f3c6aeb9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1382,16 +1382,14 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			continue;
 		}
 
-		/* If details->check_mapping, we leave swap entries. */
-		if (unlikely(details))
-			continue;
-
 		if (!non_swap_entry(entry))
 			rss[MM_SWAPENTS]--;
 		else if (is_migration_entry(entry)) {
 			struct page *page;
 
 			page = pfn_swap_entry_to_page(entry);
+			if (unlikely(zap_skip_check_mapping(details, page)))
+				continue;
 			rss[mm_counter(page)]--;
 		}
 		if (unlikely(!free_swap_and_cache(entry)))
-- 
2.32.0
