Message-Id: <20210917164756.8586-1-peterx@redhat.com>
Date: Fri, 17 Sep 2021 12:47:53 -0400
From: Peter Xu <peterx@...hat.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>, peterx@...hat.com,
Matthew Wilcox <willy@...radead.org>
Subject: [PATCH 0/3] mm/smaps: Fixes and optimizations on shmem swap handling
This series grows from the patch previously posted here:
[PATCH] mm/smaps: Use vma->vm_pgoff directly when counting partial swap
https://lore.kernel.org/lkml/20210916215839.95177-1-peterx@redhat.com/
Vlastimil reported a bug that is even more important to fix than the cleanup,
so I put it as patch 1 here. There's a test program we can use to verify the
bug before/after the patch. I used the same program to test patches 2 and 3,
because it covers walking shmem swap both in the page cache and in the page
tables.
Patch 2 is the original patch, though with a tiny touchup as Vlastimil
suggested.
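As background for patch 2: for a non-hugetlb mapping, the kernel's
linear_page_index() computes ((address - vma->vm_start) >> PAGE_SHIFT) +
vma->vm_pgoff, so evaluated at address == vma->vm_start it reduces to exactly
vma->vm_pgoff, which is what lets the smaps code use the field directly. A
minimal sketch of that identity (simplified stand-in for the kernel structure,
assuming 4 KiB pages):

```python
# Illustrative model of the kernel's linear_page_index() arithmetic.
PAGE_SHIFT = 12  # assuming 4 KiB pages


class Vma:
    """Simplified stand-in for struct vm_area_struct."""
    def __init__(self, vm_start, vm_end, vm_pgoff):
        self.vm_start = vm_start
        self.vm_end = vm_end
        self.vm_pgoff = vm_pgoff  # file offset of the mapping, in pages


def linear_page_index(vma, address):
    """Page-cache index of 'address', per the non-hugetlb kernel formula."""
    return ((address - vma.vm_start) >> PAGE_SHIFT) + vma.vm_pgoff


vma = Vma(vm_start=0x7F0000000000, vm_end=0x7F0000010000, vm_pgoff=5)

# At vm_start the index is exactly vm_pgoff, so no recomputation is needed.
assert linear_page_index(vma, vma.vm_start) == vma.vm_pgoff
```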
Patch 3 is a further cleanup of the shmem swap logic, hopefully making it even
cleaner.
Please review, thanks.
Peter Xu (3):
mm/smaps: Fix shmem pte hole swap calculation
mm/smaps: Use vma->vm_pgoff directly when counting partial swap
mm/smaps: Simplify shmem handling of pte holes
fs/proc/task_mmu.c | 28 ++++++++++++++++------------
mm/shmem.c | 5 ++---
2 files changed, 18 insertions(+), 15 deletions(-)
--
2.31.1