Message-Id: <20200910183318.20139-5-willy@infradead.org>
Date: Thu, 10 Sep 2020 19:33:14 +0100
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: linux-mm@...ck.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
William Kucharski <william.kucharski@...cle.com>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Alexey Dobriyan <adobriyan@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Chris Wilson <chris@...is-wilson.co.uk>,
Matthew Auld <matthew.auld@...el.com>,
Huang Ying <ying.huang@...el.com>,
intel-gfx@...ts.freedesktop.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v2 4/8] proc: Optimise smaps for shmem entries

Avoid bumping the refcount on pages when we're only interested in the
swap entries.
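
For reference, a sketch of the lookup before and after this change,
pulled out of the smaps code. It is illustrative only (the helper names
are invented), but the calls are the same ones used in the hunk below.

#include <linux/pagemap.h>	/* find_get_entry(), linear_page_index() */
#include <linux/xarray.h>	/* xa_load(), xa_is_value() */

/* Before: find_get_entry() takes a reference on any real page it finds,
 * which the caller then has to drop again with put_page(). */
static unsigned long shmem_swap_size_old(struct vm_area_struct *vma,
					 unsigned long addr)
{
	struct page *page = find_get_entry(vma->vm_file->f_mapping,
					   linear_page_index(vma, addr));

	if (!page)
		return 0;
	if (xa_is_value(page))		/* swap entry, never refcounted */
		return PAGE_SIZE;
	put_page(page);			/* drop the reference we never needed */
	return 0;
}

/* After: xa_load() is a plain lookup; a value entry (swap entry) is
 * counted and a real page is simply ignored, so no reference is taken
 * or dropped. */
static unsigned long shmem_swap_size_new(struct vm_area_struct *vma,
					 unsigned long addr)
{
	void *entry = xa_load(&vma->vm_file->f_mapping->i_pages,
			      linear_page_index(vma, addr));

	return xa_is_value(entry) ? PAGE_SIZE : 0;
}

Shmem swap entries are stored in the page cache as XArray value entries,
which are never refcounted, so the bare xa_load() is sufficient here and
the put_page() path disappears entirely.
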
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Acked-by: Johannes Weiner <hannes@...xchg.org>
---
 fs/proc/task_mmu.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5066b0251ed8..e42d9e5e9a3c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -520,16 +520,10 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 			page = device_private_entry_to_page(swpent);
 	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
 							&& pte_none(*pte))) {
-		page = find_get_entry(vma->vm_file->f_mapping,
+		page = xa_load(&vma->vm_file->f_mapping->i_pages,
 						linear_page_index(vma, addr));
-		if (!page)
-			return;
-
 		if (xa_is_value(page))
 			mss->swap += PAGE_SIZE;
-		else
-			put_page(page);
-
 		return;
 	}
 
--
2.28.0