Message-Id: <20191101075727.26683-3-ying.huang@intel.com>
Date: Fri, 1 Nov 2019 15:57:19 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Huang Ying <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>, Rik van Riel <riel@...hat.com>,
Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
Fengguang Wu <fengguang.wu@...el.com>
Subject: [RFC 02/10] autonuma: Reduce cache footprint when scanning page tables
From: Huang Ying <ying.huang@...el.com>
During the page table scanning of auto NUMA balancing, if pte_protnone()
is true, the PTE is already in the target state and does not need to be
changed, so the remaining checks on the corresponding struct page are
unnecessary as well.

Therefore, if we check pte_protnone() first for each PTE, we can skip
the unnecessary struct page accesses and thus reduce the cache
footprint of the NUMA balancing page table scan.
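A simplified sketch of the resulting check order in change_pte_range()
after this patch (surrounding loop, locking, and the later checks are
omitted; all identifiers are taken from the existing code in
mm/mprotect.c, see the diff below):

	if (prot_numa) {
		struct page *page;

		/*
		 * PTE is already PROT_NONE: nothing to change, so skip
		 * the struct page lookup and all checks below.
		 */
		if (pte_protnone(oldpte))
			continue;

		/* Only now touch the struct page for the remaining checks */
		page = vm_normal_page(vma, addr, oldpte);
		if (!page || PageKsm(page))
			continue;

		/* ... further struct page based checks ... */
	}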
In a performance test with the pmbench memory accessing benchmark
(80:20 read/write ratio, normal access address distribution) on a
2-socket Intel server with Optane DC Persistent Memory, perf profiling
shows that the autonuma page table scanning time is reduced from 1.23%
to 0.97% with the patch (that is, a 21% reduction).
Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Fengguang Wu <fengguang.wu@...el.com>
Cc: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org
---
mm/mprotect.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/mprotect.c b/mm/mprotect.c
index bf38dfbbb4b4..d69b9913388e 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -80,6 +80,10 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			if (prot_numa) {
 				struct page *page;
 
+				/* Avoid TLB flush if possible */
+				if (pte_protnone(oldpte))
+					continue;
+
 				page = vm_normal_page(vma, addr, oldpte);
 				if (!page || PageKsm(page))
 					continue;
@@ -97,10 +101,6 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				if (page_is_file_cache(page) && PageDirty(page))
 					continue;
 
-				/* Avoid TLB flush if possible */
-				if (pte_protnone(oldpte))
-					continue;
-
 				/*
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
--
2.23.0