Message-ID: <tip-7904ba8a66f400182a204893c92098994e22a88d@git.kernel.org>
Date: Thu, 27 Sep 2018 11:55:52 -0700
From: tip-bot for Peter Zijlstra <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: dave.hansen@...el.com, tglx@...utronix.de, peterz@...radead.org,
bin.yang@...el.com, mark.gross@...el.com, mingo@...nel.org,
linux-kernel@...r.kernel.org, hpa@...or.com
Subject: [tip:x86/mm] x86/mm/cpa: Optimize __cpa_flush_range()

Commit-ID:  7904ba8a66f400182a204893c92098994e22a88d
Gitweb:     https://git.kernel.org/tip/7904ba8a66f400182a204893c92098994e22a88d
Author:     Peter Zijlstra <peterz@...radead.org>
AuthorDate: Wed, 19 Sep 2018 10:50:24 +0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Thu, 27 Sep 2018 20:39:42 +0200

x86/mm/cpa: Optimize __cpa_flush_range()

If we IPI for WBINVD, then we might as well kill the entire TLB too.
But if we don't have to invalidate the cache, there is no reason not to
use a range TLB flush.

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Reviewed-by: Dave Hansen <dave.hansen@...el.com>
Cc: Bin Yang <bin.yang@...el.com>
Cc: Mark Gross <mark.gross@...el.com>
Link: https://lkml.kernel.org/r/20180919085948.195633798@infradead.org
---
 arch/x86/mm/pageattr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index dc552824e86a..62bb30b4bd2a 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -291,7 +291,7 @@ static bool __cpa_flush_range(unsigned long start, int numpages, int cache)
 	WARN_ON(PAGE_ALIGN(start) != start);
 
-	if (!static_cpu_has(X86_FEATURE_CLFLUSH)) {
+	if (cache && !static_cpu_has(X86_FEATURE_CLFLUSH)) {
 		cpa_flush_all(cache);
 		return true;
 	}
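
For readers outside the kernel tree, here is a minimal, self-contained C sketch
of the control-flow decision this one-liner changes. The helper names
(flush_all_cache_and_tlb, flush_tlb_range_only, clflush_page) and the
hard-coded 4096-byte page size are illustrative stand-ins, not kernel APIs;
the real function is __cpa_flush_range() in arch/x86/mm/pageattr.c.

/* cpa_flush_sketch.c - illustrative only; helper names are stand-ins. */
#include <stdbool.h>
#include <stdio.h>

#define SKETCH_PAGE_SIZE 4096UL

/* Stand-in for static_cpu_has(X86_FEATURE_CLFLUSH). */
static bool cpu_has_clflush = true;

/* A WBINVD-style cache flush has to IPI every CPU anyway, so a full
 * TLB flush rides along essentially for free. */
static void flush_all_cache_and_tlb(void)
{
	puts("IPI all CPUs: WBINVD + full TLB flush");
}

static void flush_tlb_range_only(unsigned long start, int numpages)
{
	printf("ranged TLB flush: %#lx .. %#lx\n",
	       start, start + numpages * SKETCH_PAGE_SIZE);
}

static void clflush_page(unsigned long addr)
{
	printf("clflush cache lines of page %#lx\n", addr);
}

/*
 * Post-patch shape of the decision: fall back to the expensive
 * all-CPU flush only when a cache flush is requested *and* CLFLUSH
 * is unavailable; otherwise a ranged TLB flush (plus per-page
 * CLFLUSH when cache != 0) is enough.
 */
static void cpa_flush_range_sketch(unsigned long start, int numpages, int cache)
{
	if (cache && !cpu_has_clflush) {
		flush_all_cache_and_tlb();
		return;
	}

	flush_tlb_range_only(start, numpages);

	if (cache) {
		for (int i = 0; i < numpages; i++)
			clflush_page(start + i * SKETCH_PAGE_SIZE);
	}
}

int main(void)
{
	/* No cache invalidation requested: always a ranged TLB flush. */
	cpa_flush_range_sketch(0x100000UL, 2, 0);
	/* Cache invalidation with CLFLUSH available: still no full flush. */
	cpa_flush_range_sketch(0x100000UL, 2, 1);
	return 0;
}

Before the patch, the !CLFLUSH path took the all-CPU flush even when cache == 0,
needlessly invalidating every TLB; the added "cache &&" check restricts that
fallback to the case where a cache flush is actually required.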