Message-Id: <1335603099-2624-4-git-send-email-alex.shi@intel.com>
Date: Sat, 28 Apr 2012 16:51:39 +0800
From: Alex Shi <alex.shi@...el.com>
To: andi.kleen@...el.com, tim.c.chen@...ux.intel.com, jeremy@...p.org,
chrisw@...s-sol.org, akataria@...are.com, tglx@...utronix.de,
mingo@...hat.com, hpa@...or.com, rostedt@...dmis.org,
fweisbec@...il.com
Cc: riel@...hat.com, luto@....edu, alex.shi@...el.com, avi@...hat.com,
len.brown@...el.com, paul.gortmaker@...driver.com,
dhowells@...hat.com, fenghua.yu@...el.com, borislav.petkov@....com,
yinghai@...nel.org, cpw@....com, steiner@....com,
linux-kernel@...r.kernel.org, yongjie.ren@...el.com
Subject: [PATCH 3/3] x86/tlb: fall back to a full flush when meeting a THP large page
We don't need to flush large pages in PAGE_SIZE steps; that just wastes
time (a 2MB huge page would otherwise take 512 'invlpg' operations), and
according to our macro benchmark, large pages don't benefit from the
'invlpg' optimization anyway. So just flushing the whole TLB is enough
for them.
The following results were measured on a 2-socket * 4-core * 2-HT NHM EP
machine, with THP set to 'always'.
Multi-thread testing, the '-t' parameter is the thread number:
                      without this patch   with this patch
./mprotect -t 1       14ns                 13ns
./mprotect -t 2       13ns                 13ns
./mprotect -t 4       12ns                 11ns
./mprotect -t 8       14ns                 10ns
./mprotect -t 16      28ns                 28ns
./mprotect -t 32      54ns                 52ns
./mprotect -t 128     200ns                200ns
Signed-off-by: Alex Shi <alex.shi@...el.com>
---
arch/x86/mm/tlb.c | 18 ++++++++++++++++--
1 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index c4e694d..049fcdf 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -316,12 +316,20 @@ void flush_tlb_mm(struct mm_struct *mm)
#define FLUSHALL_BAR 16
+/* Would it be better to have a vm_flags bit showing large pages exist in a VMA? */
+static inline int in_large_page(struct mm_struct *mm, unsigned long addr){
+ pmd_t *pmd;
+ pmd = pmd_offset(pud_offset(pgd_offset(mm, addr), addr), addr);
+ return pmd_large(*pmd);
+}
+
void flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
struct mm_struct *mm;
if (!cpu_has_invlpg || vma->vm_flags & VM_HUGETLB) {
+flush_all:
flush_tlb_mm(vma->vm_mm);
return;
}
@@ -345,9 +353,15 @@ void flush_tlb_range(struct vm_area_struct *vma,
local_flush_tlb();
else {
for (addr = start; addr <= end;
- addr += PAGE_SIZE)
+ addr += HPAGE_SIZE)
+ if (in_large_page(mm, addr)) {
+ preempt_enable();
+ goto flush_all;
+ }
+ for (addr = start; addr <= end;
+ addr += PAGE_SIZE) {
__flush_tlb_single(addr);
-
+ }
if (cpumask_any_but(mm_cpumask(mm),
smp_processor_id()) < nr_cpu_ids)
flush_tlb_others(mm_cpumask(mm), mm,
--
1.7.5.4