Message-ID: <tip-71b54f8263860a37dd9f50f81880a9d681fd9c10@git.kernel.org>
Date: Sat, 25 Jan 2014 06:25:08 -0800
From: tip-bot for Mel Gorman <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...nel.org,
torvalds@...ux-foundation.org, peterz@...radead.org,
riel@...hat.com, alex.shi@...aro.org, akpm@...ux-foundation.org,
mgorman@...e.de, tglx@...utronix.de, davidlohr@...com
Subject: [tip:x86/urgent] x86/mm: Eliminate redundant page table walk during TLB range flushing
Commit-ID: 71b54f8263860a37dd9f50f81880a9d681fd9c10
Gitweb: http://git.kernel.org/tip/71b54f8263860a37dd9f50f81880a9d681fd9c10
Author: Mel Gorman <mgorman@...e.de>
AuthorDate: Tue, 21 Jan 2014 14:33:19 -0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Sat, 25 Jan 2014 09:10:43 +0100
x86/mm: Eliminate redundant page table walk during TLB range flushing
When choosing between doing an address space or a ranged flush,
the x86 implementation of flush_tlb_mm_range takes into account
whether there are any large pages in the range. A per-page
flush typically requires fewer entries than would be covered by
a single large page, so the check is redundant.
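
To make the arithmetic concrete, here is a minimal standalone
sketch, assuming x86's default 4 KB base pages and 2 MB huge pages
(the act_entries threshold referred to in the comment is the one
computed by flush_tlb_mm_range; the program itself is illustrative
and not code from the patch):

	#include <stdio.h>

	int main(void)
	{
		const unsigned long page_size  = 4096;		/* x86 base page */
		const unsigned long hpage_size = 2UL << 20;	/* x86 2 MB huge page */

		/* One huge page spans 512 base-page TLB entries. */
		printf("base pages per huge page: %lu\n", hpage_size / page_size);

		/*
		 * The per-page path is only taken when the range covers at
		 * most act_entries base pages, typically well below 512, so
		 * any range that qualifies already needs fewer INVLPGs than
		 * a single huge page spans.
		 */
		return 0;
	}
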
There is one potential exception. THP migration flushes single
THP entries, and it conceivably would benefit from flushing a
single entry instead of the mm. However, this flush happens
after a THP allocation, copy and page table update, potentially
with other threads serialised behind it. In comparison to that
work, the flush is noise. It makes more sense to optimise
balancing to require fewer flushes than to optimise the flush
itself.
This patch deletes the redundant huge page check.
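
For reference, a minimal sketch of the decision that remains after
the patch; this is heavily simplified (declarations, preemption
handling, remote CPUs and the derivation of act_entries are
omitted), not the kernel function verbatim:

	if (nr_base_pages > act_entries) {
		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
		local_flush_tlb();		/* drop the whole address space */
	} else {
		/* flush the range one base page at a time */
		for (addr = start; addr < end; addr += PAGE_SIZE) {
			count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
			__flush_tlb_single(addr);	/* INVLPG */
		}
	}

Note that INVLPG on an address mapped by a huge page invalidates
the huge page's TLB entry as well, which is why the per-page loop
stays correct without the deleted walk.
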
Signed-off-by: Mel Gorman <mgorman@...e.de>
Tested-by: Davidlohr Bueso <davidlohr@...com>
Reviewed-by: Rik van Riel <riel@...hat.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Cc: Alex Shi <alex.shi@...aro.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Link: http://lkml.kernel.org/n/tip-sgei1drpOcburujPsfh6ovmo@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
arch/x86/mm/tlb.c | 28 +---------------------------
1 file changed, 1 insertion(+), 27 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5176526..dd8dda1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -158,32 +158,6 @@ void flush_tlb_current_task(void)
 	preempt_enable();
 }
 
-/*
- * It can find out the THP large page, or
- * HUGETLB page in tlb_flush when THP disabled
- */
-static inline unsigned long has_large_page(struct mm_struct *mm,
-				unsigned long start, unsigned long end)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	unsigned long addr = ALIGN(start, HPAGE_SIZE);
-	for (; addr < end; addr += HPAGE_SIZE) {
-		pgd = pgd_offset(mm, addr);
-		if (likely(!pgd_none(*pgd))) {
-			pud = pud_offset(pgd, addr);
-			if (likely(!pud_none(*pud))) {
-				pmd = pmd_offset(pud, addr);
-				if (likely(!pmd_none(*pmd)))
-					if (pmd_large(*pmd))
-						return addr;
-			}
-		}
-	}
-	return 0;
-}
-
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned long vmflag)
 {
@@ -218,7 +192,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	nr_base_pages = (end - start) >> PAGE_SHIFT;
 
 	/* tlb_flushall_shift is on balance point, details in commit log */
-	if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
+	if (nr_base_pages > act_entries) {
 		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
 		local_flush_tlb();
 	} else {
--
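
As a usage note: the counters bumped by count_vm_tlb_event() are
only accounted when the kernel is built with CONFIG_DEBUG_TLBFLUSH,
in which case they appear in /proc/vmstat. A minimal sketch for
dumping them (ordinary userspace C, nothing patch-specific):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f) {
			perror("/proc/vmstat");
			return 1;
		}
		/* Print the TLB counters, e.g. nr_tlb_local_flush_all. */
		while (fgets(line, sizeof(line), f))
			if (strstr(line, "tlb"))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}
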