Message-ID: <1337155239.27694.131.camel@twins>
Date: Wed, 16 May 2012 10:00:39 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Alex Shi <alex.shi@...el.com>
Cc: Nick Piggin <npiggin@...il.com>, tglx@...utronix.de,
mingo@...hat.com, hpa@...or.com, arnd@...db.de,
rostedt@...dmis.org, fweisbec@...il.com, jeremy@...p.org,
riel@...hat.com, luto@....edu, avi@...hat.com, len.brown@...el.com,
dhowells@...hat.com, fenghua.yu@...el.com, borislav.petkov@....com,
yinghai@...nel.org, ak@...ux.intel.com, cpw@....com,
steiner@....com, akpm@...ux-foundation.org, penberg@...nel.org,
hughd@...gle.com, rientjes@...gle.com,
kosaki.motohiro@...fujitsu.com, n-horiguchi@...jp.nec.com,
tj@...nel.org, oleg@...hat.com, axboe@...nel.dk, jmorris@...ei.org,
kamezawa.hiroyu@...fujitsu.com, viro@...iv.linux.org.uk,
linux-kernel@...r.kernel.org, yongjie.ren@...el.com,
linux-arch@...r.kernel.org
Subject: Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
On Wed, 2012-05-16 at 14:46 +0800, Alex Shi wrote:
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 75e888b..ed6642a 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -86,6 +86,8 @@ struct mmu_gather {
> #ifdef CONFIG_HAVE_RCU_TABLE_FREE
> struct mmu_table_batch *batch;
> #endif
> + unsigned long start;
> + unsigned long end;
> unsigned int need_flush : 1, /* Did free PTEs */
> fast_mode : 1; /* No batching */
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 6105f47..b176172 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm)
> tlb->mm = mm;
>
> tlb->fullmm = fullmm;
> + tlb->start = -1UL;
> + tlb->end = 0;
> tlb->need_flush = 0;
> tlb->fast_mode = (num_possible_cpus() == 1);
> tlb->local.next = NULL;
> @@ -248,6 +250,8 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e
> {
> struct mmu_gather_batch *batch, *next;
>
> + tlb->start = start;
> + tlb->end = end;
> tlb_flush_mmu(tlb);
>
> /* keep the page table cache within bounds */
> @@ -1204,6 +1208,8 @@ again:
> */
> if (force_flush) {
> force_flush = 0;
> + tlb->start = addr;
> + tlb->end = end;
> tlb_flush_mmu(tlb);
> if (addr != end)
> goto again;
ARGH.. no. What part of "you don't need to modify the generic code"
don't you get?
Both ARM and IA64 (and possibly others) already do range tracking; you
don't need to modify mm/memory.c _AT_ALL_.
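
For reference, the arch-private pattern looks roughly like the sketch
below. It follows what ARM does in arch/arm/include/asm/tlb.h, but the
field names and details here are approximate rather than a verbatim
copy: the arch's own mmu_gather carries the range, tlb_add_flush()
widens it as entries are removed, and tlb_flush() turns it into a
flush_tlb_range() unless it's a full-mm teardown, so mm/memory.c never
needs to know about it.

/* Sketch of arch-private range tracking (ARM-like; names approximate). */
struct mmu_gather {
	struct mm_struct	*mm;
	unsigned int		fullmm;
	struct vm_area_struct	*vma;
	unsigned long		range_start;
	unsigned long		range_end;
	/* ... batching fields ... */
};

static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
{
	if (!tlb->fullmm) {
		/* Grow the pending range to cover this page. */
		if (addr < tlb->range_start)
			tlb->range_start = addr;
		if (addr + PAGE_SIZE > tlb->range_end)
			tlb->range_end = addr + PAGE_SIZE;
	}
}

static inline void tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm || !tlb->vma) {
		/* Whole address space is going away; a full flush is cheaper. */
		flush_tlb_mm(tlb->mm);
	} else if (tlb->range_end > 0) {
		/* Only shoot down the range accumulated so far. */
		flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
		tlb->range_start = TASK_SIZE;
		tlb->range_end = 0;
	}
}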
Also, if you modify include/asm-generic/tlb.h to include the ranges, it
would be very nice to make that optional; most archs using it won't use
this.
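
One way to keep it optional would be to guard the new fields (and a
small update helper) behind a Kconfig symbol that interested archs
select; CONFIG_HAVE_MMU_GATHER_RANGE and tlb_track_range() below are
made-up names, just to sketch the shape:

struct mmu_gather {
	struct mm_struct	*mm;
#ifdef CONFIG_HAVE_RCU_TABLE_FREE
	struct mmu_table_batch	*batch;
#endif
#ifdef CONFIG_HAVE_MMU_GATHER_RANGE	/* hypothetical symbol */
	unsigned long		start;
	unsigned long		end;
#endif
	unsigned int		need_flush : 1,	/* Did free PTEs */
				fast_mode  : 1;	/* No batching */
	/* ... */
};

#ifdef CONFIG_HAVE_MMU_GATHER_RANGE
static inline void tlb_track_range(struct mmu_gather *tlb,
				   unsigned long start, unsigned long end)
{
	tlb->start = min(tlb->start, start);
	tlb->end = max(tlb->end, end);
}
#else
/* Compiles away entirely for archs that don't select the symbol. */
static inline void tlb_track_range(struct mmu_gather *tlb,
				   unsigned long start, unsigned long end)
{
}
#endif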
Now IF you're going to change the tlb interface like this, you're going
to get to do it for all architectures, along with a sane benchmark to
show it's beneficial to track ranges like this.
But as it stands, people are still questioning the validity of your
mprotect micro-bench, so no, you don't get to change the tlb interface.