Message-ID: <74020842196c81ae6b3910690931fec09b7eac1c.camel@surriel.com>
Date: Mon, 24 Sep 2018 14:50:54 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: peterz@...radead.org, kernel-team@...com, songliubraving@...com,
mingo@...nel.org, will.deacon@....com, hpa@...or.com,
luto@...nel.org, npiggin@...il.com
Subject: Re: [PATCH 0/7] x86/mm/tlb: make lazy TLB mode even lazier

On Mon, 2018-09-24 at 14:37 -0400, Rik van Riel wrote:
> Linus asked me to come up with a smaller patch set to get the
> benefits of lazy TLB mode, so I spent some time trying out various
> permutations of the code, with a few workloads that do lots of
> context switches, and also happen to have a fair number of TLB
> flushes a second.

I made a nice list of which patches this code is based on, but I
forgot to copy it into my intro email.

The patches are based on current -tip, plus the following (one way to
assemble this base is sketched after the list):
- tip x86/core: 012e77a903d ("x86/nmi: Fix NMI uaccess race against
  CR3 switching")
- arm64 tlb/asm-generic branch, including:
  - faaadaf315b4 ("asm-generic/tlb: Guard with #ifdef CONFIG_MMU")
  - 22a61c3c4f13 ("asm-generic/tlb: Track freeing of page-table
    directories in struct mmu_gather")
  - a6d60245d6d9 ("asm-generic/tlb: Track which levels of the page
    tables have been cleared")
--
All Rights Reversed.