Message-ID: <CALCETrV530yi=C9+XOSyY8kvF7EN6PVtSBV4xVauAFC1q5UW8w@mail.gmail.com>
Date: Wed, 26 Jun 2019 09:50:35 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Nadav Amit <namit@...are.com>
Cc: Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
X86 ML <x86@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH 5/9] x86/mm/tlb: Optimize local TLB flushes
On Wed, Jun 26, 2019 at 9:39 AM Nadav Amit <namit@...are.com> wrote:
>
> > On Jun 26, 2019, at 9:33 AM, Andy Lutomirski <luto@...nel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 2:36 PM Dave Hansen <dave.hansen@...el.com> wrote:
> >> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>> While the updated smp infrastructure is capable of running a function on
> >>> a single local core, it is not optimized for this case.
> >>
> >> OK, so flush_tlb_multi() is optimized for flushing local+remote at the
> >> same time and is also (nearly?) optimal for remote-only flushes. But
> >> it's not as well optimized for local-only flushes, whereas
> >> flush_tlb_on_cpus() *is* optimized for local-only flushes.
> >
> > Can we stick the optimization into flush_tlb_multi() in the interest
> > of keeping this stuff readable?
>
> flush_tlb_on_cpus() will be much simpler once I remove the fallback
> path that is in there for Xen and Hyper-V. I can then open-code it in
> flush_tlb_mm_range() and arch_tlbbatch_flush().
>
> >
> > Also, would this series be easier to understand if there was a patch
> > to just remove the UV optimization before making other changes?
>
> If you just want me to remove it, I can do it. I don’t know who uses it and
> what the impact might be.
>
Only if you think it simplifies things. The impact will be somewhat
slower flushes on affected hardware. The UV maintainers know how to
fix this more sustainably, and maybe this will encourage them to do
it :)
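
For reference, a rough sketch of the local-only fast path idea, folded
into a single multi-CPU flush entry point. All of the names below
(sketch_flush_tlb_multi(), sketch_flush_tlb_func(), struct
sketch_flush_info) and the simplified logic are hypothetical -- this is
not the actual arch/x86/mm/tlb.c code, just an illustration of the
shape being discussed:

#include <linux/cpumask.h>
#include <linux/irqflags.h>
#include <linux/smp.h>

struct sketch_flush_info {
	unsigned long start;
	unsigned long end;
};

/* Per-CPU flush worker; matches the smp_call_func_t signature. */
static void sketch_flush_tlb_func(void *data)
{
	struct sketch_flush_info *info = data;

	/* Stand-in for the real per-CPU invalidation work. */
	(void)info;
}

static void sketch_flush_tlb_multi(const struct cpumask *cpumask,
				   struct sketch_flush_info *info)
{
	int cpu = get_cpu();	/* disable preemption across the check */

	if (cpumask_equal(cpumask, cpumask_of(cpu))) {
		/*
		 * Local-only fast path: no IPIs and no cross-call
		 * bookkeeping, just run the flush on this CPU with
		 * interrupts off.
		 */
		local_irq_disable();
		sketch_flush_tlb_func(info);
		local_irq_enable();
	} else {
		/* Remote (or local+remote): use the cross-call API. */
		on_each_cpu_mask(cpumask, sketch_flush_tlb_func, info, true);
	}

	put_cpu();
}

The point of the fast path is that a single entry point can keep the
readable multi-CPU structure while still skipping the IPI machinery
whenever the mask reduces to the current CPU, which is the case
flush_tlb_on_cpus() is optimized for today.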