Message-ID: <E5102C9C-732D-43AC-8A24-9F26F5E2EFD4@vmware.com>
Date: Wed, 26 Jun 2019 01:22:38 +0000
From: Nadav Amit <namit@...are.com>
To: Dave Hansen <dave.hansen@...el.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
the arch/x86 maintainers <x86@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH 8/9] x86/tlb: Privatize cpu_tlbstate
> On Jun 25, 2019, at 2:52 PM, Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> cpu_tlbstate is mostly private and only the variable is_lazy is shared.
>> This causes some false-sharing when TLB flushes are performed.
>
> Presumably, all CPUs doing TLB flushes read 'is_lazy'. Because of this,
> when we write to it we have to do the cache coherency dance to get rid
> of all the CPUs that might have a read-only copy.
>
> I would have *thought* that we only do writes when we enter or exit
> lazy mode. That's partially true. We do write in enter_lazy_tlb(), but
> we also *unconditionally* write in switch_mm_irqs_off(). That seems
> like it might be responsible for a chunk (or even a vast majority) of
> the cacheline bounces.
>
> Is there anything preventing us from turning the switch_mm_irqs_off()
> write into:
>
> 	if (was_lazy)
> 		this_cpu_write(cpu_tlbstate.is_lazy, false);
>
> ?
>
> I think this patch is probably still a good general idea, but I just
> wonder if reducing the writes is a better way to reduce bounces.
Sounds good. I will add another patch based on your idea.
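
For reference, roughly the direction I have in mind (untested sketch; the
helper name and its placement are illustrative, not the actual
switch_mm_irqs_off() code):

	/*
	 * Untested sketch: make the store to cpu_tlbstate.is_lazy
	 * conditional, along the lines suggested above. Remote CPUs doing
	 * TLB shootdowns read is_lazy, so an unconditional store forces
	 * their read-only copies of the cacheline to be invalidated on
	 * every context switch; skipping the store when the CPU was not
	 * lazy avoids that traffic on the common path.
	 */
	static inline void leave_lazy_tlb_mode(void)
	{
		/* The read only touches this CPU's copy of the line. */
		bool was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);

		/* Write only when the flag actually needs to change. */
		if (was_lazy)
			this_cpu_write(cpu_tlbstate.is_lazy, false);
	}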