Message-Id: <1606879302.tdngvs3yq4.astroid@bobo.none>
Date: Wed, 02 Dec 2020 13:47:40 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Christian Borntraeger <borntraeger@...ibm.com>,
Catalin Marinas <catalin.marinas@....com>,
Dave Hansen <dave.hansen@...el.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Andy Lutomirski <luto@...nel.org>,
Will Deacon <will@...nel.org>
Cc: Anton Blanchard <anton@...abs.org>, Arnd Bergmann <arnd@...db.de>,
linux-arch <linux-arch@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Peter Zijlstra <peterz@...radead.org>, X86 ML <x86@...nel.org>
Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb
option
Excerpts from Andy Lutomirski's message of December 1, 2020 4:31 am:
> other arch folk: there's some background here:
>
> https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com
>
> On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <luto@...nel.org> wrote:
>>
>> On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <luto@...nel.org> wrote:
>> >
>> > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <npiggin@...il.com> wrote:
>> > >
>> > > On big systems, the mm refcount can become highly contended when doing
>> > > a lot of context switching with threaded applications (particularly
>> > > switching between the idle thread and an application thread).
>> > >
>> > > Abandoning lazy tlb slows switching down quite a bit in the important
>> > > user->idle->user cases, so instead implement a non-refcounted scheme
>> > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
>> > > any remaining lazy ones.
>> > >
>> > > Shootdown IPIs are of some concern, but they have not been observed to be
>> > > a big problem with this scheme (the powerpc implementation generated
>> > > 314 additional interrupts on a 144 CPU system during a kernel compile).
>> > > There are a number of strategies that could be employed to reduce IPIs
>> > > if they turn out to be a problem for some workload.
>> >
>> > I'm still wondering whether we can do even better.
>> >
>>
>> Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
>> the TLB. On x86, this will shoot down all lazies as long as even a
>> single pagetable was freed. (Or at least it will if we don't have a
>> serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
>> sets tlb->freed_tables, which will trigger the IPI.) So, on
>> architectures like x86, the shootdown approach should be free. The
>> only way it ought to have any excess IPIs is if we have CPUs in
>> mm_cpumask() that don't need IPI to free pagetables, which could
>> happen on paravirt.
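The x86 behaviour Andy describes boils down to this decision in the
remote-flush path (paraphrased from arch/x86/mm/tlb.c around v5.10,
not a verbatim quote):

/*
 * If page tables were freed, lazy CPUs cannot be allowed to keep
 * walking them, so every CPU in the mask gets the IPI; otherwise
 * lazy CPUs are filtered out and flush on their next real switch.
 */
if (info->freed_tables)
	smp_call_function_many(cpumask, flush_tlb_func_remote,
			       (void *)info, 1);
else
	on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
			      (void *)info, 1, cpumask);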
>
> Indeed, on x86, we do this:
>
> [ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
> [ 11.559905] tlb_finish_mmu+0x10e/0x1a0
> [ 11.561068] exit_mmap+0xc8/0x1a0
> [ 11.561932] mmput+0x29/0xd0
> [ 11.562688] do_exit+0x316/0xa90
> [ 11.563588] do_group_exit+0x34/0xb0
> [ 11.564476] __x64_sys_exit_group+0xf/0x10
> [ 11.565512] do_syscall_64+0x34/0x50
>
> and we have info->freed_tables set.
>
> What are the architectures that have large systems like?
>
> x86: we already zap lazies, so it should cost basically nothing to do
This is not zapping lazies; this is freeing the user page tables.
"lazy mm" is where a switch to a kernel thread takes on the
previous mm for its kernel mapping rather than switching to init_mm.
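For reference, the lazy-tlb hand-off happens at context switch,
roughly like this (simplified from context_switch() in
kernel/sched/core.c):

/*
 * A kernel thread has no mm of its own, so it borrows the outgoing
 * task's mm as active_mm instead of paying for a switch to init_mm.
 * The mmgrab() here is the refcount this series makes optional.
 */
if (!next->mm) {				/* to a kernel thread */
	enter_lazy_tlb(prev->active_mm, next);
	next->active_mm = prev->active_mm;
	mmgrab(prev->active_mm);		/* lazy tlb reference */
} else {					/* to a user thread */
	switch_mm_irqs_off(prev->active_mm, next->mm, next);
}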
> a little loop at the end of __mmput() to make sure that no lazies are
> left. If we care about paravirt performance, we could implement one
> of the optimizations I mentioned above to fix up the refcounts instead
> of sending an IPI to any remaining lazies.
It might be possible with x86's scheme to scan mm_cpumask, carefully
synchronized somehow, when the last user reference gets dropped, and
free the lazies at that point. But I don't know what that would buy
you, because you're still having to maintain the mm_cpumask on
switches. powerpc's characteristics are just different here, so it
makes sense there, whereas I don't know whether it would on x86.
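A rough shape of that idea (names invented, and the hard part,
synchronizing against concurrent context switches, is omitted):

/*
 * Hypothetical only: shoot the lazies when the last *user* reference
 * goes away in __mmput(), rather than when the final lazy reference
 * is dropped in __mmdrop(). Synchronization against CPUs concurrently
 * switching to or away from this mm is entirely hand-waved.
 */
static void mmput_shoot_lazies(struct mm_struct *mm)
{
	VM_WARN_ON(atomic_read(&mm->mm_users));

	/* reuses the do_shoot_lazy_tlb() handler sketched earlier */
	on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
}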
>
> arm64: AFAICT arm64's flush uses magic arm64 hardware support for
> remote flushes, so any lazy mm references will still exist after
> exit_mmap(). (arm64 uses lazy TLB, right?) So this is kind of like
> the x86 paravirt case. Are there large enough arm64 systems that any
> of this matters?
>
> s390x: The code has too many acronyms for me to understand it fully,
> but I think it's more or less the same situation as arm64. How big do
> s390x systems come?
>
> power: Ridiculously complicated, seems to vary by system and kernel config.
>
> So, Nick, your unconditional IPI scheme is apparently a big
> improvement for power, and it should be an improvement and have low
> cost for x86.
As I said, the tradeoffs are different, so I'm not so sure. It was a big
improvement on a very big system with the powerpc mm_cpumask switching
model on a microbenchmark designed to stress this, which is about all
I can say for it.
> On arm64 and s390x it will add more IPIs on process
> exit but reduce contention on context switching depending on how lazy
> TLB works. I suppose we could try it for all architectures without
> any further optimizations.
It will remain opt-in, but certainly try it out and see. There are some
requirements as documented in the config option text.
> Or we could try one of the perhaps
> excessively clever improvements I linked above. arm64, s390x people,
> what do you think?
>
I'm not against improvements to the scheme, e.g., from the patch:
+ /*
+	 * IPI overheads have not been found to be expensive, but they could
+ * be reduced in a number of possible ways, for example (in
+ * roughly increasing order of complexity):
+ * - A batch of mms requiring IPIs could be gathered and freed
+ * at once.
+ * - CPUs could store their active mm somewhere that can be
+ * remotely checked without a lock, to filter out
+ * false-positives in the cpumask.
+ * - After mm_users or mm_count reaches zero, switching away
+ * from the mm could clear mm_cpumask to reduce some IPIs
+ * (some batching or delaying would help).
+ * - A delayed freeing and RCU-like quiescing sequence based on
+ * mm switching to avoid IPIs completely.
+ */
But I would like to have numbers before being too clever.
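To illustrate the first strategy in that list, a batching sketch;
every name here is invented and locking around the batch is omitted:

/*
 * Hypothetical: queue mms needing shootdown and send one round of
 * IPIs for the whole batch, covering the union of their cpumasks,
 * instead of one round of IPIs per mm.
 */
#define SHOOT_BATCH 16

static struct mm_struct *shoot_batch[SHOOT_BATCH];
static int shoot_batch_nr;

static void do_shoot_batch(void *arg)
{
	int i;

	for (i = 0; i < shoot_batch_nr; i++) {
		struct mm_struct *mm = shoot_batch[i];

		if (current->active_mm == mm) {
			current->active_mm = &init_mm;
			switch_mm(mm, &init_mm, current);
		}
	}
}

static void shoot_batch_flush(void)
{
	cpumask_var_t mask;
	int i;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return;	/* a real implementation would fall back to per-mm IPIs */

	cpumask_clear(mask);
	for (i = 0; i < shoot_batch_nr; i++)
		cpumask_or(mask, mask, mm_cpumask(shoot_batch[i]));

	/* one IPI per CPU covers every mm in the batch */
	on_each_cpu_mask(mask, do_shoot_batch, NULL, 1);

	shoot_batch_nr = 0;	/* mms can now be freed as __mmdrop() would */
	free_cpumask_var(mask);
}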
Thanks,
Nick