Message-ID: <CAJF2gTRQgqRwjOYKB9Z6OdYoogsHWWVTw5anwNqoQjhmK_A41g@mail.gmail.com>
Date: Sat, 19 Nov 2022 11:37:39 +0800
From: Guo Ren <guoren@...nel.org>
To: Sergey Matyukevich <geomatsi@...il.com>
Cc: anup@...infault.org, paul.walmsley@...ive.com, palmer@...belt.com,
conor.dooley@...rochip.com, heiko@...ech.de,
philipp.tomsich@...ll.eu, alex@...ti.fr, hch@....de,
ajones@...tanamicro.com, gary@...yguo.net, jszhang@...nel.org,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
Guo Ren <guoren@...ux.alibaba.com>,
Anup Patel <apatel@...tanamicro.com>,
Palmer Dabbelt <palmer@...osinc.com>
Subject: Re: [PATCH V3] riscv: asid: Fixup stale TLB entry cause application crash
On Sat, Nov 19, 2022 at 4:57 AM Sergey Matyukevich <geomatsi@...il.com> wrote:
>
> Hi Guo Ren,
>
>
> > After use_asid_allocator is enabled, userspace applications can
> > crash because of stale TLB entries: using cpumask_clear_cpu alone,
> > without local_flush_tlb_all, does not guarantee that a CPU's TLB
> > entries are fresh. So set_mm_asid can let the application read a
> > stale value through a stale TLB entry, while set_mm_noasid is okay.
>
> ... [snip]
>
> > + /*
> > + * The mm_cpumask indicates which harts' TLBs contain the virtual
> > + * address mapping of the mm. Compared to noasid, using asid
> > + * can't guarantee that stale TLB entries are invalidated because
> > + * the asid mechanism does not flush the TLB on every switch_mm,
> > + * for performance. So when using asid, keep all CPUs' footprints
> > + * in mm_cpumask() until the mm is reset.
> > + */
> > + cpumask_set_cpu(cpu, mm_cpumask(next));
> > + if (static_branch_unlikely(&use_asid_allocator)) {
> > + set_mm_asid(next, cpu);
> > + } else {
> > + cpumask_clear_cpu(cpu, mm_cpumask(prev));
> > + set_mm_noasid(next);
> > + }
> > }
>
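For context on the hunk above: set_mm_asid() and set_mm_noasid() are
not shown in the quote. A simplified sketch of the two helpers, from
my reading of arch/riscv/mm/context.c (allocation and rollover
handling are omitted, so treat this as illustration, not the exact
code):

  /* Simplified sketch; see arch/riscv/mm/context.c for the real code */
  static void set_mm_noasid(struct mm_struct *mm)
  {
          /* Switch the page table and blindly nuke the entire local TLB */
          csr_write(CSR_SATP, virt_to_pfn(mm->pgd) | satp_mode);
          local_flush_tlb_all();
  }

  static void set_mm_asid(struct mm_struct *mm, unsigned int cpu)
  {
          /* ASID allocation and generation-rollover checks omitted here */
          unsigned long asid = atomic_long_read(&mm->context.id) & asid_mask;

          /*
           * Switch the page table. Unlike the noasid path there is no
           * unconditional local TLB flush here; a flush happens only on
           * ASID generation rollover, which this sketch omits.
           */
          csr_write(CSR_SATP, virt_to_pfn(mm->pgd) |
                    (asid << SATP_ASID_SHIFT) | satp_mode);
  }

So the noasid path nukes the local TLB on every switch_mm, while the
asid path skips that per-switch flush and relies on the allocator and
remote flushes for freshness.
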
> I observe similar user-space crashes on my SMP systems with ASID
> enabled. My attempt to fix the issue was a bit different; see the
> following patch:
>
> https://lore.kernel.org/linux-riscv/20220829205219.283543-1-geomatsi@gmail.com/
>
> In brief, the idea was borrowed from flush_icache_mm handling:
> - keep track of CPUs not running the task
> - perform per-ASID TLB flush on such CPUs only if the task is switched there
>
> Your patch also works fine in my tests and fixes those crashes. I
> have a question, though, about the removed cpumask_clear_cpu: how are
> CPUs that no longer run the task removed from its mm_cpumask? If they
> are not removed, then flush_tlb_mm/flush_tlb_page will broadcast
> unnecessary TLB flushes to those CPUs when ASID is enabled.
A task can be migrated to any CPU by the scheduler, so keeping TLB
contents in sync with cpumask_set/clear needs additional TLB flushes,
just like the noasid path, and your patch still follows that style.
The value of ASID is avoiding TLB flushes during the context switch.
Yes, my patch would add some tlb_flush IPI cost, but once the mapping
is stable, no TLB flush is needed during switch_mm (hackbench would
benefit because no TLB flush happens on its hot path); see the sketch
at the end of this mail. Here are my points:
- We copied the arm64 globally unique ASID mechanism into riscv,
  which depends on hardware-broadcast TLB flush. My fixup patch is
  closer to that original design, which is proven in the arm64 world.
- If riscv keeps the local TLB flush hardware design in the ISA spec,
  we should try x86's per-CPU array of ASIDs. But that is a
  significant change; let's fix the current issue with the smallest
  patch first.
In the end, thanks for your review and testing.
--
Best Regards
Guo Ren