Message-ID: <CALCETrU5iZo5_7+cn=NLYws2r8nvu7huxBQOJcMvanGKsFs7+A@mail.gmail.com>
Date: Sun, 10 Sep 2017 18:46:47 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Rik van Riel <riel@...hat.com>
Cc: Andy Lutomirski <luto@...nel.org>, Borislav Petkov <bp@...en8.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Tom Lendacky <thomas.lendacky@....com>
Subject: Re: Current mainline git (24e700e291d52bd2) hangs when building e.g. perf
On Sun, Sep 10, 2017 at 6:12 PM, Rik van Riel <riel@...hat.com> wrote:
> On Sat, 2017-09-09 at 12:28 -0700, Andy Lutomirski wrote:
>> -
>> I propose the following fix. If PCID is on, then, in
>> enter_lazy_tlb(), we switch to init_mm with the no-flush flag set.
>> (And we give init_mm its own dedicated ASID to keep it simple and
>> fast -- no need to use the LRU ASID mapping to assign one
>> dynamically.) We clear the bit in mm_cpumask. That is, we more or
>> less just skip the whole lazy TLB optimization and rely on PCID CPUs
>> having reasonably fast CR3 writes. No extra IPIs.
>
> Avoiding the IPIs is probably what matters the most, especially
> on systems with deep C states, and virtual machines where the
> host may be running something else, causing the IPI service time
> to go through the roof for idle VCPUs.
>
>> Also, sorry Rik, this means your old increased laziness optimization
>> is dead in the water. It will have exactly the same speculative load
>> problem.
>
> Doesn't a memory barrier solve that speculative load
> problem?
>
> The memory barrier could be added only to the path
> that potentially skips reloading the TLB, under the
> assumption that a memory barrier is cheaper than a
> TLB reload (even with ASID).

No, nothing stops the problematic speculative load. Here's the issue.
One CPU removes a reference to a page table from a higher-level page
table, flushes, and then frees the page table. Then it re-allocates
that page and writes something unrelated there. Another CPU whose CR3
still points to the page hierarchy in question could have a reference
to the freed table in its paging-structure cache. Even if that CPU is
guaranteed not to access the addresses in question (because they're
user addresses and the other CPU is in kernel mode, etc.), there is no
guarantee that it won't speculatively try to fill its TLB for the
affected addresses. The result is invalid PTEs in the TLB, possible
accesses using bogus memory types, and maybe even reads from IO space.
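
To make the ordering concrete, here is a stripped-down sketch of the
freeing side. The function name is made up and the real path goes
through free_pgtables() and the mmu_gather code, but the race window
is the same:

static void free_pte_page(struct mm_struct *mm, pmd_t *pmd)
{
	struct page *pt = pmd_page(*pmd);

	pmd_clear(pmd);		/* unhook the PTE page */
	flush_tlb_mm(mm);	/* only reaches CPUs in mm_cpumask(mm) */

	/*
	 * A CPU that is lazy in this mm and skipped the flush above can
	 * still hold a paging-structure-cache entry pointing at 'pt'.
	 * Once the page is freed and reused, a speculative walk on that
	 * CPU can load whatever now lives there into its TLB as if it
	 * were a PTE.
	 */
	__free_page(pt);
}
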
It looks like we actually need to propagate flushes everywhere that
could have references to the flushed range, even if the software won't
access that range.
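
For concreteness, the enter_lazy_tlb() change proposed at the top of
this mail could look very roughly like this. Completely untested, and
INIT_MM_ASID (a reserved ASID for init_mm) is made up -- a real patch
would have to carve one out and sort out the non-PCID path properly:

void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
	if (!static_cpu_has(X86_FEATURE_PCID))
		return;		/* keep the existing lazy behavior */

	/* Stop getting flush IPIs for the mm we're leaving behind. */
	cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));

	/*
	 * Switch to init_mm's page tables under a dedicated ASID. The
	 * NOFLUSH bit keeps the CR3 write cheap: nothing is flushed,
	 * we just stop walking the old mm's soon-to-be-freed tables.
	 */
	this_cpu_write(cpu_tlbstate.loaded_mm, &init_mm);
	write_cr3(__pa(init_mm.pgd) | INIT_MM_ASID | X86_CR3_PCID_NOFLUSH);
}

With something like that in place, the flush code can just target
whatever is left in mm_cpumask() and never has to care about lazy
CPUs at all.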