Message-ID: <5760792D.90000@linux.intel.com>
Date: Tue, 14 Jun 2016 14:37:49 -0700
From: Dave Hansen <dave.hansen@...ux.intel.com>
To: Nadav Amit <nadav.amit@...il.com>
Cc: Lukasz Anaczkowski <lukasz.anaczkowski@...el.com>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, ak@...ux.intel.com,
kirill.shutemov@...ux.intel.com, mhocko@...e.com,
Andrew Morton <akpm@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>, harish.srinivasappa@...el.com,
lukasz.odzioba@...el.com, Andy Lutomirski <luto@...capital.net>
Subject: Re: [PATCH] Linux VM workaround for Knights Landing A/D leak

On 06/14/2016 01:16 PM, Nadav Amit wrote:
> Dave Hansen <dave.hansen@...ux.intel.com> wrote:
>
>> On 06/14/2016 09:47 AM, Nadav Amit wrote:
>>> Lukasz Anaczkowski <lukasz.anaczkowski@...el.com> wrote:
>>>
>>>>> From: Andi Kleen <ak@...ux.intel.com>
>>>>> +void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>>>>> +{
>>> Here there should be a call to smp_mb__after_atomic() to synchronize with
>>> switch_mm. I submitted a similar patch, which is still pending (hint).
>>>
>>>>> +	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
>>>>> +		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
>>>>> +		flush_tlb_others(mm_cpumask(mm), mm, addr,
>>>>> +				 addr + PAGE_SIZE);
>>>>> +		mb();
>>>>> +		set_pte(ptep, __pte(0));
>>>>> +	}
>>>>> +}
>>
>> Shouldn't that barrier be incorporated in the TLB flush code itself and
>> not every single caller (like this code is)?
>>
>> It is insane to require individual TLB flushers to be concerned with the
>> barriers.
>
> IMHO it is best to use existing flushing interfaces instead of creating
> new ones.
Yeah, or make these things a _little_ harder to get wrong. That little
snippet above isn't so crazy that we should be depending on open-coded
barriers to get it right.

Should we just add a barrier to mm_cpumask() itself? That should stop
the race. Or maybe we need a new primitive like:

/*
 * Call this if a full barrier has been executed since the last
 * pagetable modification operation.
 */
static int __other_cpus_need_tlb_flush(struct mm_struct *mm)
{
	/* cpumask_any_but() returns >= nr_cpu_ids if no cpus set. */
	return cpumask_any_but(mm_cpumask(mm), smp_processor_id()) <
		nr_cpu_ids;
}

static int other_cpus_need_tlb_flush(struct mm_struct *mm)
{
	/*
	 * Synchronizes with switch_mm. Makes sure that we do not
	 * observe a bit having been cleared in mm_cpumask() before
	 * the other processor has seen our pagetable update. See
	 * switch_mm().
	 */
	smp_mb__after_atomic();
	return __other_cpus_need_tlb_flush(mm);
}

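(If we instead hid the barrier inside a mm_cpumask()-style helper, I'd
imagine something along these lines -- name made up, completely
untested:)

/*
 * Hypothetical helper: mm_cpumask() plus the barrier that pairs with
 * switch_mm(), so individual callers cannot forget it.
 */
static inline const struct cpumask *mm_cpumask_after_pte_update(struct mm_struct *mm)
{
	smp_mb__after_atomic();
	return mm_cpumask(mm);
}
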
We should be able to deploy other_cpus_need_tlb_flush() in most of the
cases where we are doing "cpumask_any_but(mm_cpumask(mm),
smp_processor_id()) < nr_cpu_ids".
Right?
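For instance, the fix_pte_leak() snippet quoted above could then look
roughly like this (untested, just to show the conversion):

void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
	/* The helper supplies the smp_mb__after_atomic() for us. */
	if (other_cpus_need_tlb_flush(mm)) {
		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
		flush_tlb_others(mm_cpumask(mm), mm, addr,
				 addr + PAGE_SIZE);
		mb();
		set_pte(ptep, __pte(0));
	}
}
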
> In theory, fix_pte_leak could have used flush_tlb_page. But the problem
> is that flush_tlb_page requires the vm_area_struct as an argument, which
> ptep_get_and_clear (and others) do not have.
That, and we do not want/need to flush the _current_ processor's TLB.
flush_tlb_page() would have done that unnecessarily. That's not the end
of the world here, but it is a downside.
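For reference, x86's flush_tlb_page() in this area does roughly the
following (paraphrased from memory, not the exact code):

void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
{
	struct mm_struct *mm = vma->vm_mm;

	preempt_disable();
	if (current->active_mm == mm)
		__flush_tlb_one(start);	/* the local flush we do not need */
	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
		flush_tlb_others(mm_cpumask(mm), mm, start,
				 start + PAGE_SIZE);
	preempt_enable();
}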