Date:	Tue, 14 Jun 2016 19:44:21 -0700
From:	Nadav Amit <nadav.amit@...il.com>
To:	Andy Lutomirski <luto@...capital.net>
Cc:	Dave Hansen <dave.hansen@...ux.intel.com>,
	Lukasz Anaczkowski <lukasz.anaczkowski@...el.com>,
	LKML <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	Andi Kleen <ak@...ux.intel.com>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Michal Hocko <mhocko@...e.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"H. Peter Anvin" <hpa@...or.com>, harish.srinivasappa@...el.com,
	lukasz.odzioba@...el.com
Subject: Re: [PATCH] Linux VM workaround for Knights Landing A/D leak

Andy Lutomirski <luto@...capital.net> wrote:

> On Tue, Jun 14, 2016 at 7:35 PM, Nadav Amit <nadav.amit@...il.com> wrote:
>> Andy Lutomirski <luto@...capital.net> wrote:
>> 
>>> On Tue, Jun 14, 2016 at 2:37 PM, Dave Hansen
>>> <dave.hansen@...ux.intel.com> wrote:
>>>> On 06/14/2016 01:16 PM, Nadav Amit wrote:
>>>>> Dave Hansen <dave.hansen@...ux.intel.com> wrote:
>>>>> 
>>>>>> On 06/14/2016 09:47 AM, Nadav Amit wrote:
>>>>>>> Lukasz Anaczkowski <lukasz.anaczkowski@...el.com> wrote:
>>>>>>> 
>>>>>>>>> From: Andi Kleen <ak@...ux.intel.com>
>>>>>>>>> +void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>>>>>>>>> +{
>>>>>>> Here there should be a call to smp_mb__after_atomic() to synchronize with
>>>>>>> switch_mm. I submitted a similar patch, which is still pending (hint).
>>>>>>> 
>>>>>>>>> +	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
>>>>>>>>> +		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
>>>>>>>>> +		flush_tlb_others(mm_cpumask(mm), mm, addr,
>>>>>>>>> +				 addr + PAGE_SIZE);
>>>>>>>>> +		mb();
>>>>>>>>> +		set_pte(ptep, __pte(0));
>>>>>>>>> +	}
>>>>>>>>> +}
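
For illustration, here is roughly what the snippet would look like with
the barrier Nadav is asking for folded in at the top of the function (a
sketch, not the patch as submitted):

	void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
	{
		/*
		 * Pairs with the barrier in switch_mm(): do not let the
		 * mm_cpumask() read below be performed before the
		 * preceding PTE update is visible to other CPUs.
		 */
		smp_mb__after_atomic();

		if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
			trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
			flush_tlb_others(mm_cpumask(mm), mm, addr,
					 addr + PAGE_SIZE);
			mb();
			set_pte(ptep, __pte(0));
		}
	}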
>>>>>> 
>>>>>> Shouldn't that barrier be incorporated in the TLB flush code itself and
>>>>>> not every single caller (like this code is)?
>>>>>> 
>>>>>> It is insane to require individual TLB flushers to be concerned with the
>>>>>> barriers.
>>>>> 
>>>>> IMHO it is best to use existing flushing interfaces instead of creating
>>>>> new ones.
>>>> 
>>>> Yeah, or make these things a _little_ harder to get wrong.  That little
>>>> snippet above isn't so crazy that we should be depending on open-coded
>>>> barriers to get it right.
>>>> 
>>>> Should we just add a barrier to mm_cpumask() itself?  That should stop
>>>> the race.  Or maybe we need a new primitive like:
>>>> 
>>>> /*
>>>> * Call this if a full barrier has been executed since the last
>>>> * pagetable modification operation.
>>>> */
>>>> static int __other_cpus_need_tlb_flush(struct mm_struct *mm)
>>>> {
>>>>       /* cpumask_any_but() returns >= nr_cpu_ids if no cpus set. */
>>>>       return cpumask_any_but(mm_cpumask(mm), smp_processor_id()) <
>>>>               nr_cpu_ids;
>>>> }
>>>> 
>>>> 
>>>> static int other_cpus_need_tlb_flush(struct mm_struct *mm)
>>>> {
>>>>       /*
>>>>        * Synchronizes with switch_mm.  Makes sure that we do not
>>>>        * observe a bit having been cleared in mm_cpumask() before
>>>>        * the other processor has seen our pagetable update.  See
>>>>        * switch_mm().
>>>>        */
>>>>       smp_mb__after_atomic();
>>>> 
>>>>       return __other_cpus_need_tlb_flush(mm);
>>>> }
>>>> 
>>>> We should be able to deploy other_cpus_need_tlb_flush() in most of the
>>>> cases where we are doing "cpumask_any_but(mm_cpumask(mm),
>>>> smp_processor_id()) < nr_cpu_ids".
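
For example, an open-coded call site would then shrink to the helper
(illustrative only, not from the patch; the call site is hypothetical):

	/* Before: the caller must remember the barrier itself. */
	smp_mb__after_atomic();
	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
		flush_tlb_others(mm_cpumask(mm), mm, addr, addr + PAGE_SIZE);

	/* After: the barrier is folded into the helper. */
	if (other_cpus_need_tlb_flush(mm))
		flush_tlb_others(mm_cpumask(mm), mm, addr, addr + PAGE_SIZE);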
>>> 
>>> IMO this is a bit nuts.  smp_mb__after_atomic() doesn't do anything on
>>> x86.  And, even if it did, why should the flush code assume that the
>>> previous store was atomic?
>>> 
>>> What's the issue being fixed / worked around here?
>> 
>> It issues a compiler barrier, which prevents the decision about
>> whether a remote TLB shootdown is required from being made before
>> the PTE is set.
>> 
>> I agree that PTEs may not be written atomically in certain cases
>> (although I am unaware of such cases, except on full-mm flush).
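
(For reference: on x86 at the time of this thread, the primitive was
defined as a pure compiler barrier, roughly:

	/* arch/x86/include/asm/barrier.h, circa v4.6 */
	#define smp_mb__after_atomic()	barrier()

so it constrains the compiler's reordering, not the CPU's.)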
> 
> How about plain set_pte?  It's atomic (aligned word-sized write), but
> it's not atomic in the _after_atomic sense.
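
For contrast, a native set_pte() on 64-bit x86 is just a plain aligned
store, along the lines of this simplified sketch of native_set_pte():

	static inline void native_set_pte(pte_t *ptep, pte_t pte)
	{
		/*
		 * A single aligned word-sized store: it cannot tear,
		 * but it is not a locked read-modify-write, so there
		 * is no "atomic" for smp_mb__after_atomic() to pair
		 * with.
		 */
		*ptep = pte;
	}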

Can you point me to a place where set_pte is used before a TLB
invalidation/shootdown, excluding this patch and the fullmm case?

I am not claiming there is no such case, but I am unaware of one.
PTEs are cleared on SMP using xchg, and similarly the dirty bit is
cleared with an atomic operation.
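
For reference, the xchg-based clear looks roughly like x86's
native_ptep_get_and_clear(); a simplified sketch, with an illustrative
helper name:

	static inline pte_t ptep_get_and_clear_sketch(pte_t *ptep)
	{
		/*
		 * Atomic exchange: the whole entry is replaced in one
		 * locked operation, so any A/D bits the hardware had
		 * set are captured in the returned value rather than
		 * lost.
		 */
		return native_make_pte(xchg(&ptep->pte, 0));
	}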
