Date:	Thu, 31 May 2012 16:40:52 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
	seto.hidetoshi@...fujitsu.com, borislav.petkov@....com,
	tony.luck@...el.com, luto@....edu, jbeulich@...e.com,
	rostedt@...dmis.org, ak@...ux.intel.com, akpm@...ux-foundation.org,
	eric.dumazet@...il.com, akinobu.mita@...il.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/tlb: replace INVALIDATE_TLB_VECTOR by CALL_FUNCTION_VECTOR


> 
> 
> How about the following new comments? I changed them to match the latest
> switch_mm and tlbflush code.
> But as to the 'The interrupt must handle 2 special cases' section, I am
> wondering whether it should be kept, since the current tlb flush IPI
> handler does not need %%esp at all (see the sketch below).
> 
> Comments welcome!
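
(For reference, the per-cpu callback in the new CALL_FUNCTION_VECTOR scheme
is conceptually like the sketch below. It works purely from the per-cpu tlb
state and the flush_tlb_info argument passed by the sender, so it never
touches %%esp / current. Names and fields here are illustrative, not
necessarily the exact ones in the patch.)

/*
 * Sketch of the remote TLB flush callback invoked via
 * smp_call_function (CALL_FUNCTION_VECTOR).  Illustrative only.
 */
static void flush_tlb_func(void *info)
{
	struct flush_tlb_info *f = info;

	/* Flush ipi for some other mm: ignore it (see 1a5 below). */
	if (f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
		return;

	if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
		if (f->flush_va == TLB_FLUSH_ALL)
			local_flush_tlb();
		else
			__flush_tlb_one(f->flush_va);
	} else {
		/* Lazy tlb mode: drop the mm instead of flushing. */
		leave_mm(smp_processor_id());
	}
}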


If there are no further comments, I am going to use the following comments.
Actually, all of them are useful for a full view of the code.

I will refresh the whole patchset on the 3.5-rc1 kernel. Thanks!

> 
> -------
> /*
>  * The flush IPI assumes that a thread switch happens in this order:
>  * [cpu0: the cpu that switches]
>  * 1) switch_mm() either 1a) or 1b)
>  * 1a) thread switch to a different mm
>  * 1a1) set cpu_tlbstate to TLBSTATE_OK
>  *	Now the tlb flush IPI handler flush_tlb_func won't call leave_mm
>  *	if cpu0 was in lazy tlb mode.
>  * 1a2) update cpu active_mm
>  *	Now cpu0 accepts tlb flushes for the new mm.
>  * 1a3) cpu_set(cpu, new_mm->cpu_vm_mask);
>  *	Now the other cpus will send tlb flush ipis.
>  * 1a4) change cr3.
>  * 1a5) cpu_clear(cpu, old_mm->cpu_vm_mask);
>  *	Stop ipi delivery for the old mm. This is not synchronized with
>  *	the other cpus, but flush_tlb_func ignores flush ipis for the wrong
>  *	mm, and in the worst case we perform a superfluous tlb flush.
>  * 1b) thread switch without mm change
>  *	cpu active_mm is correct, cpu0 already handles
>  *	flush ipis.
>  * 1b1) set cpu_tlbstate to TLBSTATE_OK
>  * 1b2) test_and_set the cpu bit in cpu_vm_mask.
>  *	Atomically set the bit [other cpus will start sending flush ipis],
>  *	and test the bit.
>  * 1b3) if the bit was 0: leave_mm was called, flush the tlb.
>  * 2) switch %%esp, i.e. current
>  *
>  * The interrupt must handle 2 special cases:
>  * - cr3 is changed before %%esp, i.e. it cannot use current->{active_,}mm.
>  * - the cpu performs speculative tlb reads, i.e. even if the cpu only
>  *   runs in kernel space, the cpu could load tlb entries for user space
>  *   pages.
>  *
>  * The good news is that cpu_tlbstate is local to each cpu, so there
>  * are no write/read ordering problems.
>  */
> 
> 
> 
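
For completeness, the 1a) sequence above corresponds roughly to the simplified
switch_mm() skeleton below. This is illustrative only: the real code spells the
mask updates as cpumask_set_cpu()/cpumask_clear_cpu() rather than the older
cpu_set()/cpu_clear(), and details such as the LDT reload are omitted.

/*
 * Simplified skeleton of switch_mm() for the "different mm" case (1a),
 * showing only the steps the comment above refers to.
 */
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
			     struct task_struct *tsk)
{
	unsigned int cpu = smp_processor_id();

	if (likely(prev != next)) {
		/* 1a1) a concurrent flush ipi must not call leave_mm() */
		this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
		/* 1a2) accept tlb flushes for the new mm */
		this_cpu_write(cpu_tlbstate.active_mm, next);
		/* 1a3) the other cpus start sending us flush ipis for next */
		cpumask_set_cpu(cpu, mm_cpumask(next));
		/* 1a4) switch the page tables */
		load_cr3(next->pgd);
		/*
		 * 1a5) stop ipi delivery for the old mm; a racing ipi for
		 * prev is simply ignored by flush_tlb_func().
		 */
		cpumask_clear_cpu(cpu, mm_cpumask(prev));
	}
}

The point of the ordering is that 1a3) happens before 1a4): by the time the new
page tables are loaded, this cpu already receives flush ipis for the new mm and
cannot miss an invalidation.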


