Message-ID: <1337955821.9783.208.camel@laptop>
Date: Fri, 25 May 2012 16:23:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Alex Shi <alex.shi@...el.com>
Cc: tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
seto.hidetoshi@...fujitsu.com, borislav.petkov@....com,
tony.luck@...el.com, luto@....edu, jbeulich@...e.com,
rostedt@...dmis.org, ak@...ux.intel.com, akpm@...ux-foundation.org,
eric.dumazet@...il.com, akinobu.mita@...il.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/tlb: replace INVALIDATE_TLB_VECTOR by CALL_FUNCTION_VECTOR
On Sat, 2012-05-19 at 10:07 +0800, Alex Shi wrote:
>
> /*
> - *
> - * The flush IPI assumes that a thread switch happens in this order:
> - * [cpu0: the cpu that switches]
> - * 1) switch_mm() either 1a) or 1b)
> - * 1a) thread switch to a different mm
> - * 1a1) cpu_clear(cpu, old_mm->cpu_vm_mask);
> - * Stop ipi delivery for the old mm. This is not synchronized with
> - * the other cpus, but smp_invalidate_interrupt ignore flush ipis
> - * for the wrong mm, and in the worst case we perform a superfluous
> - * tlb flush.
> - * 1a2) set cpu mmu_state to TLBSTATE_OK
> - * Now the smp_invalidate_interrupt won't call leave_mm if cpu0
> - * was in lazy tlb mode.
> - * 1a3) update cpu active_mm
> - * Now cpu0 accepts tlb flushes for the new mm.
> - * 1a4) cpu_set(cpu, new_mm->cpu_vm_mask);
> - * Now the other cpus will send tlb flush ipis.
> - * 1a4) change cr3.
> - * 1b) thread switch without mm change
> - * cpu active_mm is correct, cpu0 already handles
> - * flush ipis.
> - * 1b1) set cpu mmu_state to TLBSTATE_OK
> - * 1b2) test_and_set the cpu bit in cpu_vm_mask.
> - * Atomically set the bit [other cpus will start sending flush ipis],
> - * and test the bit.
> - * 1b3) if the bit was 0: leave_mm was called, flush the tlb.
> - * 2) switch %%esp, ie current
> - *
> - * The interrupt must handle 2 special cases:
> - * - cr3 is changed before %%esp, ie. it cannot use current->{active_,}mm.
> - * - the cpu performs speculative tlb reads, i.e. even if the cpu only
> - * runs in kernel space, the cpu could load tlb entries for user space
> - * pages.
> - *
> - * The good news is that cpu mmu_state is local to each cpu, no
> - * write/read ordering problems.
> - */
It would be nice to update that comment instead of removing it.