Subject: x86,mm: flush TLB on spurious fault

It appears that certain x86 CPUs do not automatically flush the TLB
entry that caused a page fault, causing spurious faults to loop
forever under certain circumstances.

Remove the dummy flush_tlb_fix_spurious_fault define, so x86 falls
back to the asm-generic version, which does do a local TLB flush.

Signed-off-by: Rik van Riel
Reported-by: Stanislav Meduna
---
 arch/x86/include/asm/pgtable.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 1e67223..43e7966 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -729,8 +729,6 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
 	pte_update(mm, addr, ptep);
 }
 
-#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
-
 #define mk_pmd(page, pgprot)	pfn_pmd(page_to_pfn(page), (pgprot))
 
 #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
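
[Editorial note, not part of the patch: with the x86 override removed, the
generic definition in include/asm-generic/pgtable.h takes effect. From
memory of kernel sources of this era, it reads essentially as follows --
treat this as a sketch, not a verbatim quote:]

```c
/* include/asm-generic/pgtable.h (sketch): architectures that do not
 * provide their own flush_tlb_fix_spurious_fault() get a definition
 * that flushes the faulting address from the local TLB, so a stale
 * entry cannot keep re-triggering the same spurious fault.
 */
#ifndef flush_tlb_fix_spurious_fault
#define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
#endif
```

[This is why deleting the empty do-nothing define is sufficient: the
generic fallback supplies the flush_tlb_page() call on the faulting
address.]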