Message-ID: <lsq.1487198501.309887329@decadent.org.uk>
Date: Wed, 15 Feb 2017 22:41:41 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"John David Anglin" <dave.anglin@...l.net>,
"Helge Deller" <deller@....de>
Subject: [PATCH 3.16 281/306] parisc: Purge TLB before setting PTE
3.16.40-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: John David Anglin <dave.anglin@...l.net>
commit c78e710c1c9fbeff43dddc0aa3d0ff458e70b0cc upstream.
This change interchanges the order of purging the TLB and
setting the corresponding page table entry. TLB purges are strongly
ordered. It occurred to me one night that setting the PTE first might
have subtle ordering issues on SMP machines and cause random memory
corruption.
A TLB lock guards the insertion of user TLB entries. So after the TLB
is purged, a new entry can't be inserted until the lock is released.
This ensures that the new PTE value is used when the lock is released.
Since making this change, no random segmentation faults have been
observed on the Debian hppa buildd servers.
Signed-off-by: John David Anglin <dave.anglin@...l.net>
Signed-off-by: Helge Deller <deller@....de>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
arch/parisc/include/asm/pgtable.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -48,8 +48,8 @@ extern void purge_tlb_entries(struct mm_
do { \
unsigned long flags; \
spin_lock_irqsave(&pa_dbit_lock, flags); \
- set_pte(ptep, pteval); \
purge_tlb_entries(mm, addr); \
+ set_pte(ptep, pteval); \
spin_unlock_irqrestore(&pa_dbit_lock, flags); \
} while (0)
@@ -452,8 +452,8 @@ static inline int ptep_test_and_clear_yo
spin_unlock_irqrestore(&pa_dbit_lock, flags);
return 0;
}
- set_pte(ptep, pte_mkold(pte));
purge_tlb_entries(vma->vm_mm, addr);
+ set_pte(ptep, pte_mkold(pte));
spin_unlock_irqrestore(&pa_dbit_lock, flags);
return 1;
}
@@ -466,8 +466,8 @@ static inline pte_t ptep_get_and_clear(s
spin_lock_irqsave(&pa_dbit_lock, flags);
old_pte = *ptep;
- pte_clear(mm,addr,ptep);
purge_tlb_entries(mm, addr);
+ pte_clear(mm,addr,ptep);
spin_unlock_irqrestore(&pa_dbit_lock, flags);
return old_pte;
@@ -477,8 +477,8 @@ static inline void ptep_set_wrprotect(st
{
unsigned long flags;
spin_lock_irqsave(&pa_dbit_lock, flags);
- set_pte(ptep, pte_wrprotect(*ptep));
purge_tlb_entries(mm, addr);
+ set_pte(ptep, pte_wrprotect(*ptep));
spin_unlock_irqrestore(&pa_dbit_lock, flags);
}