Date: Sun, 19 Feb 2017 15:33:44 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: akpm@...ux-foundation.org, Rik van Riel <riel@...riel.com>,
	Mel Gorman <mgorman@...hsingularity.net>, paulus@...abs.org,
	benh@...nel.crashing.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	linuxppc-dev@...ts.ozlabs.org,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Subject: [PATCH V3 2/3] mm/ksm: Handle protnone saved writes when making page write protect

Without this KSM will consider the page write protected, but a numa fault
can later mark the page writable. This can result in memory corruption.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
---
 include/asm-generic/pgtable.h | 8 ++++++++
 mm/ksm.c                      | 9 +++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index b6f3a8a4b738..8c8ba48bef0b 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -200,6 +200,10 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #define pte_mk_savedwrite pte_mkwrite
 #endif
 
+#ifndef pte_clear_savedwrite
+#define pte_clear_savedwrite pte_wrprotect
+#endif
+
 #ifndef pmd_savedwrite
 #define pmd_savedwrite pmd_write
 #endif
@@ -208,6 +212,10 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #define pmd_mk_savedwrite pmd_mkwrite
 #endif
 
+#ifndef pmd_clear_savedwrite
+#define pmd_clear_savedwrite pmd_wrprotect
+#endif
+
 #ifndef __HAVE_ARCH_PMDP_SET_WRPROTECT
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
diff --git a/mm/ksm.c b/mm/ksm.c
index 9ae6011a41f8..768202831578 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -872,7 +872,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	if (!ptep)
 		goto out_mn;
 
-	if (pte_write(*ptep) || pte_dirty(*ptep)) {
+	if (pte_write(*ptep) || pte_dirty(*ptep) ||
+	    (pte_protnone(*ptep) && pte_savedwrite(*ptep))) {
 		pte_t entry;
 
 		swapped = PageSwapCache(page);
@@ -897,7 +898,11 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 		}
 		if (pte_dirty(entry))
 			set_page_dirty(page);
-		entry = pte_mkclean(pte_wrprotect(entry));
+
+		if (pte_protnone(entry))
+			entry = pte_mkclean(pte_clear_savedwrite(entry));
+		else
+			entry = pte_mkclean(pte_wrprotect(entry));
 		set_pte_at_notify(mm, addr, ptep, entry);
 	}
 	*orig_pte = *ptep;
-- 
2.7.4
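
[Editor's note] For readers following the logic of the hunk in write_protect_page(): a protnone PTE whose saved-write bit is set must be treated as if it were writable, and "write protecting" it means clearing the saved-write bit, because the hardware write bit is already clear. The sketch below is a minimal userspace C model of that decision only; the pte_t layout, flag values, and helper bodies are invented stand-ins for the kernel's pte_write()/pte_protnone()/pte_savedwrite()/pte_clear_savedwrite(), not the real implementations.

	/*
	 * Hypothetical userspace model of the patched condition in
	 * write_protect_page(). Illustration only, not kernel code.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define PTE_WRITE	0x1	/* hardware-writable */
	#define PTE_DIRTY	0x2
	#define PTE_PROTNONE	0x4	/* NUMA hinting fault armed */
	#define PTE_SAVEDWRITE	0x8	/* write permission saved while protnone */

	typedef struct { unsigned int flags; } pte_t;

	static bool pte_write(pte_t p)      { return p.flags & PTE_WRITE; }
	static bool pte_dirty(pte_t p)      { return p.flags & PTE_DIRTY; }
	static bool pte_protnone(pte_t p)   { return p.flags & PTE_PROTNONE; }
	static bool pte_savedwrite(pte_t p) { return p.flags & PTE_SAVEDWRITE; }

	static pte_t pte_wrprotect(pte_t p)        { p.flags &= ~PTE_WRITE; return p; }
	static pte_t pte_clear_savedwrite(pte_t p) { p.flags &= ~PTE_SAVEDWRITE; return p; }
	static pte_t pte_mkclean(pte_t p)          { p.flags &= ~PTE_DIRTY; return p; }

	/*
	 * Mirrors the patched decision: protnone + savedwrite counts as
	 * writable, and write-protecting such an entry clears the
	 * saved-write bit instead of the (already clear) write bit.
	 */
	static pte_t ksm_write_protect(pte_t entry)
	{
		if (pte_write(entry) || pte_dirty(entry) ||
		    (pte_protnone(entry) && pte_savedwrite(entry))) {
			if (pte_protnone(entry))
				entry = pte_mkclean(pte_clear_savedwrite(entry));
			else
				entry = pte_mkclean(pte_wrprotect(entry));
		}
		return entry;
	}

	int main(void)
	{
		/* A NUMA-protnone PTE that still carries saved write permission. */
		pte_t pte = { PTE_PROTNONE | PTE_SAVEDWRITE | PTE_DIRTY };

		pte = ksm_write_protect(pte);
		printf("savedwrite=%d dirty=%d\n",
		       pte_savedwrite(pte), pte_dirty(pte));
		/*
		 * Prints "savedwrite=0 dirty=0": a later NUMA fault can no
		 * longer restore write access behind KSM's back, which is the
		 * corruption scenario the patch closes.
		 */
		return 0;
	}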