Message-ID: <tip-8e998fc24de47c55b47a887f6c95ab91acd4a720@git.kernel.org>
Date: Mon, 22 Jul 2019 01:22:50 -0700
From: tip-bot for Joerg Roedel <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, jroedel@...e.de, hpa@...or.com,
dave.hansen@...ux.intel.com, mingo@...nel.org, tglx@...utronix.de
Subject: [tip:x86/urgent] x86/mm: Sync also unmappings in vmalloc_sync_all()

Commit-ID:  8e998fc24de47c55b47a887f6c95ab91acd4a720
Gitweb:     https://git.kernel.org/tip/8e998fc24de47c55b47a887f6c95ab91acd4a720
Author:     Joerg Roedel <jroedel@...e.de>
AuthorDate: Fri, 19 Jul 2019 20:46:51 +0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Mon, 22 Jul 2019 10:18:30 +0200

x86/mm: Sync also unmappings in vmalloc_sync_all()

With huge-page ioremap areas the unmappings also need to be synced between
all page-tables. Otherwise it can cause data corruption when a region is
unmapped and later re-used.

Make the vmalloc_sync_one() function ready to sync unmappings and make sure
vmalloc_sync_all() iterates over all page-tables even when an unmapped PMD
is found.
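
For illustration, below is a minimal user-space model of the changed
check (hypothetical standalone C, not kernel code; pmd_model,
sync_one_old() and sync_one_new() are invented names). A bool stands in
for the PMD present bit and plain assignment stands in for set_pmd().
It shows why comparing the two present bits propagates unmappings as
well as mappings, which the old early return on a non-present reference
entry never did:

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for a PMD entry: just a present bit and a pfn. */
struct pmd_model {
	bool present;
	unsigned long pfn;
};

/*
 * Old logic: bail out while the reference entry is not present, so a
 * stale local mapping is never cleared.
 */
static bool sync_one_old(struct pmd_model *pmd, const struct pmd_model *pmd_k)
{
	if (!pmd_k->present)
		return false;		/* models "return NULL" */
	if (!pmd->present)
		*pmd = *pmd_k;
	return true;
}

/*
 * New logic: copy the reference entry whenever the two present bits
 * disagree, so an unmapping (reference gone, local still present) is
 * propagated just like a fresh mapping.
 */
static bool sync_one_new(struct pmd_model *pmd, const struct pmd_model *pmd_k)
{
	if (pmd->present != pmd_k->present)
		*pmd = *pmd_k;		/* models set_pmd(pmd, *pmd_k) */
	return pmd_k->present;		/* models the NULL/non-NULL return */
}

int main(void)
{
	/* The region was unmapped in the reference page-table, but this
	 * page-table still holds a stale (huge) mapping. */
	struct pmd_model ref = { .present = false, .pfn = 0 };
	struct pmd_model a = { .present = true, .pfn = 42 };
	struct pmd_model b = a;

	sync_one_old(&a, &ref);
	sync_one_new(&b, &ref);

	printf("old logic leaves entry present: %d\n", a.present);	/* 1 */
	printf("new logic leaves entry present: %d\n", b.present);	/* 0 */
	return 0;
}
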
Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Reviewed-by: Dave Hansen <dave.hansen@...ux.intel.com>
Link: https://lkml.kernel.org/r/20190719184652.11391-3-joro@8bytes.org
---
arch/x86/mm/fault.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e64173db4970..9ceacd1156db 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -177,11 +177,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
-	if (!pmd_present(*pmd_k))
-		return NULL;
 
-	if (!pmd_present(*pmd))
+	if (pmd_present(*pmd) != pmd_present(*pmd_k))
 		set_pmd(pmd, *pmd_k);
+
+	if (!pmd_present(*pmd_k))
+		return NULL;
 	else
 		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
 
@@ -203,17 +204,13 @@ void vmalloc_sync_all(void)
 		spin_lock(&pgd_lock);
 		list_for_each_entry(page, &pgd_list, lru) {
 			spinlock_t *pgt_lock;
-			pmd_t *ret;
 
 			/* the pgt_lock only for Xen */
 			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
 
 			spin_lock(pgt_lock);
-			ret = vmalloc_sync_one(page_address(page), address);
+			vmalloc_sync_one(page_address(page), address);
 			spin_unlock(pgt_lock);
-
-			if (!ret)
-				break;
 		}
 		spin_unlock(&pgd_lock);
 	}
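
The removed early break is the vmalloc_sync_all() half of the same
problem: it was only harmless while vmalloc_sync_one() had nothing to do
for a non-present reference PMD, but once unmappings are propagated,
that is exactly the case that must reach every page-table on pgd_list.
A small user-space sketch of the loop behavior (again hypothetical
standalone C, with an array of present bits standing in for the
per-page-table PMD entries):

#include <stdbool.h>
#include <stdio.h>

#define NR_TABLES 4

/* Invented model: one present bit per page-table for one address. */
static bool tables[NR_TABLES];

/* Models the fixed sync step: copy on mismatch, report the reference
 * present bit via the return value. */
static bool sync_one(bool *entry, bool reference)
{
	if (*entry != reference)
		*entry = reference;
	return reference;
}

int main(void)
{
	bool reference = false;			/* region was just unmapped */

	for (int i = 0; i < NR_TABLES; i++)
		tables[i] = true;		/* every table still maps it */

	/*
	 * The pre-fix loop did "if (!sync_one(...)) break;" and would
	 * stop after the first table once the reference entry is gone,
	 * leaving tables 1..3 stale. The fixed loop visits every table
	 * unconditionally:
	 */
	for (int i = 0; i < NR_TABLES; i++)
		sync_one(&tables[i], reference);

	for (int i = 0; i < NR_TABLES; i++)
		printf("table %d present: %d\n", i, tables[i]);

	return 0;
}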