Date:	Fri, 17 Feb 2017 17:13:24 +0300
From:	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>, x86@...nel.org,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Arnd Bergmann <arnd@...db.de>, "H. Peter Anvin" <hpa@...or.com>
Cc:	Andi Kleen <ak@...ux.intel.com>, Dave Hansen <dave.hansen@...el.com>,
	Andy Lutomirski <luto@...capital.net>, linux-arch@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [PATCHv3 29/33] x86/mm: add sync_global_pgds() for configuration with 5-level paging

This basically restores a slightly modified version of the original
sync_global_pgds() which we had before the folded p4d was introduced.

The only modification is protection against 'address' overflow.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
---
 arch/x86/mm/init_64.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 07b52f81e9cd..58be5713ec09 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -92,6 +92,42 @@ __setup("noexec32=", nonx32_setup);
  * When memory was added make sure all the processes MM have
  * suitable PGD entries in the local PGD level page.
  */
+#ifdef CONFIG_X86_5LEVEL
+void sync_global_pgds(unsigned long start, unsigned long end)
+{
+	unsigned long address;
+
+	for (address = start; address <= end && address >= start;
+			address += PGDIR_SIZE) {
+		const pgd_t *pgd_ref = pgd_offset_k(address);
+		struct page *page;
+
+		if (pgd_none(*pgd_ref))
+			continue;
+
+		spin_lock(&pgd_lock);
+		list_for_each_entry(page, &pgd_list, lru) {
+			pgd_t *pgd;
+			spinlock_t *pgt_lock;
+
+			pgd = (pgd_t *)page_address(page) + pgd_index(address);
+			/* the pgt_lock only for Xen */
+			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
+			spin_lock(pgt_lock);
+
+			if (!pgd_none(*pgd_ref) && !pgd_none(*pgd))
+				BUG_ON(pgd_page_vaddr(*pgd)
+				       != pgd_page_vaddr(*pgd_ref));
+
+			if (pgd_none(*pgd))
+				set_pgd(pgd, *pgd_ref);
+
+			spin_unlock(pgt_lock);
+		}
+		spin_unlock(&pgd_lock);
+	}
+}
+#else
 void sync_global_pgds(unsigned long start, unsigned long end)
 {
 	unsigned long address;
@@ -135,6 +171,7 @@ void sync_global_pgds(unsigned long start, unsigned long end)
 		spin_unlock(&pgd_lock);
 	}
 }
+#endif
 
 /*
  * NOTE: This function is marked __ref because it calls __init function
-- 
2.11.0
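
The overflow protection the changelog mentions is the second half of the
loop condition, "address >= start". A minimal standalone userspace sketch
of why it is needed follows; it is not part of the patch, and STEP plus the
sample address range are illustrative stand-ins for PGDIR_SIZE and a kernel
address range, assuming a 64-bit unsigned long:

	#include <stdio.h>

	/* Stand-in for PGDIR_SIZE under 5-level paging (illustrative). */
	#define STEP (1UL << 48)

	int main(void)
	{
		unsigned long start = 0xff00000000000000UL; /* sample kernel range */
		unsigned long end = 0xffffffffffffffffUL;   /* last valid address */
		unsigned long address;
		int iterations = 0;

		/*
		 * With end at the very top of the address space,
		 * "address <= end" alone can never become false: the
		 * increment wraps past ULONG_MAX and the wrapped value is
		 * still <= end, so the loop would spin forever.  The extra
		 * "address >= start" test catches the wrap, since a wrapped
		 * value is below STEP and hence below any high start address.
		 */
		for (address = start; address <= end && address >= start;
				address += STEP)
			iterations++;

		printf("loop terminated after %d iterations\n", iterations);
		return 0;
	}

Run as a normal program this terminates after 256 iterations; drop the
"address >= start" test and it loops forever, which is exactly the hazard
the patched sync_global_pgds() guards against.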