Message-ID: <20171127085906.uth5hldrtbbqsnkr@hirez.programming.kicks-ass.net>
Date: Mon, 27 Nov 2017 09:59:06 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Ingo Molnar <mingo@...nel.org>
Cc: linux-kernel@...r.kernel.org,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...capital.net>,
Thomas Gleixner <tglx@...utronix.de>,
"H . Peter Anvin" <hpa@...or.com>, Borislav Petkov <bp@...en8.de>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH] x86/mm/kaiser: Use the other page_table_lock pattern

Subject: x86/mm/kaiser: Use the other page_table_lock pattern
From: Peter Zijlstra <peterz@...radead.org>
Date: Mon Nov 27 09:35:08 CET 2017

Use the other page_table_lock pattern; this moves the free_page() call
out from under the lock, reducing worst-case hold times and making it a
leaf lock.

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 arch/x86/mm/kaiser.c |   24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

--- a/arch/x86/mm/kaiser.c
+++ b/arch/x86/mm/kaiser.c
@@ -183,11 +183,13 @@ static pte_t *kaiser_shadow_pagetable_wa
 			return NULL;
 
 		spin_lock(&shadow_table_allocation_lock);
-		if (p4d_none(*p4d))
+		if (p4d_none(*p4d)) {
 			set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
-		else
-			free_page(new_pud_page);
+			new_pud_page = 0;
+		}
 		spin_unlock(&shadow_table_allocation_lock);
+		if (new_pud_page)
+			free_page(new_pud_page);
 	}
 
 	pud = pud_offset(p4d, address);
@@ -202,11 +204,13 @@ static pte_t *kaiser_shadow_pagetable_wa
 			return NULL;
 
 		spin_lock(&shadow_table_allocation_lock);
-		if (pud_none(*pud))
+		if (pud_none(*pud)) {
 			set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
-		else
-			free_page(new_pmd_page);
+			new_pmd_page = 0;
+		}
 		spin_unlock(&shadow_table_allocation_lock);
+		if (new_pmd_page)
+			free_page(new_pmd_page);
 	}
 
 	pmd = pmd_offset(pud, address);
@@ -221,11 +225,13 @@ static pte_t *kaiser_shadow_pagetable_wa
 			return NULL;
 
 		spin_lock(&shadow_table_allocation_lock);
-		if (pmd_none(*pmd))
+		if (pmd_none(*pmd)) {
 			set_pmd(pmd, __pmd(_KERNPG_TABLE | __pa(new_pte_page)));
-		else
-			free_page(new_pte_page);
+			new_pte_page = 0;
+		}
 		spin_unlock(&shadow_table_allocation_lock);
+		if (new_pte_page)
+			free_page(new_pte_page);
 	}
 
 	pte = pte_offset_kernel(pmd, address);