lists.openwall.net: Open Source and information security mailing list archives
Date: Thu, 18 Feb 2016 12:23:19 -0800
From: tip-bot for Dave Hansen <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: hpa@...or.com, torvalds@...ux-foundation.org, dave@...1.net,
	dave.hansen@...ux.intel.com, peterz@...radead.org, luto@...capital.net,
	brgerst@...il.com, bp@...en8.de, riel@...hat.com, mingo@...nel.org,
	akpm@...ux-foundation.org, dvlasenk@...hat.com,
	linux-kernel@...r.kernel.org, tglx@...utronix.de
Subject: [tip:mm/pkeys] x86/mm/pkeys: Optimize fault handling in access_error()

Commit-ID:  07f146f53e8de826e4afa3a88ea65bdb13c24959
Gitweb:     http://git.kernel.org/tip/07f146f53e8de826e4afa3a88ea65bdb13c24959
Author:     Dave Hansen <dave.hansen@...ux.intel.com>
AuthorDate: Fri, 12 Feb 2016 13:02:22 -0800
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 18 Feb 2016 19:46:28 +0100

x86/mm/pkeys: Optimize fault handling in access_error()

We might not strictly have to make modifications to access_error()
to check the VMA here.  If we do not, we will do this:

 1. app sets VMA pkey to K
 2. app touches a !present page
 3. do_page_fault(), allocates and maps page, sets pte.pkey=K
 4. return to userspace
 5. touch instruction reexecutes, but triggers PF_PK
 6. do PKEY signal

What happens with this patch applied:

 1. app sets VMA pkey to K
 2. app touches a !present page
 3. do_page_fault() notices that K is inaccessible
 4. do PKEY signal

We basically skip the fault that does an allocation.

So what this lets us do is protect areas from even being *populated*
unless it is accessible according to protection keys.  That seems
handy to me and makes protection keys work more like an mprotect()'d
mapping.

Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...capital.net>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Dave Hansen <dave@...1.net>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: linux-mm@...ck.org
Link: http://lkml.kernel.org/r/20160212210222.EBB63D8C@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/mm/fault.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 319331a..68ecdff 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -900,10 +900,16 @@ bad_area(struct pt_regs *regs, unsigned long error_code, unsigned long address)
 static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 		struct vm_area_struct *vma)
 {
+	/* This code is always called on the current mm */
+	bool foreign = false;
+
 	if (!boot_cpu_has(X86_FEATURE_OSPKE))
 		return false;
 	if (error_code & PF_PK)
 		return true;
+	/* this checks permission keys on the VMA: */
+	if (!arch_vma_access_permitted(vma, (error_code & PF_WRITE), foreign))
+		return true;
 	return false;
 }

@@ -1091,6 +1097,8 @@ int show_unhandled_signals = 1;
 static inline int
 access_error(unsigned long error_code, struct vm_area_struct *vma)
 {
+	/* This is only called for the current mm, so: */
+	bool foreign = false;
 	/*
	 * Access or read was blocked by protection keys.  We do
	 * this check before any others because we do not want
@@ -1099,6 +1107,13 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
	 */
	if (error_code & PF_PK)
		return 1;
+	/*
+	 * Make sure to check the VMA so that we do not perform
+	 * faults just to hit a PF_PK as soon as we fill in a
+	 * page.
+	 */
+	if (!arch_vma_access_permitted(vma, (error_code & PF_WRITE), foreign))
+		return 1;

	if (error_code & PF_WRITE) {
		/* write, present and write, not present: */