Date:	Thu, 18 Feb 2016 12:21:50 -0800
From:	tip-bot for Dave Hansen <tipbot@...or.com>
To:	linux-tip-commits@...r.kernel.org
Cc:	tglx@...utronix.de, linux-kernel@...r.kernel.org, mingo@...nel.org,
	riel@...hat.com, brgerst@...il.com, peterz@...radead.org,
	bp@...en8.de, dave.hansen@...ux.intel.com, luto@...capital.net,
	dvlasenk@...hat.com, hpa@...or.com, torvalds@...ux-foundation.org,
	dave@...1.net, akpm@...ux-foundation.org
Subject: [tip:mm/pkeys] x86/mm/gup: Simplify get_user_pages() PTE bit handling

Commit-ID:  1874f6895c92d991ccf85edcc55a0d9dd552d71c
Gitweb:     http://git.kernel.org/tip/1874f6895c92d991ccf85edcc55a0d9dd552d71c
Author:     Dave Hansen <dave.hansen@...ux.intel.com>
AuthorDate: Fri, 12 Feb 2016 13:02:18 -0800
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 18 Feb 2016 09:32:44 +0100

x86/mm/gup: Simplify get_user_pages() PTE bit handling

The current get_user_pages() code is a wee bit more complicated
than it needs to be for pte bit checking.  Currently, it
establishes a mask of required pte _PAGE_* bits and ensures that
the pte it goes after has all those bits.

This consolidates the three identical copies of this code.

Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...capital.net>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Dave Hansen <dave@...1.net>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: linux-mm@...ck.org
Link: http://lkml.kernel.org/r/20160212210218.3A2D4045@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/mm/gup.c | 38 ++++++++++++++++++++++----------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index ce5e454..2f0a329 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -75,6 +75,24 @@ static void undo_dev_pagemap(int *nr, int nr_start, struct page **pages)
 }
 
 /*
+ * 'pteval' can come from a pte, pmd or pud.  We only check
+ * _PAGE_PRESENT, _PAGE_USER, and _PAGE_RW in here which are the
+ * same value on all 3 types.
+ */
+static inline int pte_allows_gup(unsigned long pteval, int write)
+{
+	unsigned long need_pte_bits = _PAGE_PRESENT|_PAGE_USER;
+
+	if (write)
+		need_pte_bits |= _PAGE_RW;
+
+	if ((pteval & need_pte_bits) != need_pte_bits)
+		return 0;
+
+	return 1;
+}
+
+/*
  * The performance critical leaf functions are made noinline otherwise gcc
  * inlines everything into a single function which results in too much
  * register pressure.
@@ -83,14 +101,9 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
-	unsigned long mask;
 	int nr_start = *nr;
 	pte_t *ptep;
 
-	mask = _PAGE_PRESENT|_PAGE_USER;
-	if (write)
-		mask |= _PAGE_RW;
-
 	ptep = pte_offset_map(&pmd, addr);
 	do {
 		pte_t pte = gup_get_pte(ptep);
@@ -110,7 +123,8 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 				pte_unmap(ptep);
 				return 0;
 			}
-		} else if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) {
+		} else if (!pte_allows_gup(pte_val(pte), write) ||
+			   pte_special(pte)) {
 			pte_unmap(ptep);
 			return 0;
 		}
@@ -164,14 +178,10 @@ static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
 static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
-	unsigned long mask;
 	struct page *head, *page;
 	int refs;
 
-	mask = _PAGE_PRESENT|_PAGE_USER;
-	if (write)
-		mask |= _PAGE_RW;
-	if ((pmd_flags(pmd) & mask) != mask)
+	if (!pte_allows_gup(pmd_val(pmd), write))
 		return 0;
 
 	VM_BUG_ON(!pfn_valid(pmd_pfn(pmd)));
@@ -231,14 +241,10 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
-	unsigned long mask;
 	struct page *head, *page;
 	int refs;
 
-	mask = _PAGE_PRESENT|_PAGE_USER;
-	if (write)
-		mask |= _PAGE_RW;
-	if ((pud_flags(pud) & mask) != mask)
+	if (!pte_allows_gup(pud_val(pud), write))
 		return 0;
 	/* hugepages are never "special" */
 	VM_BUG_ON(pud_flags(pud) & _PAGE_SPECIAL);
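
For readers trying the logic outside the kernel tree, below is a minimal,
self-contained user-space sketch of the consolidated pte_allows_gup() check
that the patch introduces: one helper replaces the three copies of the
mask-building code in gup_pte_range(), gup_huge_pmd() and gup_huge_pud().
The _PAGE_* bit values here are stand-ins chosen for illustration (the real
definitions live in the x86 pgtable headers), and main() is only a toy
demonstration; none of this is part of the patch itself.

/* Standalone sketch of the consolidated GUP permission check. */
#include <stdio.h>

#define _PAGE_PRESENT	0x001UL		/* stand-in values, not the kernel's */
#define _PAGE_RW	0x002UL
#define _PAGE_USER	0x004UL

/* Same shape as the helper added by the patch: one check for pte/pmd/pud. */
static inline int pte_allows_gup(unsigned long pteval, int write)
{
	unsigned long need_pte_bits = _PAGE_PRESENT|_PAGE_USER;

	if (write)
		need_pte_bits |= _PAGE_RW;

	/* every required bit must be set in the entry */
	return (pteval & need_pte_bits) == need_pte_bits;
}

int main(void)
{
	unsigned long ro_pte = _PAGE_PRESENT|_PAGE_USER; /* read-only user pte */

	printf("read  gup: %d\n", pte_allows_gup(ro_pte, 0)); /* 1: allowed */
	printf("write gup: %d\n", pte_allows_gup(ro_pte, 1)); /* 0: _PAGE_RW missing */

	return 0;
}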