Date: Thu, 24 Sep 2020 15:58:46 +0200
From: Christoph Hellwig <hch@....de>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
	Boris Ostrovsky <boris.ostrovsky@...cle.com>,
	Juergen Gross <jgross@...e.com>,
	Stefano Stabellini <sstabellini@...nel.org>,
	Jani Nikula <jani.nikula@...ux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>,
	Chris Wilson <chris@...is-wilson.co.uk>,
	Matthew Auld <matthew.auld@...el.com>,
	Rodrigo Vivi <rodrigo.vivi@...el.com>,
	Minchan Kim <minchan@...nel.org>,
	Matthew Wilcox <willy@...radead.org>,
	Nitin Gupta <ngupta@...are.org>,
	x86@...nel.org, xen-devel@...ts.xenproject.org,
	linux-kernel@...r.kernel.org, intel-gfx@...ts.freedesktop.org,
	dri-devel@...ts.freedesktop.org, linux-mm@...ck.org
Subject: [PATCH 04/11] mm: allow a NULL fn callback in apply_to_page_range

Besides calling the callback on each page, apply_to_page_range also has
the effect of pre-faulting all PTEs for the range.  To support callers
that only need the pre-faulting, make the callback optional.

Based on a patch from Minchan Kim <minchan@...nel.org>.

Signed-off-by: Christoph Hellwig <hch@....de>
---
 mm/memory.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 469af373ae76e1..a60136046d7fcc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2231,13 +2231,15 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 
 	arch_enter_lazy_mmu_mode();
 
-	do {
-		if (create || !pte_none(*pte)) {
-			err = fn(pte++, addr, data);
-			if (err)
-				break;
-		}
-	} while (addr += PAGE_SIZE, addr != end);
+	if (fn) {
+		do {
+			if (create || !pte_none(*pte)) {
+				err = fn(pte++, addr, data);
+				if (err)
+					break;
+			}
+		} while (addr += PAGE_SIZE, addr != end);
+	}
 	*mask |= PGTBL_PTE_MODIFIED;
 
 	arch_leave_lazy_mmu_mode();
-- 
2.28.0