Open Source and information security mailing list archives
Date: Tue, 2 Jun 2020 11:02:55 -0700
From: Andrei Vagin <avagin@...il.com>
To: linux-arm-kernel@...ts.infradead.org,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>
Cc: linux-kernel@...r.kernel.org,
	Vincenzo Frascino <vincenzo.frascino@....com>,
	Mark Rutland <mark.rutland@....com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Dmitry Safonov <dima@...sta.com>,
	Andrei Vagin <avagin@...il.com>
Subject: [PATCH 2/6] arm64/vdso: Zap vvar pages when switching to a time namespace

The VVAR page layout depends on whether a task belongs to the root or
non-root time namespace. Whenever a task changes its namespace, the VVAR
page tables are cleared and then they will be re-faulted with a
corresponding layout.

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@....com>
Signed-off-by: Andrei Vagin <avagin@...il.com>
---
 arch/arm64/kernel/vdso.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 031ee1a8d4a8..33df3cdf7982 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -131,6 +131,38 @@ static int __vdso_init(enum arch_vdso_type arch_index)
 	return 0;
 }
 
+#ifdef CONFIG_TIME_NS
+/*
+ * The vvar page layout depends on whether a task belongs to the root or
+ * non-root time namespace. Whenever a task changes its namespace, the VVAR
+ * page tables are cleared and then they will be re-faulted with a
+ * corresponding layout.
+ * See also the comment near timens_setup_vdso_data() for details.
+ */
+int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
+{
+	struct mm_struct *mm = task->mm;
+	struct vm_area_struct *vma;
+
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+		unsigned long size = vma->vm_end - vma->vm_start;
+
+		if (vma_is_special_mapping(vma, vdso_lookup[ARM64_VDSO].dm))
+			zap_page_range(vma, vma->vm_start, size);
+#ifdef CONFIG_COMPAT_VDSO
+		if (vma_is_special_mapping(vma, vdso_lookup[ARM64_VDSO32].dm))
+			zap_page_range(vma, vma->vm_start, size);
+#endif
+	}
+
+	up_write(&mm->mmap_sem);
+	return 0;
+}
+#endif
+
 static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
 			     struct vm_area_struct *vma, struct vm_fault *vmf)
 {
-- 
2.24.1