Date:	Thu, 16 Jan 2014 17:43:44 +0000
From:	Will Deacon <will.deacon@....com>
To:	Kyle McMartin <kyle@...hat.com>
Cc:	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>,
	Catalin Marinas <Catalin.Marinas@....com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] aarch64: always map VDSO at worst case alignment

Hi Kyle,

On Wed, Jan 15, 2014 at 09:41:44PM +0000, Kyle McMartin wrote:
> Currently on ARM64 with 4K pages configured, GDB fails to load the VDSO with
> the error "Failed to read a valid object file image from memory", as it
> applies the phdr alignment to the vma and attempts to read below where
> the VDSO is mapped. This happens because our segment alignment is 64K in
> the ELF headers, while the VDSO only gets PAGE_SIZE alignment from
> get_unmapped_area.
> 
> Work around this by calling vm_unmapped_area() directly, specifying the
> worst-case alignment (64K) explicitly.
> 
> With this patch applied, I no longer have issues loading the VDSO in
> gdb (and no longer see the error message every time I run a program under it).
> 
> Signed-off-by: Kyle McMartin <kyle@...hat.com>
> 
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -163,7 +163,18 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
>  	vdso_mapping_len = (vdso_pages + 1) << PAGE_SHIFT;
>  
>  	down_write(&mm->mmap_sem);
> -	vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
> +	{
> +		/* the VDSO must be worst-case aligned to 64K */
> +		struct vm_unmapped_area_info info =
> +			{
> +				.flags = 0,
> +				.length = vdso_mapping_len,
> +				.low_limit = mm->mmap_base,
> +				.high_limit = TASK_SIZE,
> +				.align_mask = (1 << 16) - 1,
> +			};
> +		vdso_base = vm_unmapped_area(&info);
> +	}
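
(For context, here is a minimal user-space sketch, not part of either patch,
that shows the mismatch gdb trips over: it locates the vdso via
AT_SYSINFO_EHDR and compares the PT_LOAD p_align recorded in its ELF headers
with the alignment of the address the kernel actually mapped it at. It
assumes a 64-bit ELF vdso and glibc's getauxval().)

/*
 * Minimal sketch: compare the vdso's PT_LOAD p_align with the alignment
 * of the address the kernel actually mapped it at -- the mismatch gdb
 * complains about. Assumes a 64-bit ELF vdso and glibc's getauxval().
 */
#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* AT_SYSINFO_EHDR points at the vdso's ELF header. */
	unsigned long base = getauxval(AT_SYSINFO_EHDR);
	if (!base)
		return 1;

	Elf64_Ehdr *ehdr = (Elf64_Ehdr *)base;
	Elf64_Phdr *phdr = (Elf64_Phdr *)(base + ehdr->e_phoff);

	for (int i = 0; i < ehdr->e_phnum; i++) {
		if (phdr[i].p_type != PT_LOAD)
			continue;
		printf("vdso at %#lx, p_align %#llx, base %% p_align = %#llx\n",
		       base,
		       (unsigned long long)phdr[i].p_align,
		       (unsigned long long)(base & (phdr[i].p_align - 1)));
	}
	return 0;
}

(On an affected kernel, "base % p_align" will usually be non-zero for the
64K-aligned segment.)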

I don't like this fix. The kernel is perfectly entitled to map the vdso at
the actual page size, as opposed to the worst-case maximum. Since the vdso
isn't demand-paged, we can simply tell the linker not to bother forcing 64k
(worst-case) alignment for PT_LOAD segments. Please can you try the patch
below?

Will

--->8

diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index d8064af42e62..6d20b7d162d8 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -48,7 +48,7 @@ $(obj-vdso): %.o: %.S
 
 # Actual build commands
 quiet_cmd_vdsold = VDSOL $@
-      cmd_vdsold = $(CC) $(c_flags) -Wl,-T $^ -o $@
+      cmd_vdsold = $(CC) $(c_flags) -Wl,-n -Wl,-T $^ -o $@
 quiet_cmd_vdsoas = VDSOA $@
       cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
 
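(To check the effect of the -Wl,-n change, one could dump the PT_LOAD
alignment of the built vdso.so with a small sketch like the one below; with
the linker change applied, p_align should no longer be the 64K worst case.
The arch/arm64/kernel/vdso/vdso.so path is simply where the object lands in
a built tree, adjust as needed.)

/*
 * Minimal sketch: print the PT_LOAD p_align of a vdso.so image on disk,
 * e.g. to confirm that linking with -Wl,-n no longer forces the 64K
 * worst-case alignment.
 */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	/* Default path assumes a built arm64 kernel tree; override via argv. */
	const char *path = argc > 1 ? argv[1] : "arch/arm64/kernel/vdso/vdso.so";
	FILE *f = fopen(path, "rb");
	Elf64_Ehdr ehdr;

	if (!f || fread(&ehdr, sizeof(ehdr), 1, f) != 1) {
		perror(path);
		return 1;
	}

	Elf64_Phdr *phdr = calloc(ehdr.e_phnum, sizeof(*phdr));
	if (!phdr || fseek(f, ehdr.e_phoff, SEEK_SET) ||
	    fread(phdr, sizeof(*phdr), ehdr.e_phnum, f) != ehdr.e_phnum)
		return 1;

	for (int i = 0; i < ehdr.e_phnum; i++)
		if (phdr[i].p_type == PT_LOAD)
			printf("PT_LOAD %d: p_align = %#llx\n", i,
			       (unsigned long long)phdr[i].p_align);

	free(phdr);
	fclose(f);
	return 0;
}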