Date:	Tue, 13 Jan 2009 17:59:24 +0000 (GMT)
From:	Hugh Dickins <hugh@...itas.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Mike Waychison <mikew@...gle.com>, yinghan@...gle.com,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	rohitseth@...gle.com
Subject: Re: [PATCH]Fix: 32bit binary has 64bit address of stack vma

On Mon, 12 Jan 2009, Andrew Morton wrote:
> On Mon, 12 Jan 2009 15:05:51 -0800
> Mike Waychison <mikew@...gle.com> wrote:
> 
> > > Subject: fs/exec.c: fix value of vma->vm_pgoff for the stack VMA of 32-bit processes
> > > From: Ying Han <yinghan@...gle.com>
> > > 
> > > With a 32 bit binary running on a 64 bit system, the /proc/pid/maps for
> > > the [stack] VMA displays a 64-bit address:
> > > 
> > > ff96c000-ff981000 rwxp 7ffffffea000 00:00 0 [stack]
> > > 
> > > This is because vma->vm_pgoff for that VMA is incorrectly being stored in
> > > units of offset-in-bytes.  It should be stored in units of offset-in-pages.
> > > 
> > 
> > The problem is that the offset was stored without taking into account 
> > the shift.
> 
> Sigh.  Is it efficient to have me sitting here reverse-engineering the
> code, writing your changelog?
> 
> 
> 
> From: Ying Han <yinghan@...gle.com>
> 
> With a 32 bit binary running on a 64 bit system, the /proc/pid/maps for
> the [stack] VMA displays a 64-bit address:
> 
> ff96c000-ff981000 rwxp 7ffffffea000 00:00 0 [stack]
> 
> This is because vma->vm_pgoff for that VMA is not being updated for the
> shift of the stack VMA.
> 
> Signed-off-by: Ying Han <yinghan@...gle.com>
> Cc: Mike Waychison <mikew@...gle.com>
> Cc: Hugh Dickins <hugh@...itas.com>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> ---

Sorry for all the waste of your time, but I'm afraid this patch is
entirely bogus, as Kamezawa-san has kindly discovered for us (in the
"Re: mmotm 2009-01-12-16-53 uploaded" thread).

The code was correct before, there was no shifting issue, and you
cannot muck around with the vm_pgoff of a vma the way this patch does.
Thanks to Nick for the CONFIG_DEBUG_VM BUG_ON, which warns that this
patch has inadvertently made these stack pages unreclaimable (because
their page->index values no longer match up with vm_pgoff).
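
For anyone not staring at the rmap code: it recovers a page's virtual
address within a vma from page->index and vm_pgoff, roughly like this
(simplified from the mm/rmap.c of that era, not an exact quote):

static inline unsigned long
vma_address(struct page *page, struct vm_area_struct *vma)
{
	/* For anon pages, page->index was set from the faulting address
	 * relative to vm_pgoff, so both are in the same page units. */
	pgoff_t pgoff = page->index;
	unsigned long address;

	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
		return -EFAULT;		/* apparently outside the vma */
	return address;
}

Change vm_pgoff without also fixing up every anon page's index, and
that subtraction lands outside [vm_start, vm_end), so try_to_unmap()
can never locate these stack pages again.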

All we have here is a cosmetic issue: we're used to seeing 8-hex-
digit offsets in the /proc/pid/maps of a 32-bit process.  Though such
a process could perfectly well mmap() a large file at a very high
offset and show more hex digits there, couldn't it?
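
Something like this (a made-up demo, not from this thread; "bigfile"
is just a placeholder for some readable file, and the mapping shows up
in maps even if the offset is past end of file, so long as we never
touch it) prints a 12-hex-digit offset in a purely 32-bit process:

#define _FILE_OFFSET_BITS 64	/* give the 32-bit build a 64-bit off_t */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("bigfile", O_RDONLY);
	if (fd < 0)
		return 1;

	/* Map one page from byte offset 0x7ffffffea000 of the file. */
	void *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd,
		       (off_t)0x7ffffffea000);
	if (p == MAP_FAILED)
		return 1;

	/* The offset column for this mapping in /proc/self/maps now reads
	 * 7ffffffea000: more than 8 hex digits, in a 32-bit process. */
	char cmd[64];
	snprintf(cmd, sizeof(cmd), "grep bigfile /proc/%d/maps", (int)getpid());
	system(cmd);

	munmap(p, 4096);
	close(fd);
	return 0;
}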

Because this 32-bit process has been started by a 64-bit kernel, its
stack was first set up at the 64-bit stack position before being
shifted into place, and that initial position happens to get reflected
in the final vm_pgoff.
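
As a toy illustration (userspace, made-up but representative numbers,
not the real exec code), that sequence reproduces exactly the maps
line reported above:

#include <stdio.h>

#define PAGE_SHIFT 12

struct toy_vma {
	unsigned long long vm_start, vm_end, vm_pgoff;
};

int main(void)
{
	/* 1. While execve() copies in the arguments, the stack vma sits
	 *    just below the 64-bit stack limit; vm_pgoff matches that
	 *    64-bit position. */
	struct toy_vma vma;
	vma.vm_start = 0x7ffffffea000ULL;
	vma.vm_end   = 0x7ffffffff000ULL;
	vma.vm_pgoff = vma.vm_start >> PAGE_SHIFT;

	/* 2. shift_arg_pages() then moves vm_start/vm_end down to the
	 *    32-bit stack area, deliberately leaving vm_pgoff (and the
	 *    pages' index values) alone. */
	unsigned long long shift = vma.vm_end - 0xff981000ULL;
	vma.vm_start -= shift;
	vma.vm_end   -= shift;

	/* 3. /proc/pid/maps prints vm_pgoff << PAGE_SHIFT as the offset
	 *    column, hence the 64-bit-looking number. */
	printf("%08llx-%08llx rwxp %08llx 00:00 0 [stack]\n",
	       vma.vm_start, vma.vm_end, vma.vm_pgoff << PAGE_SHIFT);
	return 0;
}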

You prefer only to see 8-hex-digit offsets for anonymous vmas of 32-bit
processes?  If so, then perhaps Ying Han could supply a cosmetic patch
for /proc/pid/maps to mask off the upper bits in that case.
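
Purely as an illustration of what such a cosmetic change would amount
to (not a real patch against fs/proc/task_mmu.c, and is_32bit_task
here is only a stand-in for whatever compat test would be appropriate):
the displayed offset is just vm_pgoff << PAGE_SHIFT, so it could be
truncated before printing.

static u64 maps_offset_for_display(unsigned long vm_pgoff, int is_32bit_task)
{
	u64 off = (u64)vm_pgoff << PAGE_SHIFT;

	if (is_32bit_task)
		off &= 0xffffffffULL;	/* keep at most 8 hex digits */

	return off;
}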

Hugh

> 
>  fs/exec.c |    5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff -puN fs/exec.c~fs-execc-fix-value-of-vma-vm_pgoff-for-the-stack-vma-of-32-bit-processes fs/exec.c
> --- a/fs/exec.c~fs-execc-fix-value-of-vma-vm_pgoff-for-the-stack-vma-of-32-bit-processes
> +++ a/fs/exec.c
> @@ -509,6 +509,7 @@ static int shift_arg_pages(struct vm_are
>  	unsigned long length = old_end - old_start;
>  	unsigned long new_start = old_start - shift;
>  	unsigned long new_end = old_end - shift;
> +	unsigned long new_pgoff = new_start >> PAGE_SHIFT;
>  	struct mmu_gather *tlb;
>  
>  	BUG_ON(new_start > new_end);
> @@ -523,7 +524,7 @@ static int shift_arg_pages(struct vm_are
>  	/*
>  	 * cover the whole range: [new_start, old_end)
>  	 */
> -	vma_adjust(vma, new_start, old_end, vma->vm_pgoff, NULL);
> +	vma_adjust(vma, new_start, old_end, new_pgoff, NULL);
>  
>  	/*
>  	 * move the page tables downwards, on failure we rely on
> @@ -556,7 +557,7 @@ static int shift_arg_pages(struct vm_are
>  	/*
>  	 * shrink the vma to just the new range.
>  	 */
> -	vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);
> +	vma_adjust(vma, new_start, new_end, new_pgoff, NULL);
>  
>  	return 0;
>  }
> _
