Message-ID: <aAoAU4vhrpxiXaLF@pollux>
Date: Thu, 24 Apr 2025 11:11:47 +0200
From: Danilo Krummrich <dakr@...nel.org>
To: Kees Cook <kees@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Erhard Furtner <erhard_f@...lbox.org>,
	Michal Hocko <mhocko@...e.com>, Vlastimil Babka <vbabka@...e.cz>,
	Uladzislau Rezki <urezki@...il.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH] mm: vmalloc: Support more granular vrealloc() sizing

On Wed, Apr 23, 2025 at 07:31:23PM -0700, Kees Cook wrote:
> Introduce struct vm_struct::requested_size so that the requested
> (re)allocation size is retained separately from the allocated area
> size. This lets KASAN poison exactly the spans of bytes that were
> requested, and it lets us grow the usable portion of an allocation
> in place whenever the existing area is already large enough to back
> the new size.
> 
> Reported-by: Erhard Furtner <erhard_f@...lbox.org>
> Closes: https://lore.kernel.org/all/20250408192503.6149a816@outsider.home/
> Fixes: 3ddc2fefe6f3 ("mm: vmalloc: implement vrealloc()")
> Signed-off-by: Kees Cook <kees@...nel.org>

Good catch!

One question below, otherwise

	Reviewed-by: Danilo Krummrich <dakr@...nel.org>
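
To illustrate the bookkeeping the patch introduces, a rough userspace
model of the two in-place paths (shrink, and grow within the already
allocated area) might look like the sketch below. Everything in it is
made up for the example -- struct fake_vm_area, fake_vrealloc_inplace()
and the sizes are stand-ins, and the KASAN poisoning is left out:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the vm_struct fields discussed above. */
struct fake_vm_area {
	void *addr;
	size_t alloced_size;	/* size of the backing allocation */
	size_t requested_size;	/* what the caller last asked for */
};

/*
 * Model of the two in-place paths: shrink, and grow within the
 * existing allocation. Returns the unchanged pointer, or NULL when a
 * real reallocation would be needed.
 */
static void *fake_vrealloc_inplace(struct fake_vm_area *vm, size_t size)
{
	size_t old_size = vm->requested_size;

	if (size <= old_size) {
		/* Shrink: scrub the bytes the caller hands back. */
		memset((char *)vm->addr + size, 0, old_size - size);
		vm->requested_size = size;
		return vm->addr;
	}

	if (size <= vm->alloced_size) {
		/* Grow in place: scrub the bytes newly exposed to the caller. */
		memset((char *)vm->addr + old_size, 0, size - old_size);
		vm->requested_size = size;
		return vm->addr;
	}

	return NULL;	/* needs a fresh, larger allocation */
}

int main(void)
{
	struct fake_vm_area vm = {
		.addr = calloc(1, 8192),
		.alloced_size = 8192,		/* two 4 KiB "pages" */
		.requested_size = 5000,
	};

	printf("shrink to 3000: %p\n", fake_vrealloc_inplace(&vm, 3000));
	printf("grow to 6000:   %p\n", fake_vrealloc_inplace(&vm, 6000));
	printf("grow to 9000:   %p\n", fake_vrealloc_inplace(&vm, 9000));

	free(vm.addr);
	return 0;
}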

> @@ -4088,14 +4093,27 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  	 * would be a good heuristic for when to shrink the vm_area?
>  	 */
>  	if (size <= old_size) {
> -		/* Zero out spare memory. */
> -		if (want_init_on_alloc(flags))
> +		/* Zero out "freed" memory. */
> +		if (want_init_on_free())
>  			memset((void *)p + size, 0, old_size - size);
> +		vm->requested_size = size;
>  		kasan_poison_vmalloc(p + size, old_size - size);
>  		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
>  		return (void *)p;
>  	}
>  
> +	/*
> +	 * We already have the bytes available in the allocation; use them.
> +	 */
> +	if (size <= alloced_size) {
> +		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
> +		/* Zero out "alloced" memory. */
> +		if (want_init_on_alloc(flags))
> +			memset((void *)p + old_size, 0, size - old_size);
> +		vm->requested_size = size;
> +		kasan_poison_vmalloc(p + size, alloced_size - size);

Do we need this? We know that old_size < size <= alloced_size, and since
[p + old_size, p + alloced_size) must previously have been poisoned,
[p + size, p + alloced_size) must already be poisoned?

Maybe there was a reason, since in the above (size <= old_size) case
kasan_unpoison_vmalloc() seems unnecessary too.
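
To make the invariant explicit, a toy byte-granular model of the poison
state (purely illustrative -- this is not how KASAN's shadow actually
works) shows the tail stays poisoned across the grow-in-place path:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define ALLOCED_SIZE 8192

/* One flag per byte of the fake allocation: true == poisoned. */
static bool poisoned[ALLOCED_SIZE];

static void poison(size_t start, size_t len)   { memset(&poisoned[start], 1, len); }
static void unpoison(size_t start, size_t len) { memset(&poisoned[start], 0, len); }

static bool range_poisoned(size_t start, size_t end)
{
	for (size_t i = start; i < end; i++)
		if (!poisoned[i])
			return false;
	return true;
}

int main(void)
{
	size_t old_size = 3000, size = 6000;	/* old_size < size <= ALLOCED_SIZE */

	/* After the original allocation: only the requested bytes are usable. */
	unpoison(0, old_size);
	poison(old_size, ALLOCED_SIZE - old_size);

	/* Grow-in-place path: unpoison the newly requested range. */
	unpoison(0, size);

	/*
	 * The tail [size, ALLOCED_SIZE) was never unpoisoned, so it is
	 * still poisoned without an extra poison call.
	 */
	printf("tail still poisoned: %s\n",
	       range_poisoned(size, ALLOCED_SIZE) ? "yes" : "no");
	return 0;
}

If that assumption holds, the extra kasan_poison_vmalloc() in the grow
path would indeed be redundant, which is the reasoning above.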
