Message-ID: <66836dd6-b0c2-4f77-b2a3-c43296aa6c93@suse.cz>
Date: Tue, 30 Jul 2024 23:14:16 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Danilo Krummrich <dakr@...nel.org>, akpm@...ux-foundation.org,
 cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
 iamjoonsoo.kim@....com, roman.gushchin@...ux.dev, 42.hyeyoo@...il.com
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 kasan-dev <kasan-dev@...glegroups.com>
Subject: Re: [PATCH 1/2] mm: krealloc: consider spare memory for __GFP_ZERO

On 7/30/24 9:42 PM, Danilo Krummrich wrote:
> As long as krealloc() is called with __GFP_ZERO consistently, starting
> with the initial memory allocation, __GFP_ZERO should be fully honored.
> 
> However, if krealloc() is called with a decreased size for an existing
> allocation, it is not ensured that the spare portion of the allocation
> is zeroed. Thus, if krealloc() is subsequently called with a larger size
> again, __GFP_ZERO can't be fully honored, since we don't know the
> previous size, only the bucket size.
> 
> Example:
> 
> 	buf = kzalloc(64, GFP_KERNEL);
> 	memset(buf, 0xff, 64);
> 
> 	buf = krealloc(buf, 48, GFP_KERNEL | __GFP_ZERO);
> 
> 	/* After this call the last 16 bytes are still 0xff. */
> 	buf = krealloc(buf, 64, GFP_KERNEL | __GFP_ZERO);
> 
> Fix this by explicitly setting the spare memory to zero when shrinking an
> allocation with the __GFP_ZERO flag set or init_on_alloc enabled.
> 
> Signed-off-by: Danilo Krummrich <dakr@...nel.org>
> ---
>  mm/slab_common.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 40b582a014b8..cff602cedf8e 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1273,6 +1273,13 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>  
>  	/* If the object still fits, repoison it precisely. */
>  	if (ks >= new_size) {
> +		/* Zero out spare memory. */
> +		if (want_init_on_alloc(flags)) {
> +			kasan_disable_current();
> +			memset((void *)p + new_size, 0, ks - new_size);
> +			kasan_enable_current();

If we do kasan_krealloc() first, shouldn't the memset then be legal
afterwards without the disable/enable dance?
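
Roughly, the ordering I have in mind (untested sketch; it assumes the
[new_size, ks) range stays accessible to the memset once kasan_krealloc()
has run, which is exactly the open question above):

	/* If the object still fits, repoison it precisely. */
	if (ks >= new_size) {
		p = kasan_krealloc((void *)p, new_size, flags);
		/* Zero out spare memory, without toggling KASAN. */
		if (want_init_on_alloc(flags))
			memset((void *)p + new_size, 0, ks - new_size);
		return (void *)p;
	}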

> +		}
> +
>  		p = kasan_krealloc((void *)p, new_size, flags);
>  		return (void *)p;
>  	}
> 
> base-commit: 7c3dd6d99f2df6a9d7944ee8505b195ba51c9b68

