Message-Id: <20241127165848.42331fd7078565c0f4e0a7e9@linux-foundation.org>
Date: Wed, 27 Nov 2024 16:58:48 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Andrii Nakryiko <andrii@...nel.org>
Cc: linux-mm@...ck.org, urezki@...il.com, hch@...radead.org, vbabka@...e.cz,
 dakr@...nel.org, mhocko@...e.com, linux-kernel@...r.kernel.org,
 bpf@...r.kernel.org, ast@...nel.org
Subject: Re: [PATCH mm/stable] mm: fix vrealloc()'s KASAN poisoning logic

On Mon, 25 Nov 2024 16:52:06 -0800 Andrii Nakryiko <andrii@...nel.org> wrote:

> When vrealloc() reuses already allocated vmap_area, we need to
> re-annotate poisoned and unpoisoned portions of underlying memory
> according to the new size.

What are the consequences of this oversight?

When fixing a flaw, please always remember to describe the visible
effects of that flaw.

> Note, hard-coding KASAN_VMALLOC_PROT_NORMAL might not be exactly
> correct, but KASAN flag logic is pretty involved and spread out
> throughout __vmalloc_node_range_noprof(), so I'm using the bare minimum
> flag here and leaving the rest to mm people to refactor this logic and
> reuse it here.
> 
> Fixes: 3ddc2fefe6f3 ("mm: vmalloc: implement vrealloc()")

I ask because a cc:stable tag might be appropriate here, but without knowing
the effects of the flaw, it's hard to determine that.

> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4093,7 +4093,8 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  		/* Zero out spare memory. */
>  		if (want_init_on_alloc(flags))
>  			memset((void *)p + size, 0, old_size - size);
> -
> +		kasan_poison_vmalloc(p + size, old_size - size);
> +		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
>  		return (void *)p;
>  	}
>  

