Message-ID: <202410041014.7DE8981@keescook>
Date: Fri, 4 Oct 2024 10:23:46 -0700
From: Kees Cook <kees@...nel.org>
To: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Roman Gushchin <roman.gushchin@...ux.dev>,
	Hyeonggon Yoo <42.hyeyoo@...il.com>,
	"Gustavo A . R . Silva" <gustavoars@...nel.org>,
	Bill Wendling <morbo@...gle.com>,
	Justin Stitt <justinstitt@...gle.com>, Jann Horn <jannh@...gle.com>,
	Marco Elver <elver@...gle.com>, linux-mm@...ck.org,
	Nathan Chancellor <nathan@...nel.org>,
	Nick Desaulniers <ndesaulniers@...gle.com>,
	linux-kernel@...r.kernel.org, llvm@...ts.linux.dev,
	linux-hardening@...r.kernel.org
Subject: Re: [PATCH v3] slab: Introduce kmalloc_obj() and family

On Fri, Aug 23, 2024 at 06:27:58AM +0200, Przemek Kitszel wrote:
> On 8/23/24 01:13, Kees Cook wrote:
> 
> > (...) For cases where the total size of the allocation is needed,
> > the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family
> > of macros can be used. For example:
> > 
> > 	info->size = struct_size(ptr, flex_member, count);
> > 	ptr = kmalloc(info->size, gfp);
> > 
> > becomes:
> > 
> > 	kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
> > 
> > Internal introspection of allocated type now becomes possible, allowing
> > for future alignment-aware choices and hardening work. For example,
> > adding __alignof(*ptr) as an argument to the internal allocators so that
> > appropriate/efficient alignment choices can be made, or being able to
> > correctly choose per-allocation offset randomization within a bucket
> > that does not break alignment requirements.
> > 
> > Introduces __flex_count() for when __builtin_get_counted_by() is added
> > by GCC[1] and Clang[2]. The internal use of __flex_count() allows for
> > automatically setting the counter member of a struct's flexible array
> > member when it has been annotated with __counted_by(), avoiding any
> > missed early size initializations while __counted_by() annotations are
> > added to the kernel. Additionally, this also checks for "too large"
> > allocations based on the type size of the counter variable. For example:
> > 
> > 	if (count > type_max(ptr->flex_count))
> > 		fail...;
> > 	info->size = struct_size(ptr, flex_member, count);
> > 	ptr = kmalloc(info->size, gfp);
> > 	ptr->flex_count = count;
> > 
> > becomes (i.e. unchanged from earlier example):
> > 
> > 	kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
> 
> As __builtin_get_counted_by() might not be available, the caller
> still needs to fill the counted-by variable, right? So would it be
> possible to just pass in a pointer to the struct member to fill?
> (the last argument "&f->cnt" in the snippet below):
> 
> struct foo {
> 	int cnt;
> 	struct bar bars[] __counted_by(cnt);
> };
> 
> //...
> 	struct foo *f;
> 
> 	kmalloc_flex_sz(f, bars, 42, gfp, &f->cnt);

I specifically want to avoid this because it makes adding the
counted_by attribute more difficult -- it would require manually
auditing all allocation sites even after switching all the alloc
macros. But if the allocation macros are all replaced in a treewide
change, it becomes trivial to add counted_by annotations without
missing "too late" counter assignments. (And note that "too late"
counter assignments are only a problem for code built with compilers
that support counted_by, so it is not a problem that
__builtin_get_counted_by() may be unavailable.)
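
For illustration, a minimal sketch of the mechanism (not the patch's
exact code), assuming the proposed __builtin_get_counted_by() returns
a pointer to the flexible array member's __counted_by() counter, or a
void * null pointer when the member is unannotated:

	/* Hypothetical fallback; the real definition may differ. */
	#if __has_builtin(__builtin_get_counted_by)
	# define __flex_count(FAM)	__builtin_get_counted_by(FAM)
	#else
	# define __flex_count(FAM)	((void *)NULL)
	#endif

	/* Needs <linux/slab.h> (kmalloc) and <linux/overflow.h>
	 * (struct_size). The _Generic() routes the assignment to a
	 * dummy variable when there is no counter to set, so the
	 * expression still type-checks in all configurations. */
	#define kmalloc_flex_sz(ptr, FAM, count, gfp, size_ptr) ({	\
		size_t __ignored;					\
		size_t __sz = struct_size(ptr, FAM, count);		\
		*(size_ptr) = __sz;					\
		(ptr) = kmalloc(__sz, gfp);				\
		if (ptr)						\
			*_Generic(__flex_count((ptr)->FAM),		\
				  void *: &__ignored,			\
				  default: __flex_count((ptr)->FAM))	\
				= (count);				\
		(ptr);							\
	})

The real macros additionally range-check "count" against the counter
type's type_max(), as described in the quoted changelog above.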

Right now we have two cases in kernel code:

case 1:
- allocate
- assign counter
- access array

case 2:
- allocate
- access array
- assign counter

When we add a counted_by annotation, all "case 2" code must be found and
refactored into "case 1". This has proven error-prone already, and we're
still pretty early in adding annotations. The refactoring is needed
because when the compiler supports counted_by instrumentation, we get
the following at run-time:

case 1:
- allocate
- assign counter
- access array // no problem!

case 2:
- allocate
- access array // trap!
- assign counter
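
Concretely, with the struct foo from the snippet above (giving struct
bar an int member "x" for illustration) and a compiler plus config that
instruments counted_by (e.g. CONFIG_UBSAN_BOUNDS), "case 2" is:

	struct foo *f = kzalloc(struct_size(f, bars, 42), GFP_KERNEL);

	f->bars[0].x = 1;	/* trap! f->cnt is still 0, so even
				   index 0 is out of bounds */
	f->cnt = 42;		/* assigned too late */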

I want to change this to be:

case 1:
- allocate & assign counter
- assign counter (now redundant)
- access array // no problem!

case 2:
- allocate & assign counter
- access array // no problem!
- assign counter (now redundant)
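
With the same hypothetical struct foo, a converted "case 2" site would
look like:

	size_t sz;

	kmalloc_flex_sz(f, bars, 42, GFP_KERNEL, &sz);	/* cnt set here */
	f->bars[0].x = 1;	/* no problem: cnt is already valid */
	f->cnt = 42;		/* now redundant */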

Once the kernel reaches a minimum compiler version where counted_by is
universally available, we can remove all the "open coded" counter
assignments.

-Kees

-- 
Kees Cook
