Date:   Mon, 28 Nov 2022 00:01:39 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc:     Christoph Lameter <cl@...ux.com>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Pekka Enberg <penberg@...nel.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>, patches@...ts.linux.dev,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 09/12] mm, slub: split out allocations from pre/post hooks

On 11/27/22 11:54, Hyeonggon Yoo wrote:
> On Mon, Nov 21, 2022 at 06:11:59PM +0100, Vlastimil Babka wrote:
>> In the following patch we want to introduce CONFIG_SLUB_TINY allocation
>> paths that don't use the percpu slab. To prepare, refactor the
>> allocation functions:
>> 
>> Split out __slab_alloc_node() from slab_alloc_node() where the former
>> does the actual allocation and the latter calls the pre/post hooks.
>> 
>> Analogously, split out __kmem_cache_alloc_bulk() from
>> kmem_cache_alloc_bulk().
>> 
>> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
>> ---
>>  mm/slub.c | 127 +++++++++++++++++++++++++++++++++---------------------
>>  1 file changed, 77 insertions(+), 50 deletions(-)
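
(Aside for anyone reading along: the shape of the split, reduced to a
standalone toy rather than the real mm/slub.c code, is roughly the
following. The names here are made up for the illustration.)

	#include <stdbool.h>
	#include <stdlib.h>
	#include <string.h>

	/* Toy stand-ins for the hooks (the real ones live in mm/slab.h). */
	static bool pre_alloc_hook(size_t size)
	{
		return size != 0;
	}

	static void post_alloc_hook(void *obj, size_t size, bool zero)
	{
		if (obj && zero)
			memset(obj, 0, size);
	}

	/* Inner function: only the actual allocation, no hooks. */
	static void *__toy_alloc_node(size_t size)
	{
		return malloc(size);
	}

	/* Outer function: wraps the inner one with the pre/post hooks. */
	static void *toy_alloc_node(size_t size, bool zero)
	{
		void *obj;

		if (!pre_alloc_hook(size))
			return NULL;

		obj = __toy_alloc_node(size);

		post_alloc_hook(obj, size, zero);
		return obj;
	}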
> 
> [...]
> 
>> +
>> +/* Note that interrupts must be enabled when calling this function. */
>> +int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>> +			  void **p)
>> +{
>> +	int i;
>> +	struct obj_cgroup *objcg = NULL;
>> +
>> +	/* memcg and kmem_cache debug support */
>> +	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
>> +	if (unlikely(!s))
>> +		return false;
>> +
>> +	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
>> +
>> +	/*
>> +	 * memcg and kmem_cache debug support and memory initialization.
>> +	 * Done outside of the IRQ disabled fastpath loop.
>> +	 */
>> +	if (i != 0)
>> +		slab_post_alloc_hook(s, objcg, flags, size, p,
>> +				slab_want_init_on_alloc(flags, s));
> 
> This patch looks mostly good, but I'm wondering: what happens if someone
> calls it with size == 0, so that slab_post_alloc_hook() is never called?

Hmm, it looks like in that case we miss doing obj_cgroup_put(objcg) in
memcg_slab_post_alloc_hook(). Good catch, thanks!
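
To spell the leak out, here is the code from above again with annotations
(the comments paraphrase what the mm/slab.h helpers do, they are not
verbatim kernel code):

	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
					/* takes a ref on the current objcg */
	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
					/* i == 0 when size == 0 */
	if (i != 0)
		slab_post_alloc_hook(s, objcg, flags, size, p,
				slab_want_init_on_alloc(flags, s));
					/* obj_cgroup_put(objcg) happens in
					 * here, so size == 0 skips it */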

Fixing that up; while at it, also fixing a "return false" in this
int-returning function (that one predates this patch).

----8<----
diff --git a/mm/slub.c b/mm/slub.c
index b74c4664160e..88f0ce49caab 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3888,10 +3888,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	int i;
 	struct obj_cgroup *objcg = NULL;
 
+	if (!size)
+		return 0;
+
 	/* memcg and kmem_cache debug support */
 	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
 	if (unlikely(!s))
-		return false;
+		return 0;
 
 	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
 
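
With both fixes applied, the function should end up as below (reassembled
from the hunks above; the final "return i;" is not visible in the quoted
hunk, but follows from the int return type):

	/* Note that interrupts must be enabled when calling this function. */
	int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
				  void **p)
	{
		int i;
		struct obj_cgroup *objcg = NULL;

		/* No objects requested: nothing to do, no objcg ref taken. */
		if (!size)
			return 0;

		/* memcg and kmem_cache debug support */
		s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
		if (unlikely(!s))
			return 0;

		i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);

		/*
		 * memcg and kmem_cache debug support and memory initialization.
		 * Done outside of the IRQ disabled fastpath loop.
		 */
		if (i != 0)
			slab_post_alloc_hook(s, objcg, flags, size, p,
					slab_want_init_on_alloc(flags, s));

		return i;
	}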
