Date:   Sun, 10 Apr 2022 01:02:23 +0900
From:   Ohhoon Kwon <ohkwon1043@...il.com>
To:     Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Roman Gushchin <roman.gushchin@...ux.dev>
Cc:     Ohhoon Kwon <ohkwon1043@...il.com>,
        JaeSang Yoo <jsyoo5b@...il.com>,
        Wonhyuk Yang <vvghjk1234@...il.com>,
        Jiyoup Kim <lakroforce@...il.com>,
        Donghyeok Kim <dthex5d@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH] mm/slub: check pfmemalloc_match in slab_alloc_node fastpath

If the current allocation context does not have __GFP_MEMALLOC in its
gfpflags, then it should not be served objects from a slab that was
previously allocated with __GFP_MEMALLOC.
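
For reference, pfmemalloc_match() in mm/slub.c implements this check.
Condensed, it looks roughly like:

	static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
	{
		/* A slab carved out of pfmemalloc reserves may only serve
		 * contexts that are themselves allowed to dip into those
		 * reserves.
		 */
		if (unlikely(slab_test_pfmemalloc(slab)))
			return gfp_pfmemalloc_allowed(gfpflags);

		return true;
	}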

This criterion is enforced in the slab alloc slowpath: when gfpflags
does not contain __GFP_MEMALLOC but the per-cpu slab was allocated with
__GFP_MEMALLOC, the allocator first deactivates the per-cpu slab and
only then allocates a new slab with the current context's gfpflags.
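
Condensed, the relevant slowpath logic in ___slab_alloc() is roughly
(locking and rechecks omitted):

	if (unlikely(!pfmemalloc_match(slab, gfpflags)))
		goto deactivate_slab;
	...

deactivate_slab:
	/* Detach the pfmemalloc slab from the per-cpu state, then fall
	 * through to allocating a new slab with the current context's
	 * gfpflags.
	 */
	c->slab = NULL;
	deactivate_slab(s, slab, freelist);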

However, the fastpath does not check this criterion. Check it in the
fastpath as well.

Signed-off-by: Ohhoon Kwon <ohkwon1043@...il.com>
---
 mm/slub.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 74d92aa4a3a2..c77cd548e106 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3179,7 +3179,8 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	 * there is a suitable cpu freelist.
 	 */
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
-	    unlikely(!object || !slab || !node_match(slab, node))) {
+	    unlikely(!object || !slab || !node_match(slab, node) ||
+			!pfmemalloc_match(slab, gfpflags))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
-- 
2.25.1
