Message-Id: <20230408142530.800612-1-qiang1.zhang@intel.com>
Date:   Sat,  8 Apr 2023 22:25:30 +0800
From:   Zqiang <qiang1.zhang@...el.com>
To:     urezki@...il.com, paulmck@...nel.org, frederic@...nel.org,
        joel@...lfernandes.org, qiang1.zhang@...el.com
Cc:     qiang.zhang1211@...il.com, rcu@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH] rcu/kvfree: Make page cache growing happen on the correct krcp

When add_ptr_to_bulk_krc_lock() is invoked to queue a ptr, it calls
krc_this_cpu_lock() to obtain the current CPU's krcp structure and then
takes a bnode object from that krcp's ->bulk_head list. If the list is
empty, or the returned bnode's nr_records has reached
KVFREE_BULK_MAX_ENTR, and can_alloc is set, the current CPU's krcp->lock
is dropped while a new bnode is allocated, after which
krc_this_cpu_lock() is called again to obtain the current CPU's krcp.
If the task migrates to another CPU while the lock is dropped, the krcp
obtained after reacquisition is not the same as the one used before, so
the bnode is added to the wrong krcp's ->bulk_head, or fill page work
is triggered on the wrong krcp.
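
For illustration, a minimal sketch of the pre-patch flow (paraphrased
from add_ptr_to_bulk_krc_lock() in kernel/rcu/tree.c; the helper names
and surrounding details are abbreviated from memory and may not match
the tree exactly):

	// Simplified, pre-patch flow (paraphrase, not verbatim kernel code):
	*krcp = krc_this_cpu_lock(flags);           // lock CPU A's krcp
	bnode = get_cached_bnode(*krcp);            // may be NULL or full

	if (!bnode && can_alloc) {
		krc_this_cpu_unlock(*krcp, *flags); // drop CPU A's lock

		// The lock is released here, so the task may migrate to CPU B
		// during the allocation below.
		bnode = (struct kvfree_rcu_bulk_data *)
			__get_free_page(GFP_KERNEL | __GFP_NORETRY |
					__GFP_NOMEMALLOC | __GFP_NOWARN);

		// Pre-patch: if migration occurred, this returns CPU B's krcp,
		// so the bnode is queued on (and fill work scheduled for) a
		// krcp other than the one used above.
		*krcp = krc_this_cpu_lock(flags);
	}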

This commit therefore re-acquires the original krcp->lock after the page
allocation, instead of calling krc_this_cpu_lock() again, so that the
same krcp is used throughout.

Signed-off-by: Zqiang <qiang1.zhang@...el.com>
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 9d9d3772cc45..c9076fa0a954 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3303,7 +3303,7 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
 			// scenarios.
 			bnode = (struct kvfree_rcu_bulk_data *)
 				__get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
-			*krcp = krc_this_cpu_lock(flags);
+			raw_spin_lock_irqsave(&(*krcp)->lock, *flags);
 		}
 
 		if (!bnode)
-- 
2.32.0
