Message-Id: <20200809204354.20137-3-urezki@gmail.com>
Date: Sun, 9 Aug 2020 22:43:54 +0200
From: "Uladzislau Rezki (Sony)" <urezki@...il.com>
To: LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
"Paul E . McKenney" <paulmck@...nel.org>,
Matthew Wilcox <willy@...radead.org>
Cc: "Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Uladzislau Rezki <urezki@...il.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: [PATCH 2/2] rcu/tree: use __GFP_NO_LOCKS flag
Enter the page allocator with the newly introduced __GFP_NO_LOCKS flag
instead of the former GFP_NOWAIT | __GFP_NOWARN combination. This
approach addresses two concerns:
a) If built with CONFIG_PROVE_RAW_LOCK_NESTING, lockdep complains about
a violation of the nesting rules ("BUG: Invalid wait context"). It
performs raw_spinlock vs. spinlock nesting checks, i.e. it is not legal
to acquire a spinlock_t while holding a raw_spinlock_t.
Internally, kfree_rcu() uses a raw_spinlock_t, whereas the page
allocator uses spinlock_t to access its zones. The code can also be
broken from a higher-level point of view:
<snip>
raw_spin_lock(&some_lock);
kfree_rcu(some_pointer, some_field_offset);
<snip>
b) If built with CONFIG_PREEMPT_RT, spinlock_t is converted into a
sleepable variant. Invoking the page allocator from atomic contexts
then leads to: "BUG: scheduling while atomic".
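For illustration, a sketch of how case b) is hit on the kfree_rcu()
path (simplified, not the literal code in tree.c):
<snip>
/* krcp->lock is a raw_spinlock_t, so the context below is
 * atomic even on PREEMPT_RT. */
raw_spin_lock(&krcp->lock);
/* The allocator may take zone->lock, a spinlock_t, which is a
 * sleeping lock on RT -> "BUG: scheduling while atomic". */
bnode = (struct kvfree_rcu_bulk_data *)
	__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
raw_spin_unlock(&krcp->lock);
<snip>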
Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 30e7e252b9e7..48cb64800108 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3327,7 +3327,7 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
* pages are available.
*/
bnode = (struct kvfree_rcu_bulk_data *)
- __get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+ __get_free_page(__GFP_NO_LOCKS);
}
/* Switch to emergency path. */
--
2.20.1