Message-ID: <20200920150638.GA5453@paulmck-ThinkPad-P72>
Date: Sun, 20 Sep 2020 08:06:38 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: "Uladzislau Rezki (Sony)" <urezki@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
"Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH 4/4] rcu/tree: Use schedule_delayed_work() instead of
WQ_HIGHPRI queue
On Fri, Sep 18, 2020 at 09:48:17PM +0200, Uladzislau Rezki (Sony) wrote:
> Recently a separate worker thread has been introduced to
> maintain the local page cache from regular kernel context,
> instead of from kvfree_rcu() contexts. That was done because
> a caller of k[v]free_rcu() can be of any context type, which
> is a problem from the allocation point of view.
>
> On the other hand, a lock-less way of obtaining a page has
> been introduced and injected directly into the k[v]free_rcu() path.
>
> Therefore it is no longer important to use a high-priority "wq"
> for the external job that used to refill the page cache ASAP
> when it was empty.
>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
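
A rough sketch of the scheme described in the quoted commit log, for
reference. This is illustrative only, loosely modelled on kernel/rcu/tree.c;
the helpers get_cached_page() and put_cached_page() and the exact struct
layout are hypothetical, not the actual patch:

// Refill runs from a regular worker (process) context, where a
// sleeping GFP_KERNEL allocation is always legal.
static void page_cache_work_fn(struct work_struct *work)
{
	struct kfree_rcu_cpu *krcp = container_of(to_delayed_work(work),
				struct kfree_rcu_cpu, page_cache_work);
	unsigned long page = __get_free_page(GFP_KERNEL);

	if (page)
		put_cached_page(krcp, (void *)page);	// hypothetical helper

	atomic_set(&krcp->work_in_progress, 0);
}

// Callers of k[v]free_rcu() may run in any context, so they only take
// a page from the cache or fall back to a best-effort, non-sleeping
// allocation -- no extra locks, no sleeping, no high-priority wq needed.
static void *get_page_any_context(struct kfree_rcu_cpu *krcp)
{
	void *page = get_cached_page(krcp);	// hypothetical helper

	if (!page)
		page = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);

	return page;
}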
And I needed to apply the patch below to make this one pass rcutorture
scenarios SRCU-P and TREE05. Repeat by:
tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 3 --configs "SRCU-P TREE05" --trust-make
Without the patch below, the system hangs very early in boot.
Please let me know if some other fix would be better.
Thanx, Paul
------------------------------------------------------------------------
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 8ce1ea4..2424e2a 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3481,7 +3481,8 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	success = kvfree_call_rcu_add_ptr_to_bulk(krcp, ptr);
 	if (!success) {
 		// Use delayed work, so we do not deadlock with rq->lock.
-		if (!atomic_xchg(&krcp->work_in_progress, 1))
+		if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
+		    !atomic_xchg(&krcp->work_in_progress, 1))
 			schedule_delayed_work(&krcp->page_cache_work, 1);
 
 		if (head == NULL)
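
Presumably the guard matters because very early in boot, before
rcu_scheduler_active reaches RCU_SCHEDULER_RUNNING, workqueues are not yet
usable, so queueing the refill work can wedge the boot; the hunk simply
skips the refill until the scheduler is up and lets the lock-less
allocation path cope in the meantime. Pulled out into a hypothetical
helper for readability, the guarded logic above amounts to:

/* Hypothetical helper, equivalent to the guarded hunk above. */
static void run_page_cache_worker(struct kfree_rcu_cpu *krcp)
{
	/* Too early in boot: workqueues are not ready, skip the refill. */
	if (rcu_scheduler_active != RCU_SCHEDULER_RUNNING)
		return;

	/* At most one refill in flight per CPU. */
	if (!atomic_xchg(&krcp->work_in_progress, 1))
		schedule_delayed_work(&krcp->page_cache_work, 1);
}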