Message-ID: <20211230020044.GA17043@xsang-OptiPlex-9020>
Date: Thu, 30 Dec 2021 10:00:44 +0800
From: Oliver Sang <oliver.sang@...el.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Neeraj Upadhyay <neeraj.iitr10@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
lkp@...ts.01.org, lkp@...el.com
Subject: Re: [rcutorture] 82e310033d: WARNING:possible_recursive_locking_detected
hi Paul,
On Wed, Dec 29, 2021 at 09:24:41AM -0800, Paul E. McKenney wrote:
> On Wed, Dec 29, 2021 at 10:01:21PM +0800, Oliver Sang wrote:
> > hi Paul,
> >
> > we applied the patch below on top of next-20211224, and
> > confirmed no "WARNING:possible_recursive_locking_detected" after the patch.
> >
>
> Good to hear! May I add your Tested-by?
sure (:
Tested-by: Oliver Sang <oliver.sang@...el.com>
>
> Many of the remainder appear to be due to memory exhaustion, FWIW.
thanks for the information
>
> Thanx, Paul
>
> > > ------------------------------------------------------------------------
> > >
> > > commit dd47cbdcc2f72ba3df1248fb7fe210acca18d09c
> > > Author: Paul E. McKenney <paulmck@...nel.org>
> > > Date: Tue Dec 28 15:59:38 2021 -0800
> > >
> > > rcutorture: Fix rcu_fwd_mutex deadlock
> > >
> > > The rcu_torture_fwd_cb_hist() function acquires rcu_fwd_mutex, but is
> > > invoked from the rcutorture_oom_notify() function, which holds this same
> > > mutex across this call. This commit fixes the resulting deadlock.
> > >
> > > Reported-by: kernel test robot <oliver.sang@...el.com>
> > > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > >
> > > diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> > > index 918a2ea34ba13..9190dce686208 100644
> > > --- a/kernel/rcu/rcutorture.c
> > > +++ b/kernel/rcu/rcutorture.c
> > > @@ -2184,7 +2184,6 @@ static void rcu_torture_fwd_cb_hist(struct rcu_fwd *rfp)
> > > for (i = ARRAY_SIZE(rfp->n_launders_hist) - 1; i > 0; i--)
> > > if (rfp->n_launders_hist[i].n_launders > 0)
> > > break;
> > > - mutex_lock(&rcu_fwd_mutex); // Serialize histograms.
> > > pr_alert("%s: Callback-invocation histogram %d (duration %lu jiffies):",
> > > __func__, rfp->rcu_fwd_id, jiffies - rfp->rcu_fwd_startat);
> > > gps_old = rfp->rcu_launder_gp_seq_start;
> > > @@ -2197,7 +2196,6 @@ static void rcu_torture_fwd_cb_hist(struct rcu_fwd *rfp)
> > > gps_old = gps;
> > > }
> > > pr_cont("\n");
> > > - mutex_unlock(&rcu_fwd_mutex);
> > > }
> > >
> > > /* Callback function for continuous-flood RCU callbacks. */
> > > @@ -2435,7 +2433,9 @@ static void rcu_torture_fwd_prog_cr(struct rcu_fwd *rfp)
> > > n_launders, n_launders_sa,
> > > n_max_gps, n_max_cbs, cver, gps);
> > > atomic_long_add(n_max_cbs, &rcu_fwd_max_cbs);
> > > + mutex_lock(&rcu_fwd_mutex); // Serialize histograms.
> > > rcu_torture_fwd_cb_hist(rfp);
> > > + mutex_unlock(&rcu_fwd_mutex);
> > > }
> > > schedule_timeout_uninterruptible(HZ); /* Let CBs drain. */
> > > tick_dep_clear_task(current, TICK_DEP_BIT_RCU);
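
To make the deadlock pattern easier to see outside the kernel, here is a
minimal userspace sketch using pthreads rather than the kernel mutex API;
the names (fwd_mutex, print_hist(), oom_notify_*()) are hypothetical
stand-ins for illustration, not the actual rcutorture code:

/* Minimal sketch of the rcu_fwd_mutex deadlock and its fix (pthreads). */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t fwd_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Before the fix: the helper took the mutex itself... */
void print_hist_locked(void)
{
	pthread_mutex_lock(&fwd_mutex);	/* second acquisition: deadlock */
	printf("histogram\n");
	pthread_mutex_unlock(&fwd_mutex);
}

/* ...so a caller already holding it would self-deadlock. */
void oom_notify_broken(void)
{
	pthread_mutex_lock(&fwd_mutex);
	print_hist_locked();		/* hangs: the mutex is not recursive */
	pthread_mutex_unlock(&fwd_mutex);
}

/* After the fix: the helper expects its caller to hold the mutex... */
void print_hist(void)
{
	printf("histogram\n");
}

/* ...so every caller takes it exactly once around the call. */
void oom_notify_fixed(void)
{
	pthread_mutex_lock(&fwd_mutex);
	print_hist();
	pthread_mutex_unlock(&fwd_mutex);
}

int main(void)
{
	oom_notify_fixed();	/* runs fine; oom_notify_broken() would hang */
	return 0;
}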