Message-ID: <CAB8ipk--3-+K2bhQS_YMTgpxCkZRbX21Lv_x-nT298yx5YTJeg@mail.gmail.com>
Date: Wed, 17 Jan 2024 12:35:26 +0800
From: Xuewen Yan <xuewen.yan94@...il.com>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Zhiguo Niu <zhiguo.niu@...soc.com>, peterz@...radead.org, mingo@...hat.com, 
	will@...nel.org, longman@...hat.com, linux-kernel@...r.kernel.org, 
	niuzhiguo84@...il.com, ke.wang@...soc.com, xuewen.yan@...soc.com
Subject: Re: [PATCH] lockdep: fix deadlock issue between lockdep and rcu

On Wed, Jan 17, 2024 at 1:47 AM Boqun Feng <boqun.feng@...il.com> wrote:
>
> On Mon, Jan 15, 2024 at 04:53:16PM +0800, Zhiguo Niu wrote:
> > There is a deadlock scenario between lockdep and RCU when the
> > RCU nocb feature is enabled, as shown by the following call stacks:
> >
> >      rcuop/x
> > -000|queued_spin_lock_slowpath(lock = 0xFFFFFF817F2A8A80, val = ?)
> > -001|queued_spin_lock(inline) // try to hold nocb_gp_lock
> > -001|do_raw_spin_lock(lock = 0xFFFFFF817F2A8A80)
> > -002|__raw_spin_lock_irqsave(inline)
> > -002|_raw_spin_lock_irqsave(lock = 0xFFFFFF817F2A8A80)
> > -003|wake_nocb_gp_defer(inline)
> > -003|__call_rcu_nocb_wake(rdp = 0xFFFFFF817F30B680)
> > -004|__call_rcu_common(inline)
> > -004|call_rcu(head = 0xFFFFFFC082EECC28, func = ?)
> > -005|call_rcu_zapped(inline)
> > -005|free_zapped_rcu(ch = ?)// hold graph lock
> > -006|rcu_do_batch(rdp = 0xFFFFFF817F245680)
> > -007|nocb_cb_wait(inline)
> > -007|rcu_nocb_cb_kthread(arg = 0xFFFFFF817F245680)
> > -008|kthread(_create = 0xFFFFFF80803122C0)
> > -009|ret_from_fork(asm)
> >
> >      rcuop/y
> > -000|queued_spin_lock_slowpath(lock = 0xFFFFFFC08291BBC8, val = 0)
> > -001|queued_spin_lock()
> > -001|lockdep_lock()
> > -001|graph_lock() // try to hold graph lock
> > -002|lookup_chain_cache_add()
> > -002|validate_chain()
> > -003|lock_acquire
> > -004|_raw_spin_lock_irqsave(lock = 0xFFFFFF817F211D80)
> > -005|lock_timer_base(inline)
> > -006|mod_timer(inline)
> > -006|wake_nocb_gp_defer(inline)// hold nocb_gp_lock
> > -006|__call_rcu_nocb_wake(rdp = 0xFFFFFF817F2A8680)
> > -007|__call_rcu_common(inline)
> > -007|call_rcu(head = 0xFFFFFFC0822E0B58, func = ?)
> > -008|call_rcu_hurry(inline)
> > -008|rcu_sync_call(inline)
> > -008|rcu_sync_func(rhp = 0xFFFFFFC0822E0B58)
> > -009|rcu_do_batch(rdp = 0xFFFFFF817F266680)
> > -010|nocb_cb_wait(inline)
> > -010|rcu_nocb_cb_kthread(arg = 0xFFFFFF817F266680)
> > -011|kthread(_create = 0xFFFFFF8080363740)
> > -012|ret_from_fork(asm)
> >
> > rcuop/x and rcuop/y are rcu nocb threads with the same nocb gp thread.
> >
>
> Nice! Looks like you found the root cause ;-) nocb_gp_lock and graph_lock
> have an ABBA deadlock due to lockdep's dependency on RCU. I assume this
> patch actually fixes the problem you saw?
>
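> To make the ordering concrete (just restating the two stacks above):
>
>     rcuop/x (free_zapped_rcu)        rcuop/y (rcu_sync_func)
>     -------------------------        -----------------------
>     holds graph lock                 holds nocb_gp_lock
>     wants nocb_gp_lock               wants graph lock
>     (wake_nocb_gp_defer)             (validate_chain)
>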
> However, I want to suggest a different fix, please see below:
>
> > This patch releases the graph lock before lockdep calls call_rcu().
> >
> > Signed-off-by: Zhiguo Niu <zhiguo.niu@...soc.com>
> > Signed-off-by: Xuewen Yan <xuewen.yan@...soc.com>
> > ---
> >  kernel/locking/lockdep.c | 38 +++++++++++++++++++++++++-------------
> >  1 file changed, 25 insertions(+), 13 deletions(-)
> >
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index 151bd3d..c1d432a 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -6186,23 +6186,29 @@ static struct pending_free *get_pending_free(void)
> >  /*
> >   * Schedule an RCU callback if no RCU callback is pending. Must be called with
> >   * the graph lock held.
> > + *
> > + * Return true if the graph lock needs to be released by the caller;
> > + * false means the graph lock was already released by this function.
> >   */
> > -static void call_rcu_zapped(struct pending_free *pf)
> > +static bool call_rcu_zapped(struct pending_free *pf)
> >  {
> >       WARN_ON_ONCE(inside_selftest());
> >
> >       if (list_empty(&pf->zapped))
> > -             return;
> > +             return true;
> >
> >       if (delayed_free.scheduled)
> > -             return;
> > +             return true;
> >
> >       delayed_free.scheduled = true;
> >
> >       WARN_ON_ONCE(delayed_free.pf + delayed_free.index != pf);
> >       delayed_free.index ^= 1;
> >
> > +     lockdep_unlock();
> >       call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
> > +
> > +     return false;
> >  }
> >
> >  /* The caller must hold the graph lock. May be called from RCU context */
> > @@ -6228,6 +6234,7 @@ static void free_zapped_rcu(struct rcu_head *ch)
> >  {
> >       struct pending_free *pf;
> >       unsigned long flags;
> > +     bool need_unlock;
> >
> >       if (WARN_ON_ONCE(ch != &delayed_free.rcu_head))
> >               return;
> > @@ -6243,9 +6250,9 @@ static void free_zapped_rcu(struct rcu_head *ch)
> >       /*
> >        * If there's anything on the open list, close and start a new callback.
> >        */
> > -     call_rcu_zapped(delayed_free.pf + delayed_free.index);
> > -
> > -     lockdep_unlock();
> > +     need_unlock = call_rcu_zapped(delayed_free.pf + delayed_free.index);
> > +     if (need_unlock)
> > +             lockdep_unlock();
>
> Instead of returning a bool to control the unlock, I think it's better
> that we refactor the call_rcu_zapped() a bit, so it becomes a
> prepare_call_rcu_zapped():
>
>         // See if we need to queue an RCU callback; must be called with
>         // the lockdep lock held. Returns false if either we don't have
>         // any pending free or the callback is already scheduled.
>         // Otherwise, a call_rcu() must follow this function call.
>         static bool prepare_call_rcu_zapped(struct pending_free *pf)
>         {
>                 WARN_ON_ONCE(inside_selftest());
>
>                 if (list_empty(&pf->zapped))
>                         return false;
>
>                 if (delayed_free.scheduled)
>                         return false;
>
>                 delayed_free.scheduled = true;
>
>                 WARN_ON_ONCE(delayed_free.pf + delayed_free.index != pf);
>                 delayed_free.index ^= 1;
>
>                 return true;
>         }
>
> and then at the call site we can do:
>
>         <lockdep_lock() has already been called at this point>
>         need_callback = prepare_call_rcu_zapped(...);
>         lockdep_unlock();
>         raw_local_irq_restore(flags);
>
>         if (need_callback)
>                 call_rcu(&delayed_free.rcu_head, free_zapped_rcu);
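>
> The same shape would apply at the other call_rcu_zapped() call sites;
> for example (a sketch, untested), the tail of lockdep_unregister_key()
> would become something like:
>
>         raw_local_irq_save(flags);
>         lockdep_lock();
>         ...
>         need_callback = prepare_call_rcu_zapped(pf);
>         lockdep_unlock();
>         raw_local_irq_restore(flags);
>
>         if (need_callback)
>                 call_rcu(&delayed_free.rcu_head, free_zapped_rcu);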

Would there be any problem if call_rcu() is placed outside the
interrupts-disabled section, i.e. after raw_local_irq_restore()?
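
To make the question concrete, with your suggestion the ordering becomes:

    raw_local_irq_save(flags);
    lockdep_lock();
    need_callback = prepare_call_rcu_zapped(...);
    lockdep_unlock();
    raw_local_irq_restore(flags);   /* interrupts enabled again here */

    if (need_callback)
        call_rcu(&delayed_free.rcu_head, free_zapped_rcu);

so call_rcu() now runs with interrupts enabled and without the graph
lock held.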

>
> compared to your fix, we don't have the special logic where
> call_rcu_zapped() may also unlock under some conditions, which would
> get in the way of local reasoning about correctness.
>
> Thoughts?
>
> Regards,
> Boqun
>
> >       raw_local_irq_restore(flags);
> >  }
> >
> [...]
>
