Message-ID: <20221028184241.GR5600@paulmck-ThinkPad-P17-Gen-1>
Date: Fri, 28 Oct 2022 11:42:41 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Petr Mladek <pmladek@...e.com>,
	John Ogness <john.ogness@...utronix.de>,
	Sergey Senozhatsky <senozhatsky@...omium.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-kernel@...r.kernel.org,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH printk v2 33/38] printk: introduce console_list_lock

On Fri, Oct 28, 2022 at 11:09:35AM -0700, Boqun Feng wrote:
> On Thu, Oct 27, 2022 at 11:50:07AM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 27, 2022 at 12:09:32PM +0200, Petr Mladek wrote:
> > > Adding Paul into Cc so that he is aware of using a custom SRCU lockdep
> > > check in console_list_lock().
> >
> > [ . . . ]
> >
> > > > +/**
> > > > + * console_list_lock - Lock the console list
> > > > + *
> > > > + * For console list or console->flags updates
> > > > + */
> > > > +void console_list_lock(void)
> > > > +{
> > > > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > > > +	/*
> > > > +	 * In unregister_console(), synchronize_srcu() is called with the
> > > > +	 * console_list_lock held. Therefore it is not allowed that the
> > > > +	 * console_list_lock is taken with the srcu_lock held.
> > > > +	 *
> > > > +	 * Whether or not this context is in the read-side critical section
> > > > +	 * can only be detected if the appropriate debug options are enabled.
> > > > +	 */
> > > > +	WARN_ON_ONCE(debug_lockdep_rcu_enabled() &&
> > > > +		     srcu_read_lock_held(&console_srcu));
> >
> > Yes, this is an interesting case.
> >
> > The problem that John is facing is that srcu_read_lock_held() believes
> > that it is safer to err on the side of there being an SRCU reader.
> > This is because the standard use is to complain if there is -not-
> > an SRCU reader. So as soon as there is a lockdep issue anywhere,
> > srcu_read_lock_held() switches to unconditionally returning true.
> >
> > Which is exactly what John does not want in this case.
> >
> > So he excludes the CONFIG_DEBUG_LOCK_ALLOC=n case and the
> > !debug_lockdep_rcu_enabled() case, both of which cause
> > srcu_read_lock_held() to unconditionally return true.
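> >
> > For reference, the CONFIG_DEBUG_LOCK_ALLOC=y version of
> > srcu_read_lock_held() is roughly:
> >
> > 	static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
> > 	{
> > 		if (!debug_lockdep_rcu_enabled())
> > 			return 1;	/* err on the side of "there is a reader" */
> > 		return lock_is_held(&ssp->dep_map);
> > 	}
> >
> > and the CONFIG_DEBUG_LOCK_ALLOC=n stub unconditionally returns 1.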
> >
> > This can result in false-positive splats if some other CPU issues a
> > lockdep warning after debug_lockdep_rcu_enabled() is invoked but before
> > srcu_read_lock_held() is invoked. But similar vulnerabilities are
> > present in RCU_LOCKDEP_WARN(), so unless and until there is a problem,
> > this code should suffice.
> >
> > One way to save a line is as follows:
> >
> > 	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_LOCK_ALLOC) &&
> > 		     debug_lockdep_rcu_enabled() &&
> > 		     srcu_read_lock_held(&console_srcu));
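> >
> > For completeness, the deadlock that this check guards against is the
> > one described in the patch's comment.  A sketch, not the actual
> > console code:
> >
> > CPU 0:
> > 	console_list_lock();
> > 	synchronize_srcu(&console_srcu);	/* waits for CPU 1's reader */
> >
> > CPU 1:
> > 	idx = srcu_read_lock(&console_srcu);
> > 	console_list_lock();			/* waits for CPU 0 to release it */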
> >
> > Longer term, it might be possible to teach lockdep about this sort of
> > SRCU deadlock. This is not an issue for vanilla RCU because the RCU
> > reader context prohibits such deadlocks. This isn't exactly the same
> > as reader-writer locking because this is perfectly OK with SRCU:
> >
> > CPU 0:
> > 	spin_lock(&mylock);
> > 	idx = srcu_read_lock(&mysrcu);
> > 	do_something();
> > 	srcu_read_unlock(&mysrcu, idx);
> > 	spin_unlock(&mylock);
> >
> > CPU 1:
> > 	idx = srcu_read_lock(&mysrcu);
> > 	spin_lock(&mylock);
> > 	do_something();
> > 	spin_unlock(&mylock);
> > 	srcu_read_unlock(&mysrcu, idx);
> >
> > Adding Boqun on CC in case it is easier than I think. ;-)
>
> First, I think reader-writer deadlock detection won't treat this as a
> deadlock, because srcu_read_lock() is a recursive read lock ;-) In other
> words, lockdep knows they don't block each other.
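>
> For reference, the reader-side annotation is roughly the following, and
> it is the recursive-read acquisition buried in rcu_lock_acquire() that
> tells lockdep readers cannot block one another:
>
> 	static inline int srcu_read_lock(struct srcu_struct *ssp)
> 	{
> 		int retval;
>
> 		retval = __srcu_read_lock(ssp);
> 		rcu_lock_acquire(&ssp->dep_map);	/* recursive-read annotation */
> 		return retval;
> 	}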
Nice!
> I was actually considering equipping SRCU with reader-writer deadlock
> detection back when I posted:
>
> https://lore.kernel.org/lkml/20180412021233.ewncg5jjuzjw3x62@tardis/
>
> The problem (for SRCU to use reader-writer deadlock detection) is
> letting lockdep know that synchronize_srcu() doesn't block srcu_read_lock(),
> so it looks like I owe you a new lockdep annotation primitive ;-)
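>
> To make that concrete, it might end up looking something like the
> following -- purely a sketch, and srcu_lock_sync() is a hypothetical
> primitive that does not exist today:
>
> 	void synchronize_srcu(struct srcu_struct *ssp)
> 	{
> 		/*
> 		 * Hypothetical lockdep annotation: record that this call
> 		 * waits for srcu_read_lock() readers to finish without
> 		 * blocking new readers, so that a cycle through it and a
> 		 * lock held across the grace period can still be reported.
> 		 */
> 		srcu_lock_sync(&ssp->dep_map);
>
> 		/* ... existing grace-period wait ... */
> 	}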
Even better! ;-)

							Thanx, Paul