Message-ID: <20201209101102.GJ2414@hirez.programming.kicks-ass.net>
Date: Wed, 9 Dec 2020 11:11:02 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Paul McKenney <paulmck@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Boqun Feng <boqun.feng@...il.com>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>
Subject: Re: [patch V2 4/9] softirq: Make softirq control and processing RT aware

On Fri, Dec 04, 2020 at 06:01:55PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@...utronix.de>
>
> Provide a local lock based serialization for soft interrupts on RT which
> allows the local_bh_disabled() sections and servicing soft interrupts to be
> preemptible.
>
> Provide the necessary inline helpers which allow to reuse the bulk of the
> softirq processing code.
> +struct softirq_ctrl {
> +	local_lock_t	lock;
> +	int		cnt;
> +};
> +
> +static DEFINE_PER_CPU(struct softirq_ctrl, softirq_ctrl) = {
> +	.lock	= INIT_LOCAL_LOCK(softirq_ctrl.lock),
> +};
> +
> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> +{
> +	unsigned long flags;
> +	int newcnt;
> +
> +	WARN_ON_ONCE(in_hardirq());
> +
> +	/* First entry of a task into a BH disabled section? */
> +	if (!current->softirq_disable_cnt) {
> +		if (preemptible()) {
> +			local_lock(&softirq_ctrl.lock);

AFAICT this significantly changes the locking rules.

Where previously we could do:

	spin_lock(&ponies);
	spin_lock_bh(&foo);

vs

	spin_lock_bh(&bar);
	spin_lock(&ponies);

provided the rest of the code observed: bar -> ponies -> foo and never
takes ponies from in-softirq.

This is now a genuine deadlock on RT.

Also see:

  https://lkml.kernel.org/r/X9CheYjuXWc75Spa@hirez.programming.kicks-ass.net
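
To make the inversion concrete, a minimal sketch (same illustrative locks
as above; the assumption is that on RT spin_lock_bh() now serializes on
the per-CPU softirq_ctrl.lock before acquiring the lock itself):

	/* Path A: task context, ponies never taken in-softirq */
	spin_lock(&ponies);
	spin_lock_bh(&foo);	/* takes softirq_ctrl.lock, then foo */

	/* Path B: task context */
	spin_lock_bh(&bar);	/* takes softirq_ctrl.lock, then bar */
	spin_lock(&ponies);

On RT the effective ordering becomes:

	A: ponies -> softirq_ctrl.lock -> foo
	B: softirq_ctrl.lock -> bar -> ponies

which is a cycle between ponies and softirq_ctrl.lock, even though
bar -> ponies -> foo is still observed.
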
> +			/* Required to meet the RCU bottomhalf requirements. */
> +			rcu_read_lock();
> +		} else {
> +			DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
> +		}
> +	}
> +
> +	/*
> +	 * Track the per CPU softirq disabled state. On RT this is per CPU
> +	 * state to allow preemption of bottom half disabled sections.
> +	 */
> +	newcnt = __this_cpu_add_return(softirq_ctrl.cnt, cnt);
> +	/*
> +	 * Reflect the result in the task state to prevent recursion on the
> +	 * local lock and to make softirq_count() & al work.
> +	 */
> +	current->softirq_disable_cnt = newcnt;
> +
> +	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && newcnt == cnt) {
> +		raw_local_irq_save(flags);
> +		lockdep_softirqs_off(ip);
> +		raw_local_irq_restore(flags);
> +	}
> +}
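
For completeness, a rough sketch of the intended nesting behaviour on RT
(illustrative only; it assumes the counterpart __local_bh_enable_ip() in
this patch drops softirq_ctrl.lock and runs pending softirqs on the final
enable):

	/* preemptible task context */
	local_bh_disable();	/* cnt 0 -> takes softirq_ctrl.lock + rcu_read_lock() */
	local_bh_disable();	/* nested: per-CPU/task counters only, no lock recursion */
	...
	local_bh_enable();	/* counters drop, lock still held */
	local_bh_enable();	/* last enable: pending softirqs run, lock released */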