Message-ID: <871rg17iy3.fsf@nanos.tec.linutronix.de>
Date: Mon, 07 Dec 2020 16:08:04 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Paul McKenney <paulmck@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [patch V2 4/9] softirq: Make softirq control and processing RT aware

On Mon, Dec 07 2020 at 15:16, Frederic Weisbecker wrote:
> On Fri, Dec 04, 2020 at 06:01:55PM +0100, Thomas Gleixner wrote:
>> +void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
>> +{
>> +        unsigned long flags;
>> +        int newcnt;
>> +
>> +        WARN_ON_ONCE(in_hardirq());
>> +
>> +        /* First entry of a task into a BH disabled section? */
>> +        if (!current->softirq_disable_cnt) {
>> +                if (preemptible()) {
>> +                        local_lock(&softirq_ctrl.lock);
>> +                        /* Required to meet the RCU bottomhalf requirements. */
>> +                        rcu_read_lock();
>> +                } else {
>> +                        DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>
> So, to be clear, this adds a new constraint where we can't call
> local_bh_disable() inside a preempt-disabled section? I guess the rest
> of the RT patches chased down all the new offenders :-)

There are not that many.
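
The pattern the DEBUG_LOCKS_WARN_ON() above catches is a first BH
disable while preemption is already off, roughly (hypothetical,
untested illustration, not from the tree):

        preempt_disable();
        /*
         * First entry into a BH disabled section with preemption
         * disabled: softirq_ctrl.lock cannot be acquired, so this is
         * only valid when no preempted task on this CPU already sits
         * in a BH disabled section, i.e. softirq_ctrl.cnt == 0. The
         * DEBUG_LOCKS_WARN_ON() yells when that assumption is wrong.
         */
        local_bh_disable();
        /* BH protected work */
        local_bh_enable();
        preempt_enable();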
>> +                }
>> +        }
>> +
>> +        /*
>> +         * Track the per CPU softirq disabled state. On RT this is per CPU
>> +         * state to allow preemption of bottom half disabled sections.
>> +         */
>> +        newcnt = __this_cpu_add_return(softirq_ctrl.cnt, cnt);
>> +        /*
>> +         * Reflect the result in the task state to prevent recursion on the
>> +         * local lock and to make softirq_count() & al work.
>> +         */
>> +        current->softirq_disable_cnt = newcnt;
>> +
>> +        if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && newcnt == cnt) {
>> +                raw_local_irq_save(flags);
>> +                lockdep_softirqs_off(ip);
>> +                raw_local_irq_restore(flags);
>> +        }
>> +}
>> +EXPORT_SYMBOL(__local_bh_disable_ip);
>> +
> [...]
>> +
>> +void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
>> +{
>> +        bool preempt_on = preemptible();
>> +        unsigned long flags;
>> +        u32 pending;
>> +        int curcnt;
>> +
>> +        WARN_ON_ONCE(in_irq());
>> +        lockdep_assert_irqs_enabled();
>> +
>> +        local_irq_save(flags);
>> +        curcnt = this_cpu_read(softirq_ctrl.cnt);
>
> __this_cpu_read()?

Yes.
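
IOW (untested delta):

-        curcnt = this_cpu_read(softirq_ctrl.cnt);
+        curcnt = __this_cpu_read(softirq_ctrl.cnt);

Interrupts are already disabled at this point, so the raw accessor is
sufficient and spares the redundant preemption protection.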
>> +
>> +        /*
>> +         * If this is not reenabling soft interrupts, no point in trying to
>> +         * run pending ones.
>> +         */
>> +        if (curcnt != cnt)
>> +                goto out;
>
> I guess you could move the local_irq_save() here?

Indeed.
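
Something like this perhaps (untested sketch; the out: path then must
not restore flags when they were never saved):

-        local_irq_save(flags);
         curcnt = this_cpu_read(softirq_ctrl.cnt);

         /*
          * If this is not reenabling soft interrupts, no point in trying
          * to run pending ones.
          */
         if (curcnt != cnt)
                 goto out;

+        local_irq_save(flags);

Note that the read then runs with preemption possibly enabled, so it
has to stay this_cpu_read() rather than the raw variant from above.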