Message-ID: <877dtn57x7.fsf@nanos.tec.linutronix.de>
Date: Tue, 25 Aug 2020 12:36:20 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Wang Long <w@...qinren.net>
Cc: john.stultz@...aro.org, sboyd@...nel.org,
linux-kernel@...r.kernel.org, w@...qinren.net
Subject: Re: [PATCH] timer: use raw_spin_unlock_irqrestore and raw_spin_lock_irqsave instead of raw_spin_{lock|unlock}
Wang,
On Thu, Aug 20 2020 at 10:59, Wang Long wrote:
> The code at (1)(2) locks the base with raw_spin_lock_irqsave(&base->lock, flags).
> If base != new_base, the code at (3) unlocks the old base and the code at (4) locks
> the new base. At the end of the function (5), raw_spin_unlock_irqrestore(&base->lock, flags)
> is used to unlock the new_base.
>
> Consider the following situation:
>
> CPU0                                            CPU1
> base = lock_timer_base(timer, &flags); (1)(2)
> raw_spin_unlock(&base->lock);          (3)
> base = new_base;
> raw_spin_lock(&base->lock);            (4)
>                                                 raw_spin_unlock_irqrestore(&base->lock, flags); (5)
>
> The flags are saved on CPU0 and restored on CPU1. Is this wrong?
Completely wrong. This code switches the per-CPU base pointer of the
timer and does not migrate the task to a different CPU. Execution
stays on the same CPU and keeps interrupts disabled across the whole
code sequence.
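
For reference, a condensed sketch of the sequence in question
(simplified from __mod_timer() in kernel/time/timer.c; unrelated
details elided):

  base = lock_timer_base(timer, &flags);  /* (1)(2) saves the interrupt
                                             state in flags and disables
                                             interrupts */
  ...
  if (base != new_base) {
          timer->flags |= TIMER_MIGRATING;
          raw_spin_unlock(&base->lock);   /* (3) drops the lock only;
                                             interrupts stay disabled */
          base = new_base;
          raw_spin_lock(&base->lock);     /* (4) still the same CPU,
                                             interrupts still disabled */
          ...
  }
  ...
  raw_spin_unlock_irqrestore(&base->lock, flags); /* (5) runs on the same
                                                     CPU as (1) and restores
                                                     the state saved there */

flags is a local variable and never leaves this CPU: nothing in between
can preempt or migrate the task while interrupts are off.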
> We encountered a kernel panic and suspect that this is the problem. How
> about the following patch to fix it?
It does not fix anything. It just adds pointless overhead.
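
The patch itself is not quoted above, but going by the subject line it
presumably turns the base switch into:

  raw_spin_unlock_irqrestore(&base->lock, flags); /* re-enables interrupts... */
  base = new_base;
  raw_spin_lock_irqsave(&base->lock, flags);      /* ...only to disable them
                                                     again and re-save flags */

Nothing in that window needs interrupts enabled, so the extra
restore/save of the interrupt state buys nothing.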
Thanks,
tglx