Message-ID: <YaYeFu4hi3uVkhkN@FVFF77S0Q05N>
Date:   Tue, 30 Nov 2021 12:50:30 +0000
From:   Mark Rutland <mark.rutland@....com>
To:     Nicolas Saenz Julienne <nsaenzju@...hat.com>
Cc:     linux-kernel <linux-kernel@...r.kernel.org>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        rcu@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        mtosatti <mtosatti@...hat.com>, frederic <frederic@...nel.org>,
        paulmck@...nel.org
Subject: Re: Question WRT early IRQ/NMI entry code

On Tue, Nov 30, 2021 at 12:28:41PM +0100, Nicolas Saenz Julienne wrote:
> Hi All,

Hi Nicolas,

> While going over the IRQ/NMI entry code I've found a small 'inconsistency':
> in the IRQ entry path we inform RCU of the context change *before*
> incrementing the preempt counter, whereas the opposite happens on the NMI
> entry path. This applies to both arm64 and x86[1].

For arm64, the style was copied from the x86 code, and (AFAIK) I had no
particular reason for following either order other than consistency with x86.

> Actually, rcu_nmi_enter() — which is also the main RCU context switch function
> for the IRQ entry path — uses the preempt counter to verify it's not in NMI
> context. So one would expect all callers to reach it with the same,
> already-updated view of the preempt count, which isn't true ATM.

I agree consistency would be nice, assuming there's no issue preventing us from
moving the IRQ preempt_count logic earlier.
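
For reference (and assuming I'm reading the same tree you are), the check you
mention boils down to in_nmi(), which is effectively just:

	/* include/linux/preempt.h, roughly */
	#define in_nmi()	(preempt_count() & NMI_MASK)

so rcu_nmi_enter() can only tell it's in NMI context if the NMI bits have
already been folded into the preempt count by the time it runs, which is what
the NMI path guarantees today.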

It sounds like today the ordering is only *required* when entering an NMI, and
we already do the right thing there. Do you see a case where something would go
wrong (or would behave differently with the flipped ordering) for IRQ today?

> I'm sure there is an obscure/non-obvious reason for this, right?

TBH I suspect this is mostly oversight / legacy, and likely something we can
tighten up.
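
FWIW, here's a tiny user-space toy (my own sketch; the names and the
simplified preempt_count layout are made up and only loosely modelled on the
real thing) showing what an rcu_nmi_enter()-style check would observe under
each ordering:

/* toy_ordering.c -- purely illustrative, not kernel code */
#include <stdio.h>

/* simplified stand-ins for the kernel's preempt-count layout */
#define HARDIRQ_OFFSET	0x10000U
#define NMI_OFFSET	0x100000U
#define NMI_MASK	0xf00000U

static unsigned int preempt_count;

static int in_nmi(void)
{
	return !!(preempt_count & NMI_MASK);
}

/* stand-in for rcu_nmi_enter(): just report what it would observe */
static void toy_rcu_nmi_enter(const char *path)
{
	printf("%s entry: in_nmi() = %d\n", path, in_nmi());
}

int main(void)
{
	/* IRQ entry today: RCU informed first, HARDIRQ bits added after */
	toy_rcu_nmi_enter("IRQ");
	preempt_count += HARDIRQ_OFFSET;
	preempt_count -= HARDIRQ_OFFSET;

	/* NMI entry today: NMI+HARDIRQ bits added first, then RCU informed */
	preempt_count += NMI_OFFSET + HARDIRQ_OFFSET;
	toy_rcu_nmi_enter("NMI");
	preempt_count -= NMI_OFFSET + HARDIRQ_OFFSET;

	/* a flipped NMI ordering would make the check lie here */
	toy_rcu_nmi_enter("NMI (flipped, hypothetical)");

	return 0;
}

i.e. for IRQ entry in_nmi() reads the same either way, while for NMI only the
current order gives the right answer, which is why I think we're free to pick
whichever order we like on the IRQ side.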

Thanks,
Mark.

> 
> Thanks!
> Nicolas
> 
> [1] 
> IRQ path:
>   -> x86_64 asm (entry_64.S)
>   -> irqentry_enter() -> rcu_irq_enter() -> *rcu_nmi_enter()*
>   -> run_irq_on_irqstack_cond() -> irq_enter_rcu() -> *preempt_count_add(HARDIRQ_OFFSET)*
>   -> // Run IRQ...
> 
> NMI path:
>   -> x86_64 asm (entry_64.S)
>   -> irqentry_nmi_enter() -> __nmi_enter() -> *__preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET)*
>                           -> *rcu_nmi_enter()*
> 
> For arm64, see 'arch/arm64/kernel/entry-common.c'.
> 
