Message-ID: <875ys9dacq.ffs@tglx>
Date:   Tue, 30 Nov 2021 14:47:01 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Nicolas Saenz Julienne <nsaenzju@...hat.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        rcu@...r.kernel.org
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Mark Rutland <mark.rutland@....com>,
        Steven Rostedt <rostedt@...dmis.org>, paulmck@...nel.org,
        mtosatti <mtosatti@...hat.com>, frederic <frederic@...nel.org>
Subject: Re: Question WRT early IRQ/NMI entry code

On Tue, Nov 30 2021 at 12:28, Nicolas Saenz Julienne wrote:
> While going over the IRQ/NMI entry code I've found a small 'inconsistency':
> in the IRQ entry path we inform RCU of the context change *before*
> incrementing the preempt counter, whereas the opposite happens in the NMI
> entry path. This applies to both arm64 and x86[1].
>
> Actually, rcu_nmi_enter() — which is also the main RCU context switch function
> for the IRQ entry path — uses the preempt counter to verify it's not in NMI
> context. So it would make sense to assume all callers have the same updated
> view of the preempt count, which isn't true ATM.
>
> I'm sure there's an obscure/non-obvious reason for this, right?

There is.

> IRQ path:
>   -> x86_64 asm (entry_64.S)
>   -> irqentry_enter() -> rcu_irq_enter() -> *rcu_nmi_enter()*
>   -> run_irq_on_irqstack_cond() -> irq_enter_rcu() -> *preempt_count_add(HARDIRQ_OFFSET)*
>   -> // Run IRQ...
>
> NMI path:
>   -> x86_64 asm (entry_64.S)
>   -> irqentry_nmi_enter() -> __nmi_enter() -> *__preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET)*
>                           -> *rcu_nmi_enter()*

The reason is symmetry vs. returning from interrupt / exception:

 irqentry_enter()
      exit_rcu = false;

      if (user_mode(regs)) {
          irqentry_enter_from_user_mode(regs)
            __enter_from_user_mode(regs)
              user_exit_irqoff();       <- RCU handling for NOHZ full

      } else if (is_idle_task(current)) {
            rcu_irq_enter()
            exit_rcu = true;
      }

 irq_enter_rcu()
     __irq_enter_raw()
     preempt_count_add(HARDIRQ_OFFSET);

 irq_handler()

 irq_exit_rcu()
     preempt_count_sub(HARDIRQ_OFFSET);
     if (!in_interrupt() && local_softirq_pending())
         invoke_softirq();

 irqentry_exit(regs, exit_rcu)

     if (user_mode(regs)) {
         irqentry_exit_to_user_mode(regs)
           user_enter_irqoff();     <- RCU handling for NOHZ full
     } else if (irqs_enabled(regs)) {
           if (exit_rcu) {          <- Idle task special case
               rcu_irq_exit();
           } else {
              irqentry_exit_cond_resched();
           }

     } else if (exit_rcu) {
         rcu_irq_exit();
     }

On return from interrupt HARDIRQ_OFFSET has to be removed _before_
handling soft interrupts. It's also required that the preempt count is
back to its original state _before_ reaching irqentry_exit(), which
might schedule if the interrupt/exception hit user space or kernel space
with interrupts enabled.

So doing it symmetrically makes sense.
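
For illustration, a stripped-down sketch of irqentry_exit_cond_resched()
(paraphrasing kernel/entry/common.c of that era; the RCU and stack
sanity checks are elided). It can only reschedule once the preempt
count is back to zero, i.e. after irq_exit_rcu() has dropped
HARDIRQ_OFFSET:

    void irqentry_exit_cond_resched(void)
    {
        if (!preempt_count()) {
            /* This test would never pass if HARDIRQ_OFFSET
             * were still accounted in the preempt count.
             */
            if (need_resched())
                preempt_schedule_irq();
        }
    }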

For NMIs the above conditionals do not apply at all and we just do

    __nmi_enter()
        __preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET);
    rcu_nmi_enter();

    handle_nmi();

    rcu_nmi_exit();
    __nmi_exit()
        __preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET);

The reason why the preempt count is incremented before invoking
rcu_nmi_enter() is simply that RCU has to know about being in NMI
context, i.e. in_nmi() has to return the correct answer.
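
A rough sketch of that dependency, assuming the include/linux/preempt.h
and kernel/rcu/tree.c layout of that time (heavily elided):

    /* in_nmi() is derived purely from the preempt counter ... */
    #define in_nmi()    (preempt_count() & NMI_MASK)

    /* ... so __nmi_enter() must have added NMI_OFFSET before
     * rcu_nmi_enter() runs, otherwise these checks would lie.
     */
    noinstr void rcu_nmi_enter(void)
    {
        ...
        if (!in_nmi())
            rcu_dynticks_task_exit();
        ...
    }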

Thanks,

        tglx
