lists.openwall.net
Open Source and information security mailing list archives
Date: Mon, 25 Oct 2021 19:00:40 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Mark Rutland <mark.rutland@....com>
Cc: linux-kernel@...r.kernel.org, aou@...s.berkeley.edu, deanbo422@...il.com, green.hu@...il.com, guoren@...nel.org, jonas@...thpole.se, kernelfans@...il.com, linux-arm-kernel@...ts.infradead.org, linux@...linux.org.uk, maz@...nel.org, nickhu@...estech.com, palmer@...belt.com, paulmck@...nel.org, paul.walmsley@...ive.com, peterz@...radead.org, shorne@...il.com, stefan.kristiansson@...nalahti.fi, tglx@...utronix.de, torvalds@...ux-foundation.org, tsbogend@...ha.franken.de, vgupta@...nel.org, will@...nel.org
Subject: Re: [PATCH 10/15] irq: arm64: perform irqentry in entry code

On Thu, Oct 21, 2021 at 07:02:31PM +0100, Mark Rutland wrote:
> In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/arm64
> perform all the irqentry accounting in its entry code.
>
> As arch/arm64 already performs portions of the irqentry logic in
> enter_from_kernel_mode() and exit_to_kernel_mode(), including
> rcu_irq_{enter,exit}(), the only additional calls that need to be made
> are to irq_{enter,exit}_rcu(). Removing the calls to
> rcu_irq_{enter,exit}() from handle_domain_irq() ensures that we inform
> RCU once per IRQ entry and will correctly identify quiescent periods.
>
> Since we should not call irq_{enter,exit}_rcu() when entering a
> pseudo-NMI, el1_interrupt() is reworked to have separate __el1_irq() and
> __el1_pnmi() paths for regular IRQ and pseudo-NMI entry, with
> irq_{enter,exit}_rcu() only called for the former.
>
> In preparation for removing HANDLE_DOMAIN_IRQ, the irq regs are managed
> in do_interrupt_handler() for both regular IRQ and pseudo-NMI. This is
> currently redundant, but not harmful.
>
> For clarity the preemption logic is moved into __el1_irq(). We should
> never preempt within a pseudo-NMI, and arm64_enter_nmi() already
> enforces this by incrementing the preempt_count, but it's clearer if we
> never invoke the preemption logic when entering a pseudo-NMI.
>
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Cc: Catalin Marinas <catalin.marinas@....com>
> Cc: Marc Zyngier <maz@...nel.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Will Deacon <will@...nel.org>

Acked-by: Catalin Marinas <catalin.marinas@....com>