Message-ID: <91cde5eeb22eb2926515dd27113c664a@kernel.org>
Date:   Fri, 20 Nov 2020 09:20:17 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     LAK <linux-arm-kernel@...ts.infradead.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Will Deacon <will@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Valentin Schneider <Valentin.Schneider@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Android Kernel Team <kernel-team@...roid.com>,
        mark.rutland@....com
Subject: Re: [PATCH 0/2] arm64: Allow the rescheduling IPI to bypass
 irq_enter/exit

[+ Mark, who has been hacking in the same area lately]

Hi Thomas,

On 2020-11-03 20:32, Thomas Gleixner wrote:
> On Sun, Nov 01 2020 at 13:14, Marc Zyngier wrote:
>> Vincent recently reported [1] that 5.10-rc1 showed a significant
>> regression when running "perf bench sched pipe" on arm64, and
>> pinpointed it to the recent move to handling IPIs as normal
>> interrupts.
>> 
>> The culprit is the use of irq_enter/irq_exit around the handling of
>> the rescheduling IPI, meaning that we enter the scheduler right after
>> the handling of the IPI instead of deferring it to the next preemption
>> event. This accounts for most of the overhead introduced.
> 
> irq_enter()/exit() does not end up in the scheduler. If it does then
> please show the call chain.
> 
> Scheduling happens when the IPI returns, just before returning into the
> low level code (or, on ARM, in the low level code itself), when
> NEED_RESCHED is set (which is usually the case when the IPI is sent) and:
> 
>   the IPI hit user space
> 
> or
> 
>   the IPI hit in preemptible kernel context and CONFIG_PREEMPT[_RT] is
>   enabled.
> 
> Not doing so would be a bug. So I really don't understand your
> reasoning here.

You are of course right. I somehow associated the overhead of the resched
IPI with the scheduler itself. I stand corrected.
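
For my own notes, the return path you describe reduces to something like
this (a simplified sketch, not the literal entry code; 'regs' being
whatever the entry code has at hand):

    /* On return from the IPI, not inside the handler itself: */
    if (user_mode(regs)) {
            /* the IPI hit user space */
            if (need_resched())
                    schedule();
    } else if (IS_ENABLED(CONFIG_PREEMPTION)) {
            /* the IPI hit preemptible kernel context */
            if (!preempt_count() && need_resched())
                    preempt_schedule_irq();
    }

so the actual rescheduling is deferred to the exit path either way.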

> 
>> On architectures that have architected IPIs at the CPU level (x86
>> being the obvious one), the absence of irq_enter/exit is natural.
> 
> It's not really architected IPIs. We reserve the top 20ish vectors on
> each CPU for IPIs and other per processor interrupts, e.g. the per cpu
> timer.
> 
> Now lets look at what x86 does:
> 
> Interrupts and regular IPIs (function call ....) do
> 
>     irqentry_enter()   <- handles rcu_irq_enter() or context tracking
>       ...
>       irq_enter_rcu()
>       ...
>       irq_exit_rcu()
>     irqentry_exit()     <- handles need_resched()
> 
> The scheduler IPI does:
> 
>     irqentry_enter()   <- handles rcu_irq_enter() or context tracking
>       ...
>       __irq_enter_raw()
>       ...
>       __irq_exit_raw()
>     irqentry_exit()     <- handles need_resched()
> 
> So we don't invoke irq_enter_rcu() on enter and on exit we skip
> irq_exit_rcu(). That's fine because
> 
>   - Calling the tick management is pointless because this is going to
>     schedule anyway or something consumed the need_resched already.
> 
>   - The irqtime accounting is silly because it covers only the call and
>     returns. The time spent in the accounting is more than the time
>     we are accounting (empty call).
> 
> So what your hack fails to invoke is rcu_irq_enter()/exit() in case the
> IPI hits the idle task in an RCU off region. You also fail to tell
> lockdep. No cookies!

Who needs cookies when you can have cheese? ;-)

More seriously, it seems to me that we have a bit of a cross-architecture
disconnect here. I have been trying to join the dots between what you
describe above and the behaviour of arm64 (and probably that of a large
number of non-x86 architectures), and I feel massively confused.

Up to 5.9, our handling of the rescheduling IPI was "do as little as
possible": decode the interrupt at the lowest possible level (the
root irqchip), call into arch code, end up in scheduler_ipi(), the end.

No lockdep, no RCU, no nothing.
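
For reference, that flow was essentially the following (a stripped-down
sketch, names simplified, not the literal arch/arm64/kernel/smp.c code):

    static void handle_IPI(int ipinr, struct pt_regs *regs)
    {
            switch (ipinr) {
            case IPI_RESCHEDULE:
                    /* Straight into the scheduler hook: no irq_enter()/
                     * irq_exit(), no RCU, no lockdep. */
                    scheduler_ipi();
                    break;
            default:
                    /* The other IPIs did get the full treatment. */
                    irq_enter();
                    /* ... handle the other IPI types ... */
                    irq_exit();
                    break;
            }
    }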

What changed? Have we missed some radical change in the way the core
kernel expects the arch code to do things? I'm aware of the
kernel/entry/common.c stuff, which implements most of the goodies you
mention, but this is effectively x86-only at the moment.
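
Just to check that I'm reading your call chains correctly: expanded, the
x86 scheduler IPI boils down to something like this (my sketch, 'regs'
being the usual entry pt_regs, using the kernel/entry helpers we don't
have on arm64 yet):

    irqentry_state_t state = irqentry_enter(regs);  /* RCU + lockdep */

    __irq_enter_raw();                  /* no tick, no irqtime accounting */
    scheduler_ipi();
    __irq_exit_raw();

    irqentry_exit(regs, state);         /* handles need_resched() */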

If arm64 has forever been broken, I'd really like to know and fix it.

>> The bad news is that these patches are ugly as sin, and I really don't
>> like them.
> 
> Yes, they are ugly and the extra conditional in the irq handling path
> is not pretty either.
> 
>> I especially hate that they can give driver authors the idea that they
>> can make random interrupts "faster".
> 
> Just split the guts of irq_modify_status() into __irq_modify_status()
> and call that from irq_modify_status().
> 
> Reject IRQ_HIDDEN (which should have been IRQ_IPI really) and IRQ_NAKED
> (IRQ_RAW perhaps) in irq_modify_status().
> 
> Do not export __irq_modify_status() so it can only be used from built-in
> code, which takes it away from driver writers.

Yup, done.
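
For reference, roughly along these lines (an untested sketch; I'm reusing
the flag names from your suggestion, and the actual patch may end up
looking a bit different):

    /* Guts of the operation. Not exported, so only built-in code
     * (e.g. the architecture's IPI setup) can reach the IPI-only flags. */
    void __irq_modify_status(unsigned int irq, unsigned long clr,
                             unsigned long set, unsigned long mask)
    {
            /* ... current body of irq_modify_status(), with clr/set
             * restricted to 'mask' ... */
    }

    /* Driver-visible API, exported as before: rejects the IPI-only flags. */
    void irq_modify_status(unsigned int irq, unsigned long clr,
                           unsigned long set)
    {
            __irq_modify_status(irq, clr, set, ~(IRQ_HIDDEN | IRQ_NAKED));
    }
    EXPORT_SYMBOL_GPL(irq_modify_status);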

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
