Message-ID: <20200806203914.GQ4295@paulmck-ThinkPad-P72>
Date:   Thu, 6 Aug 2020 13:39:14 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     peterz@...radead.org,
        Valentin Schneider <valentin.schneider@....com>,
        Vladimir Oltean <olteanv@...il.com>,
        Kurt Kanzenbach <kurt.kanzenbach@...utronix.de>,
        Alison Wang <alison.wang@....com>, catalin.marinas@....com,
        will@...nel.org, mw@...ihalf.com, leoyang.li@....com,
        vladimir.oltean@....com, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org,
        Anna-Maria Gleixner <anna-maria@...utronix.de>
Subject: Re: [RFC PATCH] arm64: defconfig: Disable fine-grained task level
 IRQ time accounting

On Thu, Aug 06, 2020 at 09:03:24PM +0200, Thomas Gleixner wrote:
> Paul,
> 
> "Paul E. McKenney" <paulmck@...nel.org> writes:
> > On Thu, Aug 06, 2020 at 01:45:45PM +0200, peterz@...radead.org wrote:
> >> The safety thing is concerned with RT tasks. It doesn't pretend to help
> >> with runaway IRQs, never has, never will.
> >
> > Getting into the time machine back to the 1990s...
> >
> > DYNIX/ptx had a discretionary mechanism to deal with excessive interrupts.
> > There was a function that long-running interrupt handlers were supposed
> > to call periodically that would return false if the system felt that
> > the CPU had done enough interrupts for the time being.  In that case,
> > the interrupt handler was supposed to schedule itself for a later time,
> > but leave the interrupt unacknowledged in order to prevent retriggering
> > in the meantime.
> >
> > Of course, this mechanism would be rather less helpful in Linux.
> >
> > For one, Linux has way more device drivers and way more oddball devices.
> > In contrast, the few devices that DYNIX/ptx supported were carefully
> > selected, and the selection criteria included being able to put up
> > with this sort of thing.  Also, the fact that there was but a handful
> > of device drivers meant that changes like this could be more easily
> > propagated through all drivers.
> 
> We could do that completely at the core interrupt handling level. 

Ah, true enough if the various NAPI-like devices give up the CPU from
time to time.  Which they might well do for all I know.
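
Just to make the DYNIX/ptx scheme above concrete, here is a purely
illustrative sketch of how it might look if done at the core level.
Every name in it (hardirq_budget_left(), the my_dev_*() helpers, struct
my_dev) is invented for illustration, not an existing kernel or
DYNIX/ptx interface:

#include <linux/interrupt.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

/* Per-CPU hardirq work budget, replenished elsewhere (e.g., from the tick). */
static DEFINE_PER_CPU(long, hardirq_budget);

/* Invented core helper: may this CPU keep doing hardirq work? */
static bool hardirq_budget_left(void)
{
	return this_cpu_dec_return(hardirq_budget) > 0;
}

struct my_dev {				/* stand-in driver state */
	struct delayed_work resume;	/* continues the deferred work */
};

/* Invented driver helpers, assumed defined elsewhere. */
bool my_dev_has_work(struct my_dev *md);
void my_dev_do_one(struct my_dev *md);
void my_dev_ack(struct my_dev *md);

static irqreturn_t my_dev_irq(int irq, void *dev_id)
{
	struct my_dev *md = dev_id;

	while (my_dev_has_work(md)) {
		my_dev_do_one(md);
		if (!hardirq_budget_left()) {
			/*
			 * Defer the rest; leave the device unacked so
			 * the interrupt does not retrigger meanwhile.
			 */
			schedule_delayed_work(&md->resume, 1);
			return IRQ_HANDLED;
		}
	}
	my_dev_ack(md);			/* ack only once fully drained */
	return IRQ_HANDLED;
}

The point being, as above, that the device stays unacknowledged across
the deferral, so the line cannot retrigger until the deferred work
finishes and acks it.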

> > Also, Linux supports way more workloads.  In contrast, DYNIX/ptx could
> > pick a small percentage of each CPU that would be permitted to be used
> > by hardware interrupt handlers.  As in there are probably Linux workloads
> > that run >90% of some poor CPU within hardware interrupt handlers.
> 
> Yet another tunable. /me runs

;-) ;-) ;-)

If there are workloads that would like to be able to keep one or more
CPUs completely busy handling interrupts, it should be possible to
create something that is used sort of like cond_resched() to keep RCU,
the scheduler, and the various watchdogs and lockup detectors at bay.
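
From the handler's side, the usage might look about like cond_resched()
does today. To be clear, irq_cond_quiesce() and fetch_one_event() below
are invented names; no such kernel interfaces exist:

#include <linux/interrupt.h>

/*
 * Invented helper, by analogy with cond_resched(): called periodically
 * from a hardirq handler that wants to monopolize this CPU.  It would
 * report a quiescent state to RCU and keep the scheduler, watchdogs,
 * and lockup detectors at bay, returning false when the handler must
 * instead wrap up and return.
 */
bool irq_cond_quiesce(void);

bool fetch_one_event(void *dev_id);	/* invented driver helper */

static irqreturn_t busy_poll_irq(int irq, void *dev_id)
{
	while (fetch_one_event(dev_id)) {
		if (!irq_cond_quiesce())
			break;	/* CPU has other obligations; back off */
	}
	return IRQ_HANDLED;
}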

For example, RCU could supply a function that checked whether the
current interrupt arrived from idle and, if so, reported a quiescent
state for that CPU.  So if the CPU was idle and there wasn't anything
pending for it, that CPU could safely stay in a hardirq handler
indefinitely.  I suppose that the function should also return an
indication of failure in cases such as an interrupt from non-idle.
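
Something along these lines, where rcu_report_qs_from_hardirq() is an
invented hook and only the context checks are existing predicates:

#include <linux/bug.h>
#include <linux/hardirq.h>
#include <linux/sched.h>

/* Invented: report a quiescent state for this CPU from hardirq context. */
void rcu_report_qs_from_hardirq(void);

/*
 * Return true if this hardirq interrupted the idle loop with nothing
 * else pending, in which case a quiescent state has been reported and
 * the caller may keep running.  Return false if the handler should
 * instead wrap up soon, e.g., because it interrupted a real task.
 */
static bool rcu_irq_from_idle_qs(void)
{
	if (WARN_ON_ONCE(!in_irq()))
		return false;		/* only makes sense in hardirq */
	if (!is_idle_task(current))
		return false;		/* interrupted a real task */
	if (need_resched())
		return false;		/* CPU has pending work */
	rcu_report_qs_from_hardirq();
	return true;
}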

Sort of like NO_HZ_FULL, but for hardirq handlers, and also allowing
those handlers to use RCU read-side critical sections.

Or we could do what all the cool kids do these days, namely just apply
machine learning, thus automatically self-tuning in real time.

/me runs...

							Thanx, Paul
