Message-ID: <jhja6z9i4bi.mognet@arm.com>
Date: Wed, 05 Aug 2020 14:56:49 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: peterz@...radead.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Vladimir Oltean <olteanv@...il.com>,
Kurt Kanzenbach <kurt.kanzenbach@...utronix.de>,
Alison Wang <alison.wang@....com>, catalin.marinas@....com,
will@...nel.org, paulmck@...nel.org, mw@...ihalf.com,
leoyang.li@....com, vladimir.oltean@....com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Anna-Maria Gleixner <anna-maria@...utronix.de>
Subject: Re: [RFC PATCH] arm64: defconfig: Disable fine-grained task level IRQ time accounting
On 05/08/20 14:40, peterz@...radead.org wrote:
> On Mon, Aug 03, 2020 at 09:22:53PM +0200, Thomas Gleixner wrote:
>
>> totaltime = irqtime + tasktime
>>
>> Ignoring irqtime and pretending that totaltime is what the scheduler
>> can control and deal with is naive at best.
>
> Well no, that's what we call system overhead and is assumed to be
> included in the 'error margin'.
>
> The way things are set up is that we say that, by default, RT tasks can
> consume 95% of cputime and the remaining 5% is sufficient to keep the
> system alive.
>
> Those 5% include all system overhead, IRQs, RCU, !RT workqueues etc..
>
> Obviously IRQ_TIME accounting changes the balance a bit, but that's what
> it is. We can't really do anything better.
>
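For the archives: those numbers are the sched_rt_runtime_us /
sched_rt_period_us sysctls, 950000/1000000 by default (writing -1 to the
runtime knob disables throttling entirely). A trivial userspace reader,
assuming nothing beyond the two procfs paths:

/*
 * rt_budget.c -- print the global RT throttling budget. With the
 * default sysctl values (950000/1000000 us) this prints 95%.
 */
#include <stdio.h>

static long read_long(const char *path)
{
	FILE *f = fopen(path, "r");
	long val;

	if (!f)
		return -2;	/* distinct from -1 == "unlimited" */
	if (fscanf(f, "%ld", &val) != 1)
		val = -2;
	fclose(f);
	return val;
}

int main(void)
{
	long runtime = read_long("/proc/sys/kernel/sched_rt_runtime_us");
	long period  = read_long("/proc/sys/kernel/sched_rt_period_us");

	if (runtime == -2 || period <= 0)
		return 1;
	if (runtime == -1) {
		puts("RT throttling disabled");
		return 0;
	}
	printf("RT classes may use %.1f%% of each %ld us period\n",
	       100.0 * (double)runtime / (double)period, period);
	return 0;
}
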
I'm starting to think that as well. I tried some fugly hack of injecting
avg_irq into sched_rt_runtime_exceeded() with something along the lines of:
  irq_time = (rq->avg_irq.util_avg * sched_rt_period(rt_rq))
		  >> SCHED_CAPACITY_SHIFT;
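
In context, it looked roughly like this (heavily trimmed sketch against
kernel/sched/rt.c, assuming CONFIG_IRQ_TIME_ACCOUNTING so rq->avg_irq
exists; the real function does a fair bit more):

static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
{
	u64 runtime = sched_rt_runtime(rt_rq);
	struct rq *rq = rq_of_rt_rq(rt_rq);
	u64 irq_time;

	/*
	 * avg_irq.util_avg is a PELT average in [0, SCHED_CAPACITY_SCALE];
	 * scale it by the throttling period to fake up "IRQ time consumed
	 * this period", then account it against the RT budget on top of
	 * rt_rq->rt_time (which, being based on rq_clock_task(), excludes
	 * IRQ time).
	 */
	irq_time = (rq->avg_irq.util_avg * sched_rt_period(rt_rq))
			>> SCHED_CAPACITY_SHIFT;

	if (rt_rq->rt_time + irq_time > runtime) {
		/* throttle the rt_rq as the stock code does */
	}

	/* rest of the real function elided */
	return 0;
}
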
It's pretty bad for a few reasons; one is that avg_irq is tracked over its
own PELT window, which has nothing to do with the throttling period.
Another is that it is, as Dietmar pointed out, CPU and frequency invariant,
so it falls over on big.LITTLE.
Making update_curr_rt() use rq_clock() rather than rq_clock_task() makes it
"work" but goes against all the good reasons there were to introduce
rq_clock_task() in the first place.
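
For completeness, the clock choice is a single line there; trimmed sketch
of the current code:

static void update_curr_rt(struct rq *rq)
{
	struct task_struct *curr = rq->curr;
	u64 delta_exec;

	/*
	 * rq_clock_task() is rq_clock() minus time stolen by IRQs (and
	 * paravirt steal time) when IRQ_TIME_ACCOUNTING is on. Switching
	 * this to rq_clock() charges IRQ time to the running RT task, so
	 * the throttle finally sees it -- but every other consumer of
	 * delta_exec (PELT, cputime, rt_time) gets polluted too.
	 */
	delta_exec = rq_clock_task(rq) - curr->se.exec_start;

	/* ... sum into curr->se.sum_exec_runtime and rt_rq->rt_time ... */
}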
> Apparently this SoC has significant IRQ time for some reason. Also,
> relying on RT throttling for 'correct' behaviour is also wrong. What
> needs to be done is find who is using all this RT time and why, that
> isn't right.
I've been tempted to say the test case is a bit bogus, but am not familiar
enough with the RT throttling details to hold that ground. That said, from
both watching the execution and reading the stress-ng source, it seems to
unconditionally spawn 32 FIFO-50 tasks (there's even an option to make
these FIFO-99!!!), which is quite a crowd on a single-CPU system.
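
For reference, each of those workers boils down to more or less the below
(my reduction, not stress-ng's actual code; needs root or CAP_SYS_NICE,
and on a single CPU it leans entirely on the 95% throttle to keep the
machine usable):

/*
 * fifo50.c -- a minimal stand-in for one stress-ng worker in the
 * scenario above. Do not run this anywhere you care about.
 */
#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };

	/* pid 0 == the calling thread */
	if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
		perror("sched_setscheduler");
		return 1;
	}

	for (;;)
		;	/* pure spin: 100% runtime, never blocks */
}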