Message-ID: <CANDhNCo6M8NdLemjhA2sQ941agU+LQHxhRKAVMvr-qg9mQV51Q@mail.gmail.com>
Date: Mon, 17 Oct 2022 17:04:09 -0700
From: John Stultz <jstultz@...gle.com>
To: Qais Yousef <qais.yousef@....com>
Cc: LKML <linux-kernel@...r.kernel.org>,
John Dias <joaodias@...gle.com>,
"Connor O'Brien" <connoro@...gle.com>,
Rick Yiu <rickyiu@...gle.com>, John Kacur <jkacur@...hat.com>,
Chris Redpath <chris.redpath@....com>,
Abhijeet Dharmapurikar <adharmap@...cinc.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>, kernel-team@...roid.com,
"J . Avila" <elavila@...gle.com>
Subject: Re: [RFC PATCH v4 3/3] softirq: defer softirq processing to ksoftirqd if CPU is busy with RT

On Mon, Oct 17, 2022 at 7:45 AM Qais Yousef <qais.yousef@....com> wrote:
> This time I paid attention to the average, as the best case number for the
> vanilla kernel is better:
>
>                    |      vanilla       | with softirq patches v4  |
> -------------------|--------------------|--------------------------|
>                    |  #1  |  #2  |  #3  |   #1   |   #2   |   #3   |
> -------------------|------|------|------|--------|--------|--------|
> t0 avg delay (us)  |31.59 |22.94 |26.50 | 31.81  | 33.57  | 34.90  |
> t1 avg delay (us)  |16.85 |16.32 |37.16 | 29.05  | 30.51  | 31.65  |
> t2 avg delay (us)  |25.34 |32.12 |17.40 | 26.76  | 28.28  | 28.56  |
>
> It shows that with the patches we largely hover around 30us, whereas 16-26us
> is more prevalent for the vanilla kernel.
>
> I am not sure I can draw a concrete conclusion from these numbers. It seems
> I need to run longer than 4 hours to hit the worst case scenario every run on
> the vanilla kernel. There's an indication that the worst case scenario is
> harder to hit with the patches, but it looks like there's a hit on the
> average delay.

Thanks so much for running these tests and capturing these detailed numbers!
I'll have to look further into the average case going up here.

> I'm losing access to this system from today. I think I'll wait for more
> feedback on this RFC; and do another round of testing for longer periods of
> time once there's clearer sense this is indeed the direction we'll be going
> for.

Do you mind sending me the script you used to run the test? I'll try to
reproduce it on some x86 hardware locally.
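
In case it helps frame what I'm after: my guess is the measurement boils
down to something like the sketch below -- a SCHED_FIFO thread sleeping on
a periodic absolute timer and recording how late it actually wakes up
(basically what cyclictest does). This is just my sketch to check I
understand what t0/t1/t2 are reporting, not an attempt to guess your actual
setup; the names and numbers (1ms period, priority 50) are made up.

/*
 * Rough sketch of a per-thread wakeup-delay measurement (illustrative only):
 * an RT thread sleeps until an absolute deadline and records how far past
 * that deadline it actually runs.
 * Build: gcc -O2 -o rt_wake_delay rt_wake_delay.c -lpthread
 * Needs root (or CAP_SYS_NICE) to get SCHED_FIFO.
 */
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS   1000000     /* 1ms period, made-up value */
#define ITERATIONS  10000

static int64_t ts_sub_ns(struct timespec a, struct timespec b)
{
	return (a.tv_sec - b.tv_sec) * 1000000000LL + (a.tv_nsec - b.tv_nsec);
}

static void *rt_thread(void *arg)
{
	struct timespec next, now;
	int64_t sum_ns = 0, max_ns = 0;

	clock_gettime(CLOCK_MONOTONIC, &next);
	for (int i = 0; i < ITERATIONS; i++) {
		/* advance the absolute deadline by one period */
		next.tv_nsec += PERIOD_NS;
		while (next.tv_nsec >= 1000000000L) {
			next.tv_nsec -= 1000000000L;
			next.tv_sec++;
		}
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
		clock_gettime(CLOCK_MONOTONIC, &now);

		int64_t delay = ts_sub_ns(now, next);  /* how late we woke up */
		sum_ns += delay;
		if (delay > max_ns)
			max_ns = delay;
	}
	printf("avg delay %.2f us, max delay %.2f us\n",
	       sum_ns / (double)ITERATIONS / 1000.0, max_ns / 1000.0);
	return NULL;
}

int main(void)
{
	pthread_t t;
	pthread_attr_t attr;
	struct sched_param sp = { .sched_priority = 50 };

	pthread_attr_init(&attr);
	pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
	pthread_attr_setschedparam(&attr, &sp);

	if (pthread_create(&t, &attr, rt_thread, NULL)) {
		fprintf(stderr, "pthread_create failed (need root for SCHED_FIFO?)\n");
		return 1;
	}
	pthread_join(t, NULL);
	return 0;
}

Presumably the interesting part is whatever softirq-generating load you run
alongside the RT threads, which is what I'd most like to mirror on x86.
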
thanks
-john