Message-ID: <20221019110143.m4romocqmprkekzw@airbuntu>
Date: Wed, 19 Oct 2022 12:01:43 +0100
From: Qais Yousef <qyousef@...alina.io>
To: John Stultz <jstultz@...gle.com>
Cc: Qais Yousef <qais.yousef@....com>,
LKML <linux-kernel@...r.kernel.org>,
John Dias <joaodias@...gle.com>,
Connor O'Brien <connoro@...gle.com>,
Rick Yiu <rickyiu@...gle.com>, John Kacur <jkacur@...hat.com>,
Chris Redpath <chris.redpath@....com>,
Abhijeet Dharmapurikar <adharmap@...cinc.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>, kernel-team@...roid.com,
"J . Avila" <elavila@...gle.com>
Subject: Re: [RFC PATCH v4 3/3] softirq: defer softirq processing to
ksoftirqd if CPU is busy with RT
(note the change in email)
On 10/17/22 17:04, John Stultz wrote:
> On Mon, Oct 17, 2022 at 7:45 AM Qais Yousef <qais.yousef@....com> wrote:
> > This time I paid attention to the average, as the best case number for the
> > vanilla kernel is better:
> >
> >                    |      vanilla       | with softirq patches v4  |
> > -------------------|--------------------|--------------------------|
> >                    |  #1  |  #2  |  #3  |   #1   |   #2   |   #3   |
> > -------------------|------|------|------|--------|--------|--------|
> > t0 avg delay (us)  |31.59 |22.94 |26.50 | 31.81  | 33.57  | 34.90  |
> > t1 avg delay (us)  |16.85 |16.32 |37.16 | 29.05  | 30.51  | 31.65  |
> > t2 avg delay (us)  |25.34 |32.12 |17.40 | 26.76  | 28.28  | 28.56  |
> >
> > It shows that we largely hover around 30us with the patches, compared to
> > 16-26us being more prevalent on the vanilla kernel.
> >
> > I am not sure I can draw a concrete conclusion from these numbers. It seems
> > I need to run longer than 4 hours to hit the worst case scenario on every run
> > of the vanilla kernel. There's an indication that the worst case scenario is
> > harder to hit with the patches, but it looks like there's a hit on the average
> > delay.
>
> Thanks so much for running these tests and capturing these detailed numbers!
>
> I'll have to look further into the average case going up here.
>
> > I'm losing access to this system from today. I think I'll wait for more
> > feedback on this RFC; and do another round of testing for longer periods of
> > time once there's clearer sense this is indeed the direction we'll be going
> > for.
>
> Do you mind sending me the script you used to run the test, and I'll
> try to reproduce on some x86 hardware locally?
I ran that in a personal CI setup. I basically run the following 3 'scripts' in
parallel (a rough glue sketch to tie them together follows after the links):
cyclictest.sh [1]:

    cyclictest -t 3 -p 99 -D 3600 -i 1000 --json=cyclictest.json

iperf.sh [2]:

    iperf -s -D
    iperf -c localhost -u -b 10g -t 3600 -i 1 -P 3
dd.sh [3]:

    while true
    do
        # Stop generating I/O load once cyclictest has finished
        cyclictest_running=`ps -e | grep cyclictest || true`
        if [ "x$cyclictest_running" = "x" ]; then
            break
        fi

        #
        # Run dd
        #
        file="/tmp/myci.dd.file"
        for i in $(seq 3)
        do
            dd if=/dev/zero of=$file.$i bs=1M count=2048 &
        done
        wait

        rm -f $file*
        sleep 3
    done
[1] https://github.com/qais-yousef/myci-sched-tests/blob/dev/vars/run_cyclictest.groovy
[2] https://github.com/qais-yousef/myci-sched-tests/blob/dev/vars/run_iperf_parallel.groovy
[3] https://github.com/qais-yousef/myci-sched-tests/blob/dev/vars/run_dd_parallel.groovy
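For completeness, a rough sketch of the glue that runs the three in parallel.
This is only illustrative (the real runs are driven by the CI jobs linked
above); it assumes the snippets are saved as cyclictest.sh, iperf.sh and dd.sh
in the current directory:

    #!/bin/sh
    # Kick off the three loads in parallel and wait for all of them to finish.
    ./cyclictest.sh &      # RT latency measurement; -D 3600 bounds the run to 1h
    sleep 1                # let cyclictest show up in ps before dd.sh polls for it
    ./iperf.sh &           # networking load to generate softirq activity
    ./dd.sh &              # block I/O load; exits once cyclictest is gone
    wait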
Cheers
--
Qais Yousef