Message-ID: <20190530193806.GB23199@amt.cnet>
Date: Thu, 30 May 2019 16:38:08 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Anna-Maria Gleixner <anna-maria@...utronix.de>
Cc: linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Luiz Capitulino <lcapitulino@...hat.com>,
Haris Okanovic <haris.okanovic@...com>
Subject: Re: [patch 0/3] do not raise timer softirq unconditionally
(spinlockless version)
On Wed, May 29, 2019 at 04:52:37PM +0200, Anna-Maria Gleixner wrote:
> Hi,
>
> I had a look at the queue and have several questions about your
> implementation.
>
> First of all, I had some troubles to understand your commit messages. So I
> first had to read the code and then tried to understand the commit
> messages. It is easier, if it works the other way round.
Right. The commit message seemed descriptive to me, but I should probably
try to improve the explanation in it.
> On Mon, 15 Apr 2019, Marcelo Tosatti wrote:
>
> > For isolated CPUs, we'd like to skip awakening ktimersoftd
> > (the switch to and then back from ktimersoftd takes 10us in
> > virtualized environments, in addition to other OS overhead,
> > which exceeds telco requirements for packet forwarding for
> > 5G) from the sched tick.
>
> You would like to prevent raising the timer softirq in general from the
> sched tick for isolated CPUs? Or you would like to prevent raising the
> timer softirq if no pending timer is available?
With DPDK and CPU isolation, there are no low resolution timers running on
the isolated CPUs. So preventing the timer softirq from being raised when
no pending timer is available is enough.
> Nevertheless, this change is not PREEMPT_RT specific. It is a NOHZ
> dependand change. So it would be nice, if the queue is against
> mainline. But please correct me, if I'm wrong.
Since it was based on the idea of
https://lore.kernel.org/patchwork/patch/446045/
which was an -RT patch, I decided to submit it against the -RT kernel.
But I believe it can be merged through the mainline kernel queue as
well.
> [...]
>
> > This patchset reduces cyclictest latency from 25us to 14us
> > on my testbox.
> >
>
> A lot of information is missing: How does your environment looks like for
> this test, what is your workload,...?
x86 hardware, virtualized with KVM, running an -RT kernel on both host
and guest. Relevant host kernel command line:
isolcpus=1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,14
intel_pstate=disable nosoftlockup nohz=on
nohz_full=1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,14
rcu_nocbs=1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,14
Relevant guest kernel command line:
isolcpus=1 nosoftlockup nohz=on nohz_full=1
rcu_nocbs=1
And the workload in question is cyclictest (the production
software is DPDK, but cyclictest is used to gather
latency data).
> Did you also run other tests?
Yes, I have a custom module stress test to attempt to hit the race.
Daniel (CC'ed) ran it under a kernel with debugging enabled,
with no apparent problems.
So one downside of this patchset is that it sometimes raises the
softirq unnecessarily. However, the impact is limited to
isolated CPUs.