Message-ID: <20180124021159.GB11302@lerouge>
Date: Wed, 24 Jan 2018 03:12:01 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Dmitry Safonov <dima@...sta.com>
Cc: Paolo Abeni <pabeni@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Levin Alexander <alexander.levin@...izon.com>,
Peter Zijlstra <peterz@...radead.org>,
Mauro Carvalho Chehab <mchehab@...pensource.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Wanpeng Li <wanpeng.li@...mail.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Radu Rendec <rrendec@...sta.com>,
Ingo Molnar <mingo@...nel.org>,
Stanislaw Gruszka <sgruszka@...hat.com>,
Rik van Riel <riel@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
David Miller <davem@...emloft.net>
Subject: Re: [RFC PATCH 0/4] softirq: Per vector threading v3
On Tue, Jan 23, 2018 at 12:32:13PM +0000, Dmitry Safonov wrote:
> On Tue, 2018-01-23 at 11:13 +0100, Paolo Abeni wrote:
> > Hi,
> >
> > On Fri, 2018-01-19 at 16:46 +0100, Frederic Weisbecker wrote:
> > > As per Linus' suggestion, this take doesn't limit the number of
> > > occurrences per jiffy anymore but instead defers a vector to
> > > workqueues as soon as it gets re-enqueued on IRQ tail.
> > >
> > > No tunable here, so testing should be easier.
> > >
> > > git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
> > > softirq/thread-v3
> > >
> > > HEAD: 6835e92cbd70ef4a056987d2e1ed383b294429d4
> >
> > I tested this series in the UDP flood scenario, binding the user space
> > process receiving the packets to the same CPU processing the related
> > IRQ, and the tput sinks nearly to 0, like before Eric's patch.
> >
> > The perf tool says that almost all the softirq processing is done
> > inside the workqueue, but the user space process is scheduled very
> > rarely, while before this series, in this scenario, ksoftirqd and the
> > user space process got a fair share of the CPU time.
>
> It gets a fair share of the CPU time only on some lucky platforms
> (maybe on most of them). On other, not-so-lucky platforms it was also
> unfair - as I've previously described, because softirqs are raised too
> slowly :-/
Indeed, your case looks pretty special: IRQs don't get the opportunity to
interrupt softirqs, they only fire right after them.
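
(For reference, here is a rough sketch of the kind of pinned UDP sink Paolo
describes, assuming sched_setaffinity() for the process side; the IRQ itself
would be pinned separately, e.g. through /proc/irq/<N>/smp_affinity. The
actual test tool likely differs.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;	/* CPU that handles the NIC IRQ */
	struct sockaddr_in addr;
	char buf[2048];
	cpu_set_t set;
	int fd;

	/* Pin the receiver to the chosen CPU */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(9999);		/* arbitrary test port */
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}

	/* Drain packets as fast as possible; throughput is measured on the sender side */
	for (;;)
		recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);

	return 0;
}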
I'm wondering if you should try threaded IRQs. Having softirqs always
run in a thread instead of in the hardirq tail might help (via the
threadirqs= kernel parameter).
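
For the record, threadirqs on the kernel command line forces that model for
all interrupt handlers (except those marked IRQF_NO_THREAD). A driver can
also opt in per interrupt with request_threaded_irq(); a minimal sketch,
with made-up mydev_* names:

#include <linux/interrupt.h>

/* Hardirq context: just ack the device and defer the real work */
static irqreturn_t mydev_hard_handler(int irq, void *dev_id)
{
	return IRQ_WAKE_THREAD;
}

/* Thread context: the heavy processing runs here and can be preempted */
static irqreturn_t mydev_thread_fn(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int mydev_setup_irq(unsigned int irq, void *dev)
{
	return request_threaded_irq(irq, mydev_hard_handler, mydev_thread_fn,
				    IRQF_ONESHOT, "mydev", dev);
}

With threadirqs the same split is applied transparently, without any driver
changes.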