Message-ID: <1516805645.2476.23.camel@redhat.com>
Date: Wed, 24 Jan 2018 15:54:05 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: David Miller <davem@...emloft.net>,
Frederic Weisbecker <frederic@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Sasha Levin <alexander.levin@...izon.com>,
Peter Zijlstra <peterz@...radead.org>,
Mauro Carvalho Chehab <mchehab@...pensource.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Wanpeng Li <wanpeng.li@...mail.com>,
Dmitry Safonov <dima@...sta.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Radu Rendec <rrendec@...sta.com>,
Ingo Molnar <mingo@...nel.org>,
Stanislaw Gruszka <sgruszka@...hat.com>,
Rik van Riel <riel@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Niklas Cassel <nks.gnu@...il.com>
Subject: Re: [RFC PATCH 0/4] softirq: Per vector threading v3
On Tue, 2018-01-23 at 09:42 -0800, Linus Torvalds wrote:
> On Tue, Jan 23, 2018 at 8:57 AM, Paolo Abeni <pabeni@...hat.com> wrote:
> >
> > > Or is it that the workqueue execution is simply not yielding for some
> > > reason?
> >
> > Yes, that's the case.
> >
> > I've only spent a little time on it so far, so I don't have many
> > data points. I'll try to investigate the scenario later this week.
>
> Hmm. Workqueues seem to use cond_resched_rcu_qs(), which does a
> cond_resched() (and notes an RCU quiescent state).
>
> But I wonder if the test triggers the "let's run lots of workqueue
> threads" case, and then the single-threaded user space just gets
> blown out of the water by many kernel threads. Each thread gets its
> own "fair" amount of CPU, but..
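For reference, cond_resched_rcu_qs() boils down to something like the
below; this is a sketch from memory, so check include/linux/rcupdate.h
for the exact definition:

/* sketch from memory, not the literal kernel source */
#define cond_resched_rcu_qs() \
do { \
	if (!cond_resched()) \
		rcu_note_voluntary_context_switch(current); \
} while (0)

So the worker does offer to reschedule; the question is why the
scheduler never actually decides to switch away from it. That turns
out to be an accounting issue: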
Niklas suggested a possible relation with CONFIG_IRQ_TIME_ACCOUNTING=y,
and indeed he was right.
The patched kernel under test had CONFIG_IRQ_TIME_ACCOUNTING set, and
very little CPU time was accounted to the kworker:
[2125 is the relevant kworker's pid]
grep sum_exec_runtime /proc/2125/sched; sleep 10; grep sum_exec_runtime /proc/2125/sched
se.sum_exec_runtime : 13408.239286
se.sum_exec_runtime : 13456.907197
despite the fact that the kworker was processing a lot of packets and
basically burning a full CPU.
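For scale: that is roughly 48.7ms of runtime accounted over a 10s
window, i.e. less than 0.5% of one CPU.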
With CONFIG_IRQ_TIME_ACCOUNTING switched off, I see the expected
behaviour: top reports that the user space process and the kworker
share the CPU almost fairly, and the user space process is able to
receive a reasonable amount of packets.
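My understanding of the mechanism, as a simplified sketch of
update_rq_clock_task() in kernel/sched/core.c rather than the literal
code: with CONFIG_IRQ_TIME_ACCOUNTING, the time this CPU spent in
hardirq/softirq context is subtracted from the clock that CFS charges
to the current task:

/* simplified sketch, not the literal kernel code */
static void update_rq_clock_task(struct rq *rq, s64 delta)
{
#ifdef CONFIG_IRQ_TIME_ACCOUNTING
	s64 irq_delta;

	/* hardirq+softirq time accumulated on this CPU since the
	 * last update */
	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
	if (irq_delta > delta)
		irq_delta = delta;
	rq->prev_irq_time += irq_delta;

	/* irq time is not charged to whichever task happened to be
	 * running */
	delta -= irq_delta;
#endif
	/* clock_task feeds update_curr(), i.e. sum_exec_runtime and
	 * vruntime */
	rq->clock_task += delta;
}

Assuming the threaded softirq processing still runs with the softirq
count raised (which I believe it does in this series), almost all of
the kworker's time is accounted as softirq time rather than to the
kworker itself. Its vruntime barely advances, so CFS sees an almost
idle task and never preempts it in favour of the user space process.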
Paolo