Message-Id: <20180124.100558.97132829347179555.davem@davemloft.net>
Date: Wed, 24 Jan 2018 10:05:58 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: pabeni@...hat.com
Cc: torvalds@...ux-foundation.org, frederic@...nel.org,
linux-kernel@...r.kernel.org, alexander.levin@...izon.com,
peterz@...radead.org, mchehab@...pensource.com,
hannes@...essinduktion.org, paulmck@...ux.vnet.ibm.com,
wanpeng.li@...mail.com, dima@...sta.com, tglx@...utronix.de,
akpm@...ux-foundation.org, rrendec@...sta.com, mingo@...nel.org,
sgruszka@...hat.com, riel@...hat.com, edumazet@...gle.com,
nks.gnu@...il.com
Subject: Re: [RFC PATCH 0/4] softirq: Per vector threading v3
From: Paolo Abeni <pabeni@...hat.com>
Date: Wed, 24 Jan 2018 15:54:05 +0100
> Niklas suggested a possible relation with CONFIG_IRQ_TIME_ACCOUNTING=y
> and indeed he was right.
>
> The patched kernel under test had CONFIG_IRQ_TIME_ACCOUNTING set, and
> very little CPU time was accounted to the kworker:
>
> [2125 is the relevant kworker's pid]
> grep sum_exec_runtime /proc/2125/sched; sleep 10; grep sum_exec_runtime /proc/2125/sched
> se.sum_exec_runtime : 13408.239286
> se.sum_exec_runtime : 13456.907197
>
> even though that process was processing a lot of packets and
> basically burning a CPU.
So IRQ_TIME_ACCOUNTING makes the scheduler think that the worker
threads are using nearly no task time at all.
The existing ksoftirqd code should hit the same problem, right?
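For context, the two se.sum_exec_runtime samples quoted above can be turned into an estimate of how much CPU the scheduler actually saw the kworker use. A minimal sketch (the 10-second interval and the millisecond unit of sum_exec_runtime are taken from the quoted commands; the function name is illustrative, not from any kernel tool):

```python
# Estimate scheduler-visible CPU utilization from two
# /proc/<pid>/sched se.sum_exec_runtime samples (values in ms),
# taken `interval_s` seconds apart.
def visible_utilization(before_ms, after_ms, interval_s):
    """Return the fraction of one CPU accounted to the task."""
    delta_ms = after_ms - before_ms
    return delta_ms / (interval_s * 1000.0)

# The two samples from the quoted 10-second measurement:
util = visible_utilization(13408.239286, 13456.907197, 10)
print(f"{util:.2%}")
```

This comes out to roughly 0.49% of one CPU over the 10-second window, which illustrates the point being made: with IRQ_TIME_ACCOUNTING, nearly all of the softirq work is charged as IRQ time rather than task time, so a kworker saturating a CPU looks almost idle to the scheduler.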