Message-Id: <20180123.132424.1035340800864767853.davem@davemloft.net>
Date: Tue, 23 Jan 2018 13:24:24 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: torvalds@...ux-foundation.org
Cc: pabeni@...hat.com, frederic@...nel.org,
linux-kernel@...r.kernel.org, alexander.levin@...izon.com,
peterz@...radead.org, mchehab@...pensource.com,
hannes@...essinduktion.org, paulmck@...ux.vnet.ibm.com,
wanpeng.li@...mail.com, dima@...sta.com, tglx@...utronix.de,
akpm@...ux-foundation.org, rrendec@...sta.com, mingo@...nel.org,
sgruszka@...hat.com, riel@...hat.com, edumazet@...gle.com
Subject: Re: [RFC PATCH 0/4] softirq: Per vector threading v3
From: Linus Torvalds <torvalds@...ux-foundation.org>
Date: Tue, 23 Jan 2018 09:42:32 -0800
> But I wonder if the test triggers the "lets run lots of workqueue
> threads", and then the single-threaded user space just gets blown out
> of the water by many kernel threads. Each thread gets its own "fair"
> amount of CPU, but..
If a single cpu's softirq deferral can end up running on multiple
workqueue threads, indeed that's a serious problem.
So if we're in a workqueue and it does a:
schedule_work_on(this_cpu, currently_executing_work);
it'll potentially make a new thread?
That's exactly the code path that will get exercised during a UDP
flood the way that vector_work_func() is implemented.
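For readers without the RFC patches at hand, a hypothetical sketch of the self-requeuing pattern under discussion (this is NOT the actual vector_work_func() from the series; the helper do_softirq_vector() and struct vector_work are invented for illustration) might look like:

```c
/* Hypothetical sketch, kernel context assumed -- not the RFC code.
 * A per-vector work function that re-queues itself while its softirq
 * vector stays pending.  If schedule_work_on() can wake or create an
 * additional pool worker each time, a sustained UDP flood keeps the
 * requeue loop going and kernel threads pile up, each getting its own
 * "fair" share of CPU at the expense of a single-threaded user task.
 */
static void vector_work_func(struct work_struct *work)
{
	struct vector_work *vw = container_of(work, struct vector_work, work);

	local_bh_disable();
	do_softirq_vector(vw->vec);	/* run the deferred vector once (hypothetical helper) */
	local_bh_enable();

	/* Still pending?  Defer again -- this schedule_work_on() from
	 * inside a currently-executing work item is the path that may
	 * spin up another worker thread on this CPU. */
	if (local_softirq_pending() & (1 << vw->vec))
		schedule_work_on(smp_processor_id(), work);
}
```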