Message-ID: <YzQuwlc3CIlGWa4u@linutronix.de>
Date: Wed, 28 Sep 2022 13:23:46 +0200
From: Sebastian Siewior <bigeasy@...utronix.de>
To: Tejun Heo <tj@...nel.org>, Sherry Yang <sherry.yang@...cle.com>
Cc: "Jason A. Donenfeld" <Jason@...c4.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
Lai Jiangshan <jiangshanlai@...il.com>,
Jack Vogel <jack.vogel@...cle.com>,
Tariq Toukan <tariqt@...dia.com>
Subject: Re: 10% regression in qperf tcp latency after introducing commit
"4a61bf7f9b18 random: defer fast pool mixing to worker"
On 2022-09-21 13:54:43 [-1000], Tejun Heo wrote:
> Hello,
Hi,
> On Thu, Sep 22, 2022 at 12:32:49AM +0200, Jason A. Donenfeld wrote:
> > What are our options? Investigate queue_work_on() bottlenecks? Move back
> > to the original pattern, but use raw spinlocks? Something else?
>
> I doubt it's queue_work_on() itself: if it's called at a very high
> frequency, the duplicate calls would just fail to claim the PENDING bit
> and return. But being called at a high frequency also means waking up a
> kthread over and over again, which can get pretty expensive. Maybe that
> ends up competing with ksoftirqd, which is handling net rx or something?
There is this (simplified):
| if (new_count & MIX_INFLIGHT)
| return;
|
| if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
| return;
|
| fast_pool->count |= MIX_INFLIGHT;
| queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
That is, a worker is scheduled once at least 1k interrupts have
accumulated or one second has passed since the last mix, whichever
comes first. So how many interrupts do we get per second?
Is the regression coming from more than 1k interrupts in less than a
second, or from the one context switch per second? If it is only one
context switch every second, then I am surprised to see a 10%
performance drop, since context switches happen for other reasons, too,
unless the CPU is isolated.
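To put rough numbers on it, a back-of-the-envelope sketch; the IRQ
rates below are made up for illustration, the real rate depends on the
NIC and the benchmark:

| /* How often does the worker get queued at a given IRQ rate? */
| #include <stdio.h>
|
| int main(void)
| {
| 	/* assumed per-CPU IRQ rates, not measured values */
| 	unsigned long rates[] = { 100, 2000, 50000 };
|
| 	for (size_t i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
| 		unsigned long r = rates[i];
| 		/* queued on 1024 IRQs *or* after one second,
| 		 * whichever comes first */
| 		unsigned long wakeups = (r >= 1024) ? r / 1024 : 1;
| 		printf("%5lu irq/s -> ~%lu worker wakeup(s)/s\n", r, wakeups);
| 	}
| 	return 0;
| }

At 100 IRQs/s the one-second timeout dominates (one wakeup per second);
at 50k IRQs/s the 1024 threshold alone fires ~48 times per second.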
[ There is no heavy contention on the PENDING bit and no flood of
  wakeups, because fast_pool is per-CPU and the MIX_INFLIGHT bit keeps
  the work from being queued again while it is pending; see the
  simplified queue_work_on() sketch below. ]
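For reference, a trimmed sketch of the relevant part of queue_work_on()
from kernel/workqueue.c (tracing and lockdep bits dropped); only the
caller that wins the PENDING bit pays for the wakeup:

| bool queue_work_on(int cpu, struct workqueue_struct *wq,
| 		   struct work_struct *work)
| {
| 	bool ret = false;
| 	unsigned long flags;
|
| 	local_irq_save(flags);
|
| 	/* duplicate callers lose this race and return right away */
| 	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
| 		__queue_work(cpu, wq, work);	/* the expensive part: the wakeup */
| 		ret = true;
| 	}
|
| 	local_irq_restore(flags);
| 	return ret;
| }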
> So, yeah, I'd try something which doesn't always involve scheduling and a
> context switch whether that's softirq, tasklet, or irq work. I probably am
> mistaken but I thought the RT kernel pushes irq handling to threads so that
> these things can be handled sanely. Is this some special case?
As Jason explained, this part is invoked from the non-threaded
(hard-IRQ) part of the interrupt handling.
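For context, a trimmed sketch of the call site in kernel/irq/handle.c
(as of the kernels around this thread):

| irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
| {
| 	irqreturn_t retval;
|
| 	retval = __handle_irq_event_percpu(desc);
|
| 	/* hard-IRQ context, also with forced irq threading / PREEMPT_RT */
| 	add_interrupt_randomness(desc->irq_data.irq);
|
| 	if (!irq_settings_no_debug(desc))
| 		note_interrupt(desc, retval);
|
| 	return retval;
| }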
> Thanks.
Sebastian