Message-ID: <YyyRMam6Eu8nmeCd@zx2c4.com>
Date: Thu, 22 Sep 2022 18:45:37 +0200
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Tejun Heo <tj@...nel.org>
Cc: Sherry Yang <sherry.yang@...cle.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
Lai Jiangshan <jiangshanlai@...il.com>,
Sebastian Siewior <bigeasy@...utronix.de>,
Jack Vogel <jack.vogel@...cle.com>,
Tariq Toukan <tariqt@...dia.com>, sultan@...neltoast.com
Subject: Re: 10% regression in qperf tcp latency after introducing commit
"4a61bf7f9b18 random: defer fast pool mixing to worker"
Hi Tejun,
On Wed, Sep 21, 2022 at 01:54:43PM -1000, Tejun Heo wrote:
> Hello,
>
> On Thu, Sep 22, 2022 at 12:32:49AM +0200, Jason A. Donenfeld wrote:
> > What are our options? Investigate queue_work_on() bottlenecks? Move back
> > to the original pattern, but use raw spinlocks? Some thing else?
>
> I doubt it's queue_work_on() itself: if it's called at very high frequency,
> the duplicate calls would just fail to claim the PENDING bit and return.
> But if it's being called at a high frequency, it'd be waking up a kthread
> over and over again, which can get pretty expensive. Maybe that ends up
> competing with ksoftirqd, which is handling net rx or something?
Huh, yeah, interesting theory. Or, the one time that it _does_ pass the
test_and_set_bit check, the extra overhead here is enough to screw up
the latency? Both theories sound at least plausible.
> So, yeah, I'd try something which doesn't always involve scheduling and a
> context switch, whether that's softirq, tasklet, or irq work.
Alright, I'll do that. I posted a diff for Sherry to try, and I'll make
that into a real patch and wait for her test.
> I probably am
> mistaken, but I thought the RT kernel pushes irq handling to threads so
> that these things can be handled sanely. Is this some special case?
It does mostly. But there's still a hard IRQ handler, somewhere, because
IRQs gotta IRQ, and the RNG benefits from getting a timestamp exactly
when that happens. So here we are.
Jason