Message-ID: <YyuREcGAXV9828w5@zx2c4.com>
Date: Thu, 22 Sep 2022 00:32:49 +0200
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Sherry Yang <sherry.yang@...cle.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Sebastian Siewior <bigeasy@...utronix.de>
Cc: Sebastian Siewior <bigeasy@...utronix.de>,
Jack Vogel <jack.vogel@...cle.com>,
Tariq Toukan <tariqt@...dia.com>
Subject: Re: 10% regression in qperf tcp latency after introducing commit
"4a61bf7f9b18 random: defer fast pool mixing to worker"
Hi Sherry (and Sebastian and Netdev and Tejun and whomever),
I'm top-replying so that I can give other readers an overview of what's
going on, and then I'll leave your email below for additional context.
random.c used to have a hard IRQ handler that did something like this:
    do_some_stuff()
    spin_lock()
    do_some_other_stuff()
    spin_unlock()
That worked fine, but Sebastian pointed out that having spinlocks in a
hard IRQ handler was a big no-no for RT. Not wanting to make those into
raw spinlocks, he suggested we hoist things into a workqueue. So that's
what we did together, and now that function reads:
    do_some_stuff()
    queue_work_on(raw_smp_processor_id(), other_stuff_worker);
That seemed reasonable to me -- it's a pattern practiced a million times
all over the kernel -- and is currently how random.c's
add_interrupt_randomness() functions.
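To make the shape of that concrete, here is a minimal sketch of the
deferral pattern. This is not the literal random.c code; pool_lock,
mix_worker(), and the pool contents are placeholders for the real
input-pool state and mixing logic:

    #include <linux/workqueue.h>
    #include <linux/spinlock.h>
    #include <linux/smp.h>

    /* Placeholder for the input pool's lock. */
    static DEFINE_SPINLOCK(pool_lock);

    struct fast_pool {
            struct work_struct mix; /* INIT_WORK(&pool->mix, mix_worker) at setup */
            unsigned long pool[4];
    };

    static void mix_worker(struct work_struct *work)
    {
            struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);

            /* The lock-taking "other stuff" now runs from a workqueue,
             * i.e. process context, so ordinary (sleeping-on-RT)
             * spinlocks are fine here. */
            spin_lock(&pool_lock);
            /* ... fold fast_pool->pool into the input pool ... */
            spin_unlock(&pool_lock);
    }

    static void hard_irq_path(struct fast_pool *fast_pool)
    {
            /* The lockless stuff stays in the hard IRQ handler; the
             * lock-taking part is punted to a worker on this CPU. */
            queue_work_on(raw_smp_processor_id(), system_highpri_wq,
                          &fast_pool->mix);
    }
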
Sherry, however, has reported a ~10% performance regression using qperf
with TCP over some heavy-duty InfiniBand cards. According to Sherry's
tests, removing the call to queue_work_on() makes the performance
regression go away.
That leads me to suspect that queue_work_on() might actually not be as
cheap as I assumed? If so, is that surprising to anybody else? And what
should we do about this?
Unfortunately, as you'll see from reading below, I'm hopeless at
recreating Sherry's test rig, and even Sherry was unable to reproduce
it on different hardware. Nonetheless, a 10% regression on fancy 40 Gb/s
hardware seems like something worthy of wider concern.
What are our options? Investigate queue_work_on() bottlenecks? Move back
to the original pattern, but use raw spinlocks? Something else?
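For the raw spinlock option in particular, the shape would be roughly
the following. Again, placeholder names rather than an actual patch, and
struct fast_pool is as in the sketch above; note that a raw spinlock
stays a spinning lock on RT, which is exactly the latency source
Sebastian wanted out of the hard IRQ path:

    #include <linux/spinlock.h>

    static DEFINE_RAW_SPINLOCK(pool_lock);

    static void hard_irq_path(struct fast_pool *fast_pool)
    {
            /* ... lockless stuff ... */

            /* Do the mixing inline again, no work item, but with a
             * raw spinlock so it stays legal in hard IRQ on RT. */
            raw_spin_lock(&pool_lock);
            /* ... fold fast_pool->pool into the input pool ... */
            raw_spin_unlock(&pool_lock);
    }
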
Sherry -- are you able to do a bit of profiling to see which
instructions or which part of the function are hottest or are creating
that bottleneck? I think we probably need more information to do
something with this.
Also, because I still have no idea how I can reproduce this myself, you
might need to take the reins with helping to develop and test a patch,
since I'm kind of stabbing in the dark here.
Anyway, because this might be rather involved, I figure it's best to
move this conversation on list in case other folks have insights.
Regards,
Jason
On Wed, Sep 21, 2022 at 06:09:27PM +0000, Sherry Yang wrote:
> > On Sep 20, 2022, at 7:44 AM, Jason A. Donenfeld <Jason@...c4.com> wrote:
> >
> > Anyway, a few questions:
> > 1) Does the regression disappear if you change this line:
> > - queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
> > + schedule_work_on(raw_smp_processor_id(), &fast_pool->mix);
>
> After applying this change, we still see the performance regression there on linux-stable v5.15.
>
> >
> > 2) Does the regression disappear if you remove this line:
> > - queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
> > + //queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
>
> After applying this change, we see the performance recover on linux-stable v5.15.
>
> >
> >> We could see performance regression there.
> >
> > Can you give me some detailed instructions on how I can reproduce
> > this? Can it be reproduced inside of a single VM using network
> > namespaces, for example? Something like that would greatly help me
> > nail this down. For example, if you can give me a bash script that
> > does everything entirely on a single host?
> We are doing a qperf TCP latency test there. All test results above are collected from an X7 server with Mellanox Technologies
> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on a private subnet for IPoIB
> performance testing.
> The regression identified in this bug is in TCP latency in this stack, as reported
> by the qperf tcp_lat metric:
>
> We have one system listening as a qperf server:
> [root@...rQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this case, it’s X7 server with Mellanox card):
> [root@...rQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
>
> However, our test team ran other experiments yesterday.
> * Ran benchmark on X5-2 system over ixgbe interface
> * Ran 8 processes of the benchmark on the original system over the Mellanox card
> Both of these experiments failed to reproduce the regression. This highlights that the regression is not seen over Ethernet network devices
> and is only seen when running a single instance of the qperf benchmark.