Message-ID: <419f7e64-c15c-86b8-3b1d-ccedf60959f5@linux.ibm.com>
Date: Wed, 26 Oct 2022 16:10:23 -0500
From: Nick Child <nnac123@...ux.ibm.com>
To: Dave Taht <dave.taht@...il.com>, Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, nick.child@....com
Subject: Re: [RFC PATCH net-next 0/1] ibmveth: Implement BQL
On 10/25/22 19:08, Dave Taht wrote:
> On Tue, Oct 25, 2022 at 3:10 PM Jakub Kicinski <kuba@...nel.org> wrote:
>>
>> On Tue, 25 Oct 2022 15:03:03 -0500 Nick Child wrote:
>>> The qdisc is the default pfifo_fast.
>>
>> You need a more advanced qdisc to see an effect. Try fq.
>> BQL tries to keep the NIC queue (fifo) as short as possible
>> so that packets are held in the qdisc instead. But if the qdisc
>> is also just a fifo there's no practical difference.
>>
>> I have no practical experience with BQL on virtualized NICs
>> though, so I am unsure what gains you should expect to see.
>
I understand. I think that is why I am trying to investigate this
further: the virtualization layer could undermine everything that BQL
is trying to accomplish. That said, I could also be shining my
flashlight in the wrong places, hence the RFC.
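
(For anyone reproducing this: switching the root qdisc on a test
interface and checking whether packets actually back up there can be
done with something like the commands below -- "eth0" is only a
placeholder for the ibmveth device:

    # replace the default pfifo_fast root qdisc with fq
    tc qdisc replace dev eth0 root fq

    # while traffic is running, check the qdisc backlog/drop counters
    tc -s qdisc show dev eth0

If BQL is doing its job, the backlog should build up in the qdisc
statistics rather than in the driver's tx ring.)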
> fq_codel would be a better choice of underlying qdisc for a test, and
> in this environment you'd need to pound the interface flat with hundreds
> of flows, preferably in both directions.
>
After enabling fq_codel and restarting the tests, I am still not
seeing any noticeable difference in the number of bytes sitting in the
netdev_queue (though it is possible my tracing is incorrect). I also
tried reducing the number of queues, disabling TSO, and even running
100-500 parallel iperf connections. Throughput and latency take a hit
with more connections, so I assume the systems are saturated.
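
(Roughly, the load generation looks like the following -- the server
address and flow counts are only examples, with iperf3 shown for
illustration:

    # server side
    iperf3 -s

    # client side: many parallel flows, then the reverse direction
    iperf3 -c <server> -P 100 -t 60
    iperf3 -c <server> -P 100 -t 60 --reverse

Versions of iperf3 that support it can use --bidir instead of a
separate --reverse run.)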
> My questions are:
>
> If the ring buffers never fill, why do you need to allocate so many
> buffers in the first place?
The reasoning for 16 tx queues was mostly to allow for more parallel
calls to the device's xmit function. After hearing your points about
resource issues, I will send a patch to reduce this number to 8 queues.
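
(Side note: where the driver supports it, the queue counts can also be
inspected and adjusted at runtime with ethtool -- I have not checked
whether ibmveth wires up set-channels, so treat this only as a sketch:

    # show the pre-set and current channel counts
    ethtool -l eth0

    # request 8 tx queues, if the driver implements set_channels
    ethtool -L eth0 tx 8
)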
> If bql never engages, what's the bottleneck elsewhere? XMIT_MORE?
>
I suppose the question I am trying to pose is: how do we know that BQL
is engaging?
> Now the only tool for monitoring bql I know of is bqlmon.
>
bqlmon is useful for tracking the BQL `limit` value assigned to a
queue (IOW `watch
/sys/class/net/<device>/queues/tx*/byte_queue_limits/limit`), but what
I would like to figure out is whether that value is actually being
applied to an active network connection.
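
(One thing that may help here: alongside `limit`, the
byte_queue_limits directory also exposes `inflight`, the number of
bytes currently queued to the device but not yet completed. Watching
it next to the limit should show whether BQL is actually throttling
anything, e.g.:

    watch -n1 'grep . /sys/class/net/<device>/queues/tx*/byte_queue_limits/inflight'

If inflight regularly approaches the limit, BQL is actively limiting;
if it stays near zero, the ring never fills in the first place.)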
Thanks again for feedback and helping me out with this.
Nick Child