Message-ID: <15007808-b5b1-441c-9a20-94195330b245@gmail.com>
Date: Fri, 7 Jun 2024 10:40:38 +0300
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Jakub Kicinski <kuba@...nel.org>, Joe Damato <jdamato@...tly.com>
Cc: Tariq Toukan <ttoukan.linux@...il.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, nalramli@...tly.com,
Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Richard Cochran <richardcochran@...il.com>,
"open list:MELLANOX MLX5 core VPI driver" <linux-rdma@...r.kernel.org>,
Tariq Toukan <tariqt@...dia.com>
Subject: Re: [RFC net-next v4 2/2] net/mlx5e: Add per queue netdev-genl stats
On 07/06/2024 3:19, Jakub Kicinski wrote:
> On Thu, 6 Jun 2024 14:54:40 -0700 Joe Damato wrote:
>>>> Check that the values in /proc/net/dev match the output of cli for the
>>>> same device, even while the device is down.
>>>>
>>>> Note that while the device is down, per-queue stats output nothing
>>>> (because the device is down, there are no queues):
>>>
>>> This part is not true anymore.
>>
>> It is true with this patch applied when running the command below.
>> Maybe I should have been more explicit: cli.py outputs [] when
>> scope = queue, which could be an internal cli.py thing, but this is
>> definitely true with this patch.
>>
>> Did you test it and get different results?
>
> To avoid drivers having their own interpretations what "closed" means,
> core hides all queues in closed state:
>
> https://elixir.bootlin.com/linux/v6.10-rc1/source/net/core/netdev-genl.c#L582
>
Oh, so the kernel doesn't even call the driver's
mlx5e_get_queue_stats_rx/tx callbacks if the interface is down, even
though our driver can easily satisfy the query and provide the stats.
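For reference, the core-side check at that link is roughly the
following (paraphrased from memory of v6.10-rc1, so the exact code may
differ):

	/* net/core/netdev-genl.c (approximate sketch, not verbatim) */
	static int
	netdev_nl_stats_by_queue(struct net_device *netdev, ...)
	{
		...
		/* A closed device exposes no queues, so per-queue stats
		 * are skipped entirely and the driver callbacks are
		 * never invoked. */
		if (!(netdev->flags & IFF_UP))
			return 0;
		...
	}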
I think the kernel here makes a design assumption about the stats and
enforces it on all vendor drivers.
I don't think it's a matter of "closed channel" interpretation; it's
more about persistent stats.
IMO the kernel should be generic enough to let both designs (persistent
and non-persistent stats) integrate naturally with this new queue
netdev-genl stats feature.
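For example, in mlx5e the per-channel counters live in the priv struct
and are not freed or zeroed on ifdown, so the driver could answer the
query even while closed. Roughly (simplified, field names from memory):

	struct mlx5e_priv {
		...
		/* Persistent SW stats: allocated up front and kept
		 * across down/up cycles, independent of channel
		 * lifetime. */
		struct mlx5e_channel_stats **channel_stats;
		struct mlx5e_ptp_stats     ptp_stats;
		...
	};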
>>> PTP RQ index is naively assigned to zero:
>>> rq->ix = MLX5E_PTP_CHANNEL_IX;
>>>
>>> but this isn't to be used as the stats index.
>>> Today, the PTP-RQ has no matching rxq at the kernel level,
>>> i.e. turning the PTP-RQ on won't add a kernel-level RXQ to
>>> real_num_rx_queues.
>>> Maybe we'd better do so.
>>> If not, and the current state is kept, the best we can do is let the PTP-RQ
>>> naively contribute its queue-stat to channel 0.
>>
>> OK, it sounds like the easiest thing to do is just count PTP as
>> channel 0, so if i == 0, I'll add in the PTP stats.
>>
>> But please see below regarding testing whether or not PTP is
>> actually enabled.
>
> If we can, I think we should avoid making queue 0 too special.
> If someone configures steering and only expects certain packets on
> queue 0 - getting PTP counted there will be a surprise.
> I vote to always count it towards base.
+1, let's count PTP RX in the base, especially since it has no matching
kernel-level rxq.
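Something along these lines in the driver's get_base_stats callback
(just a sketch; the exact fields depend on the final patch):

	static void mlx5e_get_base_stats(struct net_device *dev,
					 struct netdev_queue_stats_rx *rx,
					 struct netdev_queue_stats_tx *tx)
	{
		struct mlx5e_priv *priv = netdev_priv(dev);

		/* Fold the PTP RQ into the device-wide base counters
		 * so it is never attributed to a real rxq. */
		if (priv->rx_ptp_opened) {
			struct mlx5e_rq_stats *ptp = &priv->ptp_stats.rq;

			rx->packets += ptp->packets;
			rx->bytes   += ptp->bytes;
		}
	}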
Another option is to add one more kernel rxq for it (i.e. set
real_num_rx_queues to num_channels + 1). But that would be a bigger
change; we can keep it for a follow-up discussion.
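(For the record, that would mean calling something like
netif_set_real_num_rx_queues(netdev, num_channels + 1) when the PTP RQ
is opened, plus mapping that extra rxq index to the PTP-RQ stats; just
sketching the direction, not proposing it for this series.)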