Message-ID: <9d29e624-fc02-44cd-9a92-01f813e66eed@nvidia.com>
Date: Wed, 10 Jan 2024 16:09:52 +0200
From: Gal Pressman <gal@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Saeed Mahameed <saeed@...nel.org>, "David S. Miller"
 <davem@...emloft.net>, Paolo Abeni <pabeni@...hat.com>,
 Eric Dumazet <edumazet@...gle.com>, Saeed Mahameed <saeedm@...dia.com>,
 netdev@...r.kernel.org, Tariq Toukan <tariqt@...dia.com>
Subject: Re: [net-next 10/15] net/mlx5e: Let channels be SD-aware

On 09/01/2024 18:00, Jakub Kicinski wrote:
> On Tue, 9 Jan 2024 16:15:50 +0200 Gal Pressman wrote:
>>>> I'm confused, how are RX queues related to XPS?  
>>>
>>> Separate sentence, perhaps I should be more verbose..  
>>
>> Sorry, yes, your understanding is correct.
>> If a packet is received on RQ 0 then it came from PF 0, a packet on
>> RQ 1 came from PF 1, etc., though this is all from the same wire/port.
>>
>> You can enable aRFS, for example, which will make sure that packets
>> destined to a certain CPU are received by the PF that is closer to
>> that CPU.
> 
> Got it.
> 
>>>> XPS shouldn't be affected, we just make sure that whatever queue XPS
>>>> chose will go out through the "right" PF.  
>>>
>>> But you said "correct" to queue 0 going to PF 0 and queue 1 to PF 1.
>>> The queue IDs in my question refer to the queue mapping from the
>>> stack's perspective. If the user wants to send everything to queue 0,
>>> will it use both PFs?  
>>
>> If all traffic is transmitted through queue 0, it will go out from PF 0
>> (the PF that is closer to CPU 0's NUMA node).
> 
> Okay, but earlier you said: "whatever queue XPS chose will go out
> through the "right" PF." - which I read as PF will be chosen based
> on CPU locality regardless of XPS logic.
> 
> If queue 0 => PF 0, then the user has to set up XPS to make CPUs from
> the NUMA node which has PF 0 use even-numbered queues, and CPUs near
> PF 1 use odd-numbered queues. Correct?

I think it is based on the default XPS configuration, but I don't want
to get the details wrong; I'm checking with Tariq and will reply (he's
OOO).
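For illustration, the kind of XPS setup you describe would look roughly
like this (just a sketch, assuming a hypothetical netdev eth0 with 4 TX
queues, CPUs 0-3 on the NUMA node near PF 0 and CPUs 4-7 near PF 1; the
masks are hex CPU bitmaps):

  # Even-numbered queues get the CPUs near PF 0, odd-numbered queues
  # get the CPUs near PF 1.
  echo 0f > /sys/class/net/eth0/queues/tx-0/xps_cpus
  echo f0 > /sys/class/net/eth0/queues/tx-1/xps_cpus
  echo 0f > /sys/class/net/eth0/queues/tx-2/xps_cpus
  echo f0 > /sys/class/net/eth0/queues/tx-3/xps_cpus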

>>>> So for example, XPS will choose a queue according to the CPU, and the
>>>> driver will make sure that packets transmitted from this SQ go out
>>>> through the PF closer to that NUMA node.  
>>>
>>> Sounds like queue 0 is duplicated in both PFs, then?  
>>
>> Depends on how you look at it: each PF has X queues, the netdev has 2X
>> queues.
> 
> I'm asking how it looks from the user perspective, to be clear.

From the user's perspective there is a single netdev; the separation
into PFs is internal to the driver and transparent to the user.
The user configures the number of queues, and the driver splits them
between the PFs.

The same goes for other features: the user configures the netdev like
any other netdev, and it is up to the driver to make sure the netdev
model keeps working.

> From above I gather that the answer is no - queue 0 maps directly 
> to PF 0 / queue 0, nothing on PF 1 will ever see traffic of queue 0.

Right, traffic received on RQ 0 is traffic that was processed by PF 0.
RQ 1 is in fact (PF 1, RQ 0).
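In other words, assuming the split is a simple round-robin across the
two PFs (a sketch of the mapping, not the actual driver code), netdev
RQ i lands on (PF i % 2, local RQ i / 2):

  for i in 0 1 2 3; do
      echo "netdev RQ $i -> PF $((i % 2)), local RQ $((i / 2))"
  done
  # Prints:
  # netdev RQ 0 -> PF 0, local RQ 0
  # netdev RQ 1 -> PF 1, local RQ 0
  # netdev RQ 2 -> PF 0, local RQ 1
  # netdev RQ 3 -> PF 1, local RQ 1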

>>>> Can you share a link please?  
>>>
>>> commit a90d56049acc45802f67cd7d4c058ac45b1bc26f  
>>
>> Thanks, will take a look.
>>
>>>> All the logic is internal to the driver, so I expect it to be fine, but
>>>> I'd like to double check.
>>>
>>> Herm, "internal to the driver" is a bit of a landmine. It will be fine
>>> for iperf testing but real users will want to configure the NIC.
>>
>> What kind of configuration are you thinking of?
> 
> Well, I was hoping you'd do the legwork and show how user configuration
> logic has to be augmented for all relevant stack features to work with
> multi-PF devices. I can list the APIs that come to mind while writing
> this email, but that won't be exhaustive :(

We have been working on this feature for a long time; we thought
through the different configurations and potential issues, and backed
that up with our testing.

TLS, for example, is explicitly blocked in this series for such
netdevices, as we identified it as problematic.

There is always a chance that we missed things; that's why I was
genuinely curious to hear whether you had anything specific in mind.
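
P.S. To make the earlier aRFS point concrete: enabling it is the
standard procedure from Documentation/networking/scaling.rst, nothing
mlx5-specific (a sketch, assuming an interface named eth0):

  ethtool -K eth0 ntuple on       # aRFS requires ntuple filtering
  echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
  for q in /sys/class/net/eth0/queues/rx-*; do
      echo 4096 > "$q/rps_flow_cnt"   # per-queue flow table size
  done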
