Date:   Tue, 18 Feb 2020 08:16:43 -0800
From:   Shannon Nelson <snelson@...sando.io>
To:     David Miller <davem@...emloft.net>
Cc:     netdev@...r.kernel.org
Subject: Re: [PATCH net-next 0/9] ionic: Add support for Event Queues

On 2/17/20 2:03 PM, David Miller wrote:
> From: Shannon Nelson <snelson@...sando.io>
> Date: Sun, 16 Feb 2020 22:55:22 -0800
>
>> On 2/16/20 8:11 PM, David Miller wrote:
>>> From: Shannon Nelson <snelson@...sando.io>
>>> Date: Sun, 16 Feb 2020 15:11:49 -0800
>>>
>>>> This patchset adds a new EventQueue feature that can be used
>>>> for multiplexing the interrupts if we find that we can't get
>>>> enough from the system to support our configuration.  We can
>>>> create a small number of EQs that use interrupts, and have
>>>> the TxRx queue pairs subscribe to event messages that come
>>>> through the EQs, selecting an EQ with (TxIndex % numEqs).
>>> How is a user going to be able to figure out how to direct
>>> traffic to specific cpus using multiqueue settings if you're
>>> going to have the mapping go through this custom muxing
>>> afterwards?
>> When using the EQ feature, the TxRx are assigned to the EventQueues in
>> a straight round-robin, so the layout is predictable.  I suppose we
>> could have a way to print out the TxRx -> EQ -> Irq mappings, but I'm
>> not sure where we would put such a thing.
> No user is going to know this and it's completely inconsistent with how
> other multiqueue networking devices behave.

The ionic's main RSS set is limited to the number of CPUs, so in normal 
use we remain consistent with other drivers.  With no additional 
configuration this is the standard behavior, as expected, so most users 
won't need to know or care.
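
For reference, since the layout is predictable, the round-robin mapping 
boils down to something like the sketch below (illustrative names only, 
not the actual ionic driver symbols):

/* Sketch of the TxRx -> EQ -> IRQ layout described above.
 * Illustrative names only; not the actual ionic driver code.
 */
struct eq_map {
	unsigned int eq_index;	/* EQ the TxRx queue pair subscribes to */
	unsigned int irq;	/* interrupt vector backing that EQ */
};

static struct eq_map qpair_to_eq(unsigned int qpair_index,
				 unsigned int num_eqs,
				 const unsigned int *eq_irqs)
{
	struct eq_map map;

	map.eq_index = qpair_index % num_eqs;	/* straight round-robin */
	map.irq = eq_irqs[map.eq_index];	/* each EQ owns one vector */

	return map;
}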

We have a FW configuration option that the customer can choose to make 
use of the much larger set of queues that we have available.  This 
keeps the RSS set limited to the CPU count or less, keeping normal use 
consistent, and makes the additional queues available for macvlan 
offload use.  Depending on the customer's configuration this can be 
hundreds of queues, which seems excessive, but we have been given use 
cases for them.  In these cases the queues are wrapped around the 
vectors available in the customer's configuration.  This is similar to 
the Intel i40e's macvlan offload support, which can also end up 
wrapping around, though its number of offload channels is constrained 
to a much smaller count.
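
To make the wrap-around concrete (hypothetical numbers, not taken from 
an actual configuration): with 32 offload queues spread over 8 vectors, 
queue N simply lands on vector N % 8, so four queues share each vector.

/* Standalone illustration of offload queues wrapping around a smaller
 * vector pool; hypothetical numbers, not an actual ionic configuration.
 */
#include <stdio.h>

int main(void)
{
	unsigned int num_queues = 32, num_vectors = 8;

	for (unsigned int q = 0; q < num_queues; q++)
		printf("queue %2u -> vector %u\n", q, q % num_vectors);

	return 0;
}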

(BTW, with the ixgbe we can do an ethtool set_channel to get more queues 
than vectors on the PF, which ends up wrapping the queues around the 
allocated vectors.  Not extremely useful perhaps, but possible.)

We don't have support for the macvlan offload in this upstream driver 
yet, but this patchset allows us to play nicely with that FW configuration.

sln
