Date:   Tue, 12 Jun 2018 15:33:37 -0700
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Florian Fainelli <f.fainelli@...il.com>
Cc:     Alexander Duyck <alexander.h.duyck@...el.com>,
        intel-wired-lan@...osl.org,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        Netdev <netdev@...r.kernel.org>
Subject: Re: [Intel-wired-lan] [jkirsher/next-queue PATCH v2 0/7] Add support
 for L2 Fwd Offload w/o ndo_select_queue

On Tue, Jun 12, 2018 at 10:56 AM, Florian Fainelli <f.fainelli@...il.com> wrote:
> On 06/12/2018 08:18 AM, Alexander Duyck wrote:
>> This patch series is meant to allow support for the L2 forward offload, aka
>> MACVLAN offload without the need for using ndo_select_queue.
>>
>> The existing solution currently requires that we use ndo_select_queue in
>> the transmit path if we want to associate specific Tx queues with a given
>> MACVLAN interface. In order to get away from this we need to repurpose the
>> tc_to_txq array and XPS pointer for the MACVLAN interface and use those as
>> a means of accessing the queues on the lower device. As a result we cannot
>> offload a device that is configured as multiqueue; however, it doesn't
>> make much sense to configure a macvlan interface as multiqueue anyway,
>> since it doesn't have a qdisc of its own in the first place.
>
> Interesting, so at some point I had come up with the following for
> mapping queues between the DSA slave network devices and the DSA master
> network device (doing the actual transmission). The DSA master network
> device driver is just a normal network device driver.
>
> The set-up is as follows: 4 external Ethernet switch ports, each with 8
> egress queues, and the DSA master (bcmsysport.c), aka the CPU Ethernet
> controller, which has 32 output queues, so you can do a 1:1 mapping of
> those; that's actually what we want. A subsequent hardware generation only
> provides 16 output queues, so we can still do a 2:1 mapping.
>
> The implementation is done like this:
>
> - DSA slave network devices are always created after the DSA master
> network device so we can leverage that
>
> - a specific notifier runs from the DSA core and tells the DSA
> master about the switch position in the tree (position 0 = directly
> attached), the switch port number, and a pointer to the slave network
> device
>
> - we establish the mapping between the queues within the bcmsysport
> driver as a simple array
>
> - when transmitting, DSA slave network devices set a specific queue/port
> number within the 16 bits that skb->queue_mapping permits
>
> - this gets reused by bcmsysport.c to extract the correct queue number
> during ndo_select_queue, so that the appropriate queue gets used
> and congestion works end-to-end.
>
> The reason we do that is that there is some out-of-band HW that
> monitors the queue depth of the switch port's egress queue and
> back-pressures the Ethernet controller directly when it tries to
> transmit to a congested queue.
>
> I had initially considered establishing the mapping using tc and some
> custom "bind" argument of some kind, but ended up doing things the way
> they are now, which is more automatic, though it leaves less
> configuration to the user. This has a number of caveats though:
>
> - this is made generic within the context of DSA in that nothing is
> switch driver or Ethernet MAC driver specific and the notifier
> represents the contract between these two seemingly independent subsystems
>
> - the queue indicated between DSA slave and master is unfortunately
> switch driver/controller specific (BRCM_TAG_SET_PORT_QUEUE,
> BRCM_TAG_GET_PORT, BRCM_TAG_GET_QUEUE)
>
> What I like about your patchset is the mapping establishment, but as you
> will read from my reply in patch 2, I think the (upper) 1:N (lower)
> mapping might not work for my specific use case.
>
> Anyhow, not intended to be blocking this, as it seems to be going in the
> right direction anyway.

I think I am still not getting why the 1:N would be an issue. At least
the way I have the code implemented here, the lower queues all have a
qdisc associated with them, just not the upper device. Generally I am
using the macvlan as a bump in the wire to take care of filtering for
the bridging mode. If I have to hairpin packets and send them back up
on one of the upper interfaces, I want to do that in software rather
than hardware, so I try to take care of it there instead of routing it
through the hardware.

>>
>> I am submitting this as an RFC for the netdev mailing list, and officially
>> submitting it for testing to Jeff Kirsher's next-queue in order to validate
>> the ixgbe specific bits.
>>
>> The big changes in this set are:
>>   Allow lower device to update tc_to_txq and XPS map of offloaded MACVLAN
>>   Disable XPS for single queue devices
>>   Replace accel_priv with sb_dev in ndo_select_queue
>>   Add sb_dev parameter to fallback function for ndo_select_queue
>>   Consolidated ndo_select_queue functions that appeared to be duplicates
>
> Interesting, turns out I had a possibly similar use case with DSA where
> the slave network devices need to select an outgoing queue number for

I was kind of assuming this could be applied to a number of possible
use cases. As it was, I was wondering if maybe we should look at adding
this as an option for a standard VLAN as well, since we could perform
the same kind of filtering and just deliver the packet directly to the
VLAN interface instead of requiring the extra trip through the stack
after the tag has been stripped.
