Date:   Mon, 26 Sep 2022 13:58:47 +0200
From:   Jiri Pirko <jiri@...nulli.us>
To:     Edward Cree <ecree.xilinx@...il.com>
Cc:     "Wilczynski, Michal" <michal.wilczynski@...el.com>,
        netdev@...r.kernel.org, alexandr.lobakin@...el.com,
        dchumak@...dia.com, maximmi@...dia.com, simon.horman@...igine.com,
        jacob.e.keller@...el.com, jesse.brandeburg@...el.com,
        przemyslaw.kitszel@...el.com
Subject: Re: [RFC PATCH net-next v4 2/6] devlink: Extend devlink-rate api
 with queues and new parameters

Tue, Sep 20, 2022 at 01:09:04PM CEST, ecree.xilinx@...il.com wrote:
>On 19/09/2022 14:12, Wilczynski, Michal wrote:
>> Maybe a switchdev case would be a good parallel here. When you enable switchdev, you get port representors on
>> the host for each VF that is already attached to the VM. Something that gives the host the power to configure a
>> netdev that it doesn't 'own'. So it seems to me like giving the user more power to configure things from the host

Well, not really. It gives the user on the hypervisor the possibility
to configure the eswitch vport side. The other side of the wire, which
is in the VM, is autonomous.


>> is acceptable.
>
>Right, that's the thing though: I instinctively want this to be done
> through representors somehow, because it _looks_ like it ought to
> be scoped to a single netdev; but that forces the hierarchy to
> respect netdev boundaries, which as we've discussed is an unwelcome
> limitation.

Why exactly? Do you want to share a single queue between multiple vports?
Or what exactly would be the use case where you hit the limitation?
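
For context, the devlink-rate hierarchy this RFC extends already lives at the
devlink-instance (eswitch) level rather than being scoped to any one netdev:
drivers expose it through the rate callbacks of struct devlink_ops. A rough
sketch follows; all "foo_*" names and the scheduler-node bookkeeping are
hypothetical, and only the devlink_ops callbacks themselves are existing
kernel API:

/* Rough sketch of the existing devlink-rate driver hooks that this RFC
 * builds on. The "foo_*" names and the scheduler-node structure are
 * hypothetical; only the devlink_ops callbacks are existing kernel API.
 */
#include <linux/slab.h>
#include <net/devlink.h>

struct foo_sched_node {
	u64 tx_max;
	u64 tx_share;
};

static int foo_rate_leaf_tx_max_set(struct devlink_rate *devlink_rate,
				    void *priv, u64 tx_max,
				    struct netlink_ext_ack *extack)
{
	/* priv points at whatever the driver registered for this port's
	 * rate leaf; here a per-vport node of the device-wide scheduler.
	 */
	struct foo_sched_node *node = priv;

	node->tx_max = tx_max;
	return 0;
}

static int foo_rate_node_new(struct devlink_rate *rate_node, void **priv,
			     struct netlink_ext_ack *extack)
{
	struct foo_sched_node *node;

	/* An inner scheduling node belongs to the devlink instance, not to
	 * any netdev, so it can sit above leaves from different vports.
	 */
	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (!node)
		return -ENOMEM;

	*priv = node;
	return 0;
}

static int foo_rate_node_del(struct devlink_rate *rate_node, void *priv,
			     struct netlink_ext_ack *extack)
{
	kfree(priv);
	return 0;
}

static const struct devlink_ops foo_devlink_ops = {
	.rate_leaf_tx_max_set	= foo_rate_leaf_tx_max_set,
	.rate_node_new		= foo_rate_node_new,
	.rate_node_del		= foo_rate_node_del,
	/* tx_share and parent_set callbacks omitted for brevity */
};

User space drives these callbacks through the devlink-rate netlink interface;
the RFC under discussion adds queues and new parameters on top of this
hierarchy.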


>
>> In my mind this is a device-wide configuration, since the ice driver registers each port as a separate PCI device.
>> And each of these devices has its own hardware Tx Scheduler tree global to that port. The queues that we're
>> discussing are actually hardware queues, and are identified by a hardware-assigned txq_id.
>
>In general, hardware being a single unit at the device level does
> not necessarily mean its configuration should be device-wide.
>For instance, in many NICs each port has a single hardware v-switch,
> but we do not have some kind of "devlink filter" API to program it
> directly.  Instead we attach TC rules to _many_ netdevs, and driver
> code transforms and combines these to program the unitary device.
>"device-wide configuration" originally meant things like firmware
> version or operating mode (legacy vs. switchdev) that do not relate
> directly to netdevs.
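
As a concrete illustration of that pattern, a representor driver typically
accepts flower rules per netdev via ndo_setup_tc and folds them into one
shared device-wide table. A minimal sketch, where all "foo_*" symbols are
hypothetical and only the ndo_setup_tc / flow_block plumbing is the standard
kernel API:

/* Minimal sketch of per-netdev TC offload being funnelled into shared
 * device state. All "foo_*" symbols are hypothetical.
 */
#include <linux/netdevice.h>
#include <net/flow_offload.h>
#include <net/pkt_cls.h>

struct foo_repr {
	struct net_device *netdev;
};

static LIST_HEAD(foo_block_cb_list);

/* Hypothetical translation of one flower rule into the shared v-switch. */
static int foo_vswitch_add_flower_rule(struct foo_repr *repr,
					struct flow_cls_offload *f)
{
	return -EOPNOTSUPP; /* placeholder */
}

static int foo_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
				 void *cb_priv)
{
	struct foo_repr *repr = cb_priv;

	switch (type) {
	case TC_SETUP_CLSFLOWER:
		/* Rules arrive per representor netdev, but the driver
		 * combines them to program the single hardware v-switch
		 * that all representors share.
		 */
		return foo_vswitch_add_flower_rule(repr, type_data);
	default:
		return -EOPNOTSUPP;
	}
}

static int foo_repr_setup_tc(struct net_device *dev, enum tc_setup_type type,
			     void *type_data)
{
	struct foo_repr *repr = netdev_priv(dev);

	if (type != TC_SETUP_BLOCK)
		return -EOPNOTSUPP;

	return flow_block_cb_setup_simple(type_data, &foo_block_cb_list,
					  foo_setup_tc_block_cb, repr, repr,
					  true);
}
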
>
>But I agree with you that your approach is the "least evil method";
> if properly explained and documented then I don't have any
> remaining objection to your patch, even though I'm continuing to
> take the opportunity to proselytise for "reprs >> devlink" ;)
>
>-ed
