Date:   Thu, 11 Oct 2018 12:03:52 +0100
From:   John Hurley <john.hurley@...ronome.com>
To:     Or Gerlitz <gerlitz.or@...il.com>
Cc:     Jakub Kicinski <jakub.kicinski@...ronome.com>,
        Linux Netdev List <netdev@...r.kernel.org>,
        Jiri Pirko <jiri@...nulli.us>, oss-drivers@...ronome.com,
        ozsh@...lanox.com, avivh@...lanox.com,
        Simon Horman <simon.horman@...ronome.com>
Subject: Re: [RFC 0/2] net: sched: indirect/remote setup tc block cb registering

On Wed, Oct 10, 2018 at 2:38 PM Or Gerlitz <gerlitz.or@...il.com> wrote:
>
> On Thu, Oct 4, 2018 at 8:19 PM Jakub Kicinski
> <jakub.kicinski@...ronome.com> wrote:
> > On Thu, 4 Oct 2018 17:20:43 +0100, John Hurley wrote:
> > > > > In this case the hw driver will receive the rules from the tunnel device directly.
> > > > > The driver can then offload them as it sees fit.
> > > >
> > > > if both instances of the hw drivers (uplink0, uplink1) register to get
> > > > the rules installed on the block of the tunnel device we have exactly
> > > > what we want, don't we?
> > > >
> > >
> > > The design here is that each hw driver should only need to register
> > > for callbacks on a 'higher level' device's block once.
> > > When a callback is triggered the driver receives one instance of the
> > > rule and can make its own decision about what to do.
> > > This is slightly different from registering ingress devs where each
> > > uplink registers for its own block.
> > > It is probably more akin to the egdev setup in that if a rule on a
> > > block egresses to an uplink, the driver receives 1 callback on the
> > > rule, irrespective of how many underlying netdevs are on the block.
> >
> > Right, though nothing stops the driver from registering multiple
> > callbacks for the same device, if it's somehow useful.
>
> I must be missing something... putting uplink bonding aside: if the user
> sets a tc ingress rule on a tunnel device (vxlan0/gre0) on a system with
> multiple unrelated NICs/uplinks that support TC decap offload, wouldn't
> each of these netdevs want to install the rule into HW? Why should the
> HW driver duplicate the rule among the candidate netdev instances it
> created, rather than each of them getting the callback and deciding?
>
> We want each netdev instance of these NICs to get the callback and decide.

Hi Or,
It depends on how we want to offload tunnels.
In the case of the NFP, we offload one instance of a tunnel rule, not
one instance per uplink.
With this, it makes sense to have one callback per tunnel netdev (and
per driver) rather than per uplink (although, as Jakub pointed out, the
option is there to register more callbacks).
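
To make that concrete, here is a rough sketch of the driver-side flow as
I read the RFC. The foo_* names are made up for illustration, and the
tc_indr_block_cb_register() name/signature reflects the proposed patches,
so the exact names may well change before anything is merged:

	/*
	 * Illustrative sketch only: foo_* is a made-up driver, and
	 * tc_indr_block_cb_register() is taken from my reading of the
	 * RFC patches, not a final API.
	 */
	#include <linux/netdevice.h>
	#include <net/pkt_cls.h>

	struct foo_app;			/* driver-private state (opaque here) */

	/* Placeholder: a real driver would bind/unbind its classifier
	 * callback to the offered block (bo->block) here.
	 */
	static int foo_setup_tc_block(struct foo_app *app,
				      struct net_device *netdev,
				      struct tc_block_offload *bo)
	{
		return -EOPNOTSUPP;
	}

	/* Indirect callback: fired for block setup events on the tunnel netdev. */
	static int foo_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv,
					enum tc_setup_type type, void *type_data)
	{
		struct foo_app *app = cb_priv;

		switch (type) {
		case TC_SETUP_BLOCK:
			return foo_setup_tc_block(app, netdev, type_data);
		default:
			return -EOPNOTSUPP;
		}
	}

	/* Called when the driver spots a tunnel netdev it wants rules from.
	 * Note: registered once per (driver, tunnel netdev), not once per uplink.
	 */
	static int foo_register_tunnel_dev(struct foo_app *app,
					   struct net_device *netdev)
	{
		return tc_indr_block_cb_register(netdev, app,
						 foo_indr_setup_tc_cb, app);
	}
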
If we consider the egdev model for offload, we only get a single
callback per rule when the egress device is registered and the ingress
dev is not known - is this not similar, in that the driver gets one
callback for the rule and decides what to do with it?
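
Continuing the made-up foo_* naming above, acting on that single
callback could look roughly like this (again just a sketch, not real
NFP code):

	/*
	 * Sketch of the classifier callback the driver binds to the tunnel
	 * device's block: one callback per rule, and the driver - not the
	 * core - decides how (or whether) to map it onto its uplinks.
	 */

	/* Placeholder helpers; a real driver would program/remove the HW flow. */
	static int foo_add_tunnel_flow(struct foo_app *app,
				       struct tc_cls_flower_offload *flower)
	{
		return -EOPNOTSUPP;
	}

	static int foo_del_tunnel_flow(struct foo_app *app,
				       struct tc_cls_flower_offload *flower)
	{
		return 0;
	}

	static int foo_indr_block_flower_cb(enum tc_setup_type type,
					    void *type_data, void *cb_priv)
	{
		struct foo_app *app = cb_priv;
		struct tc_cls_flower_offload *flower = type_data;

		if (type != TC_SETUP_CLSFLOWER)
			return -EOPNOTSUPP;

		switch (flower->command) {
		case TC_CLSFLOWER_REPLACE:
			/* Offload a single instance of the tunnel (decap) rule. */
			return foo_add_tunnel_flow(app, flower);
		case TC_CLSFLOWER_DESTROY:
			return foo_del_tunnel_flow(app, flower);
		default:
			return -EOPNOTSUPP;
		}
	}
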
John
