Message-ID: <20190306095638.7c028bdd@cakuba.hsd1.ca.comcast.net>
Date:   Wed, 6 Mar 2019 09:56:38 -0800
From:   Jakub Kicinski <jakub.kicinski@...ronome.com>
To:     Jiri Pirko <jiri@...nulli.us>
Cc:     davem@...emloft.net, netdev@...r.kernel.org,
        oss-drivers@...ronome.com
Subject: Re: [PATCH net-next v2 4/7] devlink: allow subports on devlink PCI
 ports

On Wed, 6 Mar 2019 13:20:37 +0100, Jiri Pirko wrote:
> Tue, Mar 05, 2019 at 06:15:34PM CET, jakub.kicinski@...ronome.com wrote:
> >On Tue, 5 Mar 2019 12:06:01 +0100, Jiri Pirko wrote:  
> >> >> >as ports.  Can we invent a new command (say "partition"?) that'd take
> >> >> >the bus info where the partition is to be spawned?      
> >> >> 
> >> >> Got it. But the question is how different this object would be from the
> >> >> existing "port" we have today.    
> >> >
> >> >They'd be where "the other side of a PCI link" is represented,
> >> >restricting ports to only the ASIC's forwarding-plane ports.
> >> 
> >> Basically a "host port", right? It can still be the same port object,
> >> only with different flavour and attributes. So we would have:
> >> 
> >> 1) pci/0000:05:00.0/0: type eth netdev enp5s0np0
> >>                        flavour physical switch_id 00154d130d2f
> >> 2) pci/0000:05:00.0/10000: type eth netdev enp5s0npf0s0
> >>                            flavour pci_pf pf 0 subport 0
> >>                            switch_id 00154d130d2f
> >>                            peer pci/0000:05:00.0/1
> >> 3) pci/0000:05:00.0/10001: type eth netdev enp5s0npf0vf0
> >>                            flavour pci_vf pf 0 vf 0
> >>                            switch_id 00154d130d2f
> >>                            peer pci/0000:05:10.1/0
> >> 4) pci/0000:05:00.0/10002: type eth netdev enp5s0npf0s1
> >>                            flavour pci_pf pf 0 subport 1
> >>                            switch_id 00154d130d2f
> >>                            peer pci/0000:05:00.0/2
> >> 5) pci/0000:05:00.0/1: type eth netdev enp5s0f0??
> >>                        flavour host          <----------------
> >>                        peer pci/0000:05:00.0/10000
> >> 6) pci/0000:05:10.1/0: type eth netdev enp5s10f0 
> >>                        flavour host          <----------------
> >>                        peer pci/0000:05:00.0/10001
> >> 7) pci/0000:05:00.0/2: type eth netdev enp5s0f0??
> >>                        flavour host          <----------------
> >>                        peer pci/0000:05:00.0/10002
> >> 
> >> I think it looks quite clear, it gives complete topology view.  
> >
> >Okay, I have some questions :)
> >
> >What do we use for port_index?  
> 
> That is just a number, entirely under the driver's control. The driver
> can assign it in any way it likes.
> 
> >
> >What are the operations one can perform on "host ports"?  
> 
> That is a good question. I would start with *none* and extend the set
> as needs arise.
> 
> 
> >
> >If we have PCI parameters, do they get set on the ASIC side of the port
> >or the host side of the port?  
> 
> Could you give me an example?

Let's take msix_vec_per_pf_min as an example.  

> But I believe they would be set on the switch-port side.

Ok.
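
(For reference, today such a parameter is set on the device handle, not
on any port at all - the value below is purely illustrative:

  $ devlink dev param set pci/0000:05:00.0 \
        name msix_vec_per_pf_min value 32 cmode driverinit

so "which side of the port" is a question this model newly introduces.)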

> >How do those behave when device is passed to VM?  
> 
> In the case of a VF? The VF will have a separate devlink instance
> (separate handle, probably "aliased" to the PF handle). So it would
> disappear from bare metal and appear in the VM:
> $ devlink dev
> pci/0000:00:10.0
> $ devlink dev port
> pci/0000:00:10.1/0: type eth netdev enp5s10f0
>                     flavour host
> That's it for the VM.
> 
> There's no linkage (peer, alias) between this and the instances on
> bare metal.

Ok, I guess this is the main advantage from your perspective?
The fact that "host ports" are visible inside a VM?
Or do you believe that having both ends of a pipe as ports makes the
topology easier to understand?

For creating subdevices, I don't think the handle should ever be a port.
We create new ports on a devlink instance, and configure their forwarding
with offloads of well-established Linux SW constructs (see the bridge
sketch below).  New devices are not logically associated with other ports
(note how in my patches there are 2 "subports" but no main port on that
PF - a split, not a hierarchy).
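
For example, forwarding for the two subports could be configured by
offloading a plain Linux bridge (netdev names taken from the listing
above, purely illustrative):

  $ ip link add name br0 type bridge
  $ ip link set dev br0 up
  # enslave the subport representors; the driver offloads the FDB
  $ ip link set dev enp5s0npf0s0 master br0
  $ ip link set dev enp5s0npf0s1 master br0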

How we want to model forwarding inside a VM (who configures the
underlying switching) remains unclear.

> >You have a VF devlink instance there - what ports does it show?  
> 
> See above.
> 
> 
> >
> >How do those look when the PF is connected to another host?  Do they
> >get spawned at all?  
> 
> What do you mean by "PF is connected to another host"?

Either a "SmartNIC":

http://www.mellanox.com/products/smartnic/

or a multi-host NIC: http://www.mellanox.com/page/multihost

> >Will this not be confusing to DSA folks who have a CPU port?  
> 
> Why do you think so?

"Host" and "CPU" sound quite similar; it is unclear how they differ,
and why we need both (from the user's perspective).
