Message-ID: <BY5PR12MB432298572152955D2ACAA2A3DC3F0@BY5PR12MB4322.namprd12.prod.outlook.com>
Date:   Fri, 18 Sep 2020 16:13:49 +0000
From:   Parav Pandit <parav@...dia.com>
To:     David Ahern <dsahern@...il.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:     Jiri Pirko <jiri@...dia.com>
Subject: RE: [PATCH net-next v2 1/8] devlink: Introduce PCI SF port flavour
 and port attribute

Hi David,

> From: David Ahern <dsahern@...il.com>
> Sent: Friday, September 18, 2020 8:45 PM
> 
> On 9/17/20 10:18 PM, Parav Pandit wrote:
> >
> >> From: David Ahern <dsahern@...il.com>
> >> Sent: Friday, September 18, 2020 1:32 AM
> >>
> >> On 9/17/20 11:20 AM, Parav Pandit wrote:
> >>> diff --git a/include/net/devlink.h b/include/net/devlink.h
> >>> index 48b1c1ef1ebd..1edb558125b0 100644
> >>> --- a/include/net/devlink.h
> >>> +++ b/include/net/devlink.h
> >>> @@ -83,6 +83,20 @@ struct devlink_port_pci_vf_attrs {
> >>>  	u8 external:1;
> >>>  };
> >>>
> >>> +/**
> >>> + * struct devlink_port_pci_sf_attrs - devlink port's PCI SF attributes
> >>> + * @controller: Associated controller number
> >>> + * @pf: Associated PCI PF number for this port.
> >>> + * @sf: Associated PCI SF of the PCI PF for this port.
> >>> + * @external: when set, indicates if a port is for an external controller
> >>> + */
> >>> +struct devlink_port_pci_sf_attrs {
> >>> +	u32 controller;
> >>> +	u16 pf;
> >>> +	u32 sf;
> >>
> >> Why a u32? Do you expect to support that many SFs? Seems like even a
> >> u16 is more than you can adequately name within an IFNAMSIZ buffer.
> >>
> > I think u16 is likely enough, which lets us create 64K SF ports, which
> > is a lot. :-) I was a little concerned that it shouldn't fall short in
> > a few years, so I picked u32.
> > Users will be able to make use of alternative names, so just because
> > IFNAMSIZ is 16 characters, I do not want to limit the sfnum size.
> > What do you think?
> >
> >>
> >>> +	u8 external:1;
> >>> +};
> >>> +
> >>>  /**
> >>>   * struct devlink_port_attrs - devlink port object
> >>>   * @flavour: flavour of the port
> >>
> >>
> >>> diff --git a/net/core/devlink.c b/net/core/devlink.c
> >>> index e5b71f3c2d4d..fada660fd515 100644
> >>> --- a/net/core/devlink.c
> >>> +++ b/net/core/devlink.c
> >>> @@ -7855,6 +7889,9 @@ static int __devlink_port_phys_port_name_get(struct devlink_port *devlink_port,
> >>>  		n = snprintf(name, len, "pf%uvf%u",
> >>>  			     attrs->pci_vf.pf, attrs->pci_vf.vf);
> >>>  		break;
> >>> +	case DEVLINK_PORT_FLAVOUR_PCI_SF:
> >>> +		n = snprintf(name, len, "pf%usf%u",
> >>> +			     attrs->pci_sf.pf, attrs->pci_sf.sf);
> >>> +		break;
> >>>  	}
> >>>
> >>>  	if (n >= len)
> >>>
> >>
> >> And as I noted before, this function continues to grow device names
> >> and it is going to spill over the IFNAMSIZ buffer and EINVAL is going
> >> to be confusing. It really needs better error handling back to users
> >> (not kernel buffers).
> > Alternative names [1] should help to overcome the limitation of IFNAMSIZ.
> > For error code EINVAL, should it be ENOSPC?
> > If so, should I insert a pre-patch in this series?
> >
> > [1] ip link property add dev DEVICE [ altname NAME .. ]
> >
> 
> You keep adding patches that extend the template based names. Those are
> going to cause odd EINVAL failures (the absolute worst kind of configuration
> failure) with no way for a user to understand why the command is failing, and
> you need to handle that. Returning an extack message back to the user is
> preferred.
Sure, makes sense.
I will send a short follow-up series after this one to return extack in the code flow below, and in other places where returning extack is possible.
In the sysfs flow it is irrelevant anyway.
rtnl_getlink()
  rtnl_phys_port_name_fill()
     dev_get_phys_port_name()
       ndo_phys_port_name()

is that ok?
This series is not really making phys_port_name any worse, except that the sfnum field width is a bit larger than vfnum's.

Now, coming back to phys_port_name: the chance of it not fitting in 15 characters and triggering an -EINVAL error is very slim in most sane cases.
Let's take an example.
A controller in the valid range of 0 to 16,
a PF in the range of 0 to 7,
a VF in the range of 0 to 255,
an SF in the range of 0 to 65535,

will generate VF phys_port_name=c16pf7vf255 (11 chars + 1 null) and
SF phys_port_name=c16pf7sf65535 (13 chars + 1 null).
So yes, the eth dev name won't fit, but the chance of a phys_port_name failure is almost nil.

> 
> Yes, the altnames provide a solution after the netdevice has been
> created, but I do not see how that works when the netdevice is created as
> part of devlink commands using the template names approach.
systemd/udev v245 understands the parent PCI device of the netdev and its phys_port_name, and uses them to construct a unique netdevice name.

This is the example from this patch series for PF port.

udevadm test-builtin net_id /sys/class/net/eth0
Load module index
Parsed configuration file /usr/lib/systemd/network/99-default.link
Created link configuration context.
Using default interface naming scheme 'v245'.
ID_NET_NAMING_SCHEME=v245
ID_NET_NAME_PATH=eni10npf0
Unload module index
Unloaded link configuration context.

And for SF port,

udevadm test-builtin net_id /sys/class/net/eth1
Load module index
Parsed configuration file /usr/lib/systemd/network/99-default.link
Created link configuration context.
Using default interface naming scheme 'v245'.
ID_NET_NAMING_SCHEME=v245
ID_NET_NAME_PATH=eni10npf0sf44
Unload module index
Unloaded link configuration context.

Similar VF naming has been in place for, I think, more than a year now.
