Date:   Tue, 15 Dec 2020 19:12:50 -0800
From:   Alexander Duyck <>
To:     Edwin Peer <>
Cc:     Parav Pandit <>, Saeed Mahameed <>,
        "David S. Miller" <>,
        Jakub Kicinski <>,
        Jason Gunthorpe <>,
        Leon Romanovsky <>,
        Netdev <>,
        "" <>,
        David Ahern <>,
        Jacob Keller <>,
        Sridhar Samudrala <>,
        "Ertman, David M" <>,
        Dan Williams <>,
        Kiran Patil <>,
        Greg KH <>
Subject: Re: [net-next v4 00/15] Add mlx5 subfunction support

On Tue, Dec 15, 2020 at 5:13 PM Edwin Peer <> wrote:
> On Tue, Dec 15, 2020 at 10:49 AM Alexander Duyck
> <> wrote:
> > It isn't "SR-IOV done right" it seems more like "VMDq done better".
> I don't think I agree with that assertion. The fact that VMDq can talk
> to a common driver still makes VMDq preferable in some respects. Thus,
> subfunctions do appear to be more of a better SR-IOV than a better
> VMDq, but I'm similarly not sold on whether a better SR-IOV is
> sufficient benefit to warrant the additional complexity this
> introduces. If I understand correctly, subfunctions buy two things:
> 1) More than 256 SFs are possible: Maybe it's about time PCI-SIG
> addresses this limit for VFs? If that were the only problem with VFs,
> then fixing it once there would be cleaner. The devlink interface for
> configuring a SF is certainly more sexy than legacy SR-IOV, but it
> shouldn't be fundamentally impossible to zhuzh up VFs either. One can
> also imagine possibilities around remapping multiple PFs (and their
> VFs) in a clever way to get around the limited number of PCI resources
> exposed to the host.

The fact is SR-IOV just wasn't designed to scale well, and I think we
are probably going to see most vendors move away from it.

What we are talking about now is the future of all this and how to
implement Scalable I/O Virtualization (S-IOV). The S-IOV specification
is a good primer for many of the features we are discussing, as it
describes how to compose a device.

The problem, as it was with SR-IOV, is that the S-IOV specification is
very PCIe centric and doesn't do a good job of explaining how the
network fits into all of this. To complicate things further, S-IOV
expected these devices to be directly assigned to guests/applications,
whereas we are talking about using the devices in the host, which
makes things a bit messier.
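For concreteness, the subfunction lifecycle in the mlx5 series is driven
through devlink port commands; a rough sketch of the flow is below. The PCI
address, sfnum, and MAC are illustrative, the driver-assigned port index
(32768 here) will vary, and these commands of course require SF-capable
hardware:

```shell
# Add a subfunction port on PF 0 of the device
devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88

# Configure the SF function attributes, then activate it; the port index
# (32768) is assigned by the driver when the port is added
devlink port function set pci/0000:06:00.0/32768 hw_addr 00:00:00:00:88:88
devlink port function set pci/0000:06:00.0/32768 state active

# Tear the subfunction down when done
devlink port function set pci/0000:06:00.0/32768 state inactive
devlink port del pci/0000:06:00.0/32768
```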

> 2) More flexible division of resources: It's not clear that device
> firmware can't perform smarter allocation than N/<num VFs>, but
> subfunctions also appear to allow sharing of certain resources by the
> PF driver, if desirable. To the extent that resources are shared, how
> are workloads isolated from each other?
> I'm not sure I like the idea of having to support another resource
> allocation model in our driver just to support this, at least not
> without a clearer understanding of what is being gained.

I view this as the future alternative to SR-IOV; it is just a matter
of how we define it. Eventually we would probably drop the SR-IOV
implementation and move over to S-IOV instead. As such, if this is
done right, I don't see it as a case where we need to support both.
Really, we should be able to drop support for one once we have the
other.
