Message-ID: <20201216203537.GM552508@nvidia.com>
Date:   Wed, 16 Dec 2020 16:35:37 -0400
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Alexander Duyck <alexander.duyck@...il.com>
CC:     Saeed Mahameed <saeed@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Leon Romanovsky <leonro@...dia.com>,
        Netdev <netdev@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
        David Ahern <dsahern@...nel.org>,
        Jacob Keller <jacob.e.keller@...el.com>,
        Sridhar Samudrala <sridhar.samudrala@...el.com>,
        "Ertman, David M" <david.m.ertman@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Kiran Patil <kiran.patil@...el.com>,
        Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [net-next v4 00/15] Add mlx5 subfunction support

On Wed, Dec 16, 2020 at 11:27:32AM -0800, Alexander Duyck wrote:

> That has been the case for a long time. However it had been my
> experience that SR-IOV never scaled well to meet those needs and so it
> hadn't been used in such deployments.

Seems to be going quite well here, perhaps the applications are
different.

> > The optimization here is to reduce the hypervisor workload and free up
> > CPU cycles for the VMs/containers to consume. This means less handling
> > of packets in the CPU, especially for VM cases w/ SRIOV or VDPA.
> >
> > Even the extra DMA on the NIC is not really a big deal. These are 400G
> > NICs with big fast PCI. If you top them out you are already doing an
> > aggregate of 400G of network traffic. That is a big number for a
> > single server, it is OK.
> 
> Yes, but at a certain point you start bumping up against memory
> throughput limitations as well. Doubling up the memory footprint by
> having the device have to write to new pages instead of being able to
> do something like pinning and zero-copy would be expensive.

You can't zero-copy when using VMs.

And when using containers every skb still has to go through all the
switching and encapsulation logic, which is not free in SW.

At a certain point the gains of avoiding the DMA copy are outweighed by
the costs of all the extra CPU work. The factor being optimized here is
CPU capacity.

> > Someone might feel differently if they did this on a 10/40G NIC, in
> > which case this is not the solution for their application.
> 
> My past experience was with 10/40G NICs with tens of VFs. When we start
> talking about hundreds I would imagine the overhead becomes orders of
> magnitude worse as the problem becomes more of an n^2 issue since you
> will have n times more systems sending to n times more systems

The traffic demand is application dependent. If an application has an
n^2 traffic pattern then it needs a network to sustain that cross
sectional bandwidth regardless of how the VMs are packed.

It just becomes a design factor of the network and now the network
includes that switching component on the PCIe NIC as part of the
capacity for cross sectional BW.

There is some balance where a VM can only generate so much traffic
based on the CPU it has available, and you can design the entire
infrastructure to balance the CPU with the NIC with the switches and
come to some packing factor of VMs. 

As CPU constrains VM performance, removing CPU overheads from the
system will improve packing density. A HW network data path in the VMs
is one such case that can turn into a net win if the CPU bottleneck is
bigger than the network bottleneck.

It is really oversimplifying to just say PCIe DMA copies are bad.

> receiving. As such things like broadcast traffic would end up
> consuming a fair bit of traffic.

I think you have a lot bigger network problems if your broadcast
traffic is so high that you start to worry about DMA copy performance
in a 400G NIC.

> The key bit here is outside of netdev. Like I said, SIOV and SR-IOV
> tend to be PCIe specific specifications. What we are defining here is
> how the network interfaces presented by such devices will work.

I think we've achieved this..

> > That seems small enough it should be resolvable, IMHO. eg some new
> > switch rule that matches that specific HW behavior?
> 
> I would have to go digging to find the conversation. It was about 3 or
> 4 years ago. I seem to recall mentioning the idea of having some
> static rules but it was a no-go at the time. If we wanted to spin off
> this conversation and pull in some Intel folks I would be up for us
> revisiting it. However I'm not with Intel anymore so it would mostly
> be something I would be working on as a hobby project instead of
> anything serious.

Personally I welcome getting more drivers to implement the switchdev
model, I think it is only good for the netdev community as a whole
to understand and standardize on this.

> > > Something like the vdpa model is more like what I had in mind. Only
> > > vdpa only works for the userspace networking case.
> >
> > That's because making a driver that converts the native HW to VDPA and
> > then running a generic netdev on the resulting virtio-net is a pretty
> > wild thing to do. I can't really think of an actual use case.
> 
> I'm not talking about us drastically changing existing models. I would
> still expect the mlx5 driver to be running on top of the aux device.
> However it may be that the aux device is associated with something
> like the switchdev port as a parent 
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

That is exactly how this works. The switchdev representor and the aux
device are paired and form the analog of the veth tunnel. IIRC this
relationship with the aux device is shown in the devlink output for
the switchdev ports.

I still can't understand what you think should be changed here.

We can't get rid of the aux device, it is integral to the software
layering and essential to support any device assignment flow.

We can't add a mandatory netdev, because that is just pure waste for
any device assignment flow.
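
To sketch what that layering looks like in code (this is not the actual
mlx5 implementation; the "mlx5_core.sf" match string and the example_*
names are purely illustrative), a consumer of the subfunction binds to
the aux device like any other auxiliary driver, and whatever it creates
- a netdev, a VDPA device, or an assignment interface - hangs off that
probe:

#include <linux/module.h>
#include <linux/auxiliary_bus.h>

static int example_sf_probe(struct auxiliary_device *adev,
                            const struct auxiliary_device_id *id)
{
        /* The aux device is the attach point: create the netdev, VDPA
         * device, or assignment interface from here. */
        dev_info(&adev->dev, "bound to subfunction aux device\n");
        return 0;
}

static void example_sf_remove(struct auxiliary_device *adev)
{
        /* Undo whatever probe created. */
}

static const struct auxiliary_device_id example_sf_id_table[] = {
        { .name = "mlx5_core.sf" },     /* illustrative match string */
        {}
};
MODULE_DEVICE_TABLE(auxiliary, example_sf_id_table);

static struct auxiliary_driver example_sf_driver = {
        .name     = "example_sf",
        .probe    = example_sf_probe,
        .remove   = example_sf_remove,
        .id_table = example_sf_id_table,
};
module_auxiliary_driver(example_sf_driver);

MODULE_LICENSE("GPL");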

> The basic idea is most of the control is still in the switchdev
> port, but it becomes more visible that the switchdev port and the
> subfunction netdev are linked with the switchdev port as the parent.

If a user netdev is created, say for a container, then it should have
as a parent the aux device.

If you inspect the port configuration on the switchdev side it will
show the aux device associated with the port.

Are you just asking that the userspace tools give a little help and
show that switchdev port XX is visible as netdev YYY by cross-matching
the auxbus?
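
Concretely, the netdev side of that cross-match is just the aux driver
parenting the netdev to the aux device in its probe. A rough sketch,
reusing the illustrative example_* naming from above, with error
unwinding trimmed to the minimum:

#include <linux/auxiliary_bus.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

static netdev_tx_t example_sf_xmit(struct sk_buff *skb, struct net_device *dev)
{
        /* Placeholder xmit so the sketch registers cleanly. */
        dev_kfree_skb_any(skb);
        return NETDEV_TX_OK;
}

static const struct net_device_ops example_sf_netdev_ops = {
        .ndo_start_xmit = example_sf_xmit,
};

static int example_sf_netdev_probe(struct auxiliary_device *adev,
                                   const struct auxiliary_device_id *id)
{
        struct net_device *netdev;
        int err;

        netdev = alloc_etherdev(0);
        if (!netdev)
                return -ENOMEM;

        netdev->netdev_ops = &example_sf_netdev_ops;
        eth_hw_addr_random(netdev);

        /* Parent the netdev to the aux device so that
         * /sys/class/net/<ifname>/device points at the auxiliary device
         * and userspace can cross-match it against the switchdev port. */
        SET_NETDEV_DEV(netdev, &adev->dev);

        err = register_netdev(netdev);
        if (err)
                free_netdev(netdev);
        return err;
}

From there a tool only has to follow /sys/class/net/<ifname>/device
back to the auxiliary bus entry and match it to the switchdev port.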

> > > they are pushed into containers you don't have to rip them out if for
> > > some reason you need to change the network configuration.
> >
> > Why would you need to rip them out?
> 
> Because they are specifically tied to the mlx5 device. So if for
> example I need to hotplug out the mlx5 and replace it, 

Uhh, I think we are very very far away from being able to hot unplug a
switchdev driver, keep the switchdev running, and drop in a different
driver.

That isn't even on the radar, AFAIK.

> namespace. One of the issues with VFs is that we have always had to
> push some sort of bond on top, or switch over to a virtio-net
> interface in order to support fail-over. If we can resolve that in the
> host case that would go a long way toward solving one of the main
> issues of SR-IOV.

This is all solved already, virtio-net is the answer.

qemu can swap backends under the virtio-net ADI it created on the
fly. This means it can go from processing a virtio-net queue in mlx5
HW, to full SW, to some other HW on another machine. All hitlessly and
transparently to the guest VM.

Direct HW processing of a queue inside a VM without any downsides for
VM migration. Check.

I have the feeling this stuff you are asking for is already done..

Jason
