Date:   Sun, 4 Jun 2017 20:43:28 +0000
From:   "Chickles, Derek" <Derek.Chickles@...ium.com>
To:     David Miller <davem@...emloft.net>,
        "Manlunas, Felix" <Felix.Manlunas@...ium.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "Burru, Veerasenareddy" <Veerasenareddy.Burru@...ium.com>,
        "Vatsavayi, Raghu" <Raghu.Vatsavayi@...ium.com>,
        "Burla, Satananda" <Satananda.Burla@...ium.com>
Subject: RE: [PATCH net-next] liquidio: add support for OVS offload



> From: David Miller [mailto:davem@...emloft.net]
> Sent: Saturday, May 27, 2017 5:07 PM
> Subject: Re: [PATCH net-next] liquidio: add support for OVS offload
> 
> From: Felix Manlunas <felix.manlunas@...ium.com>
> Date: Sat, 27 May 2017 08:56:33 -0700
> 
> > From: VSR Burru <veerasenareddy.burru@...ium.com>
> >
> > Add support for OVS offload.  By default PF driver runs in basic NIC
> > mode as usual.  To run in OVS mode, use the insmod parameter
> "fw_type=ovs".
> >
> > For OVS mode, create a management interface for communication with NIC
> > firmware.  This communication channel uses PF0's I/O rings.
> >
> > Bump up driver version to 1.6.0 to match newer firmware.
> >
> > Signed-off-by: VSR Burru <veerasenareddy.burru@...ium.com>
> > Signed-off-by: Felix Manlunas <felix.manlunas@...ium.com>
> 

Hi David,

We probably should have included a cover letter before submitting this patch, but we'll try to explain how everything works here. We've also reassessed the mechanism for triggering the creation of the management interface. We will shortly submit a revised patch that creates this interface in response to a message from the firmware at startup, instead of checking a module parameter.

Anyway, this patch is really a foundational change for a series of new features that depend on having a private Ethernet interface into the adapter. It will enable features like OVS, traffic monitoring, remote management of LiquidIO, and other things planned for LiquidIO down the road.

> How does this work?
> 

LiquidIO can be thought of as a second computer in a NIC form factor. It has a processor complex and various I/Os, and it may run a full-blown operating system such as Linux. In such a configuration it is natural to have an Ethernet interface from the host operating system so one can communicate with the Linux instance running on the card. Things like regular TCP sockets, SSH, etc. are all possible. That's what this patch specifically enables. Again, we're not going to create this interface based on a module parameter; rather, the card will tell the host that it needs this interface when it starts.
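To illustrate, once the driver exposes the management netdev, talking to the card is just ordinary IP networking. This is only a sketch: the interface name "lio_mgmt0" and the addresses are hypothetical, not taken from the patch.

```shell
# Hypothetical setup: "lio_mgmt0" is the management interface the host
# driver creates; 192.168.100.2 is assumed to be the address of the
# Linux instance running on the LiquidIO adapter.
ip addr add 192.168.100.1/24 dev lio_mgmt0
ip link set lio_mgmt0 up

# Plain TCP/SSH over that link -- nothing LiquidIO-specific is needed.
ssh admin@192.168.100.2
```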

Our initial versions of the LiquidIO host driver required LiquidIO firmware that provided basic NIC features, as seen in our previous submissions to the driver tree. Now we're moving to running Linux on the LiquidIO adapter itself.

So, in the OVS case, we simply replace the Ethernet Bridge in the "basic NIC" implementation with Linux running OVS. Since we have the management interface available to the host, this OVS implementation can communicate with an external controller residing on the host.

> What in userspace installs the OVS rules onto the card?
> 

There is no direct host involvement in the LiquidIO OVS configuration. ovs-vswitchd runs on the LiquidIO adapter and can communicate with an ovsdb-server running either on the LiquidIO processor or on the host, using the management interface supplied in this patch.
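As a concrete sketch of the "ovsdb-server on the host" case: the database listens on the management link, and the card's ovs-vswitchd connects to it over plain TCP. The address 192.168.100.1 for the host side of the management link is an assumption for illustration; the ovsdb-server and ovs-vswitchd options shown are standard Open vSwitch flags.

```shell
# On the host: serve the OVS database over the management link.
ovsdb-server --remote=ptcp:6640:192.168.100.1 \
             --remote=punix:/var/run/openvswitch/db.sock \
             --pidfile --detach

# On the LiquidIO adapter: point ovs-vswitchd at the host's database
# across the management interface.
ovs-vswitchd tcp:192.168.100.1:6640 --pidfile --detach
```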

> We do not support direct offload of OVS, as an OVS entity, instead we
> required all vendors to make their OVS offloads visible as packet
> scheduler classifiers and actions.
> 
> The same rules apply to liquidio.
> 

We are running OVS as the switching infrastructure on the card instead of a VEB. If someone wants to run OVS on the host, they can still do that. We do have plans to add the ndo interfaces for supporting classifiers and filters, so the host could have accelerated OVS when a LiquidIO card is installed. However, in the near term we're focusing on enabling an Ethernet interface to the Linux instance running on the card, so you can simply connect to ovs-vswitchd running on the card. This also enables a lot of flexibility in the types of applications that could run on the adapter.
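For reference, the in-tree offload model David describes is the tc classifier/action path: userspace installs flower classifiers, and a driver implementing the relevant ndo hooks can offload them to hardware. A minimal example of that interface (not something this patch implements; "eth0" and the match are placeholders):

```shell
# Attach an ingress qdisc so filters can be added.
tc qdisc add dev eth0 ingress

# Install a flower classifier; skip_sw requests that the rule be
# offloaded to hardware rather than evaluated in the kernel datapath.
tc filter add dev eth0 ingress protocol ip flower skip_sw \
    dst_ip 10.0.0.2 action drop
```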

> If there is some special set of userspace interfaces that are used to
> comunicate with these different firmwares in some liquidio specific way, I
> am going to be very upset.  That is definitely not allowed.
> 

Our solution does not require any special user-space or kernel-space components on the host. The OVS-based LiquidIO firmware can be configured just like OVS on the host, by the usual means such as a remote OpenFlow controller on the network.
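For example, pointing the OVS instance on the card at an external controller uses the standard ovs-vsctl command; the bridge name "br0" and the controller address below are placeholders.

```shell
# Standard OVS administration -- the same command works whether OVS runs
# on the host or on the LiquidIO adapter.
ovs-vsctl set-controller br0 tcp:192.0.2.10:6653
```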


> I'm not applying this patch until the above is resolved and at least more
> information is added to this commit log message to explain how this stuff
> works.

Thanks and regards,
Derek
