Message-ID: <20200728063211.GA229972@mtl-vdi-166.wap.labs.mlnx>
Date:   Tue, 28 Jul 2020 09:32:11 +0300
From:   Eli Cohen <eli@...lanox.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     mst@...hat.com, virtualization@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org, shahafs@...lanox.com,
        saeedm@...lanox.com, parav@...lanox.com
Subject: Re: [PATCH V3 vhost next 00/10] VDPA support for Mellanox ConnectX
 devices

On Tue, Jul 28, 2020 at 02:18:16PM +0800, Jason Wang wrote:
> 
> On 2020/7/28 2:05 PM, Eli Cohen wrote:
> >Hi Michael,
> >please note that this series depends on mlx5 core device driver patches
> >in the mlx5-next branch of
> >git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git:
> >
> >git pull git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mlx5-next
> >
> >They also depend on Jason Wang's patches submitted a couple of weeks ago:
> >
> >vdpa_sim: use the batching API
> >vhost-vdpa: support batch updating
> 
> 
> Just noticed that a new version was posted[1] (you were cc'ed). I talked
> with Michael, and it's better for you to merge the new version into
> this series.
> 

OK, will send again. Just to make sure: I should put your series and my
series on Michael's vhost branch, the same tree I have been using so far?

> Sorry for not spotting this before.
> 
> [1] https://lkml.org/lkml/2020/7/1/301
> 
> Thanks
> 
> 
> >
> >
> >The following series of patches provides VDPA support for Mellanox
> >devices. The supported devices are ConnectX6 DX and newer.
> >
> >Currently, only a network driver is implemented; future patches will
> >introduce a block device driver. iperf performance on a single queue is
> >around 12 Gbps. Future patches will introduce multi-queue support.
> >
> >The files are organized such that code that can be shared by different
> >VDPA implementations resides in a common area,
> >drivers/vdpa/mlx5/core.
> >
> >Only virtual functions are currently supported. Also, certain firmware
> >capabilities must be set to enable the driver. Physical functions (PFs)
> >are skipped by the driver.
> >
> >To make use of the VDPA net driver, one must load mlx5_vdpa. In that
> >case, VFs will be operated by the VDPA driver. Although one can see a
> >regular instance of a network driver on the VF, the VDPA driver takes
> >precedence over the NIC driver, steering-wise.
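> >
> >For example (a sketch only; the interface name and VF count below are
> >hypothetical, not taken from this series):
> >
> >```shell
> ># Load the mlx5 VDPA net driver; VFs probed from now on are
> ># taken over by it (the PF itself is skipped by the driver)
> >modprobe mlx5_vdpa
> >
> ># Create two VFs on the PF (interface name is hypothetical)
> >echo 2 > /sys/class/net/ens1f0/device/sriov_numvfs
> >```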
> >
> >Currently, the device/interface infrastructure in mlx5_core is used to
> >probe drivers. Future patches will introduce virtbus as a means to
> >register devices and drivers, and VDPA will be adapted to it.
> >
> >The mlx5 mode of operation required to support VDPA is switchdev mode.
> >One can use a Linux or OVS bridge to take care of layer 2 switching.
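> >
> >A minimal sketch of that setup with a Linux bridge (the PCI address
> >and representor names below are hypothetical):
> >
> >```shell
> ># Put the eswitch into switchdev mode (PCI address is hypothetical)
> >devlink dev eswitch set pci/0000:06:00.0 mode switchdev
> >
> ># Attach the VF representors to a Linux bridge for L2 switching
> >ip link add name br0 type bridge
> >ip link set ens1f0_0 master br0
> >ip link set ens1f0_1 master br0
> >ip link set br0 up
> >```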
> >
> >In order to provide virtio networking to a guest, an updated version of
> >qemu is required. This series has been tested with the following qemu
> >version:
> >
> >url: https://github.com/jasowang/qemu.git
> >branch: vdpa
> >Commit ID: 6f4e59b807db
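> >
> >A guest could then be started with a vhost-vdpa backend along these
> >lines (a sketch; the vhost-vdpa character device path, disk image, and
> >exact option syntax of that qemu branch are assumptions, not confirmed
> >by this series):
> >
> >```shell
> ># Boot a guest whose virtio-net device is backed by the vdpa
> ># device exposed at /dev/vhost-vdpa-0 (path is an assumption)
> >qemu-system-x86_64 -M q35 -enable-kvm -m 4G \
> >  -drive file=guest.qcow2,if=virtio \
> >  -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
> >  -device virtio-net-pci,netdev=vdpa0
> >```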
> >
> >
> >V2->V3
> >Fix makefile to use include path relative to the root of the kernel
> >
> >Eli Cohen (7):
> >   net/vdpa: Use struct for set/get vq state
> >   vhost: Fix documentation
> >   vdpa: Modify get_vq_state() to return error code
> >   vdpa/mlx5: Add hardware descriptive header file
> >   vdpa/mlx5: Add support library for mlx5 VDPA implementation
> >   vdpa/mlx5: Add shared memory registration code
> >   vdpa/mlx5: Add VDPA driver for supported mlx5 devices
> >
> >Jason Wang (2):
> >   vhost-vdpa: support batch updating
> >   vdpa_sim: use the batching API
> >
> >Max Gurtovoy (1):
> >   vdpa: remove hard coded virtq num
> >
> >  drivers/vdpa/Kconfig                   |   18 +
> >  drivers/vdpa/Makefile                  |    1 +
> >  drivers/vdpa/ifcvf/ifcvf_base.c        |    4 +-
> >  drivers/vdpa/ifcvf/ifcvf_base.h        |    4 +-
> >  drivers/vdpa/ifcvf/ifcvf_main.c        |   13 +-
> >  drivers/vdpa/mlx5/Makefile             |    4 +
> >  drivers/vdpa/mlx5/core/mlx5_vdpa.h     |   91 ++
> >  drivers/vdpa/mlx5/core/mlx5_vdpa_ifc.h |  168 ++
> >  drivers/vdpa/mlx5/core/mr.c            |  473 ++++++
> >  drivers/vdpa/mlx5/core/resources.c     |  284 ++++
> >  drivers/vdpa/mlx5/net/main.c           |   76 +
> >  drivers/vdpa/mlx5/net/mlx5_vnet.c      | 1950 ++++++++++++++++++++++++
> >  drivers/vdpa/mlx5/net/mlx5_vnet.h      |   24 +
> >  drivers/vdpa/vdpa.c                    |    3 +
> >  drivers/vdpa/vdpa_sim/vdpa_sim.c       |   35 +-
> >  drivers/vhost/iotlb.c                  |    4 +-
> >  drivers/vhost/vdpa.c                   |   46 +-
> >  include/linux/vdpa.h                   |   24 +-
> >  include/uapi/linux/vhost_types.h       |    2 +
> >  19 files changed, 3165 insertions(+), 59 deletions(-)
> >  create mode 100644 drivers/vdpa/mlx5/Makefile
> >  create mode 100644 drivers/vdpa/mlx5/core/mlx5_vdpa.h
> >  create mode 100644 drivers/vdpa/mlx5/core/mlx5_vdpa_ifc.h
> >  create mode 100644 drivers/vdpa/mlx5/core/mr.c
> >  create mode 100644 drivers/vdpa/mlx5/core/resources.c
> >  create mode 100644 drivers/vdpa/mlx5/net/main.c
> >  create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.c
> >  create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.h
> >
> 
