Message-ID: <DM2PR0701MB1392C0D6A29561095E40891288C40@DM2PR0701MB1392.namprd07.prod.outlook.com>
Date: Mon, 19 Jun 2017 06:28:53 +0000
From: "Kalderon, Michal" <Michal.Kalderon@...ium.com>
To: Christoph Hellwig <hch@...radead.org>,
"Mintz, Yuval" <Yuval.Mintz@...ium.com>
CC: "davem@...emloft.net" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
Subject: RE: [PATCH v2 net-next 0/7] qed*: RDMA and infrastructure for iWARP
From: Christoph Hellwig [mailto:hch@...radead.org]
Sent: Sunday, June 18, 2017 4:15 PM
> On Sun, Jun 18, 2017 at 02:50:28PM +0300, Yuval Mintz wrote:
> > This series focuses on RDMA in general with emphasis on required
> > changes toward adding iWARP support. The vast majority of the changes
> > introduced are in qed/qede, with a couple of small changes to qedr
> > [mentioned below].
>
> Btw, can you explain to us how you are going to expose RoCE vs iWARP?
> Is it going to be a per-port setting? Are you going to use different ib_device
> structures, or do you plan to differentiate at another level?
The initial submission will have iWARP vs. RoCE selected as part of the device configuration (set via EFI HII).
Later on we're considering adding devlink support for changing the device configuration.
This is not a per-port setting; PCI devices sharing the same physical port can have different RDMA protocols.
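
[Editor's illustration, not part of this series or of Michal's reply: one way such a per-device setting could eventually be exposed is through the devlink parameters API that landed in later kernels. The parameter name "rdma_protocol", the qed_devlink private structure shown here, and the qed_set_rdma_protocol() helper are all hypothetical, and the callback signatures differ between kernel versions; treat this as a rough sketch only.]

#include <net/devlink.h>

/* Hypothetical driver-private data attached to the devlink instance. */
struct qed_devlink {
	u8 rdma_protocol;		/* e.g. 0 = RoCE, 1 = iWARP */
};

/* Hypothetical helper that would persist the selection in device NVRAM. */
int qed_set_rdma_protocol(struct qed_devlink *qdl, u8 protocol);

enum {
	QED_DEVLINK_PARAM_ID_RDMA_PROTOCOL = DEVLINK_PARAM_GENERIC_ID_MAX + 1,
};

static int qed_rdma_protocol_get(struct devlink *devlink, u32 id,
				 struct devlink_param_gset_ctx *ctx)
{
	struct qed_devlink *qdl = devlink_priv(devlink);

	ctx->val.vu8 = qdl->rdma_protocol;
	return 0;
}

static int qed_rdma_protocol_set(struct devlink *devlink, u32 id,
				 struct devlink_param_gset_ctx *ctx)
{
	struct qed_devlink *qdl = devlink_priv(devlink);

	/* Persist in NVRAM; would take effect on the next driver load. */
	qdl->rdma_protocol = ctx->val.vu8;
	return qed_set_rdma_protocol(qdl, ctx->val.vu8);
}

static const struct devlink_param qed_devlink_params[] = {
	DEVLINK_PARAM_DRIVER(QED_DEVLINK_PARAM_ID_RDMA_PROTOCOL,
			     "rdma_protocol", DEVLINK_PARAM_TYPE_U8,
			     BIT(DEVLINK_PARAM_CMODE_PERMANENT),
			     qed_rdma_protocol_get, qed_rdma_protocol_set,
			     NULL),
};

static int qed_devlink_register_params(struct devlink *devlink)
{
	return devlink_params_register(devlink, qed_devlink_params,
				       ARRAY_SIZE(qed_devlink_params));
}

[From userspace, such a hypothetical knob could then be driven with something like "devlink dev param set pci/0000:03:00.0 name rdma_protocol value 1 cmode permanent", followed by a "devlink dev reload" of the same device.]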