Message-ID: <553A574B.80208@talpey.com>
Date: Fri, 24 Apr 2015 10:46:35 -0400
From: Tom Talpey <tom@...pey.com>
To: Michael Wang <yun.wang@...fitbricks.com>,
Roland Dreier <roland@...nel.org>,
Sean Hefty <sean.hefty@...el.com>,
Hal Rosenstock <hal@....mellanox.co.il>,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org
CC: Steve Wise <swise@...ngridcomputing.com>,
Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
Doug Ledford <dledford@...hat.com>,
Ira Weiny <ira.weiny@...el.com>,
Tom Tucker <tom@...ngridcomputing.com>,
Hoang-Nam Nguyen <hnguyen@...ibm.com>,
Christoph Raisch <raisch@...ibm.com>,
Mike Marciniszyn <infinipath@...el.com>,
Eli Cohen <eli@...lanox.com>,
Faisal Latif <faisal.latif@...el.com>,
Jack Morgenstein <jackm@....mellanox.co.il>,
Or Gerlitz <ogerlitz@...lanox.com>,
Haggai Eran <haggaie@...lanox.com>
Subject: Re: [PATCH v6 01/26] IB/Verbs: Implement new callback query_transport()
On 4/24/2015 10:35 AM, Michael Wang wrote:
>
>
> On 04/24/2015 04:29 PM, Tom Talpey wrote:
>> On 4/24/2015 8:23 AM, Michael Wang wrote:
> [snip]
>>> +static enum rdma_transport_type
>>> +mlx5_ib_query_transport(struct ib_device *device, u8 port_num)
>>> +{
>>> +	return RDMA_TRANSPORT_IB;
>>> +}
>>> +
>>
>>
>> Just noticed that mlx5 is not being coded as RoCE-capable like mlx4.
>> The mlx5 driver is for the new ConnectX-4, which implements all three
>> of IB, RoCE and RoCEv2, right? Are those last two not supported?
>
> I'm not sure about the details of mlx5, but according to the current
> implementation, its transport is IB without a link-layer callback,
> which means it doesn't support IBoE...
>
> And there is no method to change the port link-layer type, as mlx4 does.
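For contrast with the mlx5 hunk quoted above, a rough sketch of how a
VPI-capable driver like mlx4 could answer query_transport() per port by
consulting the port's link layer, rather than hard-coding IB. This is
illustrative only, not taken from the v6 series: it assumes the series
defines an RDMA_TRANSPORT_IBOE value (hypothetical if it does not) and
uses the existing rdma_port_get_link_layer() helper; the function name
mlx4_style_query_transport is made up for the example.

/*
 * Sketch, not from the posted patches: derive the transport from the
 * port's link layer instead of returning RDMA_TRANSPORT_IB for every
 * port.  RDMA_TRANSPORT_IBOE is assumed to exist in the series.
 */
static enum rdma_transport_type
mlx4_style_query_transport(struct ib_device *device, u8 port_num)
{
	/* Ethernet link layer => IBoE/RoCE; otherwise native IB. */
	if (rdma_port_get_link_layer(device, port_num) ==
	    IB_LINK_LAYER_ETHERNET)
		return RDMA_TRANSPORT_IBOE;

	return RDMA_TRANSPORT_IB;
}

Something along those lines is presumably what mlx5 would need if
ConnectX-4 ports can in fact run RoCE/RoCEv2, which is the question here.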
Hal, is that correct?
From the Mellanox web site:
http://www.mellanox.com/related-docs/products/IB_Adapter_card_brochure_c_2_3.pdf
"ConnectX-4
ConnectX-4 adapter cards with Virtual Protocol Interconnect (VPI),
supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity,
provide the...
"Virtual Protocol Interconnect
VPI® flexibility enables any standard networking, clustering, storage,
and management protocol to seamlessly operate over any converged network
leveraging a consolidated software stack. Each port can operate on
InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and
supports IP over InfiniBand (IPoIB), Ethernet over InfiniBand (EoIB)
and RDMA over Converged Ethernet (RoCE and RoCEv2).