Message-ID: <20150407174223.GB15704@obsidianresearch.com>
Date: Tue, 7 Apr 2015 11:42:23 -0600
From: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
To: Tom Talpey <tom@...pey.com>
Cc: Michael Wang <yun.wang@...fitbricks.com>,
Roland Dreier <roland@...nel.org>,
Sean Hefty <sean.hefty@...el.com>, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org,
netdev@...r.kernel.org, Hal Rosenstock <hal.rosenstock@...il.com>,
Tom Tucker <tom@...ngridcomputing.com>,
Steve Wise <swise@...ngridcomputing.com>,
Hoang-Nam Nguyen <hnguyen@...ibm.com>,
Christoph Raisch <raisch@...ibm.com>,
Mike Marciniszyn <infinipath@...el.com>,
Eli Cohen <eli@...lanox.com>,
Faisal Latif <faisal.latif@...el.com>,
Upinder Malhi <umalhi@...co.com>,
Trond Myklebust <trond.myklebust@...marydata.com>,
"J. Bruce Fields" <bfields@...ldses.org>,
"David S. Miller" <davem@...emloft.net>,
Ira Weiny <ira.weiny@...el.com>,
PJ Waskiewicz <pj.waskiewicz@...idfire.com>,
Tatyana Nikolova <Tatyana.E.Nikolova@...el.com>,
Or Gerlitz <ogerlitz@...lanox.com>,
Jack Morgenstein <jackm@....mellanox.co.il>,
Haggai Eran <haggaie@...lanox.com>,
Ilya Nelkenbaum <ilyan@...lanox.com>,
Yann Droneaud <ydroneaud@...eya.com>,
Bart Van Assche <bvanassche@....org>,
Shachar Raindel <raindel@...lanox.com>,
Sagi Grimberg <sagig@...lanox.com>,
Devesh Sharma <devesh.sharma@...lex.com>,
Matan Barak <matanb@...lanox.com>,
Moni Shoua <monis@...lanox.com>, Jiri Kosina <jkosina@...e.cz>,
Selvin Xavier <selvin.xavier@...lex.com>,
Mitesh Ahuja <mitesh.ahuja@...lex.com>,
Li RongQing <roy.qing.li@...il.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Alex Estrin <alex.estrin@...el.com>,
Doug Ledford <dledford@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Erez Shitrit <erezsh@...lanox.com>,
Tom Gundersen <teg@...m.no>,
Chuck Lever <chuck.lever@...cle.com>
Subject: Re: [PATCH v2 09/17] IB/Verbs: Use helper cap_read_multi_sge() and
reform svc_rdma_accept()
On Tue, Apr 07, 2015 at 11:46:57AM -0400, Tom Talpey wrote:
> On 4/7/2015 8:34 AM, Michael Wang wrote:
> > /**
> >+ * cap_read_multi_sge - Check if the port of the device has the capability
> >+ * of RDMA Read with Multiple Scatter-Gather Entries.
> >+ *
> >+ * @device: Device to be checked
> >+ * @port_num: Port number of the device
> >+ *
> >+ * Return 0 when the port of the device doesn't support
> >+ * RDMA Read with Multiple Scatter-Gather Entries.
> >+ */
> >+static inline int cap_read_multi_sge(struct ib_device *device, u8 port_num)
> >+{
> >+	return !rdma_transport_iwarp(device, port_num);
> >+}
>
> This just papers over the issue we discussed earlier. How *many*
> entries does the device support? If a device supports one, or two,
> is that enough? How does the upper layer know the limit?
I think it is fine for Michael to just make this one mechanical change.
The kernel only supports two kinds of devices today, ones with 1 read
SGE and ones where READ SGE == WRITE SGE == SEND SGE.
If someone makes another variation then it is up to them to propose a
better fix.
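
To make the consequence for a consumer concrete, here is a small sketch
(nothing below is in the series or the tree; build_rdma_read() and its
parameters are made up for illustration):

	/* Assumes <rdma/ib_verbs.h> plus the cap_read_multi_sge() helper
	 * proposed in this patch.  Caps the SGE count of an RDMA READ WR
	 * according to the two device classes described above; remote
	 * address, rkey, wr_id etc. are omitted for brevity. */
	static void build_rdma_read(struct ib_device *device, u8 port_num,
				    struct ib_send_wr *wr, struct ib_sge *sgl,
				    int nents, int max_sge)
	{
		/* iWARP-class devices take a single SGE per READ WR */
		int limit = cap_read_multi_sge(device, port_num) ? max_sge : 1;

		wr->opcode = IB_WR_RDMA_READ;
		wr->sg_list = sgl;
		wr->num_sge = min(nents, limit);
	}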
> > static int rdma_read_max_sge(struct svcxprt_rdma *xprt, int sge_count)
> > {
> >-	if (rdma_node_get_transport(xprt->sc_cm_id->device->node_type) ==
> >-	    RDMA_TRANSPORT_IWARP)
> >+	if (!cap_read_multi_sge(xprt->sc_cm_id->device,
> >+				xprt->sc_cm_id->port_num))
> > 		return 1;
> > 	else
> > 		return min_t(int, sge_count, xprt->sc_max_sge);
>
> This is incorrect. The RDMA Read max is not at all the same as the
> max_sge. It is a different operation, with a different set of work
> request parameters.
The algorithm looks OK to me:

	newxprt->sc_max_sge = min((size_t)devattr.max_sge,
				  (size_t)RPCSVC_MAXPAGES);

So it returns 1 or the number of SGE entries per WR, and max_sge
applies to READ/WRITE/SEND in every case except when
cap_read_multi_sge() is 0, where the READ max drops to 1.
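
Spelled out with the pieces above (a sketch of the combined behaviour
only, using the svcrdma names quoted in this thread):

	/* sc_max_sge was set at accept time to
	 * min(devattr.max_sge, RPCSVC_MAXPAGES), i.e. the device-wide
	 * SGE limit bounded by the RPC page budget. */
	static int rdma_read_max_sge(struct svcxprt_rdma *xprt, int sge_count)
	{
		/* single-read-SGE (iWARP) devices: one SGE per RDMA READ */
		if (!cap_read_multi_sge(xprt->sc_cm_id->device,
					xprt->sc_cm_id->port_num))
			return 1;

		/* everyone else: READ reuses the WRITE/SEND limit */
		return min_t(int, sge_count, xprt->sc_max_sge);
	}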
> > 	/*
> > 	 * Determine if a DMA MR is required and if so, what privs are required
> > 	 */
> >-	switch (rdma_node_get_transport(newxprt->sc_cm_id->device->node_type)) {
> >-	case RDMA_TRANSPORT_IWARP:
> >+	if (rdma_transport_iwarp(newxprt->sc_cm_id->device,
> >+				 newxprt->sc_cm_id->port_num)) {
> > 		newxprt->sc_dev_caps |= SVCRDMA_DEVCAP_READ_W_INV;
>
> Do I read this correctly that it is forcing the "read with invalidate"
> capability to "on" for all iWARP devices? I don't think that is correct,
> for the legacy devices you're also supporting.
No idea here, this logic was added in:

commit 3a5c63803d0552a3ad93b85c262f12cd86471443
Author: Tom Tucker <tom@...ngridcomputing.com>
Date:   Tue Sep 30 13:46:13 2008 -0500

    svcrdma: Query device for Fast Reg support during connection setup

    Query the device capabilities in the svc_rdma_accept function to determine
    what advanced memory management capabilities are supported by the device.
    Based on the query, select the most secure model available given the
    requirements of the transport and capabilities of the adapter.

    Signed-off-by: Tom Tucker <tom@...ngridcomputing.com>
> >@@ -992,8 +992,8 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
> > 			dma_mr_acc = IB_ACCESS_LOCAL_WRITE;
> > 		} else
> > 			need_dma_mr = 0;
> >-		break;
> >-	case RDMA_TRANSPORT_IB:
> >+	} else if (rdma_ib_mgmt(newxprt->sc_cm_id->device,
> >+				newxprt->sc_cm_id->port_num)) {
> > 		if (!(newxprt->sc_dev_caps & SVCRDMA_DEVCAP_FAST_REG)) {
> > 			need_dma_mr = 1;
> > 			dma_mr_acc = IB_ACCESS_LOCAL_WRITE;
>
> Now I'm even more confused. How is the presence of IB management
> related to needing a privileged lmr?
Agree, this needs to be something else.

I think the test is probably based on this comment:

 * NB: iWARP requires remote write access for the data sink
 * of an RDMA_READ. IB does not.

So the if should be:

	if (cap_rdma_read_needs_write(..) &&
	    !(newxprt->sc_dev_caps & SVCRDMA_DEVCAP_FAST_REG)) {
		need_dma_mr = 1;
		dma_mr_acc =
			(IB_ACCESS_LOCAL_WRITE |
			 IB_ACCESS_REMOTE_WRITE);

And the identical if blocks merged.

Plus the

	if (rdma_transport_iwarp(newxprt->sc_cm_id->device,
				 newxprt->sc_cm_id->port_num))
		newxprt->sc_dev_caps |= SVCRDMA_DEVCAP_READ_W_INV;
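
For clarity, one way the merged result could read (a sketch only;
cap_rdma_read_needs_write() is the hypothetical helper named above,
not an existing API):

	/* READ_W_INV stays keyed off the iWARP transport */
	if (rdma_transport_iwarp(newxprt->sc_cm_id->device,
				 newxprt->sc_cm_id->port_num))
		newxprt->sc_dev_caps |= SVCRDMA_DEVCAP_READ_W_INV;

	if (!(newxprt->sc_dev_caps & SVCRDMA_DEVCAP_FAST_REG)) {
		need_dma_mr = 1;
		dma_mr_acc = IB_ACCESS_LOCAL_WRITE;

		/* iWARP needs remote write access on the data sink of
		 * an RDMA READ; IB does not */
		if (cap_rdma_read_needs_write(newxprt->sc_cm_id->device,
					      newxprt->sc_cm_id->port_num))
			dma_mr_acc |= IB_ACCESS_REMOTE_WRITE;
	} else {
		need_dma_mr = 0;
	}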
Jason