Message-Id: <1455784674-8412-1-git-send-email-hans.westgaard.ry@oracle.com>
Date: Thu, 18 Feb 2016 09:37:54 +0100
From: Hans Westgaard Ry <hans.westgaard.ry@...cle.com>
To: unlisted-recipients:; (no To-header on input)
Cc: Hans Westgaard Ry <hans.westgaard.ry@...cle.com>,
Doug Ledford <dledford@...hat.com>,
Sean Hefty <sean.hefty@...el.com>,
Hal Rosenstock <hal.rosenstock@...il.com>,
Bart Van Assche <bart.vanassche@...disk.com>,
Yuval Shaia <yuval.shaia@...cle.com>,
Christian Marie <christian@...ies.io>,
Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
Håkon Bugge <haakon.bugge@...cle.com>,
Wei Lin Guay <wei.lin.guay@...cle.com>,
Or Gerlitz <ogerlitz@...lanox.com>,
Erez Shitrit <erezsh@...lanox.com>,
Haggai Eran <haggaie@...lanox.com>,
Chuck Lever <chuck.lever@...cle.com>,
Matan Barak <matanb@...lanox.com>,
linux-rdma@...r.kernel.org (open list:INFINIBAND SUBSYSTEM),
linux-kernel@...r.kernel.org (open list)
Subject: [PATCH] IB/ipoib: Add handling of skb with many frags
IPoIB converts skb fragments to sges, adding one extra sge when
offloading is enabled. The current code path assumes that the maximum
number of sges a device supports is at least MAX_SKB_FRAGS + 1; there
is no interaction with the upper layers to limit the number of
fragments in an skb when a device supports fewer sges. The same
assumption also leads to requesting a fixed number of sges when IPoIB
creates queue pairs with scatter/gather enabled.

A fallback/slowpath is implemented using skb_linearize() to handle
the cases where the conversion would result in more sges than the
device supports.
Signed-off-by: Hans Westgaard Ry <hans.westgaard.ry@...cle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@...cle.com>
Reviewed-by: Wei Lin Guay <wei.lin.guay@...cle.com>
---
drivers/infiniband/ulp/ipoib/ipoib_cm.c | 4 +++-
drivers/infiniband/ulp/ipoib/ipoib_ib.c | 9 +++++++++
drivers/infiniband/ulp/ipoib/ipoib_verbs.c | 4 +++-
3 files changed, 15 insertions(+), 2 deletions(-)
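As background for the change below: a scatter/gather send consumes one
sge for the skb's linear head plus one sge per page fragment, which is
where the MAX_SKB_FRAGS + 1 upper bound comes from. The following
sketch of that mapping is illustrative only; the helper name and the
pre-computed DMA address array are hypothetical, while the
per-fragment loop mirrors what the send path in ipoib_ib.c does.

/* Illustrative sketch -- not part of this patch. */
#include <linux/skbuff.h>
#include <rdma/ib_verbs.h>

/* Fill one sge for the linear head and one per page fragment;
 * returns the number of sges consumed, at most nr_frags + 1,
 * i.e. at most MAX_SKB_FRAGS + 1. lkey setup is elided.
 */
static int sketch_skb_to_sge(struct sk_buff *skb, const u64 *mapping,
			     struct ib_sge *sge)
{
	skb_frag_t *frags = skb_shinfo(skb)->frags;
	int nr_frags = skb_shinfo(skb)->nr_frags;
	int i;

	sge[0].addr   = mapping[0];		/* DMA address of skb->data */
	sge[0].length = skb_headlen(skb);	/* linear part of the skb */

	for (i = 0; i < nr_frags; ++i) {
		sge[i + 1].addr   = mapping[i + 1];
		sge[i + 1].length = skb_frag_size(&frags[i]);
	}

	return nr_frags + 1;
}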
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 917e46e..0a2bd43 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -1031,7 +1031,9 @@ static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
struct ib_qp *tx_qp;

if (dev->features & NETIF_F_SG)
- attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
+ attr.cap.max_send_sge = min_t(u32,
+ priv->ca->attrs.max_sge,
+ MAX_SKB_FRAGS + 1);

tx_qp = ib_create_qp(priv->pd, &attr);
if (PTR_ERR(tx_qp) == -EINVAL) {
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 5ea0c14..b4f2240 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -541,6 +541,15 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
int hlen, rc;
void *phead;

+ if (skb_shinfo(skb)->nr_frags >= priv->ca->attrs.max_sge) {
+ if (skb_linearize(skb) != 0) {
+ ipoib_warn(priv, "skb could not be linearized\n");
+ ++dev->stats.tx_dropped;
+ ++dev->stats.tx_errors;
+ dev_kfree_skb_any(skb);
+ return;
+ }
+ }
if (skb_is_gso(skb)) {
hlen = skb_transport_offset(skb) + tcp_hdrlen(skb);
phead = skb->data;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
index d48c5ba..62f8ec3 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
@@ -206,7 +206,9 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
init_attr.create_flags |= IB_QP_CREATE_NETIF_QP;

if (dev->features & NETIF_F_SG)
- init_attr.cap.max_send_sge = MAX_SKB_FRAGS + 1;
+ init_attr.cap.max_send_sge = min_t(u32,
+ priv->ca->attrs.max_sge,
+ MAX_SKB_FRAGS + 1);

priv->qp = ib_create_qp(priv->pd, &init_attr);
if (IS_ERR(priv->qp)) {
--
2.4.3
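One detail worth noting in the send-path check above is the use of >=
rather than >: one sge is always reserved for the linear head, so an
skb with nr_frags page fragments needs nr_frags + 1 sges, and
nr_frags >= max_sge is precisely the condition under which the
conversion would exceed the device limit. Condensed into one place
below; this is not compilable on its own, since the two pieces live in
different functions, and the statements are taken from the patch.

/* 1) At QP creation (ipoib_cm_create_tx_qp() and
 *    ipoib_transport_dev_init()): never request more sges than the
 *    HCA reports in its device attributes.
 */
attr.cap.max_send_sge = min_t(u32, priv->ca->attrs.max_sge,
			      MAX_SKB_FRAGS + 1);

/* 2) In the send path (ipoib_send()): linearize skbs that would
 *    still need too many sges. skb_linearize() copies every page
 *    fragment into the linear data area, after which nr_frags is 0
 *    and a single sge suffices.
 */
if (skb_shinfo(skb)->nr_frags >= priv->ca->attrs.max_sge) {
	if (skb_linearize(skb) != 0) {
		/* -ENOMEM: account the drop and free the skb */
		++dev->stats.tx_dropped;
		++dev->stats.tx_errors;
		dev_kfree_skb_any(skb);
		return;
	}
}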