Message-Id: <20130226235538.152295835@linuxfoundation.org>
Date: Tue, 26 Feb 2013 15:56:31 -0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Kleber Sacilotto de Souza <klebers@...ux.vnet.ibm.com>,
Amir Vadai <amirv@...lanox.com>,
"David S. Miller" <davem@...emloft.net>
Subject: [ 134/150] mlx4_en: fix allocation of CPU affinity reverse-map
3.8-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kleber Sacilotto de Souza <klebers@...ux.vnet.ibm.com>
[ Upstream commit 3770699675dd1b8fc1e86ff369eb3cce44e10082 ]
The mlx4_en driver sizes the CPU affinity reverse-map based on the
number of rx rings of the device. However, mlx4_assign_eq() calls
irq_cpu_rmap_add() once for each IRQ assigned to an EQ, which can be as
many as mlx4_dev->caps.comp_pool. If caps.comp_pool is larger than
rx_ring_num we will eventually hit the BUG_ON() in cpu_rmap_add().
Fix this problem by allocating space for the maximum number of CPU
affinity reverse-map entries we might need to add.
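For context, the failure is a plain capacity mismatch: the reverse-map is
sized for rx_ring_num entries, but irq_cpu_rmap_add() is called once per
EQ IRQ, up to caps.comp_pool times. Below is a minimal userspace C sketch
of the same pattern; the rmap struct and the rmap_alloc()/rmap_add()
helpers are hypothetical stand-ins for illustration, not the kernel's
cpu_rmap API:

#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's CPU rmap: a table whose
 * capacity is fixed at allocation time. */
struct rmap {
	unsigned int size;	/* capacity chosen at alloc time */
	unsigned int used;	/* entries added so far */
};

static struct rmap *rmap_alloc(unsigned int size)
{
	struct rmap *r = malloc(sizeof(*r));

	if (r) {
		r->size = size;
		r->used = 0;
	}
	return r;
}

static void rmap_add(struct rmap *r)
{
	/* Mirrors the BUG_ON(index >= rmap->size) in cpu_rmap_add(). */
	assert(r->used < r->size);
	r->used++;
}

int main(void)
{
	unsigned int rx_ring_num = 8;	/* illustrative sizes only */
	unsigned int comp_pool = 16;
	struct rmap *r = rmap_alloc(rx_ring_num);
	unsigned int i;

	if (!r)
		return 1;

	/* One add per IRQ assigned to an EQ, as mlx4_assign_eq() does.
	 * Once used reaches rx_ring_num the assert trips -- the same
	 * overrun the BUG_ON() catches in the kernel. */
	for (i = 0; i < comp_pool; i++)
		rmap_add(r);

	free(r);
	return 0;
}

Sizing the allocation by comp_pool, as the patch does, removes the
mismatch no matter how many EQ IRQs end up being registered.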
Signed-off-by: Kleber Sacilotto de Souza <klebers@...ux.vnet.ibm.com>
Acked-by: Amir Vadai <amirv@...lanox.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1434,7 +1434,7 @@ int mlx4_en_alloc_resources(struct mlx4_
 	}
 
 #ifdef CONFIG_RFS_ACCEL
-	priv->dev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->rx_ring_num);
+	priv->dev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->mdev->dev->caps.comp_pool);
 	if (!priv->dev->rx_cpu_rmap)
 		goto err;
 
--