Message-Id: <20200515224854.20390-2-saeedm@mellanox.com>
Date: Fri, 15 May 2020 15:48:44 -0700
From: Saeed Mahameed <saeedm@...lanox.com>
To: "David S. Miller" <davem@...emloft.net>, kuba@...nel.org
Cc: netdev@...r.kernel.org, Eran Ben Elisha <eranbe@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>
Subject: [net-next 01/11] net/mlx5: Dedicate fw page to the requesting function
From: Eran Ben Elisha <eranbe@...lanox.com>
The cited patch assumes that all chunks in a fw page belong to the same
function, so the driver must dedicate each fw page to a single requesting
function. This is what the original fw pages allocator was designed for
in the first place, hence the fwp->func_id!
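
For context, the allocator's per-page bookkeeping in pagealloc.c looks
roughly like this (paraphrased for illustration; the comments are ours,
not from the driver):

	struct fw_page {
		struct rb_node	 rb_node;    /* indexed by dma address in the page tree */
		u64		 addr;       /* dma address of the backing system page */
		struct page	*page;       /* backing system page, PAGE_SIZE bytes */
		u16		 func_id;    /* the one function this page is dedicated to */
		unsigned long	 bitmask;    /* marks which 4K chunks are still free */
		struct list_head list;       /* linkage on dev->priv.free_list */
		unsigned	 free_count; /* free 4K chunks remaining in this page */
	};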
Up until the cited patch everything worked fine, but "release all pages"
is now broken on systems with page_size > 4k.
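
To make the failure mode concrete, here is the chunk arithmetic (constants
as in the mlx5 sources, shown simplified; the 64K page size is only an
assumed example configuration, e.g. arm64 built with 64K pages):

	#define MLX5_ADAPTER_PAGE_SIZE	4096	/* fw always works in 4K units */
	#define MLX5_NUM_4K_IN_PAGE	(PAGE_SIZE / MLX5_ADAPTER_PAGE_SIZE)

	/* PAGE_SIZE == 4K  -> 1 chunk per fw page: a page can never mix
	 *                     func_ids, so the bug stays invisible.
	 * PAGE_SIZE == 64K -> 16 chunks per fw page: alloc_4k() used to
	 *                     hand chunks out from the head of the shared
	 *                     free list, so one fw page could end up serving
	 *                     several func_ids while fwp->func_id records a
	 *                     single owner -- "release all pages" for that
	 *                     owner then also frees chunks still in use by
	 *                     other functions.
	 */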
Fix this by dedicating each fw page to the requesting function id, adding
a func_id parameter to alloc_4k().
Fixes: c6168161f693 ("net/mlx5: Add support for release all pages event")
Signed-off-by: Eran Ben Elisha <eranbe@...lanox.com>
Signed-off-by: Saeed Mahameed <saeedm@...lanox.com>
---
.../net/ethernet/mellanox/mlx5/core/pagealloc.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index 8ce78f42dfc0..84f6356edbf8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -156,15 +156,21 @@ static int mlx5_cmd_query_pages(struct mlx5_core_dev *dev, u16 *func_id,
 	return err;
 }
 
-static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr)
+static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u16 func_id)
 {
-	struct fw_page *fp;
+	struct fw_page *fp = NULL;
+	struct fw_page *iter;
 	unsigned n;
 
-	if (list_empty(&dev->priv.free_list))
+	list_for_each_entry(iter, &dev->priv.free_list, list) {
+		if (iter->func_id != func_id)
+			continue;
+		fp = iter;
+	}
+
+	if (list_empty(&dev->priv.free_list) || !fp)
 		return -ENOMEM;
 
-	fp = list_entry(dev->priv.free_list.next, struct fw_page, list);
 	n = find_first_bit(&fp->bitmask, 8 * sizeof(fp->bitmask));
 	if (n >= MLX5_NUM_4K_IN_PAGE) {
 		mlx5_core_warn(dev, "alloc 4k bug\n");
@@ -295,7 +301,7 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 
 	for (i = 0; i < npages; i++) {
 retry:
-		err = alloc_4k(dev, &addr);
+		err = alloc_4k(dev, &addr, func_id);
 		if (err) {
 			if (err == -ENOMEM)
 				err = alloc_system_page(dev, func_id);
--
2.25.4