Message-Id: <fcb546986be346684a016f5ca23a0567399145fa.1627370131.git.leonro@nvidia.com>
Date: Tue, 27 Jul 2021 10:16:06 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Doug Ledford <dledford@...hat.com>,
Jason Gunthorpe <jgg@...dia.com>
Cc: Aharon Landau <aharonl@...dia.com>, linux-kernel@...r.kernel.org,
linux-rdma@...r.kernel.org, Maor Gottlieb <maorg@...dia.com>
Subject: [PATCH rdma-rc] RDMA/mlx5: Delay emptying a cache entry when a new MR is added to it recently
From: Aharon Landau <aharonl@...dia.com>
Fix a typo that causes a cache entry to shrink immediately after new
MRs are added to it whenever the entry size exceeds the high limit.
As a result, the cache defeats its purpose of avoiding the creation of
new mkeys at runtime by reusing the cached ones.
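
For clarity, a minimal userspace sketch of the intended check (the helper
name, timestamps and tick rate here are hypothetical illustrations; the
driver itself uses jiffies, READ_ONCE() and time_after()):

/*
 * Illustration only, not the driver code: with time_after() the entry
 * is delayed only once the last add is more than 300s old, which is
 * backwards; the intent is to delay shrinking while adds are recent.
 */
#include <stdbool.h>
#include <stdio.h>

/* hypothetical stand-in for the "was an MR added recently?" test */
static bool recently_added(unsigned long now, unsigned long last_add,
			   unsigned long window)
{
	/* delay emptying the entry if the last add is within the window */
	return now < last_add + window;
}

int main(void)
{
	unsigned long hz = 100;		/* assumed tick rate */
	unsigned long last_add = 1000;	/* hypothetical timestamp */

	printf("just after add: delay=%d\n",
	       recently_added(last_add + 5 * hz, last_add, 300 * hz));	/* 1 */
	printf("long idle:      delay=%d\n",
	       recently_added(last_add + 400 * hz, last_add, 300 * hz));/* 0 */
	return 0;
}
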
Fixes: b9358bdbc713 ("RDMA/mlx5: Fix locking in MR cache work queue")
Signed-off-by: Aharon Landau <aharonl@...dia.com>
Reviewed-by: Maor Gottlieb <maorg@...dia.com>
Signed-off-by: Leon Romanovsky <leonro@...dia.com>
---
drivers/infiniband/hw/mlx5/mr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 3263851ea574..3f1c5a4f158b 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -531,8 +531,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
*/
spin_unlock_irq(&ent->lock);
need_delay = need_resched() || someone_adding(cache) ||
- time_after(jiffies,
- READ_ONCE(cache->last_add) + 300 * HZ);
+ !time_after(jiffies,
+ READ_ONCE(cache->last_add) + 300 * HZ);
spin_lock_irq(&ent->lock);
if (ent->disabled)
goto out;
--
2.31.1