Message-Id: <20181018203907.25149-3-saeedm@mellanox.com>
Date: Thu, 18 Oct 2018 13:39:01 -0700
From: Saeed Mahameed <saeedm@...lanox.com>
To: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Cc: Vlad Buslov <vladbu@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>
Subject: [net-next 2/8] net/mlx5: Take fs_counters dellist before addlist
From: Vlad Buslov <vladbu@...lanox.com>
In fs_counters, elements from both addlist and dellist are removed by
mlx5_fc_stats_work() without any locking. This introduces a race condition
when a batch of new rules is created and then immediately deleted (for
example, when an error occurs during flow creation). In that case some of
the rules may appear in the dellist snapshot but not in the addlist
snapshot when mlx5_fc_stats_work() runs concurrently with tc, so those
rules are freed while still linked on addlist, which results in a
use-after-free on the next iteration.

Always take dellist first, to guarantee that rules can only be deleted
after they have been removed from addlist.
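To make the ordering argument concrete, below is a minimal user-space
sketch of the same pattern. It is not driver code: the names (struct
fc_sim, publish_new(), publish_deleted(), work_once()) are invented for
illustration, and C11 atomics stand in for the kernel's llist primitives.
The only point carried over from the patch is that the work handler must
snapshot the delete list before the add list.

/* Sketch only: models llist_add()/llist_del_all() with C11 atomics. */
#include <stdatomic.h>
#include <stdio.h>

struct fc_sim {
	int id;
	struct fc_sim *addnext;	/* link on the "to be inserted" list */
	struct fc_sim *delnext;	/* link on the "to be freed" list */
};

static _Atomic(struct fc_sim *) addlist;	/* like fc_stats->addlist */
static _Atomic(struct fc_sim *) dellist;	/* like fc_stats->dellist */

/* llist_add()-style push of a freshly created counter */
static void publish_new(struct fc_sim *c)
{
	struct fc_sim *old = atomic_load(&addlist);

	do {
		c->addnext = old;
	} while (!atomic_compare_exchange_weak(&addlist, &old, c));
}

/* llist_add()-style push of a counter that is being deleted */
static void publish_deleted(struct fc_sim *c)
{
	struct fc_sim *old = atomic_load(&dellist);

	do {
		c->delnext = old;
	} while (!atomic_compare_exchange_weak(&dellist, &old, c));
}

/* The work handler's snapshot step.  Taking dellist first means any
 * counter found on the dellist snapshot was published to addlist even
 * earlier, so it is either in the addlist snapshot taken just below or
 * was already inserted by a previous run - it can never be freed while
 * still pending on addlist.
 */
static void work_once(void)
{
	struct fc_sim *del = atomic_exchange(&dellist, NULL);	/* first */
	struct fc_sim *add = atomic_exchange(&addlist, NULL);	/* second */

	for (struct fc_sim *c = add; c; c = c->addnext)
		printf("insert counter %d into the stats list\n", c->id);

	for (struct fc_sim *c = del; c; c = c->delnext)
		printf("unlink and free counter %d\n", c->id);
}

int main(void)
{
	/* A rule whose creation failed: the counter is added and then
	 * immediately deleted, so it sits on both lists at once.
	 */
	struct fc_sim c = { .id = 1 };

	publish_new(&c);
	publish_deleted(&c);
	work_once();
	return 0;
}

With the snapshots taken in the opposite order, the counter could show up
only in the delete snapshot, be freed there, and still be reachable from
the live addlist on the next run, which is exactly the use-after-free
described above.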
Fixes: 6e5e22839136 ("net/mlx5: Add new list to store deleted flow counters")
Signed-off-by: Vlad Buslov <vladbu@...lanox.com>
Reported-by: Chris Mi <chrism@...lanox.com>
Reviewed-by: Roi Dayan <roid@...lanox.com>
Signed-off-by: Saeed Mahameed <saeedm@...lanox.com>
---
.../net/ethernet/mellanox/mlx5/core/fs_counters.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
index 1329bc5b7969..77bd4cd3a1db 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -179,19 +179,22 @@ static void mlx5_fc_stats_work(struct work_struct *work)
 	struct mlx5_core_dev *dev = container_of(work, struct mlx5_core_dev,
 						 priv.fc_stats.work.work);
 	struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
-	struct llist_node *tmplist = llist_del_all(&fc_stats->addlist);
+	/* Take dellist first to ensure that counters cannot be deleted before
+	 * they are inserted.
+	 */
+	struct llist_node *dellist = llist_del_all(&fc_stats->dellist);
+	struct llist_node *addlist = llist_del_all(&fc_stats->addlist);
 	struct mlx5_fc *counter = NULL, *last = NULL, *tmp;
 	unsigned long now = jiffies;
 
-	if (tmplist || !list_empty(&fc_stats->counters))
+	if (addlist || !list_empty(&fc_stats->counters))
 		queue_delayed_work(fc_stats->wq, &fc_stats->work,
 				   fc_stats->sampling_interval);
 
-	llist_for_each_entry(counter, tmplist, addlist)
+	llist_for_each_entry(counter, addlist, addlist)
 		mlx5_fc_stats_insert(dev, counter);
 
-	tmplist = llist_del_all(&fc_stats->dellist);
-	llist_for_each_entry_safe(counter, tmp, tmplist, dellist) {
+	llist_for_each_entry_safe(counter, tmp, dellist, dellist) {
 		list_del(&counter->list);
 
 		mlx5_free_fc(dev, counter);
--
2.17.2