Message-Id: <20211025124534.56345-1-songmuchun@bytedance.com>
Date: Mon, 25 Oct 2021 20:45:34 +0800
From: Muchun Song <songmuchun@...edance.com>
To: akpm@...ux-foundation.org, mhocko@...nel.org, shakeelb@...gle.com,
willy@...radead.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Muchun Song <songmuchun@...edance.com>
Subject: [PATCH] mm: list_lru: remove holding lru lock
Since commit e5bc3af7734f ("rcu: Consolidate PREEMPT and !PREEMPT
synchronize_rcu()"), the critical section of a spin lock can serve
as an RCU read-side critical section, which already allows readers
that hold nlru->lock to avoid taking rcu_read_lock (see
list_lru_from_memcg_idx). So just remove the lock around
rcu_assign_pointer().
Signed-off-by: Muchun Song <songmuchun@...edance.com>
---
mm/list_lru.c | 11 -----------
1 file changed, 11 deletions(-)
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 2bba1cd68bb3..7572f8e70b86 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -401,18 +401,7 @@ static int memcg_update_list_lru_node(struct list_lru_node *nlru,
}
memcpy(&new->lru, &old->lru, flex_array_size(new, lru, old_size));
-
- /*
- * The locking below allows readers that hold nlru->lock avoid taking
- * rcu_read_lock (see list_lru_from_memcg_idx).
- *
- * Since list_lru_{add,del} may be called under an IRQ-safe lock,
- * we have to use IRQ-safe primitives here to avoid deadlock.
- */
- spin_lock_irq(&nlru->lock);
rcu_assign_pointer(nlru->memcg_lrus, new);
- spin_unlock_irq(&nlru->lock);
-
kvfree_rcu(old, rcu);
return 0;
}
--
2.11.0
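
For readers unfamiliar with the pattern, here is a minimal sketch of why the
writer no longer needs nlru->lock. This is kernel-style C for illustration
only, not a buildable module; the helper name lru_from_memcg and any details
not shown in the patch are assumptions based on the list_lru code of that era.

```c
/*
 * Reader side (cf. list_lru_from_memcg_idx): holding nlru->lock is
 * enough, because since commit e5bc3af7734f a spinlock critical
 * section also acts as an RCU read-side critical section, so the
 * memcg_lrus array cannot be freed underneath the reader.
 */
static struct list_lru_one *
lru_from_memcg(struct list_lru_node *nlru, int idx)
{
	struct list_lru_memcg *memcg_lrus;

	/* Caller holds either rcu_read_lock() or nlru->lock. */
	memcg_lrus = rcu_dereference_check(nlru->memcg_lrus,
					   lockdep_is_held(&nlru->lock));
	return memcg_lrus->lru[idx];
}

/*
 * Writer side (the code this patch simplifies): publishing the new
 * array needs no nlru->lock.  rcu_assign_pointer() orders the
 * initialization of 'new' before making it visible, and kvfree_rcu()
 * defers freeing 'old' until all pre-existing readers -- including
 * those inside spinlock critical sections -- have finished.
 */
rcu_assign_pointer(nlru->memcg_lrus, new);
kvfree_rcu(old, rcu);
```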