Message-Id: <20210830172953.207257-1-shakeelb@google.com>
Date: Mon, 30 Aug 2021 10:29:53 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Shakeel Butt <shakeelb@...gle.com>
Subject: [PATCH] memcg: make memcg->event_list_lock irqsafe
The memcg->event_list_lock is usually taken in process context, but when
userspace closes the corresponding eventfd, eventfd_release, through
memcg_event_wake, takes memcg->event_list_lock with interrupts disabled.
This is not an issue on its own, but it creates a nested dependency from
eventfd_ctx->wqh.lock to memcg->event_list_lock.

Independently, for unrelated eventfds, eventfd_signal() can be called
from irq context, for example by the FPGA DFL driver, the VHOST VDPA
driver and a couple of VFIO drivers. That makes eventfd_ctx->wqh.lock an
irq lock and, through the nesting above, forces memcg->event_list_lock
to be an irqsafe lock as well.
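For reference, a condensed sketch of the two paths involved (abridged
from fs/eventfd.c and mm/memcontrol.c; my annotation, not part of the
patch):

  /* Process context: userspace close()s the eventfd. */
  eventfd_release()
    wake_up_poll(&ctx->wqh, EPOLLHUP)          /* takes ctx->wqh.lock, irqs off */
      -> memcg_event_wake()                    /* wait queue callback */
           spin_lock(&memcg->event_list_lock)  /* nested inside wqh.lock */

  /* Hard-irq context: a driver signals some (unrelated) eventfd. */
  eventfd_signal()
    spin_lock_irqsave(&ctx->wqh.lock, flags)   /* wqh.lock becomes an irq lock */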
One way to break the nested dependency between eventfd_ctx->wqh.lock and
memcg->event_list_lock is to add an indirection. However, the simplest
solution is to make memcg->event_list_lock irqsafe. This is a cgroup v1
feature, it is in maintenance mode and may get deprecated in the near
future, so there is no need to add more code.
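As an aside (my illustration, not part of the patch): the plain _irq
variants suffice here because both call sites below run in process
context with interrupts enabled; spin_lock_irqsave() would only be
needed if the lock could also be taken with interrupts already disabled.

  spin_lock_irq(&memcg->event_list_lock);    /* local_irq_disable() + acquire */
  /* ... manipulate memcg->event_list ... */
  spin_unlock_irq(&memcg->event_list_lock);  /* release + local_irq_enable() */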
BTW this has been discussed previously [1], but there were no irq users
of eventfd_signal() at the time.
[1] https://www.spinics.net/lists/cgroups/msg06248.html
Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
---
mm/memcontrol.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 999e626f4111..a73689caee5d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4851,9 +4851,9 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
vfs_poll(efile.file, &event->pt);
- spin_lock(&memcg->event_list_lock);
+ spin_lock_irq(&memcg->event_list_lock);
list_add(&event->list, &memcg->event_list);
- spin_unlock(&memcg->event_list_lock);
+ spin_unlock_irq(&memcg->event_list_lock);
fdput(cfile);
fdput(efile);
@@ -5280,12 +5280,12 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
* Notify userspace about cgroup removing only after rmdir of cgroup
* directory to avoid race between userspace and kernelspace.
*/
- spin_lock(&memcg->event_list_lock);
+ spin_lock_irq(&memcg->event_list_lock);
list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
list_del_init(&event->list);
schedule_work(&event->remove);
}
- spin_unlock(&memcg->event_list_lock);
+ spin_unlock_irq(&memcg->event_list_lock);
page_counter_set_min(&memcg->memory, 0);
page_counter_set_low(&memcg->memory, 0);
--
2.33.0.259.gc128427fd7-goog