Message-ID: <319812ce-a15f-4004-a166-d281b8525616@suse.cz>
Date: Thu, 1 Aug 2024 10:06:37 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Muchun Song <songmuchun@...edance.com>, akpm@...ux-foundation.org
Cc: hannes@...xchg.org, muchun.song@...ux.dev, nphamcs@...il.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Shakeel Butt <shakeel.butt@...ux.dev>, stable@...r.kernel.org
Subject: Re: [PATCH v2] mm: list_lru: fix UAF for memory cgroup
On 8/1/24 04:46, Muchun Song wrote:
> mem_cgroup_from_slab_obj() is supposed to be called under the rcu
> lock, cgroup_mutex, or some other guarantee that prevents the
> returned memcg from being freed. Fix it by adding the missing rcu
> read lock in the list_lru_{add,del}_obj() callers.
>
> Fixes: 0a97c01cd20b ("list_lru: allow explicit memcg and NUMA node selection")
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>
> Cc: <stable@...r.kernel.org>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> v2:
> Only grab the rcu lock when necessary (Vlastimil Babka)
>
> mm/list_lru.c | 28 ++++++++++++++++++++++------
> 1 file changed, 22 insertions(+), 6 deletions(-)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index a29d96929d7c7..9b7ff06e9d326 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -85,6 +85,7 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
> }
> #endif /* CONFIG_MEMCG */
>
> +/* The caller must ensure the memcg lifetime. */
> bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> struct mem_cgroup *memcg)
> {
> @@ -109,14 +110,22 @@ EXPORT_SYMBOL_GPL(list_lru_add);
>
> bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
> {
> + bool ret;
> int nid = page_to_nid(virt_to_page(item));
> - struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> - mem_cgroup_from_slab_obj(item) : NULL;
>
> - return list_lru_add(lru, item, nid, memcg);
> + if (list_lru_memcg_aware(lru)) {
> + rcu_read_lock();
> + ret = list_lru_add(lru, item, nid, mem_cgroup_from_slab_obj(item));
> + rcu_read_unlock();
> + } else {
> + ret = list_lru_add(lru, item, nid, NULL);
> + }
> +
> + return ret;
> }
> EXPORT_SYMBOL_GPL(list_lru_add_obj);
>
> +/* The caller must ensure the memcg lifetime. */
> bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> struct mem_cgroup *memcg)
> {
> @@ -139,11 +148,18 @@ EXPORT_SYMBOL_GPL(list_lru_del);
>
> bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> {
> + bool ret;
> int nid = page_to_nid(virt_to_page(item));
> - struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> - mem_cgroup_from_slab_obj(item) : NULL;
>
> - return list_lru_del(lru, item, nid, memcg);
> + if (list_lru_memcg_aware(lru)) {
> + rcu_read_lock();
> + ret = list_lru_del(lru, item, nid, mem_cgroup_from_slab_obj(item));
> + rcu_read_unlock();
> + } else {
> + ret = list_lru_del(lru, item, nid, NULL);
> + }
> +
> + return ret;
> }
> EXPORT_SYMBOL_GPL(list_lru_del_obj);
>
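
For reference, the new "The caller must ensure the memcg lifetime" comment on
list_lru_add()/list_lru_del() means a caller passing a memcg directly has to
keep it alive across the call itself, either by holding the rcu read lock as
the *_obj() wrappers now do, or by holding a reference on the memcg. Below is
a minimal sketch of the latter; the function name, the mm parameter and the
use of get_mem_cgroup_from_mm() are purely illustrative and not part of this
patch:

#include <linux/list_lru.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>

/*
 * Hypothetical caller that satisfies the "caller must ensure the memcg
 * lifetime" rule by pinning the memcg with a reference instead of the
 * rcu read lock.
 */
static bool example_add_to_lru(struct list_lru *lru, struct list_head *item,
			       struct mm_struct *mm)
{
	int nid = page_to_nid(virt_to_page(item));
	struct mem_cgroup *memcg;
	bool added;

	memcg = get_mem_cgroup_from_mm(mm);	/* takes a css reference */
	added = list_lru_add(lru, item, nid, memcg);
	mem_cgroup_put(memcg);	/* memcg only needs to live across the call */

	return added;
}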