Message-ID: <65b7d88b-af4f-4869-9322-e38910abce6d@suse.cz>
Date: Thu, 18 Jul 2024 12:30:33 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Muchun Song <songmuchun@...edance.com>, akpm@...ux-foundation.org
Cc: hannes@...xchg.org, muchun.song@...ux.dev, nphamcs@...il.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH] mm: list_lru: fix UAF for memory cgroup
On 7/18/24 10:36 AM, Muchun Song wrote:
> mem_cgroup_from_slab_obj() is supposed to be called under the RCU read
> lock, cgroup_mutex, or some other mechanism that prevents the returned
> memcg from being freed. Fix it by adding the missing rcu_read_lock().
Was the UAF ever observed, or is this due to code review?
Should there be some lockdep_assert somewhere?
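For instance, something like the below (completely untested sketch, the
helper name and placement are just for illustration) in mm/list_lru.c could
catch callers that pass a memcg without holding anything that pins it. It
relies on cgroup_mutex being visible to lockdep (CONFIG_PROVE_RCU), and it
would be too strict for callers that pin the memcg via a css/objcg
reference instead of a lock, so take it with a grain of salt:

	/*
	 * Untested sketch: warn if a memcg is passed in without the
	 * RCU read lock or cgroup_mutex held. Callers holding only a
	 * css reference would trip this even though they are safe.
	 */
	static inline void list_lru_assert_memcg_protected(struct mem_cgroup *memcg)
	{
		if (memcg)
			lockdep_assert_once(rcu_read_lock_held() ||
					    lockdep_is_held(&cgroup_mutex));
	}

and then call it at the top of list_lru_add()/list_lru_del().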
> Fixes: 0a97c01cd20bb ("list_lru: allow explicit memcg and NUMA node selection")
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> mm/list_lru.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 3fd64736bc458..225da0778a3be 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -85,6 +85,7 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
> }
> #endif /* CONFIG_MEMCG_KMEM */
>
> +/* The caller must ensure the memcg lifetime. */
> bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> struct mem_cgroup *memcg)
> {
> @@ -109,14 +110,20 @@ EXPORT_SYMBOL_GPL(list_lru_add);
>
> bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
> {
> + bool ret;
> int nid = page_to_nid(virt_to_page(item));
> - struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> - mem_cgroup_from_slab_obj(item) : NULL;
> + struct mem_cgroup *memcg;
>
> - return list_lru_add(lru, item, nid, memcg);
> + rcu_read_lock();
> + memcg = list_lru_memcg_aware(lru) ? mem_cgroup_from_slab_obj(item) : NULL;
> + ret = list_lru_add(lru, item, nid, memcg);
> + rcu_read_unlock();
> +
> + return ret;
> }
> EXPORT_SYMBOL_GPL(list_lru_add_obj);
>
> +/* The caller must ensure the memcg lifetime. */
> bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> struct mem_cgroup *memcg)
> {
> @@ -139,11 +146,16 @@ EXPORT_SYMBOL_GPL(list_lru_del);
>
> bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
> {
> + bool ret;
> int nid = page_to_nid(virt_to_page(item));
> - struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> - mem_cgroup_from_slab_obj(item) : NULL;
> + struct mem_cgroup *memcg;
>
> - return list_lru_del(lru, item, nid, memcg);
> + rcu_read_lock();
> + memcg = list_lru_memcg_aware(lru) ? mem_cgroup_from_slab_obj(item) : NULL;
> + ret = list_lru_del(lru, item, nid, memcg);
> + rcu_read_unlock();
> +
> + return ret;
> }
> EXPORT_SYMBOL_GPL(list_lru_del_obj);
>