Message-ID: <CAOUHufYCPkUH0ysujoXZaw3PSrPvaw356-Pb97=LPGVRu_7FNQ@mail.gmail.com>
Date: Fri, 25 Oct 2024 21:55:38 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, Hugh Dickins <hughd@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>, linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-doc@...r.kernel.org, Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH v1 5/6] memcg-v1: no need for memcg locking for MGLRU
On Thu, Oct 24, 2024 at 7:23 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> While updating a folio's generation, MGLRU requires that the folio's
> memcg association remain stable. Now that charge migration is
> deprecated, folios can no longer move between memcgs, so MGLRU does
> not need to acquire locks to keep the folio and memcg association
> stable.
>
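FWIW, the reason this is safe: mem_cgroup_trylock_pages() could only
fail during charge migration. From memory, the helper in
include/linux/memcontrol.h is roughly the following (exact details may
differ):

  static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
  {
          /* stabilizes folio_memcg() for all folios in this memcg */
          rcu_read_lock();

          if (mem_cgroup_disabled() || !atomic_read(&memcg->moving_account))
                  return true;

          rcu_read_unlock();
          return false;
  }

It only returns false while charge migration has set
memcg->moving_account, so with migration deprecated it always succeeds
and the lock pair is dead weight.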
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
> ---
> mm/vmscan.c | 11 -----------
> 1 file changed, 11 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 29c098790b01..fd7171658b63 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3662,10 +3662,6 @@ static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
> if (walk->seq != max_seq)
> break;
Please also remove the lingering `struct mem_cgroup *memcg` as well as
the folio_memcg_rcu() call. Otherwise they cause both build and
lockdep warnings.
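Something like the following on top should do it (untested sketch; the
context and hunk headers are from memory and may not apply cleanly):

  diff --git a/mm/vmscan.c b/mm/vmscan.c
  --- a/mm/vmscan.c
  +++ b/mm/vmscan.c
  @@ static struct folio *get_pfn_folio(...)
  -	if (folio_memcg_rcu(folio) != memcg)
  +	if (folio_memcg(folio) != memcg)
  @@ static void walk_mm(...)
  -	struct mem_cgroup *memcg = lruvec_memcg(lruvec);

folio_memcg_rcu() warns when rcu_read_lock() is not held, and nothing
holds it at that point once the trylock is gone; walk_mm()'s memcg is
left without any users, hence the build warning.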
> - /* folio_update_gen() requires stable folio_memcg() */
> - if (!mem_cgroup_trylock_pages(memcg))
> - break;
> -
> /* the caller might be holding the lock for write */
> if (mmap_read_trylock(mm)) {
> err = walk_page_range(mm, walk->next_addr, ULONG_MAX, &mm_walk_ops, walk);
> @@ -3673,8 +3669,6 @@ static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
> mmap_read_unlock(mm);
> }
>
> - mem_cgroup_unlock_pages();
> -
> if (walk->batched) {
> spin_lock_irq(&lruvec->lru_lock);
> reset_batch_size(walk);
> @@ -4096,10 +4090,6 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
> }
> }
>
> - /* folio_update_gen() requires stable folio_memcg() */
> - if (!mem_cgroup_trylock_pages(memcg))
> - return true;
> -
> arch_enter_lazy_mmu_mode();
>
> pte -= (addr - start) / PAGE_SIZE;
> @@ -4144,7 +4134,6 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
> }
>
> arch_leave_lazy_mmu_mode();
> - mem_cgroup_unlock_pages();
>
> /* feedback from rmap walkers to page table walkers */
> if (mm_state && suitable_to_scan(i, young))
> --
> 2.43.5