Message-ID: <CAGsJ_4wB-H0+gLgzHBU_AB6DYLd3MbByKgjK51SP6ukR4DksMQ@mail.gmail.com>
Date: Sat, 29 Nov 2025 08:34:05 +0800
From: Barry Song <21cnbao@...il.com>
To: Hongru Zhang <zhanghongru06@...il.com>
Cc: akpm@...ux-foundation.org, vbabka@...e.cz, david@...nel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, surenb@...gle.com,
mhocko@...e.com, jackmanb@...gle.com, hannes@...xchg.org, ziy@...dia.com,
lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, rppt@...nel.org,
axelrasmussen@...gle.com, yuanchu@...gle.com, weixugc@...gle.com,
Hongru Zhang <zhanghongru@...omi.com>
Subject: Re: [PATCH 1/3] mm/page_alloc: add per-migratetype counts to buddy allocator
On Fri, Nov 28, 2025 at 11:12 AM Hongru Zhang <zhanghongru06@...il.com> wrote:
>
[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ed82ee55e66a..9431073e7255 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -818,6 +818,7 @@ static inline void __add_to_free_list(struct page *page, struct zone *zone,
> else
> list_add(&page->buddy_list, &area->free_list[migratetype]);
> area->nr_free++;
> + area->mt_nr_free[migratetype]++;
>
> if (order >= pageblock_order && !is_migrate_isolate(migratetype))
> __mod_zone_page_state(zone, NR_FREE_PAGES_BLOCKS, nr_pages);
> @@ -840,6 +841,8 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
> get_pageblock_migratetype(page), old_mt, nr_pages);
>
> list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
> + area->mt_nr_free[old_mt]--;
> + area->mt_nr_free[new_mt]++;
The overhead comes from effectively counting twice. Have we checked whether
the readers of area->nr_free are on a hot path? If not, we could drop
nr_free entirely and compute the sum over mt_nr_free[] each time it is read.
Buddyinfo and compaction do not seem to be on a hot path, do they?
Thanks
Barry