Message-ID: <157411c6-90ad-e99d-b75e-002783f984b1@redhat.com>
Date: Mon, 7 Dec 2020 10:44:51 +0100
From: David Hildenbrand <david@...hat.com>
To: Muchun Song <songmuchun@...edance.com>, akpm@...ux-foundation.org,
vbabka@...e.cz
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] mm/page_alloc: speeding up the iteration of max_order
On 04.12.20 16:51, Muchun Song wrote:
> When we free a page whose order is very close to MAX_ORDER and greater
> than pageblock_order, it wastes some CPU cycles to increase max_order
> to MAX_ORDER one by one and to check the pageblock migratetype of that
> page repeatedly, especially when MAX_ORDER is much larger than
> pageblock_order.
>
> We should also not check the migratetype of the buddy when "order ==
> MAX_ORDER - 1", since the buddy pfn may be invalid, so adjust the
> condition. With the new check, we no longer need the max_order check,
> so replace it.
>
> Also adjust the max_order initialization so that it is lower by one
> than previously, which hopefully makes the code clearer.
>
> Fixes: d9dddbf55667 ("mm/page_alloc: prevent merging between isolated and other pageblocks")
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> Changes in v3:
> - Update commit log.
>
> Changes in v2:
> - Rework the code suggested by Vlastimil. Thanks.
>
> mm/page_alloc.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f91df593bf71..56e603eea1dd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1002,7 +1002,7 @@ static inline void __free_one_page(struct page *page,
> struct page *buddy;
> bool to_tail;
>
> - max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
> + max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
>
> VM_BUG_ON(!zone_is_initialized(zone));
> VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> @@ -1015,7 +1015,7 @@ static inline void __free_one_page(struct page *page,
> VM_BUG_ON_PAGE(bad_range(zone, page), page);
>
> continue_merging:
> - while (order < max_order - 1) {
> + while (order < max_order) {
> if (compaction_capture(capc, page, order, migratetype)) {
> __mod_zone_freepage_state(zone, -(1 << order),
> migratetype);
> @@ -1041,7 +1041,7 @@ static inline void __free_one_page(struct page *page,
> pfn = combined_pfn;
> order++;
> }
> - if (max_order < MAX_ORDER) {
> + if (order < MAX_ORDER - 1) {
> /* If we are here, it means order is >= pageblock_order.
> * We want to prevent merge between freepages on isolate
> * pageblock and normal pageblock. Without this, pageblock
> @@ -1062,7 +1062,7 @@ static inline void __free_one_page(struct page *page,
> is_migrate_isolate(buddy_mt)))
> goto done_merging;
> }
> - max_order++;
> + max_order = order + 1;
> goto continue_merging;
> }
>
>
LGTM
Reviewed-by: David Hildenbrand <david@...hat.com>
--
Thanks,
David / dhildenb