Message-ID: <47e50136-eded-fbf4-4b63-7bf88b7fd791@suse.cz>
Date: Tue, 22 Nov 2022 10:09:17 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Michal Hocko <mhocko@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 2/2] mm/page_alloc: Leave IRQs enabled for per-cpu page
allocations
On 11/21/22 13:01, Mel Gorman wrote:
> On Fri, Nov 18, 2022 at 03:30:57PM +0100, Vlastimil Babka wrote:
>> On 11/18/22 11:17, Mel Gorman wrote:
>
> While I think you're right, it's a bit subtle: the batch reset would
> need to move and be rechecked within the "Different zone, different pcp
> lock." block, and it would be easy to forget exactly why it's structured
> like that in the future. Rather than being a fix, it could be a standalone
> patch so it would be obvious in git blame, but I don't feel particularly
> strongly about it.
>
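For the record, this is roughly the restructuring I had in mind, only as
an untested userspace-style sketch: the sketch_* helpers are made-up
stand-ins for the real pcp locking and free paths, not the actual code.
Agreed that doing it as a standalone patch with its own changelog would
keep the reasoning visible in git blame.

/*
 * Rough sketch: reset the batch counter inside the "different zone"
 * branch, so the periodic lock drop is accounted per locked zone
 * rather than across the whole list.
 */
#include <stddef.h>

#define SKETCH_BATCH_MAX	63

struct sketch_page { int zone; };

/* No-op stand-ins for the real pcp lock/unlock/free helpers. */
static void sketch_pcp_lock(int zone) { (void)zone; }
static void sketch_pcp_unlock(void) { }
static void sketch_pcp_free(struct sketch_page *page) { (void)page; }

void sketch_free_list(struct sketch_page **pages, size_t nr)
{
	int locked_zone = -1;
	unsigned int batch_count = 0;
	size_t i;

	for (i = 0; i < nr; i++) {
		struct sketch_page *page = pages[i];

		/* Different zone, different pcp lock. */
		if (page->zone != locked_zone) {
			if (locked_zone != -1)
				sketch_pcp_unlock();
			sketch_pcp_lock(page->zone);
			locked_zone = page->zone;
			batch_count = 0;	/* reset moved/rechecked here */
		} else if (++batch_count == SKETCH_BATCH_MAX) {
			/* Periodically drop the lock to bound hold time. */
			sketch_pcp_unlock();
			sketch_pcp_lock(locked_zone);
			batch_count = 0;
		}

		sketch_pcp_free(page);
	}

	if (locked_zone != -1)
		sketch_pcp_unlock();
}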
> For the actual fixes to the patch, how about this? It's boot-tested only
> as I find it hard to believe it would make a difference to performance.
Looks good. Shouldn't make a difference indeed.
>
> --8<--
> mm/page_alloc: Leave IRQs enabled for per-cpu page allocations -fix
>
> As noted by Vlastimil Babka, the migratetype might be wrong if the PCP
> lock cannot be acquired, so read the migratetype early. Similarly, the
> !pcp check is generally unlikely, so explicitly tagging it as such makes
> sense.
>
> This is a fix for the mm-unstable patch
> mm-page_alloc-leave-irqs-enabled-for-per-cpu-page-allocations.patch
>
> Reported-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/page_alloc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 323fec05c4c6..445066617204 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3516,6 +3516,7 @@ void free_unref_page_list(struct list_head *list)
> struct zone *zone = page_zone(page);
>
> list_del(&page->lru);
> + migratetype = get_pcppage_migratetype(page);
>
> /* Different zone, different pcp lock. */
> if (zone != locked_zone) {
> @@ -3530,7 +3531,7 @@ void free_unref_page_list(struct list_head *list)
> */
> pcp_trylock_prepare(UP_flags);
> pcp = pcp_spin_trylock(zone->per_cpu_pageset);
> - if (!pcp) {
> + if (unlikely(!pcp)) {
> pcp_trylock_finish(UP_flags);
> free_one_page(zone, page, page_to_pfn(page),
> 0, migratetype, FPI_NONE);
> @@ -3545,7 +3546,6 @@ void free_unref_page_list(struct list_head *list)
> * Non-isolated types over MIGRATE_PCPTYPES get added
> * to the MIGRATE_MOVABLE pcp list.
> */
> - migratetype = get_pcppage_migratetype(page);
> if (unlikely(migratetype >= MIGRATE_PCPTYPES))
> migratetype = MIGRATE_MOVABLE;
>
>
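To spell out why moving the get_pcppage_migratetype() read up matters,
here is a compilable userspace toy of the stale-value hazard the first
hunk closes. This is not kernel code; fake_page, pcp_trylock() and the
zone numbers are made up to mimic the shape of the loop above. If the
per-page value were only read after the zone check, the trylock-failure
fallback could act on the migratetype left over from a previous page.

#include <stdio.h>
#include <stdbool.h>

struct fake_page { int zone; int migratetype; };

/* Pretend the per-cpu pageset lock cannot be taken for zone 1. */
static bool pcp_trylock(int zone)
{
	return zone != 1;
}

int main(void)
{
	struct fake_page pages[] = {
		{ .zone = 0, .migratetype = 0 },	/* freed via the pcp */
		{ .zone = 1, .migratetype = 3 },	/* zone switch, trylock fails */
	};
	int locked_zone = -1;
	int migratetype = -1;
	unsigned int i;

	for (i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
		struct fake_page *page = &pages[i];

		/* The fix: refresh the per-page value before the zone check. */
		migratetype = page->migratetype;

		/* Different zone, different pcp lock. */
		if (page->zone != locked_zone) {
			if (!pcp_trylock(page->zone)) {
				/* Fallback must see *this* page's type. */
				printf("fallback free: zone=%d migratetype=%d\n",
				       page->zone, migratetype);
				continue;
			}
			locked_zone = page->zone;
		}

		printf("pcp free: zone=%d migratetype=%d\n",
		       page->zone, migratetype);
	}
	return 0;
}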