Message-ID: <625d91cc-59f3-4757-81df-220d32861493@nvidia.com>
Date: Mon, 20 Jan 2025 11:34:00 -0800
From: John Hubbard <jhubbard@...dia.com>
To: "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Alistair Popple <apopple@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Zhaoyang Huang <huangzhaoyang@...il.com>,
steve.kang@...soc.com
Subject: Re: [PATCH] mm: gup: fix infinite loop within __get_longterm_locked
On 1/20/25 1:26 AM, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
>
> An infinite loop within __get_longterm_locked was detected in an unusual
> use of pin_user_pages, where all of the VA's pages are unpinnable (the
> vm_ops->fault function allocates pages via cma_alloc for hardware
> purposes and leaves them off the LRU). Fix this by having 'collected'
> reflect the actual number of pages on movable_folio_list.
The above is rather terse, although perhaps by kernel standards it's OK.
Isn't this missing a Fixes tag?
Fixes: 67e139b02d994 ("mm/gup.c: refactor check_and_migrate_movable_pages()")
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> ---
> mm/gup.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 3b75e631f369..2231ce7221f9 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2341,8 +2341,6 @@ static unsigned long collect_longterm_unpinnable_folios(
> if (folio_is_longterm_pinnable(folio))
> continue;
>
> - collected++;
> -
> if (folio_is_device_coherent(folio))
> continue;
>
> @@ -2359,6 +2357,8 @@ static unsigned long collect_longterm_unpinnable_folios(
> if (!folio_isolate_lru(folio))
> continue;
>
> + collected++;
> +
Well, this seems correct to me. Somehow I talked myself into believing
that it was OK to do collected++ early, even though later on we skip
actually collecting the folio, thus miscounting things.
But now I believe it was just incorrect all along.
Reviewed-by: John Hubbard <jhubbard@...dia.com>
thanks,
--
John Hubbard