Message-ID: <25EB3C6C-4D6D-4946-BF0B-9B322E7DC16D@nvidia.com>
Date: Tue, 01 Jul 2025 13:09:23 -0400
From: Zi Yan <ziy@...dia.com>
To: Christoph Berg <myon@...ian.org>
Cc: David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Brost <matthew.brost@...el.com>,
Joshua Hahn <joshua.hahnjy@...il.com>, Rakie Kim <rakie.kim@...com>,
Byungchul Park <byungchul@...com>, Gregory Price <gourry@...rry.net>,
Ying Huang <ying.huang@...ux.alibaba.com>,
Alistair Popple <apopple@...dia.com>,
"open list:MEMORY MANAGEMENT - MEMORY POLICY AND MIGRATION" <linux-mm@...ck.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm/migrate: Fix do_pages_stat in 32-bit mode
On 1 Jul 2025, at 12:58, Christoph Berg wrote:
> Re: David Hildenbrand
>> Subject should start with "mm/migrate:"
>> Likely we want a
>> Fixes:
>> and then this is probably "Reported-by:" paired with a "Closes:" link
>> to any such report.
>
> I included these now, except for "Closes:", which I have no idea what
> to put in.
Fixes should be:
Fixes: 5b1b561ba73c ("mm: simplify compat_sys_move_pages")
Closes could be a link to the bug report.
>
>> But I'm wondering how long this has already been like that. :)
>
> The now-offending "pages += chunk_nr" line is from 2010, but I think
> the bug was actually introduced by 5b1b561ba73c8ab9c98e5dfd14dc7ee47efb6530
> (2021), which reshuffled the array-vs-32-bit handling.
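
Right. For anyone following along, here is a rough userspace-style sketch
(not kernel code) of the mismatch, assuming an x86-64 kernel and an i386
caller: the kernel-side "const void __user **" walks the array in 8-byte
steps, while the 32-bit caller's array only has 4-byte entries, so every
chunk after the first 16 entries is read from the wrong offsets.

#include <stdio.h>
#include <stdint.h>

typedef uint32_t compat_uptr_t;	/* 32-bit user pointer, as in the kernel */

int main(void)
{
	/* One 16-entry chunk of the "pages" array, seen from both sides: */
	printf("64-bit kernel stride: %zu bytes\n", 16 * sizeof(void *));		/* 128 */
	printf("32-bit caller stride: %zu bytes\n", 16 * sizeof(compat_uptr_t));	/* 64  */
	return 0;
}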
>
>> Something a bit more elegant might be:
>
> Thanks, I used your patch draft with some minor changes.
>
>> static int get_compat_pages_array(const void __user *chunk_pages[],
>> const void __user * __user *pages,
>> + unsigned long chunk_offs,
>
> I replaced chunk_offs with "chunk_offset" since "offs" looked too much
> like a plural (a list of offsets) to me.
>
>> if (in_compat_syscall()) {
>> if (get_compat_pages_array(chunk_pages, pages,
>> - chunk_nr))
>> + chunk_offs, chunk_nr))
>> break;
>> } else {
>> if (copy_from_user(chunk_pages, pages,
>
> The else branch here needs tweaking as well:
>
> } else {
> - if (copy_from_user(chunk_pages, pages,
> + if (copy_from_user(chunk_pages, pages + chunk_offset,
> chunk_nr * sizeof(*chunk_pages)))
>
>
>> @@ -2440,11 +2442,11 @@ static int do_pages_stat(struct mm_struct *mm, unsigned long nr_pages,
>> do_pages_stat_array(mm, chunk_nr, chunk_pages, chunk_status);
>> - if (copy_to_user(status, chunk_status, chunk_nr * sizeof(*status)))
>> + if (copy_to_user(status + chunk_offs, chunk_status,
>> + chunk_nr * sizeof(*status)))
>
> This seems to work, but honestly I am wondering: if copy_from_user
> needs a special 32-bit case, doesn't copy_to_user need special casing
> as well?
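
As far as I can tell, no: the pages array holds user pointers, whose size
differs between a 32-bit caller and a 64-bit kernel, whereas status is an
array of int, which is 4 bytes in both ABIs, so a plain copy_to_user() of
ints stays correct. Roughly, from the move_pages(2) prototype (quoting it
from memory, so double-check):

long move_pages(int pid, unsigned long count,
		void **pages,		/* pointer-sized entries: 4 bytes for a
					   32-bit caller, 8 for a 64-bit one */
		const int *nodes,
		int *status,		/* int-sized entries in both ABIs */
		int flags);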
>
>> (untested, of course)
>
> The attached patch makes PG18's new numa test pass on amd64 kernels,
> in both amd64 and i386 userlands.
>
> (In the meantime, PG git head got a workaround that limits the chunk
> size to the same 16 as used in do_pages_stat; I tested with the
> version before that.)
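
For reference, such a userland workaround could look roughly like the
sketch below (my guess, not the actual PostgreSQL code); with nodes ==
NULL, move_pages() only reports where each page currently resides:

#include <numaif.h>
#include <stddef.h>

/* Query NUMA placement in batches of 16 so the kernel-side chunking
 * bug is never hit.  Just a sketch, not the PostgreSQL code. */
static int query_page_nodes(void **pages, int *status, unsigned long count)
{
	unsigned long off = 0;

	while (off < count) {
		unsigned long chunk = count - off < 16 ? count - off : 16;

		/* nodes == NULL: report current nodes, do not migrate */
		if (move_pages(0, chunk, pages + off, NULL, status + off, 0) < 0)
			return -1;
		off += chunk;
	}
	return 0;
}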
>
> Christoph
>
>
> From fdbcbc88825bc2e857dfeeebc91d62864e0774dd Mon Sep 17 00:00:00 2001
> From: Christoph Berg <myon@...ian.org>
> Date: Tue, 24 Jun 2025 16:44:27 +0200
> Subject: [PATCH v2] mm/migrate: Fix do_pages_stat in 32-bit mode
>
> For arrays with more than 16 entries, the old code would incorrectly
> advance the pages pointer by 16 words instead of 16 compat_uptr_t.
> Fix by doing the pointer arithmetic inside get_compat_pages_array where
> pages32 is already a correctly-typed pointer.
>
> Discovered while working on PostgreSQL 18's new NUMA introspection code.
>
> Signed-off-by: Christoph Berg <myon@...ian.org>
> Reported-by: Bertrand Drouvot <bertranddrouvot.pg@...il.com>
> Reported-by: Tomas Vondra <tomas@...dra.me>
> Suggested-by: David Hildenbrand <david@...hat.com>
> Fixes: 5b1b561ba73c8ab9c98e5dfd14dc7ee47efb6530
> ---
> mm/migrate.c | 14 ++++++++------
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 8cf0f9c9599d..2c88f3b33833 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2399,6 +2399,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
>
> static int get_compat_pages_array(const void __user *chunk_pages[],
> const void __user * __user *pages,
> + unsigned long chunk_offset,
> unsigned long chunk_nr)
> {
> compat_uptr_t __user *pages32 = (compat_uptr_t __user *)pages;
> @@ -2406,7 +2407,7 @@ static int get_compat_pages_array(const void __user *chunk_pages[],
> int i;
>
> for (i = 0; i < chunk_nr; i++) {
> - if (get_user(p, pages32 + i))
> + if (get_user(p, pages32 + chunk_offset + i))
> return -EFAULT;
> chunk_pages[i] = compat_ptr(p);
> }
> @@ -2425,27 +2426,28 @@ static int do_pages_stat(struct mm_struct *mm, unsigned long nr_pages,
> #define DO_PAGES_STAT_CHUNK_NR 16UL
> const void __user *chunk_pages[DO_PAGES_STAT_CHUNK_NR];
> int chunk_status[DO_PAGES_STAT_CHUNK_NR];
> + unsigned long chunk_offset = 0;
>
> while (nr_pages) {
> unsigned long chunk_nr = min(nr_pages, DO_PAGES_STAT_CHUNK_NR);
>
> if (in_compat_syscall()) {
> if (get_compat_pages_array(chunk_pages, pages,
> - chunk_nr))
> + chunk_offset, chunk_nr))
> break;
> } else {
> - if (copy_from_user(chunk_pages, pages,
> + if (copy_from_user(chunk_pages, pages + chunk_offset,
> chunk_nr * sizeof(*chunk_pages)))
> break;
> }
>
> do_pages_stat_array(mm, chunk_nr, chunk_pages, chunk_status);
>
> - if (copy_to_user(status, chunk_status, chunk_nr * sizeof(*status)))
> + if (copy_to_user(status + chunk_offset, chunk_status,
> + chunk_nr * sizeof(*status)))
> break;
>
> - pages += chunk_nr;
> - status += chunk_nr;
> + chunk_offset += chunk_nr;
> nr_pages -= chunk_nr;
> }
> return nr_pages ? -EFAULT : 0;
> --
> 2.47.2
Best Regards,
Yan, Zi