Message-ID: <4ced9211-2bd7-4257-a9fc-32c775ceffef@redhat.com>
Date: Tue, 17 Sep 2024 12:20:22 +0200
From: David Hildenbrand <david@...hat.com>
To: Anshuman Khandual <anshuman.khandual@....com>, linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Ryan Roberts <ryan.roberts@....com>, "Mike Rapoport (IBM)"
<rppt@...nel.org>, Arnd Bergmann <arnd@...db.de>, x86@...nel.org,
linux-m68k@...ts.linux-m68k.org, linux-fsdevel@...r.kernel.org,
kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org, Geert Uytterhoeven <geert@...ux-m68k.org>,
Guo Ren <guoren@...nel.org>, Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH V2 1/7] m68k/mm: Change pmd_val()
On 17.09.24 09:31, Anshuman Khandual wrote:
> This changes the platform's pmd_val() to access the pmd_t element directly,
> like other architectures do, rather than via the current pointer-address-based
> dereferencing, which prevents the transition to pmdp_get().
>
> Cc: Geert Uytterhoeven <geert@...ux-m68k.org>
> Cc: Guo Ren <guoren@...nel.org>
> Cc: Arnd Bergmann <arnd@...db.de>
> Cc: linux-m68k@...ts.linux-m68k.org
> Cc: linux-kernel@...r.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
> ---
> arch/m68k/include/asm/page.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/m68k/include/asm/page.h b/arch/m68k/include/asm/page.h
> index 8cfb84b49975..be3f2c2a656c 100644
> --- a/arch/m68k/include/asm/page.h
> +++ b/arch/m68k/include/asm/page.h
> @@ -19,7 +19,7 @@
> */
> #if !defined(CONFIG_MMU) || CONFIG_PGTABLE_LEVELS == 3
> typedef struct { unsigned long pmd; } pmd_t;
> -#define pmd_val(x) ((&x)->pmd)
> +#define pmd_val(x) ((x).pmd)
> #define __pmd(x) ((pmd_t) { (x) } )
> #endif
>
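If I read the motivation right, the problem with the old form is that it
takes the address of its argument, so it only works on lvalues and cannot
be applied to the pmd_t that pmdp_get() returns by value. A toy
illustration (plain C with made-up helper names, not the kernel's actual
definitions):

/*
 * Toy illustration only -- simplified stand-ins, not the kernel's
 * real pmd_t / pmdp_get().
 */
#include <stdio.h>

typedef struct { unsigned long pmd; } pmd_t;

/* Old m68k form: takes the address of its argument, so the argument
 * must be an lvalue. */
#define pmd_val_old(x)	((&x)->pmd)

/* New form: plain member access, works on any pmd_t expression. */
#define pmd_val_new(x)	((x).pmd)

/* pmdp_get()-style accessor: returns the entry by value (the real
 * helper reads it via READ_ONCE()). */
static pmd_t pmdp_get_demo(pmd_t *pmdp)
{
	return *pmdp;
}

int main(void)
{
	pmd_t entry = { .pmd = 0x1234 };

	/* Both forms are fine on an lvalue. */
	printf("%lx\n", pmd_val_old(entry));
	printf("%lx\n", pmd_val_new(entry));

	/*
	 * Only the new form compiles here: the function return value is
	 * an rvalue, and '&' cannot be applied to it.
	 *
	 * pmd_val_old(pmdp_get_demo(&entry)) would fail to build.
	 */
	printf("%lx\n", pmd_val_new(pmdp_get_demo(&entry)));
	return 0;
}

With the direct member access, something like pmd_val(pmdp_get(pmdp))
becomes possible.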
Trying to understand what's happening here, I stumbled over
commit ef22d8abd876e805b604e8f655127de2beee2869
Author: Peter Zijlstra <peterz@...radead.org>
Date:   Fri Jan 31 13:45:36 2020 +0100

    m68k: mm: Restructure Motorola MMU page-table layout

    The Motorola 68xxx MMUs, 040 (and later) have a fixed 7,7,{5,6}
    page-table setup, where the last depends on the page-size selected (8k
    vs 4k resp.), and head.S selects 4K pages. For 030 (and earlier) we
    explicitly program 7,7,6 and 4K pages in %tc.

    However, the current code implements this mightily weird. What it does
    is group 16 of those (6 bit) pte tables into one 4k page to not waste
    space. The down-side is that that forces pmd_t to be a 16-tuple
    pointing to consecutive pte tables.

    This breaks the generic code which assumes READ_ONCE(*pmd) will be
    word sized.
Where we did:
#if !defined(CONFIG_MMU) || CONFIG_PGTABLE_LEVELS == 3
-typedef struct { unsigned long pmd[16]; } pmd_t;
-#define pmd_val(x) ((&x)->pmd[0])
-#define __pmd(x) ((pmd_t) { { (x) }, })
+typedef struct { unsigned long pmd; } pmd_t;
+#define pmd_val(x) ((&x)->pmd)
+#define __pmd(x) ((pmd_t) { (x) } )
#endif
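For context, the assumption that the old 16-tuple broke boils down to
pmd_t being a single word, so that READ_ONCE(*pmdp) is an ordinary
scalar load; a simplified sketch (plain C11, not the actual generic
code):

#include <assert.h>

/* Layout before that commit vs. the current one-word layout. */
typedef struct { unsigned long pmd[16]; } old_pmd_t;
typedef struct { unsigned long pmd; } new_pmd_t;

int main(void)
{
	/* Generic code effectively relies on this so that
	 * READ_ONCE(*pmdp) is a single word-sized load: */
	static_assert(sizeof(new_pmd_t) == sizeof(unsigned long),
		      "pmd_t must be word sized");

	/* The old 16-tuple obviously cannot satisfy it:
	 * sizeof(old_pmd_t) == 16 * sizeof(unsigned long). */
	return 0;
}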
So I assume this should be fine.
Acked-by: David Hildenbrand <david@...hat.com>
--
Cheers,
David / dhildenb