Message-ID: <aR2dFUuHXTP8P1W2@asagi>
Date: Wed, 19 Nov 2025 19:33:57 +0900
From: Yohei Kojima <yohei.kojima@...y.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Cc: linux-kernel@...r.kernel.org, Yohei Kojima <yohei.kojima@...y.com>
Subject: Re: [PATCH] x86/mm: Fix {split,collapse}_page_count to use
PTRS_PER_PMD if necessary
On Wed, Nov 19, 2025 at 02:46:19PM +0900, Yohei Kojima wrote:
> Before this commit, split_page_count() and collapse_page_count() updated
> direct_pages_count using the PTRS_PER_PTE constant. However, these
> functions should use PTRS_PER_PMD when a 1G page is split into 2M pages
> and vice versa, because 2M direct pages are managed at the PMD level.
>
> This commit fixes {split,collapse}_page_count() to use PTRS_PER_PMD in
> such cases. The basic behavior of these functions is unchanged, because
> x86's 1G page split and collapse are currently only supported in 64-bit
> environments, where PTRS_PER_PTE and PTRS_PER_PMD are both 512.
I'm sorry, I forgot to add a Signed-off-by line, so I have resent the
patch with the Signed-off-by line included. The new patch is here:
https://lore.kernel.org/all/1ed1400d08a3de2d14f944a36efc9b84f9ca6f42.1763546758.git.yohei.kojima@sony.com/
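
For reference, the accounting the patch fixes can be sketched as a
stand-alone user-space C program (a toy model, not the kernel code; the
constants and level names mirror arch/x86, everything else here is
illustrative):

#include <stdio.h>

#define PTRS_PER_PTE 512	/* 4K PTEs per 2M PMD entry (x86-64) */
#define PTRS_PER_PMD 512	/* 2M PMDs per 1G PUD entry (x86-64) */

enum { PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G, PG_LEVEL_NUM };

static unsigned long direct_pages_count[PG_LEVEL_NUM];

static void split_page_count(int level)
{
	direct_pages_count[level]--;
	if (level == PG_LEVEL_2M)
		direct_pages_count[PG_LEVEL_4K] += PTRS_PER_PTE;
	else if (level == PG_LEVEL_1G)
		direct_pages_count[PG_LEVEL_2M] += PTRS_PER_PMD;
}

int main(void)
{
	direct_pages_count[PG_LEVEL_1G] = 1;

	split_page_count(PG_LEVEL_1G);	/* one 1G page -> 512 2M pages */
	split_page_count(PG_LEVEL_2M);	/* one 2M page -> 512 4K pages */

	/* Prints: 1G=0 2M=511 4K=512 */
	printf("1G=%lu 2M=%lu 4K=%lu\n",
	       direct_pages_count[PG_LEVEL_1G],
	       direct_pages_count[PG_LEVEL_2M],
	       direct_pages_count[PG_LEVEL_4K]);
	return 0;
}

Since PTRS_PER_PTE and PTRS_PER_PMD are both 512 on x86-64, the totals
come out the same as before the patch; the fix only makes each update
use the constant that actually matches the page-table level being split.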
Thank you,
Yohei Kojima
> ---
> arch/x86/mm/pat/set_memory.c | 20 ++++++++++++--------
> 1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 970981893c9b..aa6fa4894edb 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -97,25 +97,29 @@ static void split_page_count(int level)
> return;
>
> direct_pages_count[level]--;
> - if (system_state == SYSTEM_RUNNING) {
> - if (level == PG_LEVEL_2M)
> + if (level == PG_LEVEL_2M) {
> + if (system_state == SYSTEM_RUNNING)
> count_vm_event(DIRECT_MAP_LEVEL2_SPLIT);
> - else if (level == PG_LEVEL_1G)
> + direct_pages_count[PG_LEVEL_4K] += PTRS_PER_PTE;
> + } else if (level == PG_LEVEL_1G) {
> + if (system_state == SYSTEM_RUNNING)
> count_vm_event(DIRECT_MAP_LEVEL3_SPLIT);
> + direct_pages_count[PG_LEVEL_2M] += PTRS_PER_PMD;
> }
> - direct_pages_count[level - 1] += PTRS_PER_PTE;
> }
>
> static void collapse_page_count(int level)
> {
> direct_pages_count[level]++;
> - if (system_state == SYSTEM_RUNNING) {
> - if (level == PG_LEVEL_2M)
> + if (level == PG_LEVEL_2M) {
> + if (system_state == SYSTEM_RUNNING)
> count_vm_event(DIRECT_MAP_LEVEL2_COLLAPSE);
> - else if (level == PG_LEVEL_1G)
> + direct_pages_count[PG_LEVEL_4K] -= PTRS_PER_PTE;
> + } else if (level == PG_LEVEL_1G) {
> + if (system_state == SYSTEM_RUNNING)
> count_vm_event(DIRECT_MAP_LEVEL3_COLLAPSE);
> + direct_pages_count[PG_LEVEL_2M] -= PTRS_PER_PMD;
> }
> - direct_pages_count[level - 1] -= PTRS_PER_PTE;
> }
>
> void arch_report_meminfo(struct seq_file *m)
> --
> 2.43.0
>