Message-ID: <20250205122007.GH14028@noisy.programming.kicks-ass.net>
Date: Wed, 5 Feb 2025 13:20:07 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Rik van Riel <riel@...riel.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, bp@...en8.de,
dave.hansen@...ux.intel.com, zhengqi.arch@...edance.com,
nadav.amit@...il.com, thomas.lendacky@....com, kernel-team@...a.com,
linux-mm@...ck.org, akpm@...ux-foundation.org, jannh@...gle.com,
mhklinux@...look.com, andrew.cooper3@...rix.com,
Dave Hansen <dave.hansen@...el.com>
Subject: Re: [PATCH v8 03/12] x86/mm: consolidate full flush threshold
decision
On Tue, Feb 04, 2025 at 08:39:52PM -0500, Rik van Riel wrote:
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6cf881a942bb..02e1f5c5bca3 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -1000,8 +1000,13 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
> BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
> #endif
>
> - info->start = start;
> - info->end = end;
> + /*
> + * Round the start and end addresses to the page size specified
> + * by the stride shift. This ensures partial pages at either end
> + * of the range get fully invalidated.
> + */
> + info->start = round_down(start, 1 << stride_shift);
> + info->end = round_up(end, 1 << stride_shift);
> info->mm = mm;
> info->stride_shift = stride_shift;
> info->freed_tables = freed_tables;
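
(For concreteness, the rounding above behaves like this minimal
userspace sketch; round_down()/round_up() are simplified stand-ins for
the kernel helpers, valid for power-of-two alignments, and the
addresses are made-up examples, not taken from the patch:)

  #include <stdio.h>

  /* Simplified stand-ins for the kernel's round_down()/round_up();
   * both assume the alignment is a power of two. */
  #define round_down(x, y) ((x) & ~((unsigned long)(y) - 1))
  #define round_up(x, y)   round_down((x) + (y) - 1, (y))

  int main(void)
  {
  	unsigned int stride_shift = 12;	/* 4 KiB stride */
  	unsigned long start = 0x1234;	/* deliberately unaligned */
  	unsigned long end   = 0x5678;

  	/* Prints 0x1234 -> 0x1000 and 0x5678 -> 0x6000. */
  	printf("start %#lx -> %#lx\n", start,
  	       round_down(start, 1UL << stride_shift));
  	printf("end   %#lx -> %#lx\n", end,
  	       round_up(end, 1UL << stride_shift));
  	return 0;
  }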
Rather than doing this, should we not fix whatever dodgy users are
feeding us non-page-aligned addresses for invalidation?
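
To make that alternative concrete, a hedged sketch of catching
unaligned callers instead of silently rounding; check_flush_range() is
a hypothetical helper, and IS_ALIGNED() is a userspace stand-in for the
kernel macro of the same name:

  #include <stdio.h>

  /* Userspace stand-in for the kernel's IS_ALIGNED() macro. */
  #define IS_ALIGNED(x, a) (((x) & ((unsigned long)(a) - 1)) == 0)

  /* Hypothetical: complain about dodgy callers so they can be fixed,
   * rather than papering over unaligned ranges by rounding. */
  static void check_flush_range(unsigned long start, unsigned long end,
  			      unsigned int stride_shift)
  {
  	unsigned long stride = 1UL << stride_shift;

  	if (!IS_ALIGNED(start, stride) || !IS_ALIGNED(end, stride))
  		fprintf(stderr,
  			"unaligned flush range: %#lx-%#lx (stride %#lx)\n",
  			start, end, stride);
  }

  int main(void)
  {
  	check_flush_range(0x1234, 0x6000, 12);	/* made-up unaligned start */
  	return 0;
  }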