Message-ID: <b333bc62c83126d862d73fe85488c90e07b0f0ef.camel@surriel.com>
Date: Mon, 20 Jan 2025 12:11:37 -0500
From: Rik van Riel <riel@...riel.com>
To: Nadav Amit <nadav.amit@...il.com>, x86@...nel.org
Cc: linux-kernel@...r.kernel.org, bp@...en8.de, peterz@...radead.org,
dave.hansen@...ux.intel.com, zhengqi.arch@...edance.com,
thomas.lendacky@....com, kernel-team@...a.com, linux-mm@...ck.org,
akpm@...ux-foundation.org, jannh@...gle.com, mhklinux@...look.com,
andrew.cooper3@...rix.com
Subject: Re: [PATCH v5 10/12] x86,tlb: do targeted broadcast flushing from
tlbbatch code

On Mon, 2025-01-20 at 19:09 +0200, Nadav Amit wrote:
>
> On 20/01/2025 18:11, Rik van Riel wrote:
> >
> > What guarantees that the page reclaim path won't free
> > the pages until after TLBSYNC has completed on the CPUs
> > that kicked off asynchronous flushes with INVLPGB?
>
> [ you make me lose my confidence, although I see nothing wrong ]
>
> Freeing the pages must be done after the TLBSYNC. I did not imply it
> needs to be changed.
>
> The page freeing (and reclaim) path is only initiated after
> arch_tlbbatch_flush() has completed. If no migration happened, it
> should be fine, since we did not remove any tlbsync.
>
> If migration was initiated, and some invlpgb flushes were already
> issued, then for correctness we need to issue a tlbsync before the
> task might be scheduled on another core. That's exactly why adding
> tlbsync to switch_mm_irqs_off() is needed in such a case.
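
For the record, the migration-side ordering I understand you to be
describing is roughly the following. This is only a sketch, not the
code from the series; broadcast_flush_pending() and tlbsync() are
stand-in names, not the helpers the patches actually use.

/*
 * Sketch only: called from the context switch path.  If this CPU has
 * issued INVLPGB flushes it has not yet waited for, TLBSYNC must
 * complete before the outgoing task can run on another CPU, so none
 * of our broadcast invalidations are still in flight when the task
 * resumes elsewhere.
 */
static void sketch_context_switch_tlbsync(void)
{
	if (broadcast_flush_pending())	/* hypothetical per-CPU flag */
		tlbsync();		/* wait for our INVLPGBs to finish */
}
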
This is the page reclaim code, though.

The process that has those other pages mapped might be running on
other CPUs simultaneously with the page reclaim code.

Even if we were invalidating one of our own pages this way, there
could be other threads in the same process running while we are in
the page reclaim code.
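
To make the concern concrete, the ordering the reclaim side has to
preserve looks roughly like this. Again, this is only a sketch with
made-up helper names (issue_invlpgb_flushes(), tlbsync(),
free_the_pages()), not the actual code in the series:

/*
 * Sketch only: other threads of the victim process can keep running
 * on other CPUs the whole time, so their TLBs may still hold entries
 * for the pages being reclaimed.  The pages must not be freed until
 * TLBSYNC has confirmed that every INVLPGB we issued has completed.
 */
static void sketch_reclaim_flush_and_free(struct list_head *pages)
{
	issue_invlpgb_flushes(pages);	/* broadcast invalidations, async */
	tlbsync();			/* wait until they have completed */

	/* only now is it safe to hand the pages back to the allocator */
	free_the_pages(pages);
}

The question is what, in the series, guarantees that
arch_tlbbatch_flush() has issued that tlbsync before the reclaim path
goes on to free the pages.
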
--
All Rights Reversed.