Message-ID: <c8068d0c8042e8f4e5de0e8af9cb3457ee795211.camel@surriel.com>
Date: Mon, 10 Feb 2025 22:07:14 -0500
From: Rik van Riel <riel@...riel.com>
To: Brendan Jackman <jackmanb@...gle.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, bp@...en8.de,
	peterz@...radead.org, dave.hansen@...ux.intel.com,
	zhengqi.arch@...edance.com, nadav.amit@...il.com, thomas.lendacky@....com,
	kernel-team@...a.com, linux-mm@...ck.org, akpm@...ux-foundation.org,
	jannh@...gle.com, mhklinux@...look.com, andrew.cooper3@...rix.com,
	Manali Shukla <Manali.Shukla@....com>
Subject: Re: [PATCH v9 09/12] x86/mm: enable broadcast TLB invalidation for
 multi-threaded processes

On Mon, 2025-02-10 at 15:15 +0100, Brendan Jackman wrote:
> On Thu, 6 Feb 2025 at 05:47, Rik van Riel <riel@...riel.com> wrote:
> > 
> > +       if (asid >= MAX_ASID_AVAILABLE) {
> > +               /* This should never happen. */
> > +               VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n", global_asid_available);
> 
> If you'll forgive the nitpicking, please put the last arg on a new
> line or otherwise break this up; the rest of this file keeps below
> 100 chars (this is 113).
> 

Nitpicks are great! Chances are I'll have to look at
this code again several times over the coming years,
so getting it in the best possible shape is in my
interest as much as anybody else's ;)

> > 
> > +static bool needs_global_asid_reload(struct mm_struct *next, u16 prev_asid)
> > +{
> > +       u16 global_asid = mm_global_asid(next);
> > +
> > +       if (global_asid && prev_asid != global_asid)
> > +               return true;
> > +
> > +       if (!global_asid && is_global_asid(prev_asid))
> > +               return true;
> 
> I think this needs clarification around when switches from
> global->nonglobal happen. Maybe commentary or maybe there's a way to
> just express the code that makes it obvious. Here's what I currently
> understand, please correct me if I'm wrong:
> 
> - Once a process gets a global ASID it keeps it forever. So within a
>   process we never switch global->nonglobal.
> 
> - In flush_tlb_func() we are just calling this to check if the process
>   has just been given a global ASID - there's no way loaded_mm_asid
>   can be global yet !mm_global_asid(loaded_mm).
> 
> - When we call this from switch_mm_irqs_off() we are in the prev==next
>   case. Is there something about lazy TLB that can cause the case
>   above to happen here?
> 
In the current implementation, we never transition
from global->local ASID.

In a previous implementation, the code did make those
transitions, and they appeared to survive the testing
thrown at them.

If we implement more aggressive ASID reuse (which we
may need to), we may need to support that transition
again.

In short, while we do not need to support that
transition right now, I don't really want to remove
the two lines of code that make it work :)

I'll add comments.

> > +static bool meets_global_asid_threshold(struct mm_struct *mm)
> > +{
> > +       if (!global_asid_available)
> 
> I think we need READ_ONCE here.
> 
> Also - this doesn't really make sense in this function as it's
> currently named.
> 
> I think we could just inline this whole function into
> consider_global_asid(), it would still be nice and readable IMO.
> 
Done and done.

> > 
> > @@ -1058,9 +1375,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
> >          * a local TLB flush is needed. Optimize this use-case by calling
> >          * flush_tlb_func_local() directly in this case.
> >          */
> > -       if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
> > +       if (mm_global_asid(mm)) {
> > +               broadcast_tlb_flush(info);
> > +       } else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
> >                 info->trim_cpumask = should_trim_cpumask(mm);
> >                 flush_tlb_multi(mm_cpumask(mm), info);
> > +               consider_global_asid(mm);
> 
> Why do we do this here instead of when the CPU enters the mm? Is the
> idea that in combination with the jiffies thing in
> consider_global_asid() we get a probability of getting a global ASID
> (within some time period) that scales with the amount of TLB flushing
> the process does? So then we avoid using up ASID space on processes
> that are multithreaded but just sit around with stable VMAs etc?
> 
You guessed right.

On current x86 hardware, a global ASID is a scarce
resource: about 4k ASIDs are available (2k in a kernel
compiled with support for the KPTI mitigation), while
the largest available x86 systems have at least 8k CPUs.

We can either implement the much more aggressive ASID
reuse that ARM64 and RISC-V implement, though it is not
clear how to scale that to thousands of CPUs, or reserve
global ASIDs for the processes that are most likely to
benefit from them, continuing to use IPI-based flushing
for the processes that need it less.

I've added a comment to document that.

-- 
All Rights Reversed.
