Message-ID: <YMsRgfKbcaW66/99@hirez.programming.kicks-ass.net>
Date: Thu, 17 Jun 2021 11:10:25 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: Nicholas Piggin <npiggin@...il.com>,
Rik van Riel <riel@...riel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
the arch/x86 maintainers <x86@...nel.org>,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: [RFC][PATCH] sched: Use lightweight hazard pointers to grab lazy mms

On Thu, Jun 17, 2021 at 11:08:03AM +0200, Peter Zijlstra wrote:
> On Wed, Jun 16, 2021 at 10:32:15PM -0700, Andy Lutomirski wrote:
> --- a/arch/x86/include/asm/mmu.h
> +++ b/arch/x86/include/asm/mmu.h
> @@ -66,4 +66,9 @@ typedef struct {
> void leave_mm(int cpu);
> #define leave_mm leave_mm
>
> +/* On x86, mm_cpumask(mm) contains all CPUs that might be lazily using mm */
> +#define for_each_possible_lazymm_cpu(cpu, mm) \
> + for_each_cpu((cpu), mm_cpumask((mm)))
> +
> +
> #endif /* _ASM_X86_MMU_H */
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 8ac693d542f6..e102ec53c2f6 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -19,6 +19,7 @@
>
> +
> +#ifndef for_each_possible_lazymm_cpu
> +#define for_each_possible_lazymm_cpu(cpu, mm) for_each_online_cpu((cpu))
> +#endif
> +

Why can't the x86 implementation be the default? IIRC the problem with
mm_cpumask() is that (some) architectures don't clear bits, but IIRC
they all should be setting bits, or were there archs that didn't even do
that?
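
For reference, a minimal user-space sketch (not kernel code, and not
part of the patch; all names below are illustrative) of the invariant
being assumed: as long as every architecture *sets* a CPU's bit in
mm_cpumask() before that CPU can hold a lazy reference to the mm, the
mask is a superset of the lazy users, so scanning it is as correct as
scanning all online CPUs -- a stale, never-cleared bit only costs one
extra check:

/* model of a per-CPU lazy mm reference and a set-only cpumask */
#include <stdio.h>

#define NR_CPUS 8

struct mm {
	unsigned long cpumask;		/* bit set = CPU may use mm lazily */
};

static struct mm *cpu_lazy_mm[NR_CPUS];	/* per-CPU lazy reference */

/* Entering lazy use always sets the bit before taking the reference. */
static void enter_lazy(int cpu, struct mm *mm)
{
	mm->cpumask |= 1UL << cpu;
	cpu_lazy_mm[cpu] = mm;
}

/* Some "architectures" drop the reference without clearing the bit. */
static void leave_lazy(int cpu)
{
	cpu_lazy_mm[cpu] = NULL;
}

/* Scan only the CPUs whose bit is set; stale set bits are harmless. */
static void shoot_lazies(struct mm *mm)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(mm->cpumask & (1UL << cpu)))
			continue;
		if (cpu_lazy_mm[cpu] == mm) {
			printf("CPU%d still lazy, shooting down\n", cpu);
			cpu_lazy_mm[cpu] = NULL;
		}
	}
}

int main(void)
{
	struct mm mm = { 0 };

	enter_lazy(1, &mm);
	enter_lazy(5, &mm);
	leave_lazy(5);		/* bit 5 stays set: harmless superset */
	shoot_lazies(&mm);	/* finds CPU1, skips the stale CPU5 bit */
	return 0;
}

The open question above is whether every architecture really upholds
the "always set" half of that invariant; if one does not, mm_cpumask()
can under-report lazy users and the for_each_online_cpu() fallback is
the only safe default.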