Message-ID: <052e9e57-8f72-d005-f0f7-4060bc665ba4@intel.com>
Date: Fri, 19 Jul 2019 11:38:07 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Nadav Amit <namit@...are.com>, Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH v3 5/9] x86/mm/tlb: Privatize cpu_tlbstate

On 7/18/19 5:58 PM, Nadav Amit wrote:
> +struct tlb_state_shared {
> + /*
> + * We can be in one of several states:
> + *
> + * - Actively using an mm. Our CPU's bit will be set in
> + * mm_cpumask(loaded_mm) and is_lazy == false;
> + *
> + * - Not using a real mm. loaded_mm == &init_mm. Our CPU's bit
> + * will not be set in mm_cpumask(&init_mm) and is_lazy == false.
> + *
> + * - Lazily using a real mm. loaded_mm != &init_mm, our bit
> + * is set in mm_cpumask(loaded_mm), but is_lazy == true.
> + * We're heuristically guessing that the CR3 load we
> + * skipped more than makes up for the overhead added by
> + * lazy mode.
> + */
> + bool is_lazy;
> +};
> +DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);

Could we get a comment about what "shared" means and why we need shared
state?

Should we change 'tlb_state' to 'tlb_state_private'?
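
FWIW, the part I'd want that comment to spell out is the remote access
pattern.  A rough sketch of what I have in mind (the helper name is
made up purely for illustration, and I'm assuming is_lazy is the only
field other CPUs ever read):

	/*
	 * Illustration only: a remote CPU deciding whether it can skip
	 * sending a TLB flush IPI reads the *shared* per-CPU state.
	 * Everything left in cpu_tlbstate is only ever touched by its
	 * owning CPU, which is why the structure is split.
	 */
	static inline bool tlb_flush_ipi_needed(int cpu)
	{
		return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
	}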