Message-ID: <alpine.DEB.2.20.1702122025550.3734@nanos>
Date: Sun, 12 Feb 2017 20:37:57 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Dave Hansen <dave.hansen@...ux.intel.com>
cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org,
kirill.shutemov@...ux.intel.com
Subject: Re: [RFC][PATCH 4/7] x86, mpx: context-switch new MPX address size MSR
On Wed, 1 Feb 2017, Dave Hansen wrote:
> +/*
> + * The MPX tables change sizes based on the size of the virtual
> + * (aka. linear) address space. There is an MSR to tell the CPU
> + * whether we want the legacy-style ones or the larger ones when
> + * we are running with an eXtended virtual address space.
> + */
> +static inline void switch_mpx_bd(struct mm_struct *prev, struct mm_struct *next)
> +{
> +	/*
> +	 * Note: there is one and only one bit in use in the MSR
> +	 * at this time, so we do not have to be concerned with
> +	 * preserving any of the other bits. Just write 0 or 1.
> +	 */
> +	u32 IA32_MPX_LAX_ENABLE_MASK = 0x00000001;
> +
> +	/*
> +	 * Avoid the MSR on CPUs without MPX, obviously:
> +	 */
> +	if (!cpu_feature_enabled(X86_FEATURE_MPX))
> +		return;
> +	/*
> +	 * FIXME: do we want a check here for the 5-level paging
> +	 * CR4 bit or CPUID bit, or is the mawa check below OK?
> +	 * It's not obvious what would be the fastest or if it
> +	 * matters.
> +	 */
Well, you could use a static key which is enabled when both 5-level paging
and MPX are enabled.
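Something along these lines, as a rough and untested illustration
(mpx_lax_key and mpx_lax_init() are made-up names, and X86_FEATURE_LA57
stands in for whatever feature test the 5-level series ends up providing;
needs <linux/jump_label.h>):

	DEFINE_STATIC_KEY_FALSE(mpx_lax_key);

	static int __init mpx_lax_init(void)
	{
		/* Flip the key once at boot when both features are present */
		if (boot_cpu_has(X86_FEATURE_MPX) && boot_cpu_has(X86_FEATURE_LA57))
			static_branch_enable(&mpx_lax_key);
		return 0;
	}
	arch_initcall(mpx_lax_init);

	static inline void switch_mpx_bd(struct mm_struct *prev,
					 struct mm_struct *next)
	{
		/* Patched to a single NOP unless MPX and 5-level paging are there */
		if (!static_branch_unlikely(&mpx_lax_key))
			return;
		...
	}

That keeps the context switch path at a patched-out branch for every
machine which does not have both features.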
> +	/*
> +	 * Avoid the relatively costly MSR write if we are not
> +	 * changing MAWA state. All processes not using MPX will
> +	 * have mpx_bd_size_shift()==0, so we do not need a
> +	 * separate check for whether MPX management is enabled.
> +	 */
> +	if (likely(mpx_bd_size_shift(prev) == mpx_bd_size_shift(next)))
> +		return;
So this switches back unconditionally if the previous task was using the
large tables, even if the next task is not using MPX at all. It's probably a
non-issue.
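For reference, the combinations with the check as posted look like this
(illustration only, with 0 meaning the task does not use MPX and N some
non-zero shift):

	prev shift | next shift | MSR write reached?
	-----------+------------+-------------------
	     0     |     0      | no  (early return)
	     N     |     N      | no  (early return)
	     0     |     N      | yes
	     N     |     0      | yes

The last row is the case noted above: the write happens even though the
next task does not care about the MAWA setting.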
Thanks,
tglx