Message-ID: <20150327185745.GA9642@redhat.com>
Date: Fri, 27 Mar 2015 19:57:45 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Dave Hansen <dave@...1.net>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, tglx@...utronix.de,
dave.hansen@...ux.intel.com, bp@...en8.de, riel@...hat.com,
sbsiddha@...il.com, luto@...capital.net, mingo@...hat.com,
hpa@...or.com, fenghua.yu@...el.com
Subject: Re: [PATCH 01/17] x86, fpu: wrap get_xsave_addr() to make it safer
Just in case: 1, 2, and 9 look good to me.
I didn't get the rest of this series, but I am sure it doesn't need
my review ;)
On 03/26, Dave Hansen wrote:
>
> From: Dave Hansen <dave.hansen@...ux.intel.com>
>
> The MPX code appears to be saving off the FPU in an unsafe
> way. It does not disable preemption or ensure that the
> FPU state has been allocated.
>
> This patch introduces a new helper which will do both of
> those things internally.
>
> Note that this requires a patch from Oleg in order to work
> properly. It is currently in tip/x86/fpu.
>
> > commit f893959b0898bd876673adbeb6798bdf25c034d7
> > Author: Oleg Nesterov <oleg@...hat.com>
> > Date: Fri Mar 13 18:30:30 2015 +0100
> >
> > x86/fpu: Don't abuse drop_init_fpu() in flush_thread()
>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: Oleg Nesterov <oleg@...hat.com>
> Cc: bp@...en8.de
> Cc: Rik van Riel <riel@...hat.com>
> Cc: Suresh Siddha <sbsiddha@...il.com>
> Cc: Andy Lutomirski <luto@...capital.net>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: Fenghua Yu <fenghua.yu@...el.com>
> Cc: the arch/x86 maintainers <x86@...nel.org>
> ---
>
> b/arch/x86/include/asm/xsave.h | 1 +
> b/arch/x86/kernel/xsave.c | 32 ++++++++++++++++++++++++++++++++
> 2 files changed, 33 insertions(+)
>
> diff -puN arch/x86/include/asm/xsave.h~tsk_get_xsave_addr arch/x86/include/asm/xsave.h
> --- a/arch/x86/include/asm/xsave.h~tsk_get_xsave_addr 2015-03-26 11:27:04.738204327 -0700
> +++ b/arch/x86/include/asm/xsave.h 2015-03-26 11:27:04.743204552 -0700
> @@ -252,6 +252,7 @@ static inline int xrestore_user(struct x
> }
>
> void *get_xsave_addr(struct xsave_struct *xsave, int xstate);
> +void *tsk_get_xsave_field(struct task_struct *tsk, int xsave_field);
> void setup_xstate_comp(void);
>
> #endif
> diff -puN arch/x86/kernel/xsave.c~tsk_get_xsave_addr arch/x86/kernel/xsave.c
> --- a/arch/x86/kernel/xsave.c~tsk_get_xsave_addr 2015-03-26 11:27:04.740204417 -0700
> +++ b/arch/x86/kernel/xsave.c 2015-03-26 11:27:04.744204597 -0700
> @@ -740,3 +740,35 @@ void *get_xsave_addr(struct xsave_struct
> return (void *)xsave + xstate_comp_offsets[feature];
> }
> EXPORT_SYMBOL_GPL(get_xsave_addr);
> +
> +/*
> + * This wraps up the common operations that need to occur when retrieving
> + * data from an xsave struct. It first ensures that the task was actually
> + * using the FPU and retrieves the data into a buffer. It then calculates
> + * the offset of the requested field in the buffer.
> + *
> + * This function is safe to call whether the FPU is in use or not.
> + *
> + * Inputs:
> + * tsk: the task from which we are fetching xsave state
> + * xsave_field: state which is defined in xsave.h (e.g. XSTATE_FP,
> + * XSTATE_SSE, etc.)
> + * Output:
> + * address of the state in the xsave area.
> + */
> +void *tsk_get_xsave_field(struct task_struct *tsk, int xsave_field)
> +{
> + union thread_xstate *xstate;
> +
> + if (!used_math())
> + return NULL;
> + /*
> + * unlazy_fpu() is poorly named and will actually
> + * save the xstate off into the memory buffer.
> + */
> + unlazy_fpu(tsk);
> + xstate = tsk->thread.fpu.state;
> +
> + return get_xsave_addr(&xstate->xsave, xsave_field);
> +}
> +EXPORT_SYMBOL_GPL(tsk_get_xsave_field);
> _