Date:   Tue, 5 Feb 2019 19:03:37 +0100
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     Borislav Petkov <bp@...en8.de>
Cc:     linux-kernel@...r.kernel.org, x86@...nel.org,
        Andy Lutomirski <luto@...nel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        kvm@...r.kernel.org, "Jason A. Donenfeld" <Jason@...c4.com>,
        Rik van Riel <riel@...riel.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH 07/22] x86/fpu: Remove fpu->initialized

On 2019-01-24 14:34:49 [+0100], Borislav Petkov wrote:
> > set it back to one) or don't return to userland.
> > 
> > The context switch code (switch_fpu_prepare() + switch_fpu_finish())
> > can't unconditionally save/restore registers for kernel threads. I have
> > no idea what will happen if we restore a zero FPU context for the kernel
> > thread (since it never was initialized).
> 
> Yeah, avoid those "author is wondering" statements.

So I should no longer sound unsure about such things. Understood.

> > Also it has been agreed that
> > for PKRU we don't want a random state (inherited from the previous task)
> > but a deterministic one.
> 
> Rewrite that to state what the PKRU state is going to be.

I dropped that part. It was part of this patch in an earlier version but
has since been moved.

> > For kernel_fpu_begin() (+end) the situation is similar: the kernel test
> > bot told me that EFI runtime services use this before
> > alternatives_patched is true, which means this function is now used
> > earlier than it was before.
> > 
> > In both cases current->mm is used to distinguish between user and
> > kernel threads.
> 
> Now that we start looking at ->mm, I think we should document this
> somewhere prominently, maybe
> 
>   arch/x86/include/asm/fpu/internal.h
> 
> or so along with all the logic this patchset changes wrt FPU handling.
> Then we wouldn't have to wonder in the future why stuff is being done
> the way it is done.

Well, nothing changes with regard to the logic. Earlier we had a variable
which helped us distinguish between user and kernel threads; now we have a
different one.
I'm going to add a comment to switch_fpu_prepare() about ->mm since you
insist, although I would prefer to avoid it.
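
Roughly what I have in mind there (only a sketch of the comment plus the
->mm check, not the final hunk, so treat the exact wording as tentative):

    static inline void
    switch_fpu_prepare(struct fpu *old_fpu, int cpu)
    {
        /*
         * Kernel threads have no mm and therefore no user FPU state
         * that would need saving. Everything with a mm is a user task
         * and its registers have to be preserved across the switch.
         */
        if (static_cpu_has(X86_FEATURE_FPU) && current->mm) {
            if (!copy_fpregs_to_fpstate(old_fpu))
                old_fpu->last_cpu = -1;
            else
                old_fpu->last_cpu = cpu;

            /* But leave fpu_fpregs_owner_ctx! */
            trace_x86_fpu_regs_deactivated(old_fpu);
        } else
            old_fpu->last_cpu = -1;
    }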

> Like the FPU saving on the user stack frame or why this was needed:
> 
> -	/* Update the thread's fxstate to save the fsave header. */
> -	if (ia32_fxstate)
> -		copy_fxregs_to_kernel(fpu);
> 
> Some sort of a high-level invariants written down would save us a lot of
> head scratching in the future.

We have a comment; it is just not helping.

> > diff --git a/arch/x86/include/asm/trace/fpu.h b/arch/x86/include/asm/trace/fpu.h
> > index 069c04be15076..bd65f6ba950f8 100644
> > --- a/arch/x86/include/asm/trace/fpu.h
> > +++ b/arch/x86/include/asm/trace/fpu.h
> > @@ -13,22 +13,19 @@ DECLARE_EVENT_CLASS(x86_fpu,
> >  
> >  	TP_STRUCT__entry(
> >  		__field(struct fpu *, fpu)
> > -		__field(bool, initialized)
> >  		__field(u64, xfeatures)
> >  		__field(u64, xcomp_bv)
> >  		),
> 
> Yikes, can you do that?
> 
> rostedt has been preaching that adding members at the end of tracepoints
> is ok but not changing them in the middle as that breaks ABI.
> 
> Might wanna ping him about it first.

Steven said on IRC that it can be removed.

> > diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
> > index e43296854e379..3a4668c9d24f1 100644
> > --- a/arch/x86/kernel/fpu/core.c
> > +++ b/arch/x86/kernel/fpu/core.c
> > @@ -147,10 +147,9 @@ void fpu__save(struct fpu *fpu)
> >  
> >  	preempt_disable();
> >  	trace_x86_fpu_before_save(fpu);
> > -	if (fpu->initialized) {
> > -		if (!copy_fpregs_to_fpstate(fpu)) {
> > -			copy_kernel_to_fpregs(&fpu->state);
> > -		}
> > +
> > +	if (!copy_fpregs_to_fpstate(fpu)) {
> > +		copy_kernel_to_fpregs(&fpu->state);
> >  	}
> 
> WARNING: braces {} are not necessary for single statement blocks
> #217: FILE: arch/x86/kernel/fpu/core.c:151:
> +       if (!copy_fpregs_to_fpstate(fpu)) {
> +               copy_kernel_to_fpregs(&fpu->state);
>         }

Removed.

> 
> ...
> 
> > diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
> > index 7888a41a03cdb..77d9eb43ccac8 100644
> > --- a/arch/x86/kernel/process_32.c
> > +++ b/arch/x86/kernel/process_32.c
> > @@ -288,10 +288,10 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
> >  	if (prev->gs | next->gs)
> >  		lazy_load_gs(next->gs);
> >  
> > -	switch_fpu_finish(next_fpu, cpu);
> > -
> >  	this_cpu_write(current_task, next_p);
> >  
> > +	switch_fpu_finish(next_fpu, cpu);
> > +
> >  	/* Load the Intel cache allocation PQR MSR. */
> >  	resctrl_sched_in();
> >  
> > diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
> > index e1983b3a16c43..ffea7c557963a 100644
> > --- a/arch/x86/kernel/process_64.c
> > +++ b/arch/x86/kernel/process_64.c
> > @@ -566,14 +566,14 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
> >  
> >  	x86_fsgsbase_load(prev, next);
> >  
> > -	switch_fpu_finish(next_fpu, cpu);
> > -
> >  	/*
> >  	 * Switch the PDA and FPU contexts.
> >  	 */
> >  	this_cpu_write(current_task, next_p);
> >  	this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p));
> >  
> > +	switch_fpu_finish(next_fpu, cpu);
> > +
> >  	/* Reload sp0. */
> >  	update_task_stack(next_p);
> >  
> 
> Those moves need at least a comment in the commit message or a separate
> patch.

This needs to be part of this patch; I will add a note to the commit message.
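
Something along these lines, either in the commit message or as a comment
at the call site (only a sketch; the rationale is my reading of why the
call has to come after the current_task update once we key off ->mm):

    this_cpu_write(current_task, next_p);
    this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p));

    /*
     * switch_fpu_finish() must run after current_task has been
     * updated: with fpu->initialized gone it looks at 'current'
     * (->mm) to decide whether the incoming task's registers need
     * to be restored, so 'current' has to be next_p already.
     */
    switch_fpu_finish(next_fpu, cpu);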

Sebastian
