Message-ID: <alpine.LFD.2.02.1202201146450.4237@i5.linux-foundation.org>
Date: Mon, 20 Feb 2012 11:47:23 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>
cc: x86@...nel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [PATCH v2 1/3] i387: fix up some fpu_counter confusion
From: Linus Torvalds <torvalds@...ux-foundation.org>
Date: Mon, 20 Feb 2012 10:24:09 -0800
Subject: [PATCH v2 1/3] i387: fix up some fpu_counter confusion
This makes sure we clear the FPU usage counter for newly created tasks,
just so that we start off in a known state (so that we don't, for example,
try to preload the FPU state on the first task switch).
It also fixes a thinko about when we increment the fpu_counter at task
switch time, introduced by commit 34ddc81a230b ("i387: re-introduce FPU
state preloading at context switch time"). We should increment the
*new* task's fpu_counter, not the old task's, and only if we decide to use
that state (whether lazily or preloaded).
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
---
That nonsensical "old->fpu_counter++" actually got the right result in
most common cases (because in the case of a preload and holding on to
the FPU for the whole timeslice - which is the normal case - it acted
the same way as if we had incremented the count at FPU load time), but
it really makes no sense.
And the "clear counters at fork time" don't really matter until you do
lazy restore, but it annoyed me that we started out a new process with
basically random fpu_counter values from the old one.
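
For anyone who hasn't looked at the FPU switching code, here is a tiny
standalone sketch of the idea - not the kernel code itself; the struct and
helpers below are invented purely for illustration, and the "> 5" threshold
just mirrors the kind of heuristic switch_fpu_prepare uses to decide whether
preloading is worth it. It shows why the counter belongs to the incoming
task and why a freshly forked child should start from zero:

/*
 * Standalone sketch, not kernel code: a simplified model of how a
 * per-task fpu_counter can drive the "preload the FPU state eagerly
 * vs. restore it lazily" decision at context switch.  All types and
 * helpers here are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

struct task {
	const char *name;
	bool used_math;		/* has this task ever used the FPU? */
	int fpu_counter;	/* recent switches that wanted its FPU state */
};

/* Preload only for tasks that have recently been using the FPU a lot. */
static bool want_fpu_preload(const struct task *next)
{
	return next->used_math && next->fpu_counter > 5;
}

/* Switch-time accounting: the credit goes to the *incoming* task. */
static void switch_fpu(struct task *prev, struct task *next)
{
	(void)prev;			/* prev's counter is left alone */
	if (want_fpu_preload(next)) {
		next->fpu_counter++;	/* keep rewarding FPU-heavy tasks */
		printf("switch to %-6s: preload FPU state\n", next->name);
	} else {
		printf("switch to %-6s: lazy restore on first FPU use\n",
		       next->name);
	}
}

/* Model of fork/copy_thread: the child starts from a known, zeroed counter. */
static struct task fork_task(const char *name, const struct task *parent)
{
	struct task child = *parent;

	child.name = name;
	child.fpu_counter = 0;	/* don't inherit the parent's counter */
	return child;
}

int main(void)
{
	struct task bench = { "bench", true, 9 };	/* FPU-heavy task */
	struct task child = fork_task("child", &bench);

	switch_fpu(&child, &bench);	/* warm counter -> preload */
	switch_fpu(&bench, &child);	/* fresh fork -> lazy restore */
	return 0;
}

With that accounting, the FPU-heavy task keeps earning preloads while the
freshly forked child falls back to lazy restore, which is the behaviour the
patch below is after.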
arch/x86/include/asm/i387.h | 3 ++-
arch/x86/kernel/process_32.c | 1 +
arch/x86/kernel/process_64.c | 1 +
3 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index a850b4d8d14d..8df95849721d 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -348,10 +348,10 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 		if (__save_init_fpu(old))
 			fpu_lazy_state_intact(old);
 		__thread_clear_has_fpu(old);
-		old->fpu_counter++;
 
 		/* Don't change CR0.TS if we just switch! */
 		if (fpu.preload) {
+			new->fpu_counter++;
 			__thread_set_has_fpu(new);
 			prefetch(new->thread.fpu.state);
 		} else
@@ -359,6 +359,7 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 	} else {
 		old->fpu_counter = 0;
 		if (fpu.preload) {
+			new->fpu_counter++;
 			if (fpu_lazy_restore(new))
 				fpu.preload = 0;
 			else
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 80bfe1ab0031..bc32761bc27a 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -214,6 +214,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
 
 	task_user_gs(p) = get_user_gs(regs);
 
+	p->fpu_counter = 0;
 	p->thread.io_bitmap_ptr = NULL;
 	tsk = current;
 	err = -ENOMEM;
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 1fd94bc4279d..8ad880b3bc1c 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -286,6 +286,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
 
 	set_tsk_thread_flag(p, TIF_FORK);
 
+	p->fpu_counter = 0;
 	p->thread.io_bitmap_ptr = NULL;
 
 	savesegment(gs, p->thread.gsindex);
--
1.7.9.188.g12766.dirty