Message-Id: <1421012793-30106-9-git-send-email-riel@redhat.com>
Date: Sun, 11 Jan 2015 16:46:30 -0500
From: riel@...hat.com
To: linux-kernel@...r.kernel.org
Cc: mingo@...hat.com, hpa@...or.com, matt.fleming@...el.com,
bp@...e.de, oleg@...hat.com, pbonzini@...hat.com,
tglx@...utronix.de, luto@...capital.net
Subject: [RFC PATCH 08/11] x86,fpu: restore user FPU state lazily after __kernel_fpu_end
From: Rik van Riel <riel@...hat.com>
Tasks may have multiple invocations of kernel_fpu_begin and kernel_fpu_end
in sequence without ever hitting userspace in between.
Delaying the restore of the user FPU state until the task returns to
userspace means the kernel only has to save the user FPU state on the
first invocation of kernel_fpu_begin, making the subsequent invocations
cheaper.
This is especially true for KVM vcpu threads, which can handle lots
of guest events and exceptions entirely in kernel mode.
Signed-off-by: Rik van Riel <riel@...hat.com>
---
arch/x86/kernel/i387.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index c98f88d..cfbf325 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -89,13 +89,11 @@ void __kernel_fpu_end(void)
 	if (use_eager_fpu()) {
 		/*
 		 * For eager fpu, most the time, tsk_used_math() is true.
-		 * Restore the user math as we are done with the kernel usage.
-		 * At few instances during thread exit, signal handling etc,
-		 * tsk_used_math() is false. Those few places will take proper
-		 * actions, so we don't need to restore the math here.
+		 * Make sure the user math state is restored on return to
+		 * userspace.
 		 */
 		if (likely(tsk_used_math(current)))
-			math_state_restore();
+			set_thread_flag(TIF_LOAD_FPU);
 	} else {
 		stts();
 	}
--
1.9.3
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/