Message-ID: <20190824085300.GB16813@zn.tnic>
Date: Sat, 24 Aug 2019 10:53:00 +0200
From: Borislav Petkov <bp@...en8.de>
To: Mihai Carabas <mihai.carabas@...cle.com>
Cc: linux-kernel@...r.kernel.org, ashok.raj@...el.com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
patrick.colp@...cle.com, kanth.ghatraju@...cle.com,
Jon.Grimm@....com, Thomas.Lendacky@....com
Subject: [PATCH 1/2] x86/microcode: Update late microcode in parallel
From: Ashok Raj <ashok.raj@...el.com>
Date: Thu, 22 Aug 2019 23:43:47 +0300

Microcode update was changed to be serialized due to restrictions added
in the wake of Spectre. Updating serially on a large multi-socket system
can be painful since it is being done on one CPU at a time.

Cloud customers have expressed discontent as services disappear for a
prolonged time during the update. The restriction is that only one core
goes through the update while the other cores are quiesced.

Do the microcode update only on the first thread of each core while
other siblings simply wait for this to complete.
[ bp: Simplify, massage, cleanup comments. ]
Signed-off-by: Ashok Raj <ashok.raj@...el.com>
Signed-off-by: Mihai Carabas <mihai.carabas@...cle.com>
Signed-off-by: Borislav Petkov <bp@...e.de>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Jon Grimm <Jon.Grimm@....com>
Cc: kanth.ghatraju@...cle.com
Cc: konrad.wilk@...cle.com
Cc: patrick.colp@...cle.com
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: x86-ml <x86@...nel.org>
Link: https://lkml.kernel.org/r/1566506627-16536-2-git-send-email-mihai.carabas@oracle.com
---
arch/x86/kernel/cpu/microcode/core.c | 36 ++++++++++++++++------------
1 file changed, 21 insertions(+), 15 deletions(-)
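
A note on how this path gets exercised: __reload_late() runs on every
online CPU via stop_machine() when "1" is written to
/sys/devices/system/cpu/microcode/reload. For readers who want to play
with the rendezvous pattern itself outside the kernel, below is a
minimal userspace sketch -- everything in it (the fake topology,
fake_apply_microcode(), pthread barriers standing in for
__wait_for_cpus()) is illustrative and made up, not kernel code; only
the control flow mirrors __reload_late():

/*
 * Illustrative userspace model of the rendezvous scheme -- NOT kernel
 * code. NCORES/NTHREADS and fake_apply_microcode() are invented names.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCORES   4
#define NTHREADS 2			/* SMT siblings per core */
#define NCPUS    (NCORES * NTHREADS)

static pthread_barrier_t late_cpus_in, late_cpus_out;
static atomic_int revision = 1;		/* shared "microcode engine" */

static void fake_apply_microcode(int cpu)
{
	/* The engine is core-wide: one write covers all siblings. */
	atomic_store(&revision, 2);
	printf("CPU %d applied the update\n", cpu);
}

static void *reload_late(void *arg)
{
	int cpu = (int)(long)arg;
	int first_sibling = (cpu / NTHREADS) * NTHREADS;

	/* All CPUs rendezvous before anyone touches the engine. */
	pthread_barrier_wait(&late_cpus_in);

	/* Only the first sibling of each core loads the microcode. */
	if (cpu == first_sibling)
		fake_apply_microcode(cpu);

	/* Everyone waits until all core leaders are done. */
	pthread_barrier_wait(&late_cpus_out);

	/* Siblings only refresh their view of the new revision. */
	if (cpu != first_sibling)
		printf("CPU %d sees revision %d\n", cpu,
		       atomic_load(&revision));
	return NULL;
}

int main(void)
{
	pthread_t tid[NCPUS];
	long i;

	pthread_barrier_init(&late_cpus_in, NULL, NCPUS);
	pthread_barrier_init(&late_cpus_out, NULL, NCPUS);

	for (i = 0; i < NCPUS; i++)
		pthread_create(&tid[i], NULL, reload_late, (void *)i);
	for (i = 0; i < NCPUS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}
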
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index cb0fdcaf1415..7019d4b2df0c 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -63,11 +63,6 @@ LIST_HEAD(microcode_cache);
  */
 static DEFINE_MUTEX(microcode_mutex);
 
-/*
- * Serialize late loading so that CPUs get updated one-by-one.
- */
-static DEFINE_RAW_SPINLOCK(update_lock);
-
 struct ucode_cpu_info		ucode_cpu_info[NR_CPUS];
 
 struct cpu_info_ctx {
@@ -566,11 +561,18 @@ static int __reload_late(void *info)
 	if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
 		return -1;
 
-	raw_spin_lock(&update_lock);
-	apply_microcode_local(&err);
-	raw_spin_unlock(&update_lock);
+	/*
+	 * On an SMT system, it suffices to load the microcode on one sibling of
+	 * the core because the microcode engine is shared between the threads.
+	 * Synchronization still needs to take place so that no concurrent
+	 * loading attempts happen on multiple threads of an SMT core. See
+	 * below.
+	 */
+	if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu)
+		apply_microcode_local(&err);
+	else
+		goto wait_for_siblings;
 
-	/* siblings return UCODE_OK because their engine got updated already */
 	if (err > UCODE_NFOUND) {
 		pr_warn("Error reloading microcode on CPU %d\n", cpu);
 		ret = -1;
@@ -578,14 +580,18 @@ static int __reload_late(void *info)
 		ret = 1;
 	}
 
+wait_for_siblings:
+	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
+		panic("Timeout during microcode update!\n");
+
 	/*
-	 * Increase the wait timeout to a safe value here since we're
-	 * serializing the microcode update and that could take a while on a
-	 * large number of CPUs. And that is fine as the *actual* timeout will
-	 * be determined by the last CPU finished updating and thus cut short.
+	 * At least one thread has completed the update on each core.
+	 * For the others, simply call the update to make sure the
+	 * per-cpu cpuinfo is updated with the right microcode
+	 * revision.
 	 */
-	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC * num_online_cpus()))
-		panic("Timeout during microcode update!\n");
+	if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
+		apply_microcode_local(&err);
 
 	return ret;
 }
--
2.21.0
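
Why the shorter timeout is enough: each rendezvous is now bounded by the
slowest core rather than by the sum of all serialized updates, so
NSEC_PER_SEC per wait replaces NSEC_PER_SEC * num_online_cpus(). A rough
userspace approximation of the __wait_for_cpus() counter loop follows;
NCPUS, SPIN_NS and the one-second budget are illustrative values, not
the kernel's SPINUNIT logic verbatim:

/*
 * Rough userspace approximation of __wait_for_cpus(): every caller
 * bumps a shared counter, then polls until all NCPUS callers arrive
 * or its timeout budget runs out. Illustrative only.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define NCPUS   8
#define SPIN_NS 100000L			/* poll every 100us */

static atomic_int arrived;

static int wait_for_cpus(atomic_int *t, long long timeout_ns)
{
	struct timespec ts = { 0, SPIN_NS };

	atomic_fetch_add(t, 1);
	while (atomic_load(t) < NCPUS) {
		if (timeout_ns < SPIN_NS)
			return 1;	/* timed out: a CPU is stuck */
		nanosleep(&ts, NULL);
		timeout_ns -= SPIN_NS;
	}
	return 0;			/* all CPUs checked in */
}

static void *cpu_thread(void *arg)
{
	(void)arg;
	/* 1s budget, mirroring the patch's NSEC_PER_SEC per rendezvous. */
	if (wait_for_cpus(&arrived, 1000000000LL))
		fprintf(stderr, "timeout waiting for CPUs\n");
	return NULL;
}

int main(void)
{
	pthread_t tid[NCPUS];
	long i;

	for (i = 0; i < NCPUS; i++)
		pthread_create(&tid[i], NULL, cpu_thread, NULL);
	for (i = 0; i < NCPUS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}
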
--
Regards/Gruss,
Boris.
Good mailing practices for 400: avoid top-posting and trim the reply.